[Cerowrt-devel] VDSL

Sebastian Moeller moeller0 at gmx.de
Sat Jan 11 14:15:26 PST 2014


Hi Aaron,


On Jan 11, 2014, at 21:23 , Aaron Wood <woody77 at gmail.com> wrote:

> Rich, Sebastian, (and others),
> 
> First, a hello.  I've been lurking on the bufferbloat mailing list for a bit, and just joined here as well to better follow what's going on, and see if there's any way I can help.
> 
> Next, I have an ADSL+ link in Paris (Free.fr), and am willing to run a number of tests using the various LLA options and overhead estimations.  But 3 hours on a dead-quiet link could be hard to deal with.  

	Well, it takes 3 hours to collect 10000 samples for each of 100 different packet sizes (creating a maximum transient traffic of < 150 kbit/s). You could collect less: your RTT will be dominated by your uplink, so 2000 to 3000 samples per size will be plenty, reducing the time to something more manageable. And as long as your link has 150 kbit/s reserved on the uplink (since it is more critical there) you should be fine. Unless you saturate your link, additional traffic besides the probe will merely increase the variance of the ping times, which you might be able to get away with; just measure for as long as you are comfortable with. (I typically just run this overnight; there is not much going on in our network at that time, but it is not dead-quiet either.)
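
	To put numbers on the shorter run, here is a minimal Python sketch using the figures above; the ~200 byte average on-wire probe size is my assumption, not a measured value:

# Back-of-the-envelope numbers for the ATM quantization probe.
SIZES = 100              # distinct packet sizes probed
FULL_SAMPLES = 10000     # samples per size in the full 3 hour run
FULL_DURATION_S = 3 * 3600

rate_pps = SIZES * FULL_SAMPLES / FULL_DURATION_S    # ~92.6 probes/s
load_kbps = rate_pps * 200 * 8 / 1000                # ~148 kbit/s, assuming ~200 B/probe

for samples in (2000, 3000):
    minutes = SIZES * samples / rate_pps / 60
    print(f"{samples} samples/size: ~{minutes:.0f} min at ~{load_kbps:.0f} kbit/s")

	That works out to roughly 36 to 54 minutes instead of 3 hours, at the same ~150 kbit/s probe load.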

> I'm happy to run an hour's worth of netperf tests to a nearby server, slowly working through the parameter space, and then comparing the results.

	Again, I recommend performing the ATM quantization probe against the nearest host (one already in the ISP's network, ideally the DSLAM/MSAN) to reduce the variance in the data...

> 
> My ad-hoc comparisons last week with various modes showed that too high a setting for the overhead (coupled with the already reduced bandwidth limit in the shaper) killed the bulk upload/download performance (which I care about on a meager 18Mbps/1Mbps link).  

	Yes, as expected you pay a bandwidth price for setting the overhead too large. The price you pay for specifying the overhead too low is that for some packet sizes there is no difference, while other packet sizes will drag in an additional ATM cell of 53 bytes. At the larger packet sizes typical for bulk upload this additional cell will not really kill your throughput; for small packets it can be catastrophic. Say you specify overhead 0 with no link layer adjustment and send a 50 byte packet: the shaper thinks this costs 64 bytes, but on an ATM link with PPPoE LLC/SNAP you actually have 40 bytes of overhead + the 64 byte packet = 104 bytes, which requires 3 ATM cells of 53 bytes each, totaling 3*53 = 159 bytes on the wire; the cost of your packet more than doubled (factor ~2.5). And if you specify an overhead that is too small, you still get some packet sizes that drag in an additional cell, with the same consequences.
	But note that these effects depend on the size of the packets, which people typically do not vary during bandwidth tests...
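
	A minimal Python sketch of the cell arithmetic above (the 40 byte PPPoE LLC/SNAP overhead and the 64 byte shaper estimate are taken from the example):

import math

ATM_CELL = 53     # bytes on the wire per ATM cell (5 header + 48 payload)
ATM_PAYLOAD = 48  # usable payload bytes per cell

def atm_wire_bytes(packet_bytes, overhead_bytes):
    # The packet plus its per-packet overhead is padded up to a whole
    # number of 48 byte cell payloads; every cell costs 53 bytes on the wire.
    cells = math.ceil((packet_bytes + overhead_bytes) / ATM_PAYLOAD)
    return cells * ATM_CELL

shaper_estimate = 64                  # what the shaper accounts for
wire = atm_wire_bytes(64, 40)         # 104 bytes -> 3 cells -> 159 bytes
print(wire, wire / shaper_estimate)   # 159, ~2.48x the shaper's estimate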

> But I found that setting the bw shaper limit to the reported line speed (from the modem), and then adjusting the overhead parameters got me the same bulk performance, with the same latencies (or what appeared to be the same; I need to do more data crunching on the results, and run again in a quieter setting).

	As I tried to indicate above, for bulk traffic the effect of 47 bytes in 1500 might be lost in the noise (and 47 bytes per packet is the worst case); think roughly 3 percent worst-case link capacity overestimation (and if you shape to 95%, this already fits into the shaping margin).
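
	To make that concrete, assume the true overhead is the 40 bytes of PPPoE LLC/SNAP but the shaper was told 0, with the ATM adjustment itself enabled:

import math

def atm_wire(n):
    return math.ceil(n / 48) * 53   # on-wire bytes for n bytes entering the ATM layer

assumed = atm_wire(1500)        # shaper's estimate: 32 cells = 1696 bytes
actual = atm_wire(1500 + 40)    # with the true overhead: 33 cells = 1749 bytes
print((actual - assumed) / assumed * 100)   # ~3.1% underestimated, per packet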

> 
> It would also help if my target server was closer than it is.  The only server I know of is in Germany, 55ms away (unloaded).

	I think you should measure the per-packet overhead where it actually matters: on the link to the DSLAM/ISP. Any congestion on shared network segments further away from the router cannot really be controlled well by shaping in the router, so let's not try to fix that part of the route :) .


> 
> OTOH, I can say that any changes here over the defaults are still gilding the lily (to an end user).  

	I think I need to come up with a worst-case ATM carrier effects test, so people can feel the pain more quickly. The dang dependence on packet size really is unfortunate, as no bulk speed tester I know of tries a decent set of packet sizes...

> But Free.fr's router/modem already uses codel, so it wasn't that bad to begin with (vs. Numericable on a DOCSIS 3 Netgear router/modem).

	If I understand correctly, at least the newer Freeboxes should be very hard to beat; can you disable the modem's AQM for testing?

> 
> ===
> 
> I can also say that I found the current verbiage on that particular page a bit "clear as mud".  

	Which page?

> Even knowing what my network is (to some degree, since the Free.fr modem can tell me), it was difficult to follow, and I quickly found myself at about 3/4 of my previous speeds, with no visible improvement in latency (although that could have been a measurement tool issue, as I was doing 60-second runs using the rrul netperf loads).

	Codel/fq_codel typically tries to keep the latency increase bounded to 5ms (per leg), so the ping times should stay nicely somewhere between 55ms and 65ms; if the rrul ping trace stays in that band you are already golden. Instead of reducing the shaped bandwidth I would start increasing it again until you notice the latencies increasing. Note: for RRUL I typically aim for 300-second runs (I think Dave once recommended that) to get past all transient effects.
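
	As a trivial check of such a trace (hypothetical RTT samples; the band is the unloaded 55ms plus codel's 5ms target per leg):

rtts_ms = [55.8, 57.1, 63.4, 58.9, 61.2]   # hypothetical rrul ping samples

base, per_leg = 55.0, 5.0                   # unloaded RTT, codel target
lo, hi = base, base + 2 * per_leg           # both legs loaded: 55-65 ms
inside = sum(lo <= r <= hi for r in rtts_ms) / len(rtts_ms)
print(f"{inside:.0%} of samples within {lo:.0f}-{hi:.0f} ms")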


> 
> The main question I have is:
> 
> - Should we both limit the bandwidth well below the reported line rate (or measured IP rate) AND use the link layer adaptation settings? (85-90% of bandwidth)
>  or
> - rely on the LLA settings to do the overhead, and shape to just a tiny bit under the reported line rate? (95-99% of bandwidth)

	I would go for the second option. But note that while you have full control over your uplink, on the downlink there might be unexpected packets arriving (think of connecting to your network via VPN), so the current recommendation is to shape the downlink a bit more aggressively than the uplink. Currently I use 95% on uplink and 90% on downlink (I have not yet found the time to see how far up I can push these, but latency is generally quite pleasant).
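
	Applied to your 18Mbps/1Mbps line rates, those percentages work out to the following starting points (my arithmetic, not measured values):

down_kbit, up_kbit = 18000, 1000   # your reported line rates

print(f"download shaped to {down_kbit * 0.90:.0f} kbit/s")  # 16200
print(f"upload shaped to   {up_kbit * 0.95:.0f} kbit/s")    # 950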

Best Regards
	Sebastian

> 
> Thanks,
> 
> - Aaron Wood
> 
> 
> On Sat, Jan 11, 2014 at 8:06 PM, Rich Brown <richb.hanover at gmail.com> wrote:
> Hi Sebastian,
> 
> >>>     Well, that looks like a decent recommendation for the wiki. The SQM configuration page still needs to expose all three values, atm, ethernet, and  none so that people can actually change things...
> >>
> >> So two questions really:
> >>
> >> 1) (From my previous note) What’s the difference between the current “Ethernet” (for VDSL) and “None” link layer adaptations?
> >
> >       Currently, "none" completely disables the link layer adjustments, "ethernet" enables them, but will only use the overhead (unless you specify a tcMPU, but that is truly exotic).
> >
> >>
> >> 2) When we distinguish the Ethernet/VDSL case, I would really like to use a different name from “Ethernet” because it seems confusingly similar to  having a real Ethernet path/link (e.g., direct connection to internet backbone without any ADSL, cable modem, etc.)
> >
> >       On the one hand I agree, but the two options are called "ATM" (well, for tc "adsl" is a valid alias for ATM) and "ethernet" if you pass them to tc (which we do), and I would really hate to hide this under fancy names. I see no chance of renaming those options in tc, so we are sort of stuck with them, and adding another layer of indirection seems too opaque to me. This is why I put some explanation behind the option names in the list box…
> 
> Now I see how it works. (I didn’t understand that “None” really meant NONE.) The following choices in the Link Layer Adaptation would have eased my confusion:
> 
> - ATM (almost every type of ADSL or DSL)
> - Ethernet with overhead
> - None (default)
> 
> Then the text can say:
> 
>> You must set the Link Layer Adaptation options so that CeroWrt can perform its best with video and audio chat, gaming, and other protocols that rely on short packets. The general rule for selecting the Link Layer Adaptation is:
> 
> * If you use any kind of DSL/ADSL connection to the Internet (that is, if you get your internet service through the telephone line), you should choose the “ATM (almost every type of ADSL or DSL)" item. Set the Per-packet Overhead to 44.
> 
> * If you know you have a VDSL connection, you should choose “Ethernet with overhead" and set the Per-packet Overhead to 8.
> 
> * If you use Cable modem, Fiber, or another kind of connection to the Internet, you should choose “None (default)”. All other parameters will be ignored.
> 
> If you cannot tell what kind of link you have, first try using "None", then run the [[Quick Test for Bufferbloat]]. If the results are good, you’re done. If not, try the ADSL/ATM choice, then the VDSL choice to see which performs best. Read the **Details** (below) to learn more about tuning the parameters for your link.
> Would that be better? Thanks.
> 
> Rich
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


