[Cerowrt-devel] been writing all night
Sebastian Moeller
moeller0 at gmx.de
Sun Dec 29 07:37:10 PST 2013
Hi Dave,
On Dec 29, 2013, at 14:28 , Dave Taht <dave.taht at gmail.com> wrote:
> it is now 5:26 am. I have not had an all-night writing or coding binge
> since I quit smoking back in July. I bought a pack this afternoon. It
> turns out that chain-smoking has benefits to my writing process... I
> revised the aqm page
>
> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
Great, some comments on the link layer situation:
> 3. Link Layer Adaptation
>
> You must set the Link Layer Adaptation options correctly so that CeroWrt can perform its best with VoIP, gaming, and other protocols that rely on short packets. The general rule for selecting the Link Layer Adaptation is:
>
> • If you use any kind of DSL/ADSL connection to the Internet (that is, if you get your internet service through the telephone line), you should choose the "ATM" item.
This should read: if you use any kind of ADSL line, you should choose ATM; VDSL users should select Ethernet. (VDSL retained the ability to be deployed over ATM, but to my knowledge it typically uses PTM, a much saner transport layer for data.)
> Leave the Per-packet Overhead set to zero.
For ATM, setting the overhead too small will on average leave half an ATM cell of padding per packet unaccounted for by HTB; setting it too large will make HTB overestimate the transmission time on the wire, wasting a bit of bandwidth (again roughly half a cell per packet). Underestimating the overhead has a worse effect than overestimating it, so I vote for changing our default to an overhead of 40 bytes: 44 is the absolute maximum and rare, while 40 is the second largest and, as far as I know, the most likely value, matching PPPoE, LLC/SNAP, RFC 2684 encapsulation. (Rant: from a user perspective IP over ATM would be the best, as it makes more of the paid-for link speed usable for actual traffic.)
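To make that half-cell argument concrete, here is a minimal Python sketch (my own toy code, nothing from CeroWrt) of the ATM wire size for a given packet length and assumed per-packet overhead:

import math

ATM_CELL_PAYLOAD = 48  # payload bytes per 53-byte ATM cell
ATM_CELL_SIZE = 53

def atm_wire_bytes(packet_len, overhead):
    """Bytes actually sent on the ATM wire for one packet: packet
    plus encapsulation overhead, padded up to whole 53-byte cells."""
    cells = math.ceil((packet_len + overhead) / ATM_CELL_PAYLOAD)
    return cells * ATM_CELL_SIZE

# With the true overhead at 40 bytes (PPPoE, LLC/SNAP, RFC 2684),
# telling the shaper 0 makes it miss a whole 53-byte cell here:
for plen in (64, 128, 1500):
    print(plen, atm_wire_bytes(plen, 40), atm_wire_bytes(plen, 0))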
For non-ATM links, telcos often still use PPPoE (in Germany even for VDSL2 and fiber GPON), so a small overhead of 8 bytes would, I think, be applicable. But given typical fiber and VDSL2 link rates, misjudging that will not have a really bad effect. (With ATM the actual overhead is much larger, and it can drag in an additional, almost empty, padded ATM cell.)
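As a back-of-the-envelope check (again my own sketch, not CeroWrt code), the per-packet shaping error from ignoring the PPPoE header is bounded by 8 bytes divided by the wire size:

# Relative rate error from ignoring an 8-byte PPPoE header:
for plen in (64, 576, 1500):
    print(plen, "%.1f%%" % (100.0 * 8 / (plen + 8)))
# noticeable (~11%) only for tiny packets; well under 1% at full MTU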
Now, the recommendations in the wiki should mention that with VDSL there is a slight chance of ATM encapsulation, and give a hint on how to diagnose that.
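As a hint, here is the kind of test I have in mind (a rough sketch; it assumes a Linux iputils ping, and "gateway.example.net" is just a placeholder for your first hop): sweep the ICMP payload size and record the minimum RTT per size. On an ATM carrier the minimum RTT climbs in a staircase with 48-byte steps, one step per extra cell; on PTM/Ethernet it grows smoothly with size:

import re
import subprocess

def min_rtt_ms(host, payload, count=10):
    """Minimum RTT in ms for ICMP packets of the given payload size."""
    out = subprocess.check_output(
        ["ping", "-c", str(count), "-i", "0.2", "-s", str(payload), host],
        universal_newlines=True)
    # iputils summary line: "rtt min/avg/max/mdev = a/b/c/d ms"
    return float(re.search(r"= ([\d.]+)/", out).group(1))

# sweep two cells' worth of payload sizes in 4-byte steps
for size in range(16, 16 + 96, 4):
    print(size, min_rtt_ms("gateway.example.net", size))

(One would want many more probes per size to get a clean minimum, but the staircase is usually visible even with few.)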
> -- dtaht -- I am so unable to parse the huge email thread on the DSL issue.
>
> • If you use Ethernet, Cable modem, Fiber, or other kind of connection to the Internet, you should choose “none (default)”, and move on.
Even though there might be a small overhead on all of those, I think this is sound advice to keep things simple.
To complicate things further, VDSL1 uses HDLC as its link layer, which would be quite nasty to handle, if VDSL1 had a significant deployment, that is... HDLC wire size is not simply size dependent as with ATM; it is actually data dependent, with a worst-case twofold increase from data size to wire size, which would require actually searching each data packet for occurrences of the octets that get escaped on the wire.
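To illustrate that data dependence (a toy sketch, assuming the standard 0x7E/0x7D byte stuffing):

HDLC_FLAG, HDLC_ESC = 0x7E, 0x7D

def hdlc_stuffed_len(payload):
    """Wire bytes after HDLC byte stuffing: every flag or escape
    octet in the data costs one extra byte, so worst case is 2x."""
    return len(payload) + sum(b in (HDLC_FLAG, HDLC_ESC) for b in payload)

print(hdlc_stuffed_len(b"\x00" * 100))  # 100: nothing to escape
print(hdlc_stuffed_len(b"\x7e" * 100))  # 200: worst-case doubling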
> • [What is the proper description here?] If you use PPPoE (but not over ADSL/DSL link),
Select Ethernet and specify a proper overhead; I assume 8 bytes.
> PPPoATM,
You are on ATM, hence enable ATM; an overhead of 40 bytes will waste a little bandwidth but preserve the latency gains.
> or bridging that isn’t Ethernet,
I have to pass, no idea...
> you should choose [what?] and set the Per-packet Overhead to [what?]
> If you cannot tell what kind of link you have, first try the ATM choice and run the Quick Test for Bufferbloat. If the results are good, you’re done. You can also try the other link layer adaptations to see which performs better.
Mmmh, the ATM link layer adjustment will work on all underlying carriers, in the sense that it effectively just estimates a wire transmit time larger than the Ethernet transmit time. So on a non-ATM link it merely wastes some bandwidth; latency stays fine. The proof runs rather the other way around: only with the ATM link layer adjustment can people on an ATM link set the shaped rates to around 90% at all. Unfortunately this effect is most pronounced for small packet sizes, and we have no easy way for people to test performance with small packets… (that said, maybe reducing the MTU on the router might work, I need to test this...)
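To put rough numbers on that waste, here is a quick comparison (my own back-of-the-envelope sketch) of the ATM estimate against an approximate Ethernet wire size:

import math

def atm_estimate(plen, overhead=40):
    """Wire bytes the ATM link layer adjustment would bill."""
    return math.ceil((plen + overhead) / 48) * 53

def ethernet_wire(plen):
    """Approximate Ethernet wire size: 14 header + 4 FCS
    + 8 preamble + 12 bytes interframe gap."""
    return plen + 38

for plen in (64, 256, 1500):
    waste = atm_estimate(plen) / ethernet_wire(plen) - 1
    print(plen, "%.0f%%" % (100 * waste))
# roughly 56% overestimate at 64 bytes, 26% at 256, 14% at 1500

So the overestimate, and hence the bandwidth wasted on a non-ATM link, shrinks as packets get bigger, which is exactly why small-packet tests would be the telling ones.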
>
> (which I'll rename and crosslink to a few other places)
>
> and wrote
>
> http://www.bufferbloat.net/projects/cerowrt/wiki/Wondershaper_Must_Die
Nice.
>
> in addition to all the other emails that came out of me today.
>
> I was unaware btw, that shaperprobe had found a home at mlabs. I've
> been shipping shaperprobe in cerowrt since the bismark days, so
> perhaps with an update to that and some code to detect sfq that I've
> been whinging about for ages, perhaps we can use that (and leverage
> mlab's servers to boot)!!
>
> Before I quit again I suppose I should take on a couple RFCs and the
> other stuff in my writing backlog. :cough: :jack: :wheeze:
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel