[Cerowrt-devel] cerowrt-3.10.32-9 released

Dave Taht dave.taht at gmail.com
Tue Mar 18 10:21:07 EDT 2014


Regrettably, the SQM system on the WNDR series of hardware maxes out
the CPU at about 50 Mbit down and 10 Mbit up, or any comparable
combination thereof (e.g. 25/25 works). If you want to apply this code
at higher rates, routing hardware with more "oomph" is needed.

I would be interested in a rrul test of your 50 Mbit system. My tests
of Verizon at 25/25 showed them well managed on the up, but far less
well managed on the down, so in your 50 Mbit setup you might want to
control only the down with SQM.

On Tue, Mar 18, 2014 at 8:12 AM, Sebastian Moeller <moeller0 at gmx.de> wrote:
> Hi Edwin,
>
>
> On Mar 18, 2014, at 11:00 , Török Edwin <edwin at etorok.net> wrote:
>
>> On 03/16/2014 09:58 PM, Dave Taht wrote:
>>> Get it at:
>>>
>>> http://snapon.lab.bufferbloat.net/~cero2/cerowrt/wndr/3.10.32-9/
>>>
>>> I've been running this a few days now with no problems.
>>
>> Can you please add these packages:
>> - p910nd
>> - luci-app-p910nd
>> - wifitoggle
>>
>> Just upgraded from 3.7.5-2, and it looks good so far.
>>
>> I'm not sure about the SQM Link Layer Adaptation: the wiki says that I should leave it as 'none' for Fiber, but how can I test
>> whether that is actually the correct setting?
>
>         If you know that you have per-packet overhead (more than the pure ethernet header that is handled with 'none'), you should select "ethernet with overhead" and specify the overhead on your line. Be sure to add the 14 bytes for the ethernet header, as the kernel unhelpfully forgets to take this into account when you use the link layer adjustment method tc_stab.
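>         A little back-of-the-envelope python to make that bookkeeping explicit (purely illustrative; the 8 byte figure is just an example and the names are made up):
>
>             # Illustrative sketch: tc_stab wants the true per-packet overhead plus
>             # the 14 byte ethernet header that the kernel does not add by itself.
>             true_overhead = 8                       # e.g. PPP (2) + PPPoE (6), see below
>             ETHERNET_HEADER = 14                    # must be added manually for tc_stab
>             tc_stab_overhead = true_overhead + ETHERNET_HEADER
>             print(tc_stab_overhead)                 # -> 22, the value to enter in SQM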
>         For ATM-based systems we could use the RTT quantization effects of the ATM cells to deduce the overhead empirically, but for links without that quantization this does not work, so I do not know how to check empirically which overhead to specify. All you can do is look at the information you have about your link and potentially ask your ISP for more. Just remember the goal is to supply precise information about the on-wire size of data packets so SQM can calculate the true bandwidth cost associated with each packet. BTW, if anyone in the audience knows how to measure the overhead for ethernet packets, please chime in.
>         From your information below I would estimate:
>         As far as I know, GPON is basically an ethernet hub solution (with one segment shared between several customers), so there is only the typical ethernet overhead, plus potential framing and VLAN tags. So if you select "ethernet" as the link layer option, you should use the following overhead:
>         PPP (2B), PPPoE (6B), ethernet (14B, required for tc_stab), potentially VLAN (4B?), potentially the ethernet frame check sequence (???B)
>         Your ISP should be able to tell you whether it uses VLAN tags on the bottleneck link (it does not matter whether the VLAN tags are actually visible/present on your end of the GPON modem).
>         So something in the 22 to 30 byte range should work. Alas, the only way to figure this out for good is to snoop packets on the fiber segment, so realistically you need to ask your ISP, or be happy that 22 bytes is as close to the true overhead as you can get with the information at hand. And the closer SQM's idea of the actual wire size of each packet is to reality, the more precisely the shaping works.
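>         And a tiny sketch of where that range comes from (using the standard 4 byte value for the frame check sequence marked "???" above; purely illustrative):
>
>             # Summing the per-packet overhead components listed above.
>             ppp      = 2    # PPP header
>             pppoe    = 6    # PPPoE header
>             ethernet = 14   # ethernet header, required for tc_stab
>             vlan     = 4    # only if the ISP tags the bottleneck link
>             fcs      = 4    # standard ethernet frame check sequence
>
>             low  = ppp + pppoe + ethernet    # 22 bytes, the safe lower bound
>             high = low + vlan + fcs          # 30 bytes, if VLAN and FCS count
>             print(low, high)                 # -> 22 30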
>         That said, it looks like each of your packets is about 8 bytes larger than the kernel assumes without link layer adjustments, or roughly 100*8/64 = 12.5% for the smallest ethernet packets and 100*8/1500 ≈ 0.5% for the largest. Assuming you typically use packets larger than 64 bytes, you should not really notice whether the overhead is set correctly or not. In principle I would recommend using "ethernet with overhead", but it should not make much of a difference. Especially since you will need to cut the shaper some slack anyway; that is, even with link layer adjustments, latency will be compromised unless you reduce the bandwidths specified to SQM below the line rates...
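>         The same estimate for a few packet sizes, as a rough sketch (8 bytes of unaccounted-for overhead, as above; the 576 byte size is just an extra illustration):
>
>             # Relative error if 8 bytes of per-packet overhead go unaccounted for.
>             unaccounted = 8
>             for size in (64, 576, 1500):
>                 error_pct = 100.0 * unaccounted / size
>                 print("%4d byte packet: %4.1f%% larger on the wire than the kernel assumes"
>                       % (size, error_pct))
>             # ->   64 byte packet: 12.5% larger on the wire than the kernel assumes
>             # ->  576 byte packet:  1.4% larger on the wire than the kernel assumes
>             # -> 1500 byte packet:  0.5% larger on the wire than the kernel assumes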
>
> Best Regards
>         Sebastian
>
>
>
>>
>> I have this setup with my ISP:
>> cerowrt router <---(Ethernet) ----> (ISP on premise switch for multiple apartments) <----> (ISP device) <--- (fiber optics) ---> ISP
>>
>> I connect using PPPoE, and AFAIK the ISP is using GPON.
>> Currently I have ~50 Mbps up/down speed, but I could upgrade to 1000 Mbps up/down.
>>
>> Thanks,
>> --Edwin
>>
>>
>>
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html


