[Cerowrt-devel] Linksys wrt1900acs rrul traces

Sebastian Moeller moeller0 at gmx.de
Fri Apr 8 09:17:51 EDT 2016


Hi Richard, 


On April 8, 2016 1:51:11 PM GMT+02:00, Richard Smith <smithbone at gmail.com> wrote:
>On 04/07/2016 12:16 PM, moeller0 wrote:
>> Hi Richard,
>>
>>> For these tests I had the inbound and outbound limits set to 975000
>>> kbps.  975000 was somewhat arbitrary.  I wanted it below 1Gbps
>>> enough that I could be sure it was the router as the limit but yet
>>> fast enough that I would be able to see the peak transfer rates.
>>
>> All of the following might be old news to you, but please let me
>> elaborate for others on this list (well, most folks here know way
>> more about these things than I do). I believe Gbit ethernet is
>> trickier than one would guess, the 1 Gbit rate contains some overhead
>> that one typically does not account for. Here is the equivalent
>> on-the-wire size of a full MTU non-jumbo ethernet frame, layer "1+":
>> 1500 (payload pMTU) + 6 (dest MAC) + 6 (src MAC) + 2 (ethertype) + 4
>> (FCS) + 7 (preamble) + 1 (start of frame delimiter) + 12 (interframe
>> gap) = 1500+6+6+2+4+7+1+12 = 1538. "Equivalent" in that the
>> interframe gap carries no data, just silence, but it lasts as long
>> as transmitting 12 bytes would.
>
>I knew there was a bit of overhead but thanks for laying out the
>details 
>and why it matters.
>
>> 975000 * (1538/1514) = 990455.746367 (which still is below the 1GBit
>> Layer1+ ceiling that GbE has).  At 985000 * (1538/1514) =
>> 1000614.26684 you would already have slowly caused the NIC’s buffer
>> to fill ;)
>
>So by luck I just barely made it. :)

        Yes, for full MTU-sized packets you made it, but try the same with, say, 150 byte packets:
975000 * (188/164) = 1117682.9... (the kernel-visible size is 150 + 14 = 164 bytes, while the on-the-wire equivalent is 150 + 38 = 188 bytes). That will nicely show bloated buffers in the ethernet driver, but should still work well for drivers using BQL. IMHO proper shaping requires taking the overhead into account.
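To make this easy to replay, here is a quick Python sketch (names and structure are mine, nothing from sqm-scripts) that computes the equivalent on-the-wire rate a given shaper setting produces:

  # Equivalent on-the-wire rate for a shaper that only charges for the
  # kernel-visible packet size (IP packet + 14 byte ethernet header).
  WIRE_OVERHEAD = 38    # 6+6 MACs + 2 ethertype + 4 FCS + 7 preamble + 1 SFD + 12 IFG
  KERNEL_OVERHEAD = 14  # dest MAC + src MAC + ethertype, already seen by the kernel

  def wire_rate_kbps(shaped_kbps, ip_packet_bytes, accounted_overhead=KERNEL_OVERHEAD):
      # Size the shaper charges per packet vs. the time the packet
      # really occupies on the wire.
      charged = ip_packet_bytes + accounted_overhead
      on_wire = ip_packet_bytes + WIRE_OVERHEAD
      return shaped_kbps * on_wire / charged

  print(wire_rate_kbps(975000, 1500))     # ~990456, still below 1 Gbit/s
  print(wire_rate_kbps(975000, 150))      # ~1117683, above line rate -> bloat
  print(wire_rate_kbps(975000, 150, 38))  # 975000, with overhead 24 configured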


>
>> Luckily sqm-scripts will allow you to specify any
>> additional per-packet overhead so just set this to 24 and things
>> should just work out I believe.
>
>I knew that there might need to be some overhead accounting and I
>looked 
>at Link Layer Adaption tab in the options when I set up the SQM but the
>
>info in that menu isn't quite descriptive enough for my setup.
>
>It has a box for "Ethernet with overhead, select for eg VDSL2".  I'm 
>using Ethernet but I don't know about VDSL2.  If I select it then I get
>
>a 2nd box that asks for the per packet overhead.  Even if I had tried
>to 
>fill that out then I would have gotten it wrong. :)

Fair enough, we actually multiplex two related things with that drop-down menu: whether to account for per-packet overhead at all, and whether to also account for ATM's AAL5 cell wonkiness. In retrospect we should have separated these...

In your case select "Ethernet with overhead" and manually put 24 into the per-packet overhead field, as the kernel already accounts for 14 of the total 38 bytes.
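For reference, that should end up in /etc/config/sqm roughly like this (a sketch from memory, so field names may differ slightly between sqm-scripts versions; the interface name is just a placeholder and the rates are your values from before):

  config queue 'eth1'
          option interface 'eth1'
          option download '975000'
          option upload '975000'
          option linklayer 'ethernet'
          option overhead '24'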


>
>The other option is "ATM" which I know I'm not using.
>
>Not knowing what I really should select I just left it at None.
>
>Perhaps there are too many combinations to add items for everything but
>
>a few more options and more descriptions of scenarios might be helpful.
>
>So now that I have that set at 24 is it worth re-running some of the
>tests?


I assume in your case this will not change your results a lot, as the test flows most likely used full MTU packets; the small increases in your latency under load indicate that there is not much bufferbloat left... I would still recommend using the correct per-packet overhead, just to be on the safe side...
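If you do want to re-check, re-running the same rrul test and comparing against your earlier traces should be enough, along these lines (the server name is a placeholder for your netperf host):

  flent rrul -l 60 -H netperf.example.org -t "sqm-975000-overhead-24"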

Best Regards
        Sebastian

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

