From: Sebastian Moeller <moeller0@gmx.de>
To: Richard Smith <smithbone@gmail.com>
Cc: "cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] Linksys wrt1900acs rrul traces
Date: Fri, 08 Apr 2016 15:17:51 +0200
Message-ID: <66A42333-0801-45AF-B67E-B7CFAF22FE43@gmx.de>
In-Reply-To: <57079B2F.8070607@gmail.com>

Hi Richard,
On April 8, 2016 1:51:11 PM GMT+02:00, Richard Smith <smithbone@gmail.com> wrote:
>On 04/07/2016 12:16 PM, moeller0 wrote:
>> Hi Richard,
>>
>>> For these tests I had the inbound and outbound limits set to 975000
>>> kbps. 975000 was somewhat arbitrary: I wanted it far enough below 1 Gbps
>>> that I could be sure the router was the limit, but still fast enough
>>> that I would be able to see the peak transfer rates.
>>
>> All of the following might be old news to you, but please let me
>> elaborate for others on this list (well, most folks here know way
>> more about these things than I do). I believe Gbit ethernet is
>> trickier than one would guess: the 1 Gbit rate contains some overhead
>> that one typically does not account for. Here is the equivalent
>> on-the-wire size of a full-MTU non-jumbo ethernet frame, layer "1+":
>> 1500 (payload pMTU) + 6 (dest MAC) + 6 (src MAC) + 2 (ethertype) + 4
>> (FCS) + 7 (preamble) + 1 (start of frame delimiter) + 12 (interframe
>> gap) = 1500+6+6+2+4+7+1+12 = 1538. "Equivalent" in that the
>> interframe gap is not actually transmitted but filled with silence, yet
>> it has the duration one would need for 12 bytes.
>
>I knew there was a bit of overhead but thanks for laying out the
>details
>and why it matters.
>
>> 975000 * (1538/1514) = 990455.746367 (which is still below the 1 Gbit
>> layer-1+ ceiling of GbE). At 985000 * (1538/1514) =
>> 1000614.26684 you would already be slowly filling the NIC's buffer ;)
>
>So by luck I just barely made it. :)
Yes, for full-MTU packets you made it, but try the same with, say, 150-byte packets:
975000 * (188/164) = 1117682.9... That will nicely show bloated buffers in the Ethernet driver, but should still work well for drivers using BQL. IMHO, proper shaping requires taking the per-packet overhead into account.
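In case it helps, here is the same arithmetic as a small Python sketch (just a back-of-the-envelope helper; the byte counts are the ones from the frame breakdown quoted above, the function name is mine):

# Gross (on-the-wire) rate implied by a shaper that only sees what the
# kernel accounts for (IP packet plus the 14-byte ethernet header).
ETH_WIRE_OVERHEAD = 38   # 14 header + 4 FCS + 7 preamble + 1 SFD + 12 IFG
KERNEL_ACCOUNTED  = 14   # dst MAC + src MAC + ethertype

def gross_rate_kbps(shaped_kbps, ip_packet_size):
    seen_by_shaper = ip_packet_size + KERNEL_ACCOUNTED
    on_the_wire    = ip_packet_size + ETH_WIRE_OVERHEAD
    return shaped_kbps * on_the_wire / seen_by_shaper

print(gross_rate_kbps(975000, 1500))  # ~990455.7 kbit/s, still below GbE line rate
print(gross_rate_kbps(975000, 150))   # ~1117682.9 kbit/s, well above it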
>
>> Luckily sqm-scripts will allow you to specify any
>> additional per-packet overhead so just set this to 24 and things
>> should just work out I believe.
>
>I knew that there might need to be some overhead accounting and I
>looked
>at the Link Layer Adaptation tab in the options when I set up SQM, but the
>
>info in that menu isn't quite descriptive enough for my setup.
>
>It has a box for "Ethernet with overhead, select for e.g. VDSL2". I'm
>using Ethernet but I don't know about VDSL2. If I select it then I get
>
>a 2nd box that asks for the per packet overhead. Even if I had tried
>to
>fill that out then I would have gotten it wrong. :)
Fair enough; we actually multiplex two related things with that drop-down menu: whether to account for per-packet overhead at all, and whether to also account for ATM's AAL5 cell wonkiness. In retrospect we should have separated these...
In your case select "Ethernet with overhead" and manually put 24 into the per-packet overhead field, as the kernel already accounts for 14 of the total 38 bytes.
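Spelled out, the 24 is just the total frame overhead minus what the kernel already counts; a tiny sketch of that bookkeeping (the labels are mine, the byte counts as above):

overhead_on_wire = {
    "dst MAC": 6, "src MAC": 6, "ethertype": 2,       # the 14 bytes the kernel counts
    "FCS": 4, "preamble": 7, "SFD": 1, "interframe gap": 12,
}
total_overhead     = sum(overhead_on_wire.values())   # 38
kernel_accounted   = 6 + 6 + 2                        # 14
sqm_overhead_field = total_overhead - kernel_accounted
print(sqm_overhead_field)                             # 24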
>
>The other option is "ATM" which I know I'm not using.
>
>Not knowing what I really should select I just left it at None.
>
>Perhaps there are too many combinations to add items for everything, but
>
>a few more options and more descriptions of scenarios might be helpful.
>
>So now that I have that set at 24 is it worth re-running some of the
>tests?
I assume in your case this will not change the results much, as the test flows most likely used full-MTU packets; your small latency-under-load increases show that there is not much bufferbloat left... I would still recommend using the correct per-packet overhead, to be on the safe side...
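If you want a quick way to judge whether a re-run could look different, here is a rough sketch that checks whether the shaper, as configured, can ask the NIC for more than its line rate (the function name and the nominal 1 Gbit/s figure are my own choices; the logic is just the arithmetic from above):

GIGE_LINE_RATE_KBPS = 1000000   # nominal GbE layer-1 rate

def exceeds_line_rate(shaped_kbps, ip_packet_size, overhead_field):
    # bytes the shaper accounts per packet vs. bytes the wire actually needs
    accounted = ip_packet_size + 14 + overhead_field
    on_wire   = ip_packet_size + 38
    return shaped_kbps * on_wire / accounted > GIGE_LINE_RATE_KBPS

print(exceeds_line_rate(975000, 1500, 0))   # False: full-MTU flows stayed under line rate
print(exceeds_line_rate(975000, 150, 0))    # True: small packets would have overrun the NIC
print(exceeds_line_rate(975000, 150, 24))   # False: with overhead 24 the shaper stays safe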
Best Regards
Sebastian
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.