[Bloat] I am unable to pinpoint the source of bufferbloat

Jonathan Morton chromatix99 at gmail.com
Sun Feb 10 11:55:50 EST 2013


If you are traffic shaping - and thus limiting traffic to a level that the
hardware can always deal with immediately - then the hardware/driver queues
will always be empty, except momentarily after a packet has been queued. So
their size doesn't matter.
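
To make that concrete, here is a minimal sketch of such a shaper on a
Linux router - assuming iproute2's tc with an HTB class feeding an
fq_codel leaf; the interface name and rate are placeholders:

#!/usr/bin/env python3
# Sketch: shape egress just below the physical uplink rate so the
# queue forms here, under AQM, instead of in the hardware/driver.
# IFACE and RATE are placeholders; requires root and iproute2.
import subprocess

IFACE = "eth0"      # WAN-facing interface (placeholder)
RATE = "9000kbit"   # ~90% of a 10 Mbit uplink (placeholder)

def tc(*args):
    subprocess.run(["tc"] + list(args), check=True)

# Clear any existing root qdisc; ignore the error if there is none.
subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"])

# HTB root with one class rate-limited below the physical link speed.
tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "htb",
   "default", "10")
tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:10",
   "htb", "rate", RATE, "ceil", RATE)

# AQM on the leaf, so the queue we now own is actively managed.
tc("qdisc", "add", "dev", IFACE, "parent", "1:10", "fq_codel")

The only detail that really matters is that the rate sits below the
physical link speed.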

There is an initiative within Linux called BQL (Byte Queue Limits), which
aims to deal with the driver-queue problem (which for a lot of hardware,
but not all, is synonymous with the hardware queue). It requires small
edits to each driver, though.
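
On a converted driver, the per-queue BQL state is visible in sysfs, so
you can watch the limit adapt. A minimal sketch, with the interface
name as a placeholder:

#!/usr/bin/env python3
# Sketch: print BQL state for each TX queue of a NIC. IFACE is a
# placeholder; the byte_queue_limits directory only exists on
# drivers that have been converted to BQL.
from pathlib import Path

IFACE = "eth0"  # placeholder

for q in sorted(Path("/sys/class/net", IFACE, "queues").glob("tx-*")):
    bql = q / "byte_queue_limits"
    if not bql.is_dir():
        print(q.name, "driver not BQL-enabled")
        continue
    limit = (bql / "limit").read_text().strip()
    inflight = (bql / "inflight").read_text().strip()
    print("%s: limit=%s bytes, inflight=%s bytes"
          % (q.name, limit, inflight))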

- Jonathan Morton
 On Feb 10, 2013 6:43 PM, "Forums1000" <forums1000 at gmail.com> wrote:

> Firstly, I can confirm that uploading to Google Drive saturates my upload
> (I checked the outgoing rate in the router); I also tried Dropbox and got
> the same result.
>
> Anyway, performing an upload from Windows 7 and then pinging the remote
> router from Windows XP added 50ms to the "unbloated" 20ms round-trip time.
> So the lowest "most represented" RTT numbers I was getting were around
> 70ms, the worst well over 100ms.
> I say "most represented" because there was also significant delay variance
> ("jitter") in the RTTs. The RTTs regularly dived under 70ms, sometimes even
> hitting an odd 30ms. There was also a large swing in the other direction,
> with regular ventures to 110-120ms. Needless to say, the balance of those
> ventures tilted towards the higher numbers.
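>
> (To put a number on "most represented", something like this rough
> sketch could summarize the distribution - host and count are
> placeholders, and it assumes a Unix-style ping with -c; Windows ping
> uses -n instead:)
>
> #!/usr/bin/env python3
> # Sketch: run ping and report median and tail RTTs, to separate the
> # "most represented" RTT from the jitter. HOST and COUNT are
> # placeholders; assumes ping output containing "time=XX".
> import re
> import statistics
> import subprocess
>
> HOST, COUNT = "192.0.2.1", 100  # placeholders
>
> out = subprocess.run(["ping", "-c", str(COUNT), HOST],
>                      capture_output=True, text=True).stdout
> rtts = sorted(float(m) for m in re.findall(r"time=([0-9.]+)", out))
> if not rtts:
>     raise SystemExit("no RTT samples parsed")
>
> print("samples:", len(rtts))
> print("min/median/max: %.1f/%.1f/%.1f ms"
>       % (rtts[0], statistics.median(rtts), rtts[-1]))
> print("90th percentile: %.1f ms" % rtts[int(0.9 * (len(rtts) - 1))])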
>
> So Windows 7, being more modern, definitely induces more bufferbloat
> (thanks to TCP window scaling :)). I'll be repeating this while running
> Wireshark, as it slipped my mind to fire it up.
>
> As a sidenote to the above:
> I still don't see how an AQM algorithm will combat the buffering in
> drivers and hardware (actually, are the issues pertaining to "drivers" and
> "hardware" distinct, or do they refer to the same thing?). I understand
> that feeding fewer packets to a device will keep the hardware buffer from
> filling up as fast as it would without AQM, but we cannot actually
> influence the size of the hardware buffer itself, can we? If not, it can
> still introduce a bottleneck that cannot be prevented...
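>
> (The closest thing I found is ethtool, which can show the NIC ring
> sizes and, on drivers that support it, shrink them - a rough sketch,
> interface and ring size being placeholders:)
>
> #!/usr/bin/env python3
> # Sketch: query and try to shrink a NIC's hardware TX ring.
> # IFACE and the ring size are placeholders; needs root, and
> # drivers without ring-resize support will refuse the -G call.
> import subprocess
>
> IFACE = "eth0"  # placeholder
>
> # Show current and maximum ring sizes.
> subprocess.run(["ethtool", "-g", IFACE], check=True)
>
> # Try to shrink the TX ring to 64 descriptors.
> subprocess.run(["ethtool", "-G", IFACE, "tx", "64"], check=False)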
>
> ------------------------------------------------------------
>
>> On Sat, Feb 9, 2013 at 2:06 PM, Forums1000 <forums1000 at gmail.com> wrote:
>>
>>> Excellent information. So an AQM algorithm will sort things out at the
>>> OS level of the router and should make things considerably better.
>>> However, from reading around on the matter, it seems the drivers for the
>>> network device and the hardware itself also contain buffers. Since Dave
>>> is developing CeroWRT (and respect for that), is there anything that can
>>> be done about those? Do we have any idea how severe the buffering in
>>> drivers and hardware is?
>>>
>>>
>> In Linux, it has gotten a lot better in the past 2 years. Work continues.
>>
>> I have some data on OS X now, and some on Windows, but not a lot.
>>
>> In APs, routers, and switches, it's not looking very good.
>>
>>
>> However, I note that the second biggest place we see the issue is at the
>> edge: home gateways, DSLAMs, and cable head-ends. Many of those use
>> software rate limiting, which generally has 1 buffer or less native to
>> it, so the underlying buffering doesn't matter.
>>
>> (Then, on top of the software rate limiters, sit big fat dumb drop-tail
>> queues. Currently. Sigh.)
>>
>>
>>
>>
>>> A little test I just performed using Windows XP indeed shows that
>>> Netalyzr is showing me a worst-case scenario:
>>>
>>>
>> Meh. Try something with window scaling. Or try a Netalyzr test from
>> some other machine at the same time you do this one.
>>
>>
>>> - a continuous ping (1 ping per second) between 2 routers under my
>>> control has an RTT of 20ms (give or take). The remote router I'm pinging
>>> sits pretty much idle and has nothing better to do than answer the ping.
>>> - uploading a large file to Google Drive (thereby saturating my uplink
>>> bandwidth) adds +-10ms of additional latency.
>>>
>>
>> I think this is an invalid assumption without actually measuring your
>> transfer rate to the gdrive. It would not surprise me if their TCP were
>> pretty sensitive to latency swings; however, they are very much on top of
>> the bufferbloat issue.
>>
>> Wouldn't mind a packet capture of that upload while doing the above test.
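>>
>> Something along these lines would do - the interface, snap length
>> and filter are placeholders:
>>
>> #!/usr/bin/env python3
>> # Sketch: capture the upload for later analysis in wireshark or
>> # tcptrace. Needs root and tcpdump; interface, snap length and
>> # filter are placeholders.
>> import subprocess
>>
>> subprocess.run([
>>     "tcpdump",
>>     "-i", "eth0",         # capture interface (placeholder)
>>     "-s", "128",          # snap length: headers are enough
>>     "-w", "upload.pcap",  # raw packets, for offline analysis
>>     "port", "443",        # match the upload flow (placeholder)
>> ], check=True)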
>>
>>
>>
>>> Sure, it varies a bit between 20 and 30ms, and goes to 35ms or even
>>> 40ms regularly. Moreover, every now and then I get a spike to 70-80ms,
>>> but that spike never lasts more than 3 pings.
>>>
>>> All in all, considerably lower bloat than the 550ms Netalyzr is
>>> indicating. In order to mimic the worst-case scenario, I'd have to
>>> transfer using UDP then?
>>>
>>
>> Just run a more modern OS....
>>
>>
>>
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt:
>> http://www.teklibre.com/cerowrt/subscribe.html
>>
>
>