[Bloat] Buffer bloat at the sender

Dave Täht d at taht.net
Fri Feb 4 13:43:30 EST 2011


Juliusz Chroboczek <jch at pps.jussieu.fr> writes:

>> Juliusz,  have you thought about the host case at all?
>
>> One of the places we're getting insane buffering is in the operating
>> systems themselves (e.g. the experiment I did with a 100Mbps
>> switch).
>
> Yes.  You have three to four layers of buffering:
>
>   (1) the device driver's buffer;
>   (2) the packet scheduler's buffer;
>   (3) TCP's buffer;
>   (4) the application's buffer.
>
> It will come as no surprise to the readers of this list that (1) and (2)
> are usually too large.  For example, (1) the ath9k driver has a buffer
> of 200 packets; and (2) the default scheduler queue is 1000 packets (!).

The ath9k driver I have has 512 buffers, organized into 10 queues for
various purposes that I don't think are actually being used for much
(we need a way to gain insight into queue usage and the qdisc
interaction), with TX_RETRIES set to 13. This is fairly current
OpenWrt head.

I've put a patch out there that reduces this to (I think) an effective
queue depth of 3, with 4 retries, throughout, and the results are thus
far amazing.
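
For reference on (2): the 1000-packet default is the interface
txqueuelen, which (if I read the kernel right) pfifo_fast uses as its
limit. A rough sketch, untested, of reading and optionally shrinking
it from C via the SIOCGIFTXQLEN/SIOCSIFTXQLEN ioctls, equivalent to
"ip link set dev eth0 txqueuelen 100":

    /* Read, and optionally set, an interface's txqueuelen. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/sockios.h>

    int main(int argc, char **argv)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0 || argc < 2)
            return 1;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);

        if (ioctl(fd, SIOCGIFTXQLEN, &ifr) < 0)
            return 1;
        printf("%s txqueuelen = %d\n", argv[1], ifr.ifr_qlen);

        if (argc > 2) {
            /* e.g. "./txqlen eth0 100" to shrink the queue */
            ifr.ifr_qlen = atoi(argv[2]);
            if (ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0)
                return 1;
        }
        close(fd);
        return 0;
    }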

>
>> My intuition is that we have to do AQM in hosts, not just routers.
>
> Hmm... I would argue that the sending host is somewhat easier than the
> intermediate router.  In the sender, the driver/packet scheduler can
> apply backpressure to the transport layer, to cause it to slow down
> without the need for the lengthy feedback loop that dropping/delaying
> a packet in an intermediate router has to rely on [1].
>
> Unfortunately, at least under Linux, most drivers do not apply
> backpressure correctly.  Markus Kittenberger has recently determined [2]
> that among b43-legacy, ath5k, ath9k and madwifi, only the first two do
> the right thing.

I was wondering about madwifi in the context of the Mesh Potato. Thank you.
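
For anyone wondering what "the right thing" looks like: correct
backpressure means the driver stops its queue when the TX ring fills
and wakes it again on completion. A minimal sketch of the pattern (a
hypothetical driver; the my_* helpers and struct are made up, the
netif_* calls are the real kernel API):

    #include <linux/netdevice.h>

    struct my_priv {
        struct net_device *ndev;
        /* ... TX ring state ... */
    };

    static netdev_tx_t my_start_xmit(struct sk_buff *skb,
                                     struct net_device *dev)
    {
        struct my_priv *priv = netdev_priv(dev);

        /* Hand the packet to the hardware TX ring. */
        my_ring_enqueue(priv, skb);

        /* Ring full: stop the qdisc from feeding us more. This is
         * the backpressure that eventually slows TCP down. */
        if (my_ring_full(priv))
            netif_stop_queue(dev);

        return NETDEV_TX_OK;
    }

    /* TX completion interrupt: the ring has drained, resume. */
    static void my_tx_complete(struct my_priv *priv)
    {
        my_ring_reap(priv);

        if (netif_queue_stopped(priv->ndev) && my_ring_has_room(priv))
            netif_wake_queue(priv->ndev);
    }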

I have been looking over the mq qdisc; it's not clear how well it's
being used.
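
As a starting point for getting that insight, something like this
rtnetlink dump (a rough sketch I haven't tested; roughly what "tc
qdisc show" does) could be extended to poll the TCA_STATS/TCA_STATS2
attributes and watch per-queue behavior over time:

    #include <stdio.h>
    #include <string.h>
    #include <net/if.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>
    #include <linux/rtnetlink.h>

    int main(void)
    {
        struct {
            struct nlmsghdr nlh;
            struct tcmsg tcm;
        } req;
        char buf[32768], name[IF_NAMESIZE];
        int fd, len;

        fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
        if (fd < 0)
            return 1;

        /* Ask the kernel to dump every qdisc on every interface. */
        memset(&req, 0, sizeof(req));
        req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct tcmsg));
        req.nlh.nlmsg_type = RTM_GETQDISC;
        req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
        req.tcm.tcm_family = AF_UNSPEC;
        if (send(fd, &req, req.nlh.nlmsg_len, 0) < 0)
            return 1;

        while ((len = recv(fd, buf, sizeof(buf), 0)) > 0) {
            struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

            for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
                struct tcmsg *t;
                struct rtattr *rta;
                int alen;

                if (nlh->nlmsg_type == NLMSG_DONE)
                    return 0;
                if (nlh->nlmsg_type != RTM_NEWQDISC)
                    continue;

                t = NLMSG_DATA(nlh);
                rta = (struct rtattr *)((char *)t +
                          NLMSG_ALIGN(sizeof(*t)));
                alen = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*t));

                /* TCA_KIND carries the qdisc name ("mq",
                 * "pfifo_fast", ...). */
                for (; RTA_OK(rta, alen); rta = RTA_NEXT(rta, alen))
                    if (rta->rta_type == TCA_KIND)
                        printf("%s: %s\n",
                               if_indextoname(t->tcm_ifindex, name) ?
                                   name : "?",
                               (char *)RTA_DATA(rta));
            }
        }
        return 0;
    }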

>
> --Juliusz
>
> [1] Now why did we give up on source quench again?
> [2] http://article.gmane.org/gmane.network.olsr.user/4264

-- 
Dave Taht
http://nex-6.taht.net


