Combating wireless bufferbloat while maximizing aggregation

Dave Täht d at taht.net
Mon Feb 14 12:18:58 EST 2011


Just forwarding this thread to the list, in the hope that it moves
there, or to Linux-wireless, and out of just our mailboxes.

Nathaniel Smith <njs at pobox.com> writes:

> On Sun, Feb 13, 2011 at 5:34 PM, Felix Fietkau <nbd at openwrt.org> wrote:
>> Hi Nathaniel, Dave,
>
> Hi Felix,
>
>> I'm currently trying to work out a way to combat the bufferbloat issue
>> in ath9k, while still ensuring that aggregation does its job to reduce
>> airtime utilization properly.
>
> Excellent! I'm not sure I have any particularly useful insights to
> give -- my day job is as a linguist, not a network engineer :-) -- but
> I'll throw out some quick thoughts and if they're useful, great, and
> if not, I won't feel bad.
>
>> The nice thing about ath9k compared to Intel drivers is that all of the
>> aggregation related queueing is done inside the driver instead of some
>> opaque hardware queue or firmware. That allows me to be more selective
>> in which packets to drop and which ones to keep.
>
> This is the first place I'm confused. Why would you drop packets
> inside the driver? Shouldn't dropping packets be the responsibility of
> the Qdisc feeding your driver, since that's where all the smart AQM
> and QoS and user-specified-policy knobs live? My understanding is that
> the driver's job is just to take the minimum number of packets at a
> time (consistent with throughput, etc.) from the Qdisc and send them
> on. Or are you talking about dropping in the sense of spending more
> effort on driver-level retransmit in some cases than others?
>
> For that I have a crazy idea: what if the driver took each potentially
> retransmittable packet and handed it *back* to the Qdisc, who then
> could apply policy to send it to the back of the queue, jump it to the
> front of the queue for immediate retransmission, throw it away if
> higher priority traffic has arrived and the queue is full now, etc.
> You'd probably need to add some API to tell the Qdisc that the packet
> you want to enqueue has already waited once (I imagine the default
> dumb Qdisc would want to put such packets at the head of the queue).
> Perhaps also some way to give up on a packet if it's
> waited "too long" (but then again, perhaps not!). But as I think about
> this idea it does grow on me.
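The hand-back idea can be sketched as a toy simulation (Python; `PolicyQueue`, `requeue`, and the priority fields are all hypothetical illustrations, not the real mac80211/Qdisc API):

```python
from collections import deque

HIGH, NORMAL = 0, 1  # hypothetical priority levels

class PolicyQueue:
    """Toy Qdisc: a bounded FIFO that lets the driver hand back
    packets whose transmission failed and that may deserve a retry."""
    def __init__(self, limit):
        self.q = deque()
        self.limit = limit

    def enqueue(self, pkt):
        if len(self.q) >= self.limit:
            return False              # tail-drop when full
        self.q.append(pkt)
        return True

    def requeue(self, pkt):
        """Driver hands a failed packet back.  Example policy: retry
        it ahead of fresh traffic, unless the queue has meanwhile
        filled with higher-priority packets -- then drop it."""
        if len(self.q) >= self.limit and any(p["prio"] == HIGH for p in self.q):
            return False              # give way to high-priority traffic
        self.q.appendleft(pkt)        # jump to the head for a fast retry
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = PolicyQueue(limit=4)
for i in range(3):
    q.enqueue({"seq": i, "prio": NORMAL})
pkt = q.dequeue()                     # driver takes seq 0, tx fails...
q.requeue(pkt)                        # ...and hands it back
assert q.dequeue()["seq"] == 0        # the retried packet goes out first
```

The `requeue` path is where the user-specified policy knobs would live; a smarter Qdisc could age packets handed back repeatedly and eventually give up on them.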
>
>> For aggregation I would like to allow at least the maximum number of
>> packets that can fit into one A-MPDU, which depends on the selected
>> rate. Since wireless driver queueing will really only have an effect
>> when we're running short on airtime, we need to make sure that we reduce
>> airtime waste caused by PHY headers, interframe spacing, etc.
>> A-MPDU is a very neat way to do that...
>
> If sending N packets is as cheap (in latency terms) as sending 1, then
> I don't see how queueing up N packets can hurt any!
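The airtime saving being discussed is easy to quantify. With illustrative numbers (assumed for the example, not real 802.11n constants), sending N frames separately pays the PHY preamble, interframe spacing, and ack cost N times, while one A-MPDU pays it roughly once:

```python
# Illustrative per-transmission overheads (microseconds); real 802.11n
# values depend on preamble type, rate, and access category.
PHY_AND_IFS_US = 60.0   # assumed preamble + interframe spacing
ACK_US = 40.0           # assumed (block-)ack exchange
PAYLOAD_US = 25.0       # assumed airtime per 1500-byte MPDU at a high MCS

def airtime_us(n_frames, aggregated):
    """Total airtime for n_frames, with or without A-MPDU aggregation."""
    if aggregated:
        # one preamble + one block-ack covers the whole aggregate
        return PHY_AND_IFS_US + n_frames * PAYLOAD_US + ACK_US
    return n_frames * (PHY_AND_IFS_US + PAYLOAD_US + ACK_US)

print(airtime_us(32, aggregated=False))  # 32 * 125 = 4000.0 us
print(airtime_us(32, aggregated=True))   # 60 + 800 + 40 = 900.0 us
```

Under these assumed numbers a 32-frame aggregate uses well under a quarter of the airtime of 32 individual frames, which is why queueing a full A-MPDU's worth of packets costs so little extra latency per packet.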
>
> The iwlwifi patches I just sent do the dumbest possible fix, of making
> the tx queue have a fixed latency instead of a fixed number of
> packets. I found this attractive because I figured I wasn't smart
> enough to anticipate all the different things that might affect
> transmission rate, so better to just measure what was happening and
> adapt. In principle, if A-MPDU is in use, and that lets us send more
> packets for the price of one, then this approach would notice that
> reflected in our packet throughput and the queue size should increase
> to match.
>
> Obviously this could break if the queue size ever dropped too low --
> you might lose throughput because of the smaller queue size, and then
> that would lock in the small queue size, causing loss of throughput...
> but I don't see any major downsides to just setting a minimum
> allowable queue size, so long as that minimum is set accurately.
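The fixed-latency adaptation, with the minimum-size floor suggested above to avoid the lock-in problem, might look roughly like this (a sketch under assumed names and constants, not the actual iwlwifi patch):

```python
TARGET_LATENCY_MS = 2.0   # assumed latency budget for the tx queue
MIN_LIMIT = 32            # assumed floor, e.g. one full A-MPDU's worth
MAX_LIMIT = 1024          # assumed hardware ceiling

def update_queue_limit(pkts_completed, elapsed_ms, ewma_rate=None, alpha=0.25):
    """Adapt the tx queue limit so that, at the measured drain rate,
    a full queue holds about TARGET_LATENCY_MS worth of packets.
    Returns (new_limit, smoothed_rate)."""
    rate = pkts_completed / elapsed_ms            # packets/ms, measured
    if ewma_rate is not None:                     # smooth bursty samples
        rate = alpha * rate + (1 - alpha) * ewma_rate
    limit = int(rate * TARGET_LATENCY_MS)
    # Clamp from below so a temporarily small queue can't throttle
    # throughput and then lock itself in at the lower rate.
    return max(MIN_LIMIT, min(limit, MAX_LIMIT)), rate
```

If aggregation lets 64 packets complete in 2 ms, the limit grows to match; if the rate collapses, the limit shrinks but never below `MIN_LIMIT`, so there is always enough queue to refill a full aggregate and rediscover the higher rate.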
>
> In fact, my intuition is that the only way to improve on just
> queueing up a full A-MPDU aggregate would be to wait until
> *just before* your transmission time slot rolls around and *then*
> queue it up. If you get to transmit
> every K milliseconds, and you always refill your queue immediately
> after transmitting, then in the worst case a high-priority packet
> might have to wait 2*K ms (K ms sitting at the head of the Qdisc
> waiting for you to open your queue, then another K ms in the driver
> waiting to be transmitted). This worst case drops to K ms if you
> always refill immediately before transmitting. But the possible gains
> here are bounded by whatever uncertainty you have about the upcoming
> transmission time, scheduling jitter, and K. I don't know what any of
> those constants look like in practice.
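The 2*K-versus-K bound can be checked with a toy timeline (Python; `K` and the scheduling model are assumptions for the example, not measured values):

```python
import math

K = 10.0      # assumed ms between transmit opportunities
EPS = 1e-9    # "a hair after" the refill

def worst_wait(refill_offset):
    """Worst-case delay with queue refills at n*K + refill_offset and
    transmits at n*K.  refill_offset -> 0 models 'refill right after
    transmitting'; refill_offset -> K models 'refill just before the
    next transmit'."""
    # Worst arrival: just after a refill, so the packet misses it
    # and sits at the head of the Qdisc until the next one.
    arrival = refill_offset + EPS
    refill = K + refill_offset          # next refill it can catch
    # It then leaves at the first transmit slot after that refill
    # (a refill coinciding with a slot misses that slot here).
    tx = (math.floor(refill / K) + 1) * K
    return tx - arrival

print(round(worst_wait(0.0), 3))        # refill-after-tx: ~2*K = 20.0 ms
print(round(worst_wait(K - 0.001), 3))  # refill-before-tx: ~K = 10.001 ms
```

As the text says, the achievable gain shrinks with whatever slack you must leave for uncertainty about the next transmit time: pushing `refill_offset` toward `K` only helps to the extent the slot timing is predictable.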
>
> Well, that's my brain dump for now. Make of it what you will!
> -- Nathaniel

-- 
Dave Taht
http://nex-6.taht.net
