[Bloat] sigcomm wifi

David Lang david at lang.hm
Thu Aug 21 04:30:17 EDT 2014


On Thu, 21 Aug 2014, Michael Welzl wrote:

> On 21. aug. 2014, at 08:52, Eggert, Lars wrote:
>
>> On 2014-8-21, at 0:05, Jim Gettys <jg at freedesktop.org> wrote:
>>> And what kinds of APs?  All the 1G guarantees you is that your bottleneck is in the wifi hop, and they can suffer as badly as anything else (particularly consumer home routers).
>>> 
>>> The reason why 802.11 works ok at IETF and NANOG is that:
>>>  o) they use Cisco enterprise APs, which are not badly overbuffered.
>
> I'd like to better understand this particular bloat problem:
>
> 100s of senders try to send at the same time. They can't all do that, so their 
> cards retry a fixed number of times (10 or something, I don't remember, 
> probably configurable), for which they need to have a buffer.
>
> Say, the buffer is too big. Say, we make it smaller. Then an 802.11 sender 
> trying to get its time slot in a crowded network will have to drop a packet, 
> requiring the TCP sender to retransmit the packet instead. The TCP sender will 
> think it's congestion (not entirely wrong) and reduce its window (not entirely 
> wrong either). How appropriate TCP's cwnd reduction is probably depends on how 
> "true" the notion of congestion is ... i.e. if I can buffer only one packet 
> and just don't get to send it, or it gets a CRC error ("collides" in the air), 
> then that can be seen as a pure matter of luck. Then I provoke a sender 
> reaction that's like the old story of TCP mis-interpreting random losses as a 
> sign of congestion. I think in most practical systems this old story is now a 
> myth because wireless equipment will try to buffer data for a relatively long 
> time instead of exhibiting sporadic random drops to upper layers. That is, in 
> principle, a good thing - but buffering too much has of course all the 
> problems that we know. Not an easy trade-off at all I think.

In this case the loss is a direct sign of congestion.

Remember that TCP was developed back in the days of 10base2 networks, where 
everyone on the network shared a wire and it was very possible for multiple 
senders to start transmitting on the wire at the same time, just like with 
radio.
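
To make the analogy concrete, here's a toy sketch (plain Python, purely 
illustrative; the collision probability and retry limit are numbers I made up, 
not anything from a real driver) of the classic shared-medium behavior: 
collide, back off for a random number of slots from a doubling window, and 
eventually give up, at which point the loss really is telling you the medium 
is congested.

import random

def try_transmit(max_retries=7, collision_prob=0.3):
    # Toy CSMA-style sender with binary exponential backoff, as on
    # classic shared Ethernet.  Returns the attempt number that got
    # through, or None if we gave up -- a loss that is a direct sign
    # of a congested medium.
    for attempt in range(max_retries):
        if random.random() > collision_prob:
            return attempt + 1
        slots = random.randrange(2 ** (attempt + 1))
        # real hardware would wait `slots` slot-times before retrying
    return None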

A large part of the problem with high-density wifi is that it just wasn't 
designed for that sort of environment. There are a lot of things it does that 
work great in low-density, weak-signal environments but just make the problem 
worse in high-density environments:

 - batching packets together
 - slowing down the transmit speed if you aren't getting through (some rough 
   airtime arithmetic on this follows below)
 - retrying packets that the OS has given up on (including packets whose 
   sending app the user has already closed)
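
To put rough numbers on the rate-fallback item above (Python, with an 
illustrative fallback ladder I picked for the example, and ignoring preamble, 
ACK and contention overhead):

FRAME_BITS = 1500 * 8   # one full-size frame

def airtime_us(rate_mbps):
    # microseconds on the air for one frame at the given PHY rate
    # (1 Mbit/s == 1 bit/us); ignores preamble, ACKs and contention
    return FRAME_BITS / rate_mbps

rates = [54, 24, 11, 5.5, 2, 1]   # illustrative fallback ladder
print(f"sent once at 54 Mbit/s:  {airtime_us(54):.0f} us")
print(f"retried down the ladder: {sum(airtime_us(r) for r in rates):.0f} us")

With these assumed numbers, one stubborn packet retried down the ladder eats 
roughly 100x the airtime of a clean transmission, airtime that every other 
station in a dense room also wants.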

Ideally we want the wifi layer to behave just like the wired layer: buffer 
only what's needed to get data on the air without 'dead air' (where the driver 
is waiting for the OS to give it more data). At that point, we can do the 
retries from the OS as appropriate.
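
As a sketch of what that could look like (hypothetical Python, loosely in the 
spirit of Linux's Byte Queue Limits; the names and the two-frame limit are my 
assumptions, not any real driver API):

class TxQueue:
    # A driver-side queue that admits only enough bytes to keep the
    # radio busy between tx completions; the OS keeps the rest of the
    # backlog, where it can still drop or reprioritize packets.
    def __init__(self, limit_bytes=2 * 1500):   # ~two frames in flight
        self.limit = limit_bytes
        self.inflight = 0

    def can_enqueue(self, pkt_len):
        return self.inflight + pkt_len <= self.limit

    def enqueue(self, pkt_len):
        assert self.can_enqueue(pkt_len)
        self.inflight += pkt_len

    def tx_complete(self, pkt_len):
        self.inflight -= pkt_len   # frees space; the OS refills the queue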

> I have two questions: 1) is my characterization roughly correct? 2) have 
> people investigated the downsides (negative effect on TCP) of buffering *too 
> little* in wireless equipment? (I suspect so?)  Finding where "too little" 
> begins could give us a better idea of what the ideal buffer length should 
> really be.

Too little buffering will reduce throughput as a result of unused airtime.

But at the low data rates involved, the system would have to be extremely busy 
for that unused airtime to add up to a significant amount of time if even one 
packet at a time is buffered.
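
Some rough numbers behind that claim (Python, with an assumed 1500-byte frame 
and ignoring MAC overhead):

FRAME_BITS = 1500 * 8

for rate_mbps in (1, 6, 54):
    print(f"{rate_mbps:>2} Mbit/s: {FRAME_BITS / rate_mbps:>6.0f} us on the air per frame")

At low rates a single frame occupies the air for milliseconds, while the OS 
can typically refill the queue in tens of microseconds, so a one- or 
two-packet buffer should rarely leave the air idle.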

You are also conflating the effect of driver/hardware buffering with the 
effect of the hardware doing retries.

David Lang

