General list for discussing Bufferbloat
From: Michael Welzl <michawe@ifi.uio.no>
To: David Lang <david@lang.hm>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] sigcomm wifi
Date: Sat, 23 Aug 2014 01:07:11 +0200
Message-ID: <4C1661D0-32C6-48E7-BAE6-60C98D7B2D69@ifi.uio.no>
In-Reply-To: <alpine.DEB.2.02.1408210121020.19685@nftneq.ynat.uz>


On 21. aug. 2014, at 10:30, David Lang <david@lang.hm> wrote:

> On Thu, 21 Aug 2014, Michael Welzl wrote:
> 
>> On 21. aug. 2014, at 08:52, Eggert, Lars wrote:
>> 
>>> On 2014-8-21, at 0:05, Jim Gettys <jg@freedesktop.org> wrote:
>>>> And what kinds of APs?  All the 1G guarantees you is that your bottleneck is in the wifi hop, and they can suffer as badly as anything else (particularly consumer home routers).
>>>> The reason why 802.11 works ok at IETF and NANOG is that:
>>>> o) they use Cisco enterprise AP's, which are not badly over buffered.
>> 
>> I'd like to better understand this particular bloat problem:
>> 
>> 100s of senders try to send at the same time. They can't all do that, so their cards retry a fixed number of times (10 or something, I don't remember, probably configurable), for which they need to have a buffer.
>> 
>> Say the buffer is too big, and we make it smaller. Then an 802.11 sender trying to get its slot in a crowded network will have to drop a packet, requiring the TCP sender to retransmit it instead. The TCP sender will take this as congestion (not entirely wrong) and reduce its window (not entirely wrong either). How appropriate TCP's cwnd reduction is probably depends on how "true" the notion of congestion is - i.e., if I can buffer only one packet and just don't get to send it, or it gets a CRC error ("collides" in the air), then the loss is a pure matter of luck, and I provoke a sender reaction that's like the old story of TCP misinterpreting random losses as a sign of congestion.
>> 
>> I think in most practical systems this old story is now a myth, because wireless equipment will buffer data for a relatively long time instead of exposing sporadic random drops to upper layers. That is, in principle, a good thing - but buffering too much has, of course, all the problems that we know. Not an easy trade-off at all, I think.
> 
> in this case the loss is a direct sign of congestion.

By "this case" I was talking about different buffer lengths. E.g., take the minimal buffer that would just function, and set retransmissions to 0. Then a packet loss is a pretty random matter: just because you and I contended doesn't mean that the net is truly "overloaded". So my point is that the buffer creates a continuum from "random loss" to "actual congestion" - we want a loss to mean "actual congestion", but how large should the buffer be to meaningfully convey that?
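To make that concrete, here is a back-of-the-envelope sketch (my own illustration, not anything measured in the thread): if each transmission attempt collides with probability p, independently, and the card retries up to R times, the frame is lost outright with probability roughly p^(R+1). The retry budget alone already moves a loss along that continuum, from "bad luck" toward "persistent contention":

```python
def frame_loss_prob(p_collision, retries):
    """Probability a frame is dropped outright, assuming every
    transmission attempt collides independently with p_collision
    (a simplification -- real attempts are not independent)."""
    return p_collision ** (retries + 1)

# Even under heavy contention (30% per-attempt collision odds),
# a handful of retries makes an outright loss very rare:
for r in (0, 3, 7):
    print(r, frame_loss_prob(0.3, r))
```

With retries = 0 almost every collision surfaces as a TCP loss; with 7 retries only about 0.007% of frames do - which is why the retry limit (and the buffer needed to ride out those retries) decides how "truthful" a loss signal is.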


> remember that TCP was developed back in the days of 10base2 networks where everyone on the network was sharing a wire and it was very possible for multiple senders to start transmitting on the wire at the same time, just like with radio.

Cable or wireless: is one such occurrence "congestion"?
I.e., is halving the cwnd really the right response to that sort of "congestion" (contention, really)?
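One way to quantify what's at stake: the well-known Mathis et al. approximation for TCP Reno's steady-state throughput, rate ≈ (MSS/RTT)·sqrt(3/2)/sqrt(p). If contention losses inflate the perceived loss rate p, throughput falls with the square root of that inflation even though no queue anywhere is full (numbers below are purely illustrative, not from the thread):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """TCP Reno steady-state throughput, Mathis et al. approximation:
    rate ~ (MSS/RTT) * sqrt(3/2) / sqrt(p), in bytes per second."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# A path whose perceived loss rate is inflated from 0.01% to 1%
# by pure contention loses 10x its throughput:
clean = mathis_throughput(1460, 0.05, 1e-4)
contended = mathis_throughput(1460, 0.05, 1e-2)
print(clean / contended)  # ratio ~ 10
```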


> A large part of the problem with high-density wifi is that it just wasn't designed for that sort of environment. There are a lot of things it does that work great in low-density, weak-signal environments but just make the problem worse in high-density environments:
> 
> batching packets together
> slowing down the transmit speed if you aren't getting through

Well... this *should* only happen when there's an actual physical signal-quality degradation, not just collisions. At least minstrel does quite a good job of ensuring that, most of the time.
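A toy model of why that works (not the real minstrel code; the rates and probabilities are invented): minstrel ranks rates by expected throughput over an EWMA-smoothed delivery probability, and collisions depress every rate's delivery probability roughly equally, so contention alone doesn't change which rate wins - only genuine per-rate signal degradation does:

```python
def ewma(old, sample, weight=0.75):
    # minstrel-style smoothing of a rate's delivery probability
    return weight * old + (1 - weight) * sample

def best_rate(prob_by_rate):
    # pick the rate with the highest *expected throughput*
    # (rate x delivery probability), not the most reliable rate
    return max(prob_by_rate, key=lambda r: r * prob_by_rate[r])

# Collisions hit all rates about equally, so scaling every delivery
# probability down leaves the ranking -- and the chosen rate -- intact:
probs = {54.0: 0.6, 24.0: 0.9, 6.0: 1.0}      # Mbit/s -> P(delivery)
collided = {r: p * 0.5 for r, p in probs.items()}
print(best_rate(probs), best_rate(collided))  # same rate both times
```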


> retries of packets that the OS has given up on (including when the user has closed the app that sent them)
> 
> Ideally we want the wifi layer to be just like the wired layer: buffer only what's needed to get data on the air without 'dead air' (where the driver is waiting for the OS to give it more data). At that point, we can do the retries from the OS as appropriate.
> 
>> I have two questions: 1) is my characterization roughly correct? 2) have people investigated the downsides (negative effect on TCP) of buffering *too little* in wireless equipment? (I suspect so?)  Finding where "too little" begins could give us a better idea of what the ideal buffer length should really be.
> 
> too little buffering will reduce the throughput as a result of unused airtime.

So that's a function of at least: 1) the incoming traffic rate, and 2) the number of retries, which is itself a function of MAC behavior and the number of other senders trying.
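As a sketch of that function (all parameters hypothetical): while the head-of-line frame monopolizes the interface for up to (max_retries + 1) attempts, the buffer has to absorb roughly arrival rate × worst-case service time packets:

```python
def buffer_needed(arrival_pps, attempt_time_s, max_retries):
    """Packets that pile up while the head-of-line frame burns
    through its full retry budget (worst case; all parameters
    hypothetical, for illustration only)."""
    worst_case_service_s = attempt_time_s * (max_retries + 1)
    return arrival_pps * worst_case_service_s

# e.g. 1000 pkt/s arriving, 2 ms of airtime per attempt, 9 retries:
print(buffer_needed(1000, 0.002, 9))  # -> ~20 packets
```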


> But at the low data rates involved, the system would have to be extremely busy for the dead air to add up to a significant amount of time if even one packet at a time is buffered.



> You are also conflating the effect of the driver/hardware buffering with it doing retries.

Because of the "function" I wrote above: the more you retry, the more you need to buffer when traffic keeps arriving, because you're stuck trying to send a frame again.
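That coupling shows up in a toy slotted-time queue simulation (my own sketch, invented parameters): with retransmissions disabled the driver's queue never builds up - every contended frame surfaces immediately as a loss - while a generous retry budget under heavy contention forces the queue, i.e. the buffer, to grow:

```python
import random

def peak_queue(slots, arrival_p, collide_p, max_retries, seed=7):
    """Toy slotted-time model: each slot one packet may arrive; the
    head frame needs a collision-free slot to depart and is retried
    up to max_retries times before being dropped. All parameters
    invented for illustration."""
    rng = random.Random(seed)
    queue = retries = peak = 0
    for _ in range(slots):
        if rng.random() < arrival_p:
            queue += 1
        peak = max(peak, queue)
        if queue:
            if rng.random() < collide_p:      # attempt collided
                retries += 1
                if retries > max_retries:     # retry budget exhausted
                    queue -= 1                # frame dropped
                    retries = 0
            else:                             # frame got on the air
                queue -= 1
                retries = 0
    return peak

# No retries: every contended frame leaves at once (as a loss), so the
# queue stays tiny. A 10-retry budget under 90% collisions needs a much
# deeper buffer to avoid dropping the continuously arriving traffic:
print(peak_queue(2000, 0.5, 0.9, 0), peak_queue(2000, 0.5, 0.9, 10))
```

Whether that extra occupancy is worth it is exactly the retry-vs.-drop trade-off being discussed here.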

What am I getting wrong? This seems to be just the conversation I was hoping to have (so thanks!) - I'd like to figure out if there's a fault in my logic.

Cheers,
Michael


