Date: Mon, 10 Aug 2015 06:48:04 -0700
From: Simon Barber
To: David Lang, dpreed@reed.com
Cc: make-wifi-fast@lists.bufferbloat.net, cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

On 8/7/2015 3:31 PM, David Lang wrote:
> On Fri, 7 Aug 2015, dpreed@reed.com wrote:
>
>> On Friday, August 7, 2015 4:03pm, "David Lang" said:
>>
>>> Wifi is the only place I know of where the transmit bit rate is
>>> going to vary depending on the next hop address.
>>
>> This is an interesting core issue. The question is whether
>> additional queueing helps or hurts this, and whether the MAC protocol
>> of WiFi deals well or poorly with this issue. It is clear that this
>> is a peculiarly WiFi'ish issue.
>>
>> It's not clear that the best transmit rate remains stable for very
>> long, or even how to predict the "best rate" for the next station,
>> since the next station is one you may not have transmitted to for a
>> long time, so your "best rate" information is old.
>
> I wasn't even talking about the stability of the data rate to one
> destination. I was talking about the fact that you may have a 1.3Gb
> connection to system A (a desktop with a -ac 3x3 radio) and a 1Mb
> connection to machine B (an IoT 802.11b thermostat).
>
> trying to do BQL across 3+ orders of magnitude in speed isn't going to
> work without taking the speed into account.
>
> Even if all you do is estimate with the last known speed, you will do
> better than ignoring the speed entirely.

I have been a proponent of Time Queue Limits (TQL) for wifi for a long
time!

Simon

> If the wifi can 'return' data to the queue when the transmission
> fails, it can then fetch less data when it 're-transmits' the data at
> a lower speed.
>
>> Queueing makes information about the channel older, by binding it
>> too early. Sending longer frames means retransmitting longer frames
>> when they don't get through, rather than agilely picking a better
>> rate after a few bits.
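To make the time-queue-limit idea above concrete, here is a minimal sketch in C,
assuming a hypothetical per-station record of the last rate that worked. The
struct and function names (sta_txq, tql_may_enqueue) are made up for
illustration, not any existing driver API:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TQL_TARGET_USEC 2000   /* aim for roughly 2 ms of queued airtime per station */

struct sta_txq {
    uint64_t last_rate_bps;    /* last rate that got an ACK, e.g. 1e6 .. 1.3e9 */
    uint32_t queued_bytes;     /* bytes currently queued for this station */
};

/* Estimated airtime (usec) needed to drain what is already queued. */
static uint64_t queued_airtime_usec(const struct sta_txq *q)
{
    if (q->last_rate_bps == 0)
        return UINT64_MAX;     /* no estimate yet: be conservative */
    return (uint64_t)q->queued_bytes * 8 * 1000000 / q->last_rate_bps;
}

/* Admit another frame only while the queued airtime stays under the target.
 * A plain byte limit would let the 1 Mb/s station queue the same number of
 * bytes as the 1.3 Gb/s one, i.e. roughly 1000x the latency. */
static bool tql_may_enqueue(const struct sta_txq *q)
{
    return queued_airtime_usec(q) < TQL_TARGET_USEC;
}

int main(void)
{
    struct sta_txq fast = { .last_rate_bps = 1300000000ULL, .queued_bytes = 64000 };
    struct sta_txq slow = { .last_rate_bps = 1000000ULL,    .queued_bytes = 1500  };

    printf("fast station: %llu usec queued, admit more: %d\n",
           (unsigned long long)queued_airtime_usec(&fast), tql_may_enqueue(&fast));
    printf("slow station: %llu usec queued, admit more: %d\n",
           (unsigned long long)queued_airtime_usec(&slow), tql_may_enqueue(&slow));
    return 0;
}

With a plain byte limit both stations would be allowed the same backlog;
scaling the limit by the last known rate keeps the slow station's queue three
orders of magnitude shorter in bytes, so both drain in roughly the same time.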
> As I understand wifi, once a transmission starts, it must continue at
> that same data rate; it can't change mid-transmission (and there would
> be no way of getting feedback in the middle of a transmission to know
> that it would need to change).
>
>> The MAC protocol really should give the receiver some opportunity to
>> control the rate of the next packet it gets (which it can do because
>> it can measure the channel from the transmitter to itself, by
>> listening to prior transmissions). Or at least to signal channel
>> changes that might require a new signalling rate.
>>
>> This suggests that a transmitter might want to "warn" a receiver that
>> some packets will be coming its way, so the receiver can preemptively
>> change the desired rate. Thus, perhaps an RTS-CTS like mechanism can
>> be embedded in the MAC protocol, which requires that the device "look
>> ahead" at the packets it might be sending.
>
> the recipient will receive a signal at any data rate, you don't have
> to tell it ahead of time what rate is going to be sent. If it's being
> sent with a known encoding, it will be decoded.
>
> The sender picks the rate based on a number of things:
>
> 1. what the other end said they could do based on the mode that they
> are connected with (b vs g vs n vs bonded n vs ac vs 2x2 ac etc)
>
> 2. what has worked in the past (with failed transmissions resulting
> in dropping the rate)
>
> there may be other data, like last known signal strength, in the mix
> as well.
>
>> On the other hand, that only works if the transmitter deliberately
>> congests itself so that it has a queue built up to look at.
>
> no, the table of associated devices keeps track of things like the
> last known signal strength, connection mode, etc. no congestion needed.
>
>> The tradeoffs are not obvious here at all. On the other hand, one
>> could do something much simpler - just have the transmitter slow down
>> to the worst-case rate required by any receiving system.
>
> that's 1Mb/sec. This is the rate used for things like SSID broadcasts.
>
> Once a system connects, you know from the connection handshake what
> speeds could work. no need to limit yourself to the minimum that they
> all can know at that point.
>
>> As the number of stations in range gets larger, though, it seems
>> unlikely that "batching" multiple packets to the same destination is
>> a good idea at all - because to achieve that, one must have
>> n_destinations * batch_size chunks of data queued in the system as a
>> whole, and that gets quite large. I suspect it would be better to
>> find a lower level way to just keep the packets going out as fast as
>> they arrive, so no clog occurs, and to slow down the stuff at the
>> source as quickly as possible.
>
> no, no, no
>
> you are falling into the hardware designer trap that we just talked
> about :-)
>
> you don't wait for the buffers to fill and always send full buffers;
> you opportunistically send data up to the max size.
>
> you do want to send multiple packets if you have them waiting, because
> if you can send 10 packets to machine A and 10 packets to machine B in
> the time that it would take to send one packet to A, one packet to B,
> a second packet to A and a second packet to B, you have a substantial
> win for both A and B at the cost of very little latency for either.
>
> If there is so little traffic that sending the packets out one at a
> time doesn't generate any congestion, then good, do that [1], but when
> you max out the airtime, getting more data through in the same amount
> of airtime by sending larger batches is a win.
>
> [1] if you are trying to share the same channel with others, this may
> be a problem as it uses more airtime to send the same amount of data
> than always batching. But this is a case of less than optimal network
> design ;-)
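To put rough numbers on the batching argument above, a small back-of-the-envelope
calculation in C. The per-burst overhead and rate figures are assumptions for
illustration only:

#include <stdio.h>

#define OVERHEAD_USEC 100.0   /* contention + preamble + (block) ack, paid once per burst */
#define FRAME_BYTES   1500.0

static double airtime_usec(double rate_mbps, int frames_per_burst, int total_frames)
{
    double per_frame_usec = FRAME_BYTES * 8.0 / rate_mbps;   /* payload time at this rate */
    int bursts = (total_frames + frames_per_burst - 1) / frames_per_burst;
    return bursts * OVERHEAD_USEC + total_frames * per_frame_usec;
}

int main(void)
{
    /* 20 frames to a 300 Mb/s station: one frame per transmission vs. bursts of 10 */
    printf("one at a time: %.0f usec\n", airtime_usec(300.0, 1, 20));
    printf("bursts of 10:  %.0f usec\n", airtime_usec(300.0, 10, 20));
    return 0;
}

With these assumed numbers the batched case uses roughly a third of the airtime
for the same 20 frames, because the fixed per-transmission cost dominates once
the data rate is high; that is the win being described here.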
>> [one should also dive into the reason for maintaining variable rates
>> - multipath to a particular destination may require longer symbols
>> for decoding without ISI. And when multipath is involved, you may
>> have to retransmit at a slower rate. There's usually not much "noise"
>> at the receiver compared to the multipath environment. (one of the
>> reasons why mesh can be a lot better is that shorter distances have
>> much less multipath effect, so you can get higher symbol rates by
>> going multi-hop, and of course higher symbol rates compensate for
>> more airtime occupied by a packet due to repeating).]
>
> distance, interference, noise, etc are all variable in wifi. As a
> result, you need to adapt.
>
> The problem is that the adaptation is sometimes doing the wrong thing.
>
> simplifying things a bit:
>
> If your data doesn't get through at rate A, is the right thing to drop
> to rate A/2 and re-transmit?
>
> If the reason it didn't go through is that the signal is too weak for
> the rate A encoding, then yes.
>
> If the reason it didn't go through is that your transmission was
> stepped on by something you can't hear (and can't hear you), but the
> recipient can hear, then slowing down means that you take twice the
> airtime to get the message through, and you now have twice the chance
> of being stepped on again. Repeat and you quickly get to everyone
> broadcasting at low rates and nothing getting through.
>
> This is the key reason that dense wifi networks 'fall off the cliff'
> when they hit saturation: the backoff that is entirely correct for
> weak-signal, low-usage situations is entirely wrong in dense
> environments.
>
> David Lang
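As a footnote to the "wrong backoff" point, a toy model of what happens when
loss is collision-driven rather than signal-driven. The interferer activity
constant is an arbitrary assumption, and real collisions are burstier than this
simple exponential model, so the numbers are only illustrative:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double frame_bits = 1500.0 * 8.0;        /* one full-size frame */
    const double interferer_per_usec = 0.0005;     /* assumed hidden-node activity */

    for (double rate_mbps = 54.0; rate_mbps >= 6.0; rate_mbps /= 3.0) {
        double airtime  = frame_bits / rate_mbps;              /* usec per attempt */
        double p_coll   = 1.0 - exp(-interferer_per_usec * airtime);
        double expected = airtime / (1.0 - p_coll);            /* usec per delivered frame */
        printf("%4.0f Mb/s: %5.0f usec on air, P(collision) %.2f, "
               "expected %6.0f usec per delivered frame\n",
               rate_mbps, airtime, p_coll, expected);
    }
    return 0;
}

Each step down in rate lengthens the time on air, which raises the chance of
being stepped on again, so the expected airtime per delivered frame grows
faster than linearly. Dropping the rate only helps when the loss really was a
weak-signal loss, which is exactly the saturation cliff described above.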