[Make-wifi-fast] [Cerowrt-devel] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

Simon Barber simon at superduper.net
Mon Aug 10 09:48:04 EDT 2015



On 8/7/2015 3:31 PM, David Lang wrote:
> On Fri, 7 Aug 2015, dpreed at reed.com wrote:
>
>> On Friday, August 7, 2015 4:03pm, "David Lang" <david at lang.hm> said:
>>>
>>
>>> Wifi is the only place I know of where the transmit bit rate is 
>>> going to vary
>>> depending on the next hop address.
>>
>>
>> This is an interesting core issue.  The question is whether 
>> additional queueing helps or hurts this, and whether the MAC protocol 
>> of WiFi deals well or poorly with this issue.  It is clear that this 
>> is a peculiarly WiFi'ish issue.
>>
>> It's not clear that the best transmit rate remains stable for very 
>> long, or even how to predict the "best rate" for the next station 
>> since the next station is one you may not have transmitted to for a 
>> long time, so your "best rate" information is old.
>
> I wasn't even talking about the stability of the data rate to one 
> destination. I was talking about the fact that you may have a 1.3Gb 
> connection to system A (a desktop with a -ac 3x3 radio) and a 1Mb 
> connection to machine B (an IoT 802.11b thermostat).
>
> trying to do BQL (byte queue limits) across 3+ orders of magnitude in 
> speed isn't going to work without taking the speed into account.
>
> Even if all you do is estimate with the last known speed, you will do 
> better than ignoring the speed entirely.
I have been a proponent of Time Queue Limits (TQL) for wifi for a long time!
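The core of the idea is to cap the queue by estimated airtime rather than by 
bytes, using the last rate that worked for that station. A minimal sketch of 
what that could look like (the names, the 2 ms budget, and the crude airtime 
estimate are purely illustrative, not from any real driver):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative time-based queue limit: admit a frame only while the
     * airtime already queued for this station stays under a fixed budget. */
    struct sta_queue {
        uint64_t queued_airtime_ns;   /* estimated airtime of frames queued */
        uint64_t last_rate_bps;       /* last rate that worked (assumed nonzero) */
    };

    #define TQL_BUDGET_NS 2000000ULL  /* ~2 ms of airtime per station (arbitrary) */

    /* Crude airtime estimate: payload bits over the last known rate,
     * ignoring preamble, ACKs and retries. */
    static uint64_t frame_airtime_ns(const struct sta_queue *q, uint32_t bytes)
    {
        return (uint64_t)bytes * 8u * 1000000000ULL / q->last_rate_bps;
    }

    static bool tql_can_enqueue(const struct sta_queue *q, uint32_t bytes)
    {
        return q->queued_airtime_ns + frame_airtime_ns(q, bytes) <= TQL_BUDGET_NS;
    }

The 802.11b thermostat then gets the same few milliseconds of airtime as the 
-ac desktop, instead of the same number of bytes.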

Simon

>
> If the wifi can 'return' data to the queue when the transmission 
> fails, it can then fetch less data when it 're-transmits' the data at 
> a lower speed.
>
>>  Queueing makes information about the channel older, by binding it 
>> too early.  Sending longer frames means retransmitting longer frames 
>> when they don't get through, rather than agilely picking a better 
>> rate after a few bits.
>
> As I understand wifi, once a transmission starts, it must continue at 
> that same data rate; it can't change mid-transmission (and there would 
> be no way of getting feedback in the middle of a transmission to know 
> that it would need to change).
>
>> The MAC protocol really should give the receiver some opportunity to 
>> control the rate of the next packet it gets (which it can do because 
>> it can measure the channel from the transmitter to itself, by 
>> listening to prior transmissions).  Or at least to signal channel 
>> changes that might require a new signalling rate.
>>
>> This suggests that a transmitter might want to "warn" a receiver that 
>> some packets will be coming its way, so the receiver can preemptively 
>> change the desired rate.  Thus, perhaps an RTS-CTS like mechanism can 
>> be embedded in the MAC protocol, which requires that the device "look 
>> ahead" at the packets it might be sending.
>
> the recipient will receive a signal at any data rate; you don't have 
> to tell it ahead of time what rate is going to be sent. If it's being 
> sent with a known encoding, it will be decoded.
>
> The sender picks the rate based on a number of things:
>
> 1. what the other end said they could do based on the mode that they 
> are connected with (b vs g vs n vs bonded n vs ac vs 2x2 ac, etc.)
>
> 2. what has worked in the past (with failed transmissions resulting 
> in dropping the rate).
>
> there may be other data like last known signal strength in the mix as 
> well.
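In pseudo-driver terms, the per-station state and the step-down-on-failure 
behaviour described above amount to something like this (grossly simplified, 
field names made up; real rate control such as minstrel keeps per-rate 
success statistics rather than a single index):

    #include <stdint.h>

    /* Per-station state: capabilities learned at association, plus what
     * has worked recently. */
    struct sta_info {
        uint32_t supported_rates_kbps[16]; /* from the association handshake */
        int      n_rates;
        int      cur_rate_idx;             /* rate currently in use */
        int8_t   last_rssi_dbm;            /* last known signal strength */
    };

    /* Step down one rate after a failed transmission, step back up after a
     * success.  Real algorithms are statistical, not this naive. */
    static void rate_on_tx_status(struct sta_info *sta, int success)
    {
        if (!success && sta->cur_rate_idx > 0)
            sta->cur_rate_idx--;
        else if (success && sta->cur_rate_idx < sta->n_rates - 1)
            sta->cur_rate_idx++;
    }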
>
>
>> On the other hand, that only works if the transmitter deliberately 
>> congests itself so that it has a queue built up to look at.
>
> no, the table of associated devices keeps track of things like the 
> last known signal strength, connection mode, etc. no congestion needed.
>
>> The tradeoffs are not obvious here at all.  On the other hand, one 
>> could do something much simpler - just have the transmitter slow down 
>> to the worst-case rate required by any receiving system.
>
> that's 1Mb/sec. This is the rate used for things like SSID broadcasts.
>
> Once a system connects, you know from the connection handshake what 
> speeds could work. no need to limit yourself to the minimum that they 
> all support at that point.
>
>> As the number of stations in range gets larger, though, it seems 
>> unlikely that "batching" multiple packets to the same destination is 
>> a good idea at all - because to achieve that, one must have 
>> n_destinations * batch_size chunks of data queued in the system as a 
>> whole, and that gets quite large.  I suspect it would be better to 
>> find a lower level way to just keep the packets going out as fast as 
>> they arrive, so no clog occurs, and to slow down the stuff at the 
>> source as quickly as possible.
>
> no, no, no
>
> you are falling into the hardware designer trap that we just talked 
> about :-)
>
> you don't wait for the buffers to fill and always send full buffers; 
> you opportunistically send data up to the max size.
>
> you do want to send multiple packets if you have them waiting. Because 
> if you can send 10 packets to machine A and 10 packets to machine B in 
> the time that it would take to send one packet to A, one packet to B, 
> a second packet to A and a second packet to B, you have a substantial 
> win for both A and B at the cost of very little latency for either.
>
> If there is so little traffic that sending the packets out one at a 
> time doesn't generate any congestion, then good, do that [1]. But when 
> you max out the airtime, getting more data through in the same amount 
> of airtime by sending larger batches is a win.
>
> [1] if you are trying to share the same channel with others, this may 
> be a problem, as sending one packet at a time uses more airtime for the 
> same amount of data than batching does. But this is a case of less than 
> optimal network design ;-)
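Putting rough numbers on the batching win (the overhead and rate figures 
below are ballpark guesses, only meant to show the shape of the effect): the 
per-transmission overhead of backoff, preamble and acknowledgement is on the 
order of 100 microseconds, while a 1500-byte frame at 300 Mb/s is only about 
40 microseconds of payload, so aggregating ten frames amortizes the fixed 
cost ten ways:

    #include <stdio.h>

    /* Ballpark figures, only to illustrate why aggregation saves airtime. */
    #define FIXED_OVERHEAD_US 100.0 /* backoff + preamble + SIFS + ack, roughly */
    #define FRAME_BYTES       1500.0
    #define RATE_MBPS         300.0 /* e.g. a decent 802.11n link */

    int main(void)
    {
        double payload_us = FRAME_BYTES * 8.0 / RATE_MBPS; /* ~40 us per frame */
        int n = 10;

        double one_at_a_time = n * (FIXED_OVERHEAD_US + payload_us);
        double aggregated    = FIXED_OVERHEAD_US + n * payload_us;

        printf("%d frames sent singly:     %.0f us\n", n, one_at_a_time);
        printf("%d frames sent aggregated: %.0f us\n", n, aggregated);
        return 0;
    }

With those numbers the aggregate takes roughly a third of the airtime, which 
is exactly the "more data through in the same amount of airtime" point above.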
>
>> [one should also dive into the reason for maintaining variable rates 
>> - multipath to a particular destination may require longer symbols 
>> for decoding without ISI.  And when multipath is involved, you may 
>> have to retransmit at a slower rate. There's usually not much "noise" 
>> at the receiver compared to the multipath environment. (one of the 
>> reasons why mesh can be a lot better is that shorter distances have 
>> much less multipath effect, so you can get higher symbol rates by 
>> going multi-hop, and of course higher symbol rates compensate for 
>> more airtime occupied by a packet due to repeating).]
>
> distance, interference, noise, etc are all variable in wifi. As a 
> result, you need to adapt.
>
> The problem is that the adaptation is sometimes doing the wrong thing.
>
> simplifying things a bit:
>
> If your data doesn't get through at rate A, is the right thing to drop 
> to rate A/2 and re-transmit?
>
> If the reason it didn't go through is that the signal is too weak for 
> the rate A encoding, then yes.
>
> If the reason it didn't go through is that your transmission was 
> stepped on by something you can't hear (and can't hear you), but the 
> recipient can hear, then slowing down means that you take twice the 
> airtime to get the message through, and you now have twice the chance 
> of being stepped on again. Repeat and you quickly get to everyone 
> broadcasting at low rates and nothing getting through.
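A back-of-the-envelope version of that spiral, assuming the hidden 
transmitter fires at random times so the chance of being stepped on scales 
with our time on air (the event rate and frame time below are made up):

    #include <math.h>
    #include <stdio.h>

    /* If a hidden node transmits at random (roughly Poisson) times, the
     * chance our frame gets stepped on is p = 1 - exp(-lambda * t).  Halving
     * the rate doubles t, which for small p roughly doubles the collision
     * probability. */
    int main(void)
    {
        double lambda = 200.0;  /* interfering transmissions per second (made up) */
        double t_fast = 0.0005; /* 0.5 ms frame at the original rate */

        for (int halvings = 0; halvings < 4; halvings++) {
            double t = t_fast * (1 << halvings);
            printf("airtime %.1f ms -> collision probability %4.1f%%\n",
                   t * 1000.0, 100.0 * (1.0 - exp(-lambda * t)));
        }
        return 0;
    }

Each halving makes the frame a bigger target, so the retransmission is more 
likely to be stepped on again than the original was.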
>
>
> This is the key reason that dense wifi networks 'fall off the cliff' 
> when they hit saturation: the backoff that is entirely correct for 
> weak-signal, low-usage situations is entirely wrong in dense 
> environments.
>
> David Lang