[Make-wifi-fast] [Cerowrt-devel] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

David Lang david at lang.hm
Sun Aug 9 18:09:35 EDT 2015


On Sat, 8 Aug 2015, dpreed at reed.com wrote:

> There's a lot of "folklore" out there about radio systems and WiFi that is 
> quite wrong, and you seem to be quoting some of it - e.g. the idea that the 1 
> Mb/s waveform of 802.11b DSSS is somehow more reliable than the lowest-rate 
> OFDM modulations, which is often false.

I agree with you, but my understanding is that the current algorithms always 
assume that slower == more robust transmissions. My point was that in a 
weak-signal environment where you have trouble decoding individual bits this is 
true (or close enough to true for "failed transmission" -> "retransmit at a 
slower rate" to be a very useful algorithm), but in a congested environment 
where your biggest problem is being stepped on by other transmissions, it is 
closer to suicide.
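
To make the failure-attribution problem concrete, here is a minimal sketch (in 
Python, purely illustrative -- the rate ladder, loss model and airtime numbers 
are made up, not taken from any real driver) of why "drop the rate on failure" 
helps when the signal is weak but backfires when the losses are collisions:

# Illustrative sketch: naive "drop the rate on failure" control loop.
# All numbers are invented; real rate controllers (e.g. minstrel) are
# far more sophisticated.

RATES_MBPS = [54, 48, 36, 24, 18, 12, 9, 6, 1]   # fallback ladder

def next_rate(current, success):
    """Step down the ladder on failure, step back up on success."""
    i = RATES_MBPS.index(current)
    if success:
        return RATES_MBPS[max(i - 1, 0)]
    return RATES_MBPS[min(i + 1, len(RATES_MBPS) - 1)]

def airtime_us(payload_bytes, rate_mbps, preamble_us=20):
    """Rough airtime for one frame: preamble plus payload bits / rate."""
    return preamble_us + (payload_bytes * 8) / rate_mbps

# Weak-signal case: a slower modulation really is more robust, so stepping
# down converges on a rate that gets through.
# Collision case: the chance of being stepped on grows with time on air, so
# stepping down makes each retry *longer* and therefore more likely to fail.
rate = 54
for attempt in range(5):
    print(f"attempt {attempt}: {rate} Mb/s, ~{airtime_us(1500, rate):.0f} us on air")
    rate = next_rate(rate, success=False)   # pretend every attempt collides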

> The 20 MHz-wide M0 modulation with 800ns GI gives 6.2 Mb/s and is typically 
> much more reliable than the 802.11b standard 1 Mb/sec DSSS signals in normal 
> environments, with typical receiver designs.

Interesting and good to know.

> It's not the case that beacon frames are transmitted at 1 Mb/sec. - 
> that is only true when there are 802.11b stations *associated* with the access 
> point (which cannot happen at 5 GHz).

Also interesting. I wish I knew of a way to disable the 802.11b modes on the 
wndr3800 or wrt1200 series APs. I've seen some documentation online talking 
about it, but it's never worked when I've tried it.
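
(For reference, the documentation I've seen describes doing this with the 
hostapd rate lists -- restricting the advertised rates to OFDM ones, as in the 
fragment below (values are in units of 100 kb/s). Whether a given driver 
actually honors it is another question, which may be why my attempts haven't 
worked.)

# hostapd.conf fragment: advertise only 802.11g (OFDM) rates on 2.4 GHz
supported_rates=60 90 120 180 240 360 480 540
basic_rates=60 120 240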

Dave Taht did some experimentation in cerowrt with increasing the broadcast 
rate, but my understanding is that he had to back out those changes because 
they didn't work well in the real world.

> Nor is it true that the preamble for ERP 
> frames is wastefully long. The preamble for an ERP (OFDM operation) frame is 
> about 6 microseconds long, except in the odd case on 2.4GHz of 
> compatibility-mode (OFDM-DSSS) operation, where the DSSS preamble is used. 
> The DSSS preamble is 72 usec. long, because 72 bits at 1 Mb/sec takes that 
> long, but the ERP frame's preamble is much shorter.

Is compatibility mode needed for 802.11g compatibility, or only for 802.11b?
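
Taking the numbers above at face value (roughly 72 us for the DSSS-compatible 
preamble versus about 6 us for the ERP preamble), the overhead difference is 
easy to put in perspective; a back-of-the-envelope sketch in Python, with 
illustrative payload sizes:

# Back-of-the-envelope preamble overhead, using the figures quoted above.
# Payload sizes and the 54 Mb/s data rate are just illustrative.

def frame_airtime_us(payload_bytes, rate_mbps, preamble_us):
    return preamble_us + (payload_bytes * 8) / rate_mbps

for payload in (60, 1500):            # small control-ish frame vs full MTU
    dsss = frame_airtime_us(payload, 54, preamble_us=72)
    erp = frame_airtime_us(payload, 54, preamble_us=6)
    print(f"{payload:5d} B at 54 Mb/s: {dsss:6.1f} us with a DSSS preamble, "
          f"{erp:6.1f} us with an ERP preamble")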

> In any case, my main points were about the fact that "channel estimation" is 
> the key issue in deciding on a modulation to use (and MIMO settings to use), 
> and the problem with that is that channels change characteristics quite 
> quickly indoors! A spinning fan blade can create significant variation in the 
> impulse response over a period of a couple milliseconds.  To do well on 
> channel estimation to pick a high data rate, you need to avoid a backlog in 
> the collection of outbound packets on all stations - which means minimizing 
> queue buildup (even if that means sending shorter packets, getting a higher 
> data rate will minimize channel occupancy).
> 
> Long frames make congested networks work badly - ideally there would only be 
> one frame ready to go when the current frame is transmitted, but the longer 
> the frame, the more likely more than one station will be ready, and the longer 
> the frames will be (if they are being combined).  That means that the penalty 
> due to, and frequency of, collisions where more than one frame is being sent 
> at the same time grows, wasting airtime with collisions.  That's why CTS/RTS 
> is often a good approach (the CTS/RTS frames are short, so a collision will be 
> less wasteful of airtime).

I run the wireless network for the Scale conference where we get a couple 
thousand people showing up with their equipment. I'm gearing up for next year's 
conference (deciding what I'm going to try, what equipment I'm going to need, 
etc). I would love to get any help you can offer on this, and I'm willing to do 
a fair bit of experimentation and a lot of measurements to see what's happening 
in the real world. I haven't been setting anything to specifically enable 
RTS/CTS in the past.
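
(For planning purposes, the argument above for RTS/CTS is easy to quantify: a 
collision then wastes roughly the airtime of an RTS rather than of a full data 
frame. A rough sketch with assumed, not measured, sizes and rates:)

# Rough cost of a collision with and without RTS/CTS.
# Frame sizes and rates are illustrative assumptions only.

def airtime_us(frame_bytes, rate_mbps, preamble_us=20):
    return preamble_us + (frame_bytes * 8) / rate_mbps

rts_cost = airtime_us(20, 24)      # 20-byte RTS at a 24 Mb/s control rate
data_cost = airtime_us(3000, 6)    # a data frame that has backed off to 6 Mb/s

print(f"a collision wastes ~{rts_cost:.0f} us with RTS/CTS, "
      f"~{data_cost:.0f} us without")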

> But due to preamble size, etc., CTS/RTS can't be 
> very short, so an alternative hybrid approach is useful (assume that all 
> stations transmit CTS frames at the same time, you can use the synchronization 
> acquired during the CTS to mitigate the need for a preamble on the packet sent 
> after the RTS).  (One of the papers I did with my student Aggelos Bletsas on 
> Cooperative Diversity uses CTS/RTS in this clever way - to measure the channel 
> while acquiring it).

how do you get the stations synchronized?

David Lang

> 
> 
>
> On Friday, August 7, 2015 6:31pm, "David Lang" <david at lang.hm> said:
>
>
>
>> On Fri, 7 Aug 2015, dpreed at reed.com wrote:
>> 
>> > On Friday, August 7, 2015 4:03pm, "David Lang" <david at lang.hm> said:
>> >>
>> >
>> >> Wifi is the only place I know of where the transmit bit rate is going to
>> >> vary depending on the next hop address.
>> >
>> >
>> > This is an interesting core issue. The question is whether additional
>> > queueing helps or hurts this, and whether the MAC protocol of WiFi deals well
>> > or poorly with this issue. It is clear that this is a peculiarly WiFi'ish
>> > issue.
>> >
>> > It's not clear that the best transmit rate remains stable for very long, or
>> > even how to predict the "best rate" for the next station since the next
>> > station is one you may not have transmitted to for a long time, so your "best
>> > rate" information is old.
>> 
>> I wasn't even talking about the stability of the data rate to one destination. I
>> was talking about the fact that you may have a 1.3Gb connection to system A (a
>> desktop with a -ac 3x3 radio) and a 1Mb connection to machine B (an IoT 802.11b
>> thermostat)
>> 
>> trying to do BQL across 3+ orders of magnitude in speed isn't going to work
>> without taking the speed into account.
>> 
>> Even if all you do is estimate with the last known speed, you will do better
>> than ignoring the speed entirely.
>> 
>> If the wifi can 'return' data to the queue when the transmission fails, it can
>> then fetch less data when it 're-transmits' the data at a lower speed.
>> 
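
(To make the "take the speed into account" point above concrete: the obvious 
first cut is a time-based limit rather than a byte-based one. A sketch only -- 
the names and the 2 ms target are invented, this isn't code from any driver:)

# Sketch of a rate-aware queue limit: cap each station's queue at roughly a
# fixed amount of *airtime*, computed from the last rate that worked for it,
# instead of a fixed byte count. Names and numbers are invented.

TARGET_AIRTIME_US = 2000            # aim to queue ~2 ms of airtime per station

def byte_limit(last_rate_mbps):
    """Bytes that fit in the airtime target at the station's last known rate."""
    return int(TARGET_AIRTIME_US * last_rate_mbps / 8)

print(byte_limit(1300))   # -ac 3x3 desktop:    ~325000 bytes
print(byte_limit(1))      # 802.11b thermostat:     250 bytes
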
>> > Queueing makes information about the channel older,
>> > by binding it too early. Sending longer frames means retransmitting longer
>> > frames when they don't get through, rather than agilely picking a better rate
>> > after a few bits.
>> 
>> As I understand wifi, once a transmission starts, it must continue at that same
>> data rate; it can't change mid-transmission (and there would be no way of
>> getting feedback in the middle of a transmission to know that it would need to
>> change).
>> 
>> > The MAC protocol really should give the receiver some opportunity to control
>> > the rate of the next packet it gets (which it can do because it can measure
>> > the channel from the transmitter to itself, by listening to prior
>> > transmissions). Or at least to signal channel changes that might require a
>> > new signalling rate.
>> >
>> > This suggests that a transmitter might want to "warn" a receiver that some
>> > packets will be coming its way, so the receiver can preemptively change the
>> > desired rate. Thus, perhaps an RTS-CTS like mechanism can be embedded in the
>> > MAC protocol, which requires that the device "look ahead" at the packets it
>> > might be sending.
>> 
>> the recipient will receive a signal at any data rate; you don't have to tell it
>> ahead of time what rate is going to be sent. If it's being sent with a known
>> encoding, it will be decoded.
>> 
>> The sender picks the rate based on a number of things
>> 
>> 1. what the other end said they could do based on the mode that they are
>> connected with (b vs g vs n vs bonded n vs ac vs 2x2 ac etc)
>> 
>> 2. what has worked in the past. (with failed transmissions resulting in dropping
>> the rate)
>> 
>> there may be other data like last known signal strength in the mix as well.
>> 
>> 
>> > On the other hand, that only works if the transmitter deliberately congests
>> > itself so that it has a queue built up to look at.
>> 
>> no, the table of associated devices keeps track of things like the last known
>> signal strength, connection mode, etc. no congestion needed.
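
(As a data-structure sketch of that per-station state -- the field names below 
are invented for illustration, not taken from any particular driver, which 
would keep considerably more:)

# Minimal per-station state table of the kind described above.

from dataclasses import dataclass

@dataclass
class StationInfo:
    mode: str               # "b", "g", "n", "ac", ...
    spatial_streams: int    # negotiated at association
    last_good_rate: float   # Mb/s that most recently worked
    last_rssi_dbm: int      # signal strength of the last received frame

stations = {
    "aa:bb:cc:dd:ee:01": StationInfo("ac", 3, 1300.0, -45),
    "aa:bb:cc:dd:ee:02": StationInfo("b", 1, 1.0, -78),
}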
>> 
>> > The tradeoffs are not obvious here at all. On the other hand, one could do
>> > something much simpler - just have the transmitter slow down to the
>> > worst-case rate required by any receiving system.
>> 
>> that's 1Mb/sec. This is the rate used for things like SSID broadcasts.
>> 
>> Once a system connects, you know from the connection handshake what speeds could
>> work. no need to limit yourself to the minimum that they all can do at that
>> point.
>> 
>> > As the number of stations in range gets larger, though, it seems unlikely
>> > that "batching" multiple packets to the same destination is a good idea at
>> > all - because to achieve that, one must have n_destinations * batch_size
>> > chunks of data queued in the system as a whole, and that gets quite large.
>> > I suspect it would be better to find a lower level way to just keep the
>> > packets going out as fast as they arrive, so no clog occurs, and to slow
>> > down the stuff at the source as quickly as possible.
>> 
>> no, no, no
>> 
>> you are falling into the hardware designer trap that we just talked about :-)
>> 
>> you don't wait for the buffers to fill and always send full buffers; you
>> opportunistically send data up to the max size.
>> 
>> you do want to send multiple packets if you have them waiting, because if you
>> can send 10 packets to machine A and 10 packets to machine B in the time that it
>> would take to send one packet to A, one packet to B, a second packet to A and a
>> second packet to B, you have a substantial win for both A and B at the cost of
>> very little latency for either.
>> 
>> If there is so little traffic that sending the packets out one at a time doesn't
>> generate any congestion, then good, do that [1]. But when you max out the
>> airtime, getting more data through in the same amount of airtime by sending
>> larger batches is a win.
>> 
>> [1] if you are trying to share the same channel with others, this may be a
>> problem as it uses more airtime to send the same amount of data than always
>> batching. But this is a case of less than optimal network design ;-)
>> 
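
(A quick worked example of that batching win, with illustrative numbers: the 
per-channel-access overhead of contention, preamble and ACK is paid once per 
aggregate instead of once per packet:)

# Worked example of the aggregation win described above.
# Overhead and rate numbers are illustrative assumptions only.

PER_ACCESS_OVERHEAD_US = 100    # contention + preamble + ACK per channel access
PKT_BYTES = 1500
RATE_MBPS = 300

def airtime_us(n_packets, accesses):
    payload_us = n_packets * PKT_BYTES * 8 / RATE_MBPS
    return payload_us + accesses * PER_ACCESS_OVERHEAD_US

# 20 packets (10 to A, 10 to B) as 20 individual transmissions vs 2 aggregates:
print(airtime_us(20, accesses=20))   # ~2800 us
print(airtime_us(20, accesses=2))    # ~1000 us
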
>> > [one should also dive into the reason for maintaining variable rates -
>> > multipath to a particular destination may require longer symbols for decoding
>> > without ISI. And when multipath is involved, you may have to retransmit at a
>> > slower rate. There's usually not much "noise" at the receiver compared to the
>> > multipath environment. (one of the reasons why mesh can be a lot better is
>> > that shorter distances have much less multipath effect, so you can get higher
>> > symbol rates by going multi-hop, and of course higher symbol rates compensate
>> > for more airtime occupied by a packet due to repeating).]
>> 
>> distance, interference, noise, etc are all variable in wifi. As a result, you
>> need to adapt.
>> 
>> The problem is that the adaptation is sometimes doing the wrong thing.
>> 
>> simplifying things a bit:
>> 
>> If your data doesn't get through at rate A, is the right thing to drop to rate
>> A/2 and re-transmit?
>> 
>> If the reason it didn't go through is that the signal is too weak for the rate A
>> encoding, then yes.
>> 
>> If the reason it didn't go through is that your transmission was stepped on by
>> something you can't hear (and can't hear you), but the recipient can hear, then
>> slowing down means that you take twice the airtime to get the message through,
>> and you now have twice the chance of being stepped on again. Repeat and you
>> quickly get to everyone broadcasting at low rates and nothing getting through.
>> 
>> 
>> This is the key reason that dense wifi networks 'fall off the cliff' when they
>> hit saturation: the backoff that is entirely correct for weak-signal, low-usage
>> situations is entirely wrong in dense environments.
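
(A tiny model of that cliff -- assume interfering transmissions you can't hear 
start at random (Poisson) times, so the chance a frame is stepped on grows with 
its time on the air. The interferer rate and frame sizes are invented; the 
point is only the shape of the curve:)

# Why slowing down backfires when losses are collision-dominated.
# Assumed model: interferer starts are Poisson with rate LAMBDA per us, and a
# frame of duration T is lost if any interferer starts while it is on the air.

import math

LAMBDA = 0.0005    # assumed interferer starts per microsecond

def expected_air_per_delivery_us(frame_us):
    p_success = math.exp(-LAMBDA * frame_us)
    return frame_us / p_success          # geometric number of attempts

for rate in (54, 24, 12, 6):
    frame_us = 20 + 1500 * 8 / rate      # preamble + 1500-byte payload
    print(f"{rate:4.1f} Mb/s: ~{expected_air_per_delivery_us(frame_us):6.0f} us "
          f"of air per delivered frame")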
>> 
>> David Lang
>>


