[Make-wifi-fast] the hidden station problem

David Lang david at lang.hm
Thu Apr 21 13:57:38 EDT 2016


On Thu, 21 Apr 2016, Dave Taht wrote:

> I was watching myself do the make-wifi-fast Q&A and Henning mentioned
> the hidden station problem and its interaction with minstrel...
>
> https://www.youtube.com/watch?v=Rb-UnHDw02o
>
> Since we are doing up some better testbeds, I am curious as to what
> might be a good (simplified) setup (bench or air) for it, and/or if
> there has been a paper that shows the interaction problems with
> minstrel in particular.

The basic way to see this is to take two stations and move them far enough 
apart, or put shielding between them, so that they cannot talk to each other.

Then position a third station so that it can see both of the first two.

If you really turn the power down, you may be able to get away with having them 
fairly near each other, with a metal sheet next to one of them.

You will see that you can talk to either of them quite nicely if the other is 
pretty idle, but if you have them both sending a lot of data at the same time, 
disaster strikes.



If you are writing a simulator, add a probability that a packet transmitted from 
an edge station to the central station doesn't get through. Ramp up this 
probability and watch what happens. A better simulator would scale the 
probability up based on the amount of airtime needed, so that as the sender 
slows down, the probability goes up.
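
Something like this rough Python sketch would do it (the frame size, rate list, 
and loss constant here are invented for illustration, not taken from any real 
simulator or driver):

import random

FRAME_BITS = 1500 * 8      # one MTU-sized frame
LOSS_PER_MS = 0.05         # chance of being stepped on per ms of airtime (made up)

def airtime_ms(rate_mbps):
    return FRAME_BITS / (rate_mbps * 1000.0)    # bits / (bits per ms)

def delivered(rate_mbps, frames=100000):
    # loss probability scales with how long the frame occupies the air
    p_loss = min(1.0, LOSS_PER_MS * airtime_ms(rate_mbps))
    return sum(1 for _ in range(frames) if random.random() > p_loss)

for rate in (54, 24, 11, 1):
    pct = 100.0 * delivered(rate) / 100000
    print("%4.0f Mbit/s: %6.2f ms airtime, %5.1f%% delivered"
          % (rate, airtime_ms(rate), pct))

Run it and the 1 Mbit/s rate loses more than half its frames while 54 Mbit/s 
loses about one percent, which is exactly backwards from what rate fallback 
assumes.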




This is one of the hardest problems for wifi to deal with. It manifests as 
massive packet loss when the first two stations are both sending to the third 
one, and no amount of backoff helps, because backoff is driven by carrier sense 
and the two senders can't hear each other in the first place. Slowing down the 
transmit rate just makes things worse, as it takes longer to transmit each 
frame and so it's more likely to be stepped on.
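
To put back-of-the-envelope numbers on that, treat the hidden sender's 
transmissions as random (Poisson) arrivals; the interferer rate here is an 
arbitrary assumption:

import math

FRAME_BITS = 1500 * 8
INTERFERER_PER_MS = 0.02   # hidden station's transmit attempts per ms (assumed)

for rate_mbps in (54, 1):
    t_ms = FRAME_BITS / (rate_mbps * 1000.0)
    # probability that at least one hidden transmission overlaps our airtime
    p = 1 - math.exp(-INTERFERER_PER_MS * t_ms)
    print("%4.0f Mbit/s: %6.2f ms airtime, ~%4.1f%% collision chance"
          % (rate_mbps, t_ms, 100 * p))

A 1500-byte frame at 1 Mbit/s occupies the air about 54 times as long as at 
54 Mbit/s, so it is roughly 50 times as likely to get stepped on.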



Reading up on Minstrel at 
https://wireless.wiki.kernel.org/en/developers/documentation/mac80211/ratecontrol/minstrel 
there is this comment:

> Inspection of the code in different rate algorithms left us bewildered. Why 
> did all the code bases we looked at contain the assumption that packets sent 
> at slow data rates are more likely to succeed than packets sent at higher 
> datarates? The physics behind this assumption baffled us. A slow data rate 
> packet has the highest possibility of being “shot down” by some other node 
> sending a packet.

The answer to this is that the higher data rates require a better signal to 
noise ratio, so if the problem is that the stations are too far apart, or 
there is a wall between them that weakens the signal, or there is just a lot 
of low-level noise in the area, the slower data rates are far more likely to 
be understandable than the faster ones. Since wifi was designed long before 
anyone imagined how common it would become (I remember when the pcmcia cards 
were >$1000 each rather than the current <$10 for a much faster USB adapter), 
the protocol was designed to fall back to lower rates if the packets don't get 
through.
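
As an illustration of that fallback logic (a toy sketch; the SNR thresholds 
are ballpark figures for the 802.11a/g OFDM rates, not exact numbers from any 
chip):

# rough minimum SNR (dB) needed per 802.11a/g rate (Mbit/s) - ballpark figures
SNR_NEEDED = {54: 25, 48: 24, 36: 18, 24: 17, 18: 11, 12: 9, 9: 8, 6: 6}

def pick_rate(snr_db):
    # classic fallback: start fast, step down until the signal is good enough
    for rate in sorted(SNR_NEEDED, reverse=True):
        if snr_db >= SNR_NEEDED[rate]:
            return rate
    return None    # too weak for even the slowest rate

print(pick_rate(26))   # strong signal -> 54
print(pick_rate(12))   # weak signal  -> 18

When the problem really is a weak signal, stepping down like this works. When 
the problem is a hidden station, stepping down only lengthens the airtime and 
makes the collisions worse.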

This works well if you are out in the boonies and trying for range. It fails 
horribly in very high density environments (this is why most conference wifi 
is worthless, for example).


This is why it's a good idea to disable the lowest data rates if you know that 
you don't need them.
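
For example, on a hostapd-based AP you can drop the 802.11b rates entirely 
(values are in units of 100 kbit/s; which rates you can safely drop depends on 
your oldest clients, so treat this as a sample, not a recommendation):

supported_rates=60 90 120 180 240 360 480 540
basic_rates=60 120 240

With this, clients that can't manage 6 Mbit/s simply won't associate, which in 
a dense environment is usually what you want anyway.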



Reading the Minstrel page, it seems intuitively obvious to me that this random 
packet loss would really mess with its moving average, and thus the rate 
decisions it ends up making.
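
Here's a quick sketch of that effect. Minstrel keeps an exponentially weighted 
moving average of per-rate success probability; the smoothing weight and the 
loss pattern below are invented for illustration:

def ewma_trace(samples, alpha=0.25, est=1.0):
    # feed per-interval success ratios into a Minstrel-style EWMA
    out = []
    for s in samples:
        est = alpha * s + (1 - alpha) * est
        out.append(est)
    return out

# a fast rate that periodically gets stepped on by a hidden station
samples = [1.0, 1.0, 0.1, 0.1, 1.0, 1.0, 0.1, 1.0]
for s, est in zip(samples, ewma_trace(samples)):
    print("interval success %.1f -> estimated success %.2f" % (s, est))

The bursty losses drag the estimate for the fast rate well below the true link 
quality, so Minstrel shifts traffic toward slower rates which, as argued above, 
are even more exposed to the hidden station.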

David Lang

