[Cerowrt-devel] Coping with wireless-n [#305]
david at lang.hm
david at lang.hm
Thu Dec 8 08:16:48 EST 2011
On Thu, 8 Dec 2011, Dave Taht wrote:
> On Thu, Dec 8, 2011 at 1:25 PM, <david at lang.hm> wrote:
>> On Thu, 8 Dec 2011, Dave Taht wrote:
>>> On Thu, Dec 8, 2011 at 12:51 PM, <david at lang.hm> wrote:
>>>> On Thu, 8 Dec 2011, Dave Taht wrote:
>>>> but I don't understand why there is a big problem with G and N sharing
>>>> the same SSID.
>>> Because you can fully FQ G, and if you do that to N, it messes up
>> I don't recognize the term "FQ".
> Fair Queue
> http://info.iet.unipi.it/~luigi/qfq/ in this case.
thanks, I'll read up on that
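For anyone else following the thread, the core fair-queueing idea is small enough to sketch. This is only the round-robin essence (one FIFO per flow, served in rotation); QFQ at the link above achieves the same per-flow isolation with O(1) scheduling decisions, and all names here are illustrative:

```python
from collections import defaultdict, deque

class FairQueue:
    """Toy fair queue: one FIFO per flow, flows served round-robin,
    so a single bulk flow cannot starve the others."""

    def __init__(self):
        self.flows = defaultdict(deque)   # flow id -> packet FIFO
        self.active = deque()             # round-robin ring of busy flows

    def enqueue(self, flow, packet):
        if not self.flows[flow]:          # flow was idle: join the ring
            self.active.append(flow)
        self.flows[flow].append(packet)

    def dequeue(self):
        """Return the next packet, cycling across active flows."""
        if not self.active:
            return None
        flow = self.active.popleft()
        packet = self.flows[flow].popleft()
        if self.flows[flow]:              # still busy: back of the ring
            self.active.append(flow)
        return packet
```

With three packets queued for flow A and one for B, dequeues interleave A, B, A rather than draining A first.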
>> when you say it messes up aggregation, do you mean combining two channels
>> together for higher throughput, or something else?
> No, the way the driver is structured it swallows as many packets as are
> aggregatable for a given destination, then ships them. If you instead
> try to do the right thing - which is to break up packet bursts into
> as tiny pieces as possible, aggregation goes to heck. What you want
> to do is aggregate fair queued packets for a given destination, at a
> size that will fit (up to 64 packets or 64kbytes) at the rate the wireless
> interface is running at.
> As a result nobody does FQ, nor AQM, on wireless n, where it is
> so desperately needed.
this doesn't sound like a unique problem for N. there are other networks
where the per-transmission overhead is significant and there are benefits
in combining multiple packets in one transmission.
even on ethernet you have a fairly significant per-packet overhead, and I
remember reading of cases where people have combined multiple packets into
one larger frame to send at once just to avoid the inter-packet gap
overhead.
this is not a problem to abandon, but one to track down. To cope with it
you need to be able to control:
1. the aggregation parameters (how long the subsystem will wait for more
traffic, and how much it is willing to combine).
2. the amount of data that is sent to the driver at once. You want to send
enough data in as short a time period as possible to fill the aggregation
chunk, even if that means sending packets that are further up in your queue
and skipping over more 'deserving' packets. The reason for this is that
the additional packets included in the aggregation are MUCH cheaper to send
than packets transmitted on their own.
the ideal situation is that the stack aggregation doesn't actually wait
for additional packets going to this destination; it just looks in its
queue and sends all (up to a limit) of the packets going to this
destination when it sends the first one.
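That no-wait, sweep-the-queue approach can be sketched roughly as below. The 64-packet/64KB limits mirror the figures Dave quoted above; the queue layout (destination, size, payload tuples) is made up for illustration:

```python
# Opportunistic aggregation sketch: when a packet is dequeued, sweep
# the rest of the queue for packets to the same destination instead of
# waiting for more to arrive.  Limits follow the 64-packet / 64KB
# figures mentioned above; everything else here is hypothetical.

MAX_PACKETS = 64
MAX_BYTES = 64 * 1024

def build_aggregate(queue):
    """Pop the head packet, then pull every already-queued packet for
    the same destination (within the limits) into one burst.
    `queue` is a list of (destination, size_bytes, payload) tuples."""
    if not queue:
        return None, []
    dest, size, payload = queue.pop(0)
    burst, total = [(dest, size, payload)], size
    remaining = []
    for pkt in queue:
        if (pkt[0] == dest and len(burst) < MAX_PACKETS
                and total + pkt[1] <= MAX_BYTES):
            burst.append(pkt)
            total += pkt[1]
        else:
            remaining.append(pkt)   # other destinations stay queued
    queue[:] = remaining
    return dest, burst
```

Note that nothing here waits: the burst is whatever has accumulated by the time the first packet is sent, which is the property being argued for.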
going slightly off topic with a relevant example.
rsyslog has the ability to write log messages to a database. It has the
ability to do so using transactions so that it gets confirmation that the
logs are really safe (not just in some memory buffer somewhere that can be
lost in a power failure).
with a database, the overhead of a transaction is large enough that you
can frequently insert 100 or 1000 entries in the same wall clock time that
it takes you to insert one entry (within about a 10% error margin, at
least at the high end of the range). Rsyslog gained the ability to batch
message delivery by taking the approach:
each time you look at the queue, grab up to N (configurable) messages and
deliver them all at once
how this works in the real world is that the first message that arrives
gets delivered by itself, but while that message is getting delivered
inefficiently, the queue builds up a bit. when it goes back to look for
more work, there is now a set of multiple messages to be delivered.
this approach 'wastes' bandwidth, but it results in all messages being
delivered with the minimum possible latency (yes, waiting a little bit to
get new messages would reduce the latency of a few messages, but it would
increase the latency for far more messages), and it is self-tuning as the
rate of new messages arriving in the queue and the available bandwidth for
outputting the messages varies (because, just like RF airtime, the disk
I/O bandwidth is a scarce resource that you are in competition for).
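The grab-what's-there batching loop is simple enough to sketch (names here are hypothetical; rsyslog's actual implementation differs):

```python
import queue

def next_batch(q, max_batch=128):
    """Self-tuning batcher in the style described above: block for the
    first message, then take whatever else has ALREADY accumulated, up
    to max_batch -- never wait for a batch to fill.  Under load the
    backlog grows during delivery and later batches get larger; when
    idle, messages go out immediately, one at a time."""
    batch = [q.get()]                 # block for at least one message
    while len(batch) < max_batch:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break                     # nothing waiting: send what we have
    return batch
```

Each returned batch would then be delivered in one transaction; the batch size adapts automatically to the ratio of arrival rate to delivery rate, with no timeout parameter to tune.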
If the kernel stack uses a similar approach, then what you want to do is
feed it chunks of the right size. If it doesn't use this sort of
algorithm, it may be that it needs to be changed to eliminate the
timeout-based mechanism that it has.
>> I was assuming that if you are running a mixed network you only use a single
>> channel for N. If you are using multiple channels for N they should be a
>> separate SSID, just from the fact that you are using two channels for N but
>> only one for G (which one would be the question)
> One channel for both N and G in this case. Only one radio for 5ghz.
many APs with only one radio in a band can still configure that radio to
use a 40MHz wide channel for N rather than the standard 20MHz wide channel
that G uses. Many people try to do this on 2.4GHz (which is only about
60MHz wide for the entire band).
>>>> there is some
>> some, but it's an unavoidable feature of wireless communication. You can
>> consider turning off some modulation types, but since the clients
>> automatically fall back to slower modulation types when there is a problem,
>> the result will be failed connections.
> To give you an idea, at 5ghz I'm capable with cerowrt of achieving 150Mbits
> with TCP - in the clean air here.
> At 2.4, it's rare I can get more than 20, and fairly often much less than that.
> Any given test I run regarding wireless simply is not repeatable if I do it
> on 2.4ghz.
> I can usually 'hear' more than 30 access points at my apt, as another
>> now, this may still be the right thing to do, because the fallback to a
>> slower modulation type works well for weak-signal situations, but in a high
>> density situation (which is basically every 2.4G deployment in the real
>> world nowadays), taking longer to send the same data just means that you are
>> more vulnerable to another transmitter clobbering you, so it actually
>> decreases reliability.
> yes, minstrel rocks.