[Bloat] [Make-wifi-fast] the future belongs to pacing
Toke Høiland-Jørgensen
toke at redhat.com
Mon Jul 6 13:01:33 EDT 2020
Daniel Sterling <sterling.daniel at gmail.com> writes:
> On Mon, Jul 6, 2020 at 10:09 AM Luca Muscariello <muscariello at ieee.org> wrote:
>> If BBR can fix that by having a unique model for all these cases that would make deprecation, as intended in the paper,
>> likely to happen.
>
> Interesting! Thank you all for helping a layperson like me understand.
>
> Obviously getting CC / latency control "correct" under wifi is a
> difficult problem.
>
> I am wondering if you (the experts) have confidence we can solve it --
> that is, can end-users eventually see low latency by default with
> standard gear?
>
> Or are shared transmission mediums like wifi doomed to require large
> buffers for throughput, which means low latency can't be something we
> can have "out of the box" -- ? Is sacrificing throughput for latency
> required for "always low" latency on wifi?
To a certain extent, yes. However, this is orthogonal to the congestion
control being used: WiFi gets its high throughput from large
aggregates (e.g., 802.11ac significantly increases the maximum allowed
aggregation size compared to 802.11n). Because there's a fixed overhead
for each transmission, the only way you can achieve the maximum
theoretical throughput is by filling the aggregates, and if you do that
while there are a lot of users contending for the medium, you will end
up hurting latency.
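
To make the overhead argument concrete, here is a minimal Python
sketch. The per-transmission overhead value (preamble, contention,
acknowledgement) is an illustrative assumption, not a measured 802.11
figure; the point is only that small aggregates amortize that fixed
cost poorly:

    # Fraction of airtime carrying payload for a single transmission,
    # given a fixed per-transmission overhead (illustrative value only).
    FIXED_OVERHEAD_US = 100.0  # assumed overhead, in microseconds

    def airtime_efficiency(aggregate_us):
        return aggregate_us / (aggregate_us + FIXED_OVERHEAD_US)

    for agg_us in (250, 1000, 4000):  # 0.25 ms, 1 ms, 4 ms aggregates
        print(f"{agg_us / 1000:.2f} ms aggregate -> "
              f"{airtime_efficiency(agg_us):.0%} payload airtime")

With these (assumed) numbers a 0.25 ms aggregate spends only ~71% of
its airtime on payload, while a 4 ms aggregate reaches ~98%, which is
why only full aggregates approach the theoretical maximum rate.
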
So really, the right thing to do in a busy network is to lower the
maximum aggregate size: if you have 20 stations waiting to transmit and
each transmits for only 1ms instead of the maximum 4ms, you only
add 20ms of delay while waiting for the other stations, instead of 80ms
(best case, not counting any backoff from collisions, queueing delay,
etc.).
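
The same arithmetic as a tiny sketch, under the same best-case
assumptions (no collisions, backoff, or queueing delay):

    # Best-case delay added while each of the other stations takes one
    # turn on the medium before you transmit again.
    def added_delay_ms(stations, txop_ms):
        return stations * txop_ms

    for txop_ms in (1, 4):  # 1 ms aggregates vs the 4 ms maximum
        print(f"{txop_ms} ms x 20 stations = "
              f"{added_delay_ms(20, txop_ms)} ms added delay")
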
-Toke