[Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes

Bjørn Ivar Teigen bjorn at domos.no
Tue Nov 2 10:02:47 EDT 2021


Thanks for the feedback! Please find my answers below.

> > A direct consequence is that we need AQMs at all points in the internet
> > where congestion is likely to happen, even for short periods, to mitigate
> > the impact of latency spikes. Here I am assuming we ultimately want an
> > Internet without lag-spikes, not just low latency on average.
>
> This was something I was wondering when reading your paper. How will
> AQMs help? When the rate drops the AQM may be able to react faster, but
> it won't be able to affect the flow xmit rate any faster than your
> theoretical "perfect" propagation time...


> So in effect, your paper seems to be saying "a flow that saturates the
> link cannot avoid latency spikes from self-congestion when the link rate
> drops, and the only way we can avoid this interfering with *other* flows
> is by using FQ"? Or?
>

Yes, I agree, and that's a very nice way to put it. I would phrase the AQM's
role as "mitigating the negative effects of transient congestion".
Isolating flows from each other, for instance with FQ, is an important part
of that in my opinion.

I'm sure this is familiar to people on this list, but I'll summarize my
views anyway.
Whenever a queue forms, the options are very limited: we can choose to drop
packets, and we can choose the order in which the queue is emptied.
FIFO service is one option, but we can also choose some other scheduling
and/or packet-loss scheme. FQ is just one specific choice here, where the
latency is divided roughly equally among the different flows.

I really like the following analogy for making this point clear: "If you
have a bag of 100 balls and withdraw one every second, how long does it
take to empty the bag? Now color half the balls red and the other half
blue, and pick the red balls first. It still takes 100 seconds to empty
the bag." The same principle holds for packet scheduling, except that we
can also drop packets (and thus avoid paying the delay cost of forwarding
them). Once a queue has formed, the latency and packet loss *must* be
divided among the packets in the queue, and it's up to the scheduling part
of the AQM to make that choice. What the correct choice is will depend on
many things.
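
To put toy numbers on the analogy (a sketch in Python; the packet counts
and the one-packet-per-second service rate are made up), the drain time is
the same for any service order, and scheduling only changes how the waiting
is split between the "red" and "blue" flows:

def per_flow_mean_delay(packets):
    # Serve one packet per second; record each packet's departure time.
    delays = {"red": [], "blue": []}
    for t, color in enumerate(packets, start=1):
        delays[color].append(t)
    return len(packets), {c: sum(d) / len(d) for c, d in delays.items()}

interleaved = ["red", "blue"] * 50        # FIFO with the two flows mixed
red_first = ["red"] * 50 + ["blue"] * 50  # strict priority for red

for name, order in [("interleaved", interleaved), ("red first", red_first)]:
    total, means = per_flow_mean_delay(order)
    print(f"{name:12s} drain time = {total}s  mean delays = {means}")

Both orders drain in 100 seconds. Interleaving gives each flow a mean delay
of about 50 seconds, while serving red first gives red about 25 and blue
about 75. The only way to make the drain time shorter is to drop packets.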


> Also, another follow-on question that might be worth looking into is
> short flows: Many flows fit entirely in an IW, or at least never exit
> slow start. So how does that interact with what you're describing? Is it
> possible to quantify this effect?
>

Thanks, this seems interesting! I'll have a think about this and get back
to you.

Cheers,
-- 
Bjørn Ivar Teigen
Head of Research
+47 47335952 | bjorn at domos.no | www.domos.no
WiFi Slicing by Domos