* [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes
@ 2021-11-02 10:46 Bjørn Ivar Teigen
2021-11-02 12:14 ` Toke Høiland-Jørgensen
2021-11-02 14:07 ` Dave Taht
0 siblings, 2 replies; 5+ messages in thread
From: Bjørn Ivar Teigen @ 2021-11-02 10:46 UTC (permalink / raw)
To: bloat
Hi everyone,
I've recently published a paper on Arxiv which is relevant to the
Bufferbloat problem. I hope it will be helpful in convincing AQM doubters.
Discussions at the recent IAB workshop inspired me to write a detailed
argument for why end-to-end methods cannot avoid latency spikes. I couldn't
find this argument in the literature.
Here is the Arxiv link: https://arxiv.org/abs/2111.00488
A direct consequence is that we need AQMs at all points in the internet
where congestion is likely to happen, even for short periods, to mitigate
the impact of latency spikes. Here I am assuming we ultimately want an
Internet without lag-spikes, not just low latency on average.
Hope you find this interesting!
--
Bjørn Ivar Teigen
Head of Research
+47 47335952 | bjorn@domos.no | www.domos.no
WiFi Slicing by Domos
* Re: [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes
2021-11-02 10:46 [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes Bjørn Ivar Teigen
@ 2021-11-02 12:14 ` Toke Høiland-Jørgensen
2021-11-02 14:02 ` Bjørn Ivar Teigen
2021-11-02 14:07 ` Dave Taht
1 sibling, 1 reply; 5+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-11-02 12:14 UTC (permalink / raw)
To: Bjørn Ivar Teigen, bloat
Bjørn Ivar Teigen <bjorn@domos.no> writes:
> Hi everyone,
>
> I've recently published a paper on Arxiv which is relevant to the
> Bufferbloat problem. I hope it will be helpful in convincing AQM doubters.
> Discussions at the recent IAB workshop inspired me to write a detailed
> argument for why end-to-end methods cannot avoid latency spikes. I couldn't
> find this argument in the literature.
>
> Here is the Arxiv link: https://arxiv.org/abs/2111.00488
I found this a very approachable paper expressing a phenomenon that
should be no surprise to anyone on this list: when flow rate drops,
latency spikes.
> A direct consequence is that we need AQMs at all points in the internet
> where congestion is likely to happen, even for short periods, to mitigate
> the impact of latency spikes. Here I am assuming we ultimately want an
> Internet without lag-spikes, not just low latency on average.
This was something I was wondering when reading your paper. How will
AQMs help? When the rate drops, the AQM may be able to react faster, but
it won't be able to affect the flow xmit rate any faster than your
theoretical "perfect" propagation time...
So in effect, your paper seems to be saying "a flow that saturates the
link cannot avoid latency spikes from self-congestion when the link rate
drops, and the only way we can avoid this interfering with *other* flows
is by using FQ"? Or?
Also, another follow-on question that might be worth looking into is
short flows: Many flows fit entirely in an IW, or at least never exit
slow start. So how does that interact with what you're describing? Is it
possible to quantify this effect?
-Toke
* Re: [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes
2021-11-02 12:14 ` Toke Høiland-Jørgensen
@ 2021-11-02 14:02 ` Bjørn Ivar Teigen
0 siblings, 0 replies; 5+ messages in thread
From: Bjørn Ivar Teigen @ 2021-11-02 14:02 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, bloat
Thanks for the feedback! Please find my answers below.
> > A direct consequence is that we need AQMs at all points in the internet
> > where congestion is likely to happen, even for short periods, to mitigate
> > the impact of latency spikes. Here I am assuming we ultimately want an
> > Internet without lag-spikes, not just low latency on average.
>
> This was something I was wondering when reading your paper. How will
> AQMs help? When the rate drops the AQM may be able to react faster, but
> it won't be able to affect the flow xmit rate any faster than your
> theoretical "perfect" propagation time...
> So in effect, your paper seems to be saying "a flow that saturates the
> link cannot avoid latency spikes from self-congestion when the link rate
> drops, and the only way we can avoid this interfering with *other* flows
> is by using FQ"? Or?
>
Yes, I agree, and that's a very nice way to put it. I would phrase the AQM's
role as "mitigating the negative effects of transient congestion".
Isolating flows from each other, for instance with FQ, is an important part
of that in my opinion.
I'm sure this is familiar to people on this list, but I'll summarize my
views anyway.
Whenever a queue forms, the options are very limited: we can choose to drop
packets, and we can choose the order in which the queue is emptied.
FIFO service is one option, but we can also choose some other scheduling
and/or packet-loss scheme. FQ is just a specific choice here, one where
latency is divided roughly equally among the different flows.
I really like the following analogy to make this point very clear: "If you
have a bag of 100 balls and withdraw one every second, how long does it
take to empty the bag? Now, we can color half the balls red and the other
half blue, and then pick the red balls first. It still takes 100 seconds to
empty the bag." The same principle holds for packet scheduling, only we can
drop packets as well (and thus not pay the delay cost of forwarding them).
Once a queue has formed, the latency and packet loss *must* be divided
among the different packets in the queue, and it's up to the scheduling
part of the AQM to make that choice. What the correct choice is will depend
on many things.
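To make this concrete, here is a tiny Python sketch of the bag-of-balls
example (mine, purely illustrative, not from the paper): it queues 50 "red"
and 50 "blue" packets, drains one packet per tick under three different
schedulers, and reports when the queue empties and how long each colour
waits on average.

from statistics import mean

def drain(order):
    """Serve one packet per tick; return (drain time, mean wait per flow)."""
    waits = {"red": [], "blue": []}
    for tick, flow in enumerate(order, start=1):
        waits[flow].append(tick)  # this packet leaves the queue at this tick
    return len(order), {flow: mean(w) for flow, w in waits.items()}

red, blue = ["red"] * 50, ["blue"] * 50
schedules = {
    "FIFO (red burst first)": red + blue,
    "strict priority (blue first)": blue + red,
    "round-robin (FQ-like)": [f for pair in zip(red, blue) for f in pair],
}
for name, order in schedules.items():
    total, per_flow = drain(order)
    print(f"{name}: empty after {total} ticks, "
          f"mean wait red={per_flow['red']:.1f}, blue={per_flow['blue']:.1f}")

The queue takes 100 ticks to empty in every case; the scheduler only decides
how the waiting is split between red and blue, and dropping is the one lever
that actually reduces the total.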
> Also, another follow-on question that might be worth looking into is
> short flows: Many flows fit entirely in an IW, or at least never exit
> slow start. So how does that interact with what you're describing? Is it
> possible to quantify this effect?
>
Thanks, this seems interesting! I'll have a think about this and get back
to you.
Cheers,
--
Bjørn Ivar Teigen
Head of Research
+47 47335952 | bjorn@domos.no | www.domos.no
WiFi Slicing by Domos
* Re: [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes
2021-11-02 10:46 [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes Bjørn Ivar Teigen
2021-11-02 12:14 ` Toke Høiland-Jørgensen
@ 2021-11-02 14:07 ` Dave Taht
2021-11-03 16:13 ` Bjørn Ivar Teigen
1 sibling, 1 reply; 5+ messages in thread
From: Dave Taht @ 2021-11-02 14:07 UTC (permalink / raw)
To: Bjørn Ivar Teigen; +Cc: bloat
I am very pre-coffee. Something that could build on this would involve
FQ. More I cannot say, til more coffee.
On Tue, Nov 2, 2021 at 3:56 AM Bjørn Ivar Teigen <bjorn@domos.no> wrote:
>
> Hi everyone,
>
> I've recently published a paper on Arxiv which is relevant to the Bufferbloat problem. I hope it will be helpful in convincing AQM doubters.
> Discussions at the recent IAB workshop inspired me to write a detailed argument for why end-to-end methods cannot avoid latency spikes. I couldn't find this argument in the literature.
>
> Here is the Arxiv link: https://arxiv.org/abs/2111.00488
>
> A direct consequence is that we need AQMs at all points in the internet where congestion is likely to happen, even for short periods, to mitigate the impact of latency spikes. Here I am assuming we ultimately want an Internet without lag-spikes, not just low latency on average.
>
> Hope you find this interesting!
>
> --
> Bjørn Ivar Teigen
> Head of Research
> +47 47335952 | bjorn@domos.no | www.domos.no
> WiFi Slicing by Domos
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
I tried to build a better future, a few times:
https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
Dave Täht CEO, TekLibre, LLC
* Re: [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes
2021-11-02 14:07 ` Dave Taht
@ 2021-11-03 16:13 ` Bjørn Ivar Teigen
0 siblings, 0 replies; 5+ messages in thread
From: Bjørn Ivar Teigen @ 2021-11-03 16:13 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
Jonathan Newton over at Vodafone Group made some interesting observations
about what happens when two instances of the optimal congestion controller
interact through a shared FIFO queue:
> If we take two flows E and F; E is 90% of bandwidth and F 10%; the time
> for the congestion signal to reach the sender for each flow is dE and dF
> where dE is 10ms and dF 100ms. We assume no prioritisation so they must
> share the same buffer at X, and therefore share the same peak transient
> delay.
>
> We have an event at t0 where the bandwidth is halved.
>
> For time t0 to t0+dE (the first 10ms), the total rate transmitted by both
> sources is twice ((90%+10%)/50%) the output rate, so the component of the
> peak delay from this part is (2-1)*dE = 10ms
>
> For time t0+dE to t0+dF (the next 90ms), the total rate transmitted by
> both sources is (45%+10%)/50% of the output rate, so the component of the
> peak delay from this part is (1.1-1)*(dF-dE) = 9ms
>
> Making the peak transient delay 19ms.
>
This implies that moving some senders closer to the edge (e.g. CDNs) might
help reduce lag spikes for everyone. It also implies that slow-responding
flows can have a very big impact on the peak transient latency seen by
rapidly responding flows. If the bandwidth sharing ratio in the above
example is 50/50, then the peak transient delay will be 55 ms, seen by both
flows. For flow E that's a big increase from the 10 ms we'd expect if flow
E were alone. For C=3 (a threefold drop in link rate) the increase is even
worse, with flow E going from 20 ms to 100 ms when sharing the link with
flow F.
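Here is a small Python sketch of that arithmetic (my own restatement, not
code from the paper), assuming each flow simply cuts its send rate by the
drop factor C one feedback delay after the event. It reproduces the 19 ms,
55 ms and 10 ms figures above.

def peak_transient_delay(flows, C):
    """
    flows: list of (share_of_old_capacity, feedback_delay_ms) pairs.
    C:     factor by which the link rate drops at t = 0.
    Each flow keeps its old rate until its feedback delay has elapsed,
    then cuts that rate by the factor C. Returns the peak queueing delay
    in ms, measured at the new, reduced drain rate.
    """
    times = sorted({d for _, d in flows})
    delay, prev = 0.0, 0.0
    for t in times:
        # Aggregate arrival rate during [prev, t), in units of the new capacity.
        # A flow that has not yet responded is still sending at its old rate,
        # which is share*C when measured against the reduced capacity.
        rate = sum(share * (C if d >= t else 1.0) for share, d in flows)
        delay += max(rate - 1.0, 0.0) * (t - prev)
        prev = t
    return delay

print(peak_transient_delay([(0.90, 10), (0.10, 100)], C=2))  # 19.0 (Jonathan's example)
print(peak_transient_delay([(0.50, 10), (0.50, 100)], C=2))  # 55.0 (50/50 split)
print(peak_transient_delay([(1.00, 10)], C=2))               # 10.0 (flow E alone)

In this model the slowest flow dominates the peak whenever it holds a
non-trivial share, because its excess keeps the queue growing for its full
feedback delay.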
- Bjørn
On Tue, 2 Nov 2021 at 14:08, Dave Taht <dave.taht@gmail.com> wrote:
> I am very pre-coffee. Something that could build on this would involve
> FQ. More I cannot say, til more coffee.
>
> On Tue, Nov 2, 2021 at 3:56 AM Bjørn Ivar Teigen <bjorn@domos.no> wrote:
> >
> > Hi everyone,
> >
> > I've recently published a paper on Arxiv which is relevant to the
> Bufferbloat problem. I hope it will be helpful in convincing AQM doubters.
> > Discussions at the recent IAB workshop inspired me to write a detailed
> argument for why end-to-end methods cannot avoid latency spikes. I couldn't
> find this argument in the literature.
> >
> > Here is the Arxiv link: https://arxiv.org/abs/2111.00488
> >
> > A direct consequence is that we need AQMs at all points in the internet
> where congestion is likely to happen, even for short periods, to mitigate
> the impact of latency spikes. Here I am assuming we ultimately want an
> Internet without lag-spikes, not just low latency on average.
> >
> > Hope you find this interesting!
> >
> > --
> > Bjørn Ivar Teigen
> > Head of Research
> > +47 47335952 | bjorn@domos.no | www.domos.no
> > WiFi Slicing by Domos
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> I tried to build a better future, a few times:
> https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
>
> Dave Täht CEO, TekLibre, LLC
>
--
Bjørn Ivar Teigen
Head of Research
+47 47335952 | bjorn@domos.no | www.domos.no
WiFi Slicing by Domos