From: Bjørn Ivar Teigen <bjorn@domos.no>
Date: Wed, 3 Nov 2021 16:13:21 +0000
To: Dave Taht <dave.taht@gmail.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes

Jonathan Newton over at Vodafone Group made some interesting observations
about what happens when two of the optimal congestion controllers interact
through a shared FIFO queue:

> If we take two flows E and F; E is 90% of the bandwidth and F 10%; the
> time for the congestion signal to reach the sender for each flow is dE
> and dF, where dE is 10ms and dF 100ms. We assume no prioritisation, so
> they must share the same buffer at X, and therefore share the same peak
> transient delay.
>
> We have an event at t0 where the bandwidth is halved.
>
> For time t0 to t0+dE (the first 10ms), the total rate transmitted by both
> sources is twice the output rate ((90%+10%)/50%), so the component of the
> peak delay from this part is (2-1)*dE = 10ms.
>
> For time t0+dE to t0+dF (the next 90ms), the total rate transmitted by
> both sources is (45%+10%)/50% of the output rate, so the component of the
> peak delay from this part is (1.1-1)*(dF-dE) = 9ms.
>
> Making the peak transient delay 19ms.

This implies that moving some senders closer to the edge (e.g. CDNs) might
help reduce lag spikes for everyone. It also implies that slow-responding
flows can have a very big impact on the peak transient latency seen by
rapidly responding flows. If the bandwidth sharing ratio in the above
example is 50/50, then the peak transient delay is 55 ms, seen by both
flows. For flow E that is a big increase over the 10 ms we would expect if
flow E were alone. For C=3 (the bandwidth dropping to a third rather than
a half) the increase is even worse, with flow E going from 20ms to 100ms
when sharing the link with flow F. A small numerical sketch of this
arithmetic is appended at the end of this message.

- Bjørn

On Tue, 2 Nov 2021 at 14:08, Dave Taht <dave.taht@gmail.com> wrote:
> I am very pre-coffee. Something that could build on this would involve
> FQ. More I cannot say, til more coffee.
>
> On Tue, Nov 2, 2021 at 3:56 AM Bjørn Ivar Teigen <bjorn@domos.no> wrote:
> >
> > Hi everyone,
> >
> > I've recently published a paper on Arxiv which is relevant to the
> > Bufferbloat problem. I hope it will be helpful in convincing AQM
> > doubters. Discussions at the recent IAB workshop inspired me to write
> > a detailed argument for why end-to-end methods cannot avoid latency
> > spikes. I couldn't find this argument in the literature.
> >
> > Here is the Arxiv link: https://arxiv.org/abs/2111.00488
> >
> > A direct consequence is that we need AQMs at all points in the internet
> > where congestion is likely to happen, even for short periods, to
> > mitigate the impact of latency spikes. Here I am assuming we ultimately
> > want an Internet without lag-spikes, not just low latency on average.
> >
> > Hope you find this interesting!
> >
> > --
> > Bjørn Ivar Teigen
> > Head of Research
> > +47 47335952 | bjorn@domos.no | www.domos.no
> > WiFi Slicing by Domos
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> --
> I tried to build a better future, a few times:
> https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
>
> Dave Täht CEO, TekLibre, LLC

--
Bjørn Ivar Teigen
Head of Research
+47 47335952 | bjorn@domos.no | www.domos.no
WiFi Slicing by Domos
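P.S. For anyone who wants to play with these numbers, here is a small Python
sketch of the arithmetic above. This is my own illustration, not code from the
paper: it assumes the simple model the quoted text implies (each flow keeps its
pre-drop rate until its own feedback delay has elapsed, then scales that rate
down by the capacity-reduction factor C, while the shared queue's delay grows
by offered-load/new-capacity minus one per unit time), and the function and
parameter names are mine.

def peak_transient_delay(shares, feedback_delays, c):
    """Peak transient queueing delay (ms) after capacity drops by a factor c at t=0.

    shares          -- pre-drop bandwidth fractions of each flow (summing to 1)
    feedback_delays -- per-flow congestion-signal delays in ms
    c               -- capacity-reduction factor (c=2 means the bandwidth is halved)

    Assumed model: each flow sends at its old rate until its own feedback delay
    has elapsed, then scales that rate by 1/c; the shared FIFO drains at the new
    capacity, so delay grows by (offered load / new capacity - 1) per unit time.
    """
    events = sorted(set([0.0] + list(feedback_delays)))
    delay = 0.0
    for start, end in zip(events, events[1:]):
        # Offered load during [start, end), as a multiple of the new capacity:
        # flows that have not yet seen the congestion signal (d > start) still
        # send c times their fair post-drop rate.
        load = sum(s * (c if d > start else 1.0)
                   for s, d in zip(shares, feedback_delays))
        delay += max(load - 1.0, 0.0) * (end - start)
    return delay


# Jonathan's example: E has 90%, F has 10%, dE=10ms, dF=100ms, bandwidth halved.
print(round(peak_transient_delay([0.9, 0.1], [10.0, 100.0], c=2), 1))  # 19.0
# The 50/50 variant discussed above.
print(round(peak_transient_delay([0.5, 0.5], [10.0, 100.0], c=2), 1))  # 55.0
# Flow E alone, for comparison.
print(round(peak_transient_delay([1.0], [10.0], c=2), 1))              # 10.0
print(round(peak_transient_delay([1.0], [10.0], c=3), 1))              # 20.0

Running it reproduces the 19 ms and 55 ms figures above, as well as the 10 ms
(C=2) and 20 ms (C=3) figures for flow E on its own.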