From: Matt Mathis
Date: Thu, 8 Jul 2021 06:29:09 -0700
Subject: Re: Abandoning Window-based CC Considered Harmful (was Re: [Bloat] Bechtolsheim)
To: "Bless, Roland (TM)"
Cc: Dave Taht, cerowrt-devel@lists.bufferbloat.net, bloat

I think there is something missing from your model. I just scanned your paper and noticed that you made no mention of rounding errors, nor of some details around the drain-phase timing. The implementation guarantees that the actual average rate across the combined BW probe and drain is strictly less than the measured maxBW, and that the flight size comes back down to minRTT*maxBW before returning to unity pacing gain. In some sense these checks are redundant, but if you don't do them, it is absolutely true that you are at risk of seeing divergent behaviors.
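A minimal sketch of those two checks (the names, units, and structure here are assumptions for illustration, not the actual BBR source):

```python
# Sketch of the two exit checks described above. All names and units are
# illustrative assumptions; this is not the actual BBR implementation.

def bdp_bytes(max_bw_bps: float, min_rtt_s: float) -> float:
    """Bandwidth-delay product in bytes: minRTT * maxBW."""
    return max_bw_bps * min_rtt_s / 8.0

def may_return_to_unity_gain(probe_plus_drain_bytes: float,
                             elapsed_s: float,
                             inflight_bytes: float,
                             max_bw_bps: float,
                             min_rtt_s: float) -> bool:
    # Check 1: the average rate across the combined BW probe and drain
    # phases must be strictly less than the measured maxBW, so the pair
    # of phases cannot leave net queue behind, rounding errors included.
    avg_rate_bps = 8.0 * probe_plus_drain_bytes / elapsed_s
    if avg_rate_bps >= max_bw_bps:
        return False
    # Check 2: the flight size must be back down to minRTT * maxBW
    # (one BDP) before pacing gain returns to 1.0.
    return inflight_bytes <= bdp_bytes(max_bw_bps, min_rtt_s)
```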
That said, it is also true that multi-stream BBR behavior is quite complicated and needs more queue space than a single stream. This complicates the story around the traditional workaround of using multiple streams to compensate for Reno and CUBIC lameness at larger scales (which are ordinary scales today). Multiple streams do not help BBR throughput, and they raise the queue occupancy, to the detriment of other users.

And yes, in my presentation I described the core BBR algorithms as a framework, which might be extended to incorporate many additional algorithms if they provide optimal control in some settings. And yes, several are already present in BBRv2.

Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay

We must not tolerate intolerance;
    however our response must be carefully measured:
        too strong would be hypocritical and risks spiraling out of control;
        too weak risks being mistaken for tacit approval.


On Thu, Jul 8, 2021 at 4:24 AM Bless, Roland (TM) <roland.bless@kit.edu> wrote:

> Hi Matt,
>
> On 08.07.21 at 00:38 Matt Mathis wrote:
>
> Actually BBR does have a window-based backup, which normally only comes
> into play during load spikes and at very short RTTs. It defaults to
> 2*minRTT*maxBW, which is twice the steady-state window in its normal
> paced mode.
>
> So yes, BBR follows option b), but I guess that you are referring to
> BBRv1 here. We have shown in [1, Sec. III] that BBRv1 flows will *always*
> run (conceptually) toward their above-quoted inflight cap of
> 2*minRTT*maxBW if more than one BBR flow is present at the bottleneck.
> So strictly speaking, "which *normally only* comes into play during load
> spikes and at very short RTTs" isn't true for multiple BBRv1 flows.
>
> It seems that in BBRv2 there are many more mechanisms present that try
> to control the amount of inflight data more tightly, and the new "cap"
> is at 1.25 BDP.
>
> This is too large for short-queue routers in the Internet core, but it
> helps a lot with cross traffic on large-queue edge routers.
>
> Best regards,
> Roland
>
> [1] https://ieeexplore.ieee.org/document/8117540
>
>
> On Wed, Jul 7, 2021 at 3:19 PM Bless, Roland (TM) <roland.bless@kit.edu>
> wrote:
>
>> Hi Matt,
>>
>> [sorry for the late reply, overlooked this one]
>>
>> Please see my comments inline.
>>
>> On 02.07.21 at 21:46 Matt Mathis via Bloat wrote:
>>
>> The argument is absolutely correct for Reno, CUBIC and all other
>> self-clocked protocols. One of the core assumptions in Jacobson88 was
>> that the clock for the entire system comes from packets draining
>> through the bottleneck queue. In this world, the clock is intrinsically
>> brittle if the buffers are too small. The drain time needs to be a
>> substantial fraction of the RTT.
>>
>> I'd like to separate the functions here a bit:
>>
>> 1) "automatic pacing" by ACK clocking
>>
>> 2) congestion-window-based operation
>>
>> I agree that the automatic pacing generated by the ACK clock
>> (function 1) is increasingly distorted these days and may consequently
>> cause micro-bursts. This can be mitigated by using paced sending, which
>> I consider very useful. However, I consider abandoning the (congestion)
>> window-based approaches with ACK feedback (function 2) as harmful:
>> a congestion window has an automatic self-stabilizing property, since
>> the ACK feedback also reflects the queuing delay, and the congestion
>> window limits the amount of inflight data. In contrast, rate-based
>> senders risk instability: two senders in an M/D/1 setting, each sending
>> at 50% of the bottleneck rate on average, both using paced sending at
>> 120% of that average rate, suffice to cause instability (the queue
>> grows without bound).
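A toy discrete-time sketch of the instability Roland describes (the on/off model and parameters here are assumptions for illustration, not taken from his paper): each sender averages exactly half the bottleneck rate but paces its bursts at 120% of that average, so the total mean arrival rate equals the service rate and the queue drifts upward instead of settling.

```python
# Toy illustration of two paced senders overwhelming an M/D/1-like queue.
# All parameters are illustrative assumptions.
import random

C = 1.0                          # bottleneck service rate (packets per tick)
PACE = 1.2 * 0.5 * C             # each sender paces at 120% of its 50% share
ON_FRACTION = (0.5 * C) / PACE   # on-time fraction that keeps the
                                 # long-run average at 50% of C

def offered_load() -> float:
    """One sender's arrivals this tick: paced bursts while on, idle otherwise."""
    return PACE if random.random() < ON_FRACTION else 0.0

queue = 0.0
for tick in range(1, 1_000_001):
    queue += offered_load() + offered_load()  # two independent senders
    queue = max(0.0, queue - C)               # bottleneck drains at rate C
    if tick % 200_000 == 0:
        print(f"tick {tick:>9}: queue ~ {queue:10.1f} packets")

# Mean total arrivals per tick equal C exactly, so the queue behaves like
# an unbiased random walk reflected at zero: it keeps drifting upward over
# time rather than converging to a stationary level.
```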
>> IMHO, two approaches seem to be useful:
>> a) congestion-window-based operation with paced sending
>> b) rate-based/paced sending with a limit on the amount of inflight data
>>
>> However, we have reached the point where we need to discard that
>> requirement. One of the side points of BBR is that in many environments
>> it is cheaper to burn serving CPU to pace into short-queue networks than
>> it is to "right size" the network queues.
>>
>> The fundamental problem with the old way is that in some contexts the
>> buffer memory has to beat Moore's law, because to maintain constant
>> drain time, the memory size and BW both have to scale with the link
>> (laser) BW.
>>
>> See the slides I gave at the Stanford Buffer Sizing workshop in
>> December 2019: Buffer Sizing: Position Paper
>>
>> Thanks for the pointer. I don't quite get the point that the buffer
>> must have a certain size to keep the ACK clock stable: in the case of a
>> non-application-limited sender, a very small buffer suffices to let the
>> ACK clock run steadily. The large buffers were mainly required by
>> loss-based CCs to let the standing queue build up that keeps the
>> bottleneck busy during CWnd reduction after packet loss, thereby
>> keeping the (bottleneck link) utilization high.
>>
>> Regards,
>> Roland
>>
>> Note that we are talking about the DC and Internet core. At the edge,
>> BW is low enough that memory is relatively cheap. In some sense
>> bufferbloat came about because memory is too cheap in these
>> environments.
>>
>> Thanks,
>> --MM--
>> The best way to predict the future is to create it. - Alan Kay
>>
>> On Fri, Jul 2, 2021 at 9:59 AM Stephen Hemminger
>> <stephen@networkplumber.org> wrote:
>>
>>> On Fri, 2 Jul 2021 09:42:24 -0700
>>> Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>> > "Debunking Bechtolsheim credibly would get a lot of attention to the
>>> > bufferbloat cause, I suspect." - dpreed
>>> >
>>> > "Why Big Data Needs Big Buffer Switches" -
>>> > http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf
>>>
>>> Also, a lot depends on the TCP congestion control algorithm being
>>> used. They are using NewReno, which only researchers use in real life.
>>>
>>> Even TCP CUBIC has gone through several revisions. In my experience,
>>> the NS-2 models don't correlate well with real-world behavior.
>>>
>>> In real-world tests, TCP CUBIC will consume any buffer it sees at a
>>> congested link. Maybe that is what they mean by the capture effect.
>>>
>>> There is also a weird oscillation effect with multiple streams, where
>>> one flow will take the buffer, then see a packet loss and back off,
>>> and the other flow will take over the buffer until it sees loss.
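Coming back to Roland's options (a) and (b) above, here is a minimal sketch of option (b): rate-based/paced sending with a hard inflight cap, in the spirit of the 2*minRTT*maxBW backup window discussed earlier in the thread. All names and details are illustrative assumptions, not any particular stack's API.

```python
# Sketch of a paced sender whose open-loop pacing schedule is backstopped
# by a window-style inflight cap. Names and details are assumptions.

class PacedSender:
    def __init__(self, max_bw_bps: float, min_rtt_s: float, mss: int = 1500):
        self.max_bw_bps = max_bw_bps   # measured bottleneck bandwidth
        self.min_rtt_s = min_rtt_s     # measured minimum round-trip time
        self.mss = mss                 # segment size in bytes
        self.inflight_bytes = 0

    def inflight_cap_bytes(self) -> float:
        # Window-based backup: cap inflight at 2 * minRTT * maxBW (two BDPs),
        # so the pacing schedule cannot run away when ACKs stall.
        return 2.0 * self.min_rtt_s * self.max_bw_bps / 8.0

    def pacing_interval_s(self) -> float:
        # Gap between segment sends that realizes the target rate.
        return self.mss * 8.0 / self.max_bw_bps

    def can_send(self) -> bool:
        # The rate decides *when* to send; the cap decides *whether* we may.
        return self.inflight_bytes + self.mss <= self.inflight_cap_bytes()

    def on_send(self) -> None:
        self.inflight_bytes += self.mss

    def on_ack(self, acked_bytes: int) -> None:
        self.inflight_bytes = max(0, self.inflight_bytes - acked_bytes)
```

The cap restores the self-limiting property of a window: if feedback stops arriving, sending stops after at most two BDPs, no matter what the pacing schedule says.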