From: Neal Cardwell
Date: Thu, 8 Jul 2021 11:47:40 -0400
To: "Bless, Roland (TM)" <roland.bless@kit.edu>
Cc: Matt Mathis, bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Abandoning Window-based CC Considered Harmful (was Re: Bechtolschiem)

On Thu, Jul 8, 2021 at 10:28 AM Bless, Roland (TM) <roland.bless@kit.edu> wrote:
> Hi Neal,
>
> On 08.07.21 at 15:29 Neal Cardwell wrote:
>> On Thu, Jul 8, 2021 at 7:25 AM Bless, Roland (TM) <roland.bless@kit.edu> wrote:
>>> It seems that in BBRv2 there are many more mechanisms present
>>> that try to control the amount of inflight data more tightly and the new
>>> "cap" is at 1.25 BDP.
>>
>> To clarify, the BBRv2 cwnd cap is not 1.25*BDP. If there is no packet loss
>> or ECN, the BBRv2 cwnd cap is the same as BBRv1.
>> But if there has been packet loss, then conceptually the cwnd cap is the
>> maximum amount of data delivered in a single round trip since the last
>> packet loss (with a floor to ensure that the cwnd does not decrease by
>> more than 30% per round trip with packet loss, similar to CUBIC's 30%
>> reduction in a round trip with packet loss). (And upon RTO the BBR (v1 or
>> v2) cwnd is reset to 1, and slow-starts upward from there.)
>
> Thanks for the clarification. I'm patiently waiting to see the BBRv2
> mechanisms coherently written up in that new BBR Internet-Draft version ;-)
> Getting this together from the "diffs" on the IETF slides or the source
> code is somewhat tedious, so I'll be very grateful for having that single
> write-up.
>
>> There is an overview of the BBRv2 response to packet loss here:
>>
>> https://datatracker.ietf.org/meeting/104/materials/slides-104-iccrg-an-update-on-bbr-00#page=18
>
> My assumption came from slide 25 of this slide set:
> the probing is terminated if inflight > 1.25 * estimated_bdp (or "hard
> ceiling" seen).
> So without experiencing more than 2% packet loss this may end up beyond
> 1.25 * estimated_bdp,

Yes, that can be the behavior when BBRv2 is probing for bandwidth, but it is
not the average or steady-state behavior.

> but would it often end at 2 * estimated_bdp?

That depends on the details of the bottleneck buffer depth, the number of
competing flows, what congestion control algorithm they are using, etc.

neal

> Best regards,
>
>  Roland
>
>>> This is too large for short queue routers in the Internet core, but it
>>> helps a lot with cross traffic on large queue edge routers.
>>>
>>> Best regards,
>>>  Roland
>>>
>>> [1] https://ieeexplore.ieee.org/document/8117540
>>>
>>> On Wed, Jul 7, 2021 at 3:19 PM Bless, Roland (TM) <roland.bless@kit.edu> wrote:
>>>
>>>> Hi Matt,
>>>>
>>>> [sorry for the late reply, overlooked this one]
>>>>
>>>> please, see comments inline.
>>>>
>>>> On 02.07.21 at 21:46 Matt Mathis via Bloat wrote:
>>>>
>>>> The argument is absolutely correct for Reno, CUBIC and all other
>>>> self-clocked protocols. One of the core assumptions in Jacobson88 was
>>>> that the clock for the entire system comes from packets draining through
>>>> the bottleneck queue. In this world, the clock is intrinsically brittle
>>>> if the buffers are too small. The drain time needs to be a substantial
>>>> fraction of the RTT.
>>>>
>>>> I'd like to separate the functions here a bit:
>>>>
>>>> 1) "automatic pacing" by ACK clocking
>>>>
>>>> 2) congestion-window-based operation
>>>>
>>>> I agree that the automatic pacing generated by the ACK clock (function 1)
>>>> is increasingly distorted these days and may consequently cause micro
>>>> bursts. This can be mitigated by using paced sending, which I consider
>>>> very useful. However, I consider abandoning the (congestion) window-based
>>>> approaches with ACK feedback (function 2) as harmful: a congestion window
>>>> has an automatic self-stabilizing property, since the ACK feedback also
>>>> reflects the queuing delay, and the congestion window limits the amount
>>>> of inflight data. In contrast, rate-based senders risk instability: two
>>>> senders in an M/D/1 setting, each sending at 50% of the bottleneck rate
>>>> on average, both using paced sending at 120% of the average rate, suffice
>>>> to cause instability (the queue grows without bound).
>>>>
>>>> IMHO, two approaches seem to be useful:
>>>> a) congestion-window-based operation with paced sending
>>>> b) rate-based/paced sending with limiting the amount of inflight data
>>>>
>>>> However, we have reached the point where we need to discard that
>>>> requirement. One of the side points of BBR is that in many environments
>>>> it is cheaper to burn serving CPU to pace into short queue networks than
>>>> it is to "right size" the network queues.
>>>>
>>>> The fundamental problem with the old way is that in some contexts the
>>>> buffer memory has to beat Moore's law, because to maintain constant
>>>> drain time the memory size and BW both have to scale with the link
>>>> (laser) BW.
>>>>
>>>> See the slides I gave at the Stanford Buffer Sizing workshop, December
>>>> 2019: Buffer Sizing: Position Paper
>>>>
>>>> Thanks for the pointer. I don't quite get the point that the buffer must
>>>> have a certain size to keep the ACK clock stable: in the case of a
>>>> non-application-limited sender, a very small buffer suffices to let the
>>>> ACK clock run steadily. The large buffers were mainly required for
>>>> loss-based CCs to let the standing queue build up that keeps the
>>>> bottleneck busy during CWnd reduction after packet loss, thereby keeping
>>>> the (bottleneck link) utilization high.
>>>>
>>>> Regards,
>>>>
>>>>  Roland
>>>>
>>>> Note that we are talking about DC and Internet core. At the edge, BW is
>>>> low enough where memory is relatively cheap. In some sense BB came about
>>>> because memory is too cheap in these environments.
>>>>
>>>> Thanks,
>>>> --MM--
>>>> The best way to predict the future is to create it. - Alan Kay
>>>>
>>>> We must not tolerate intolerance;
>>>>        however our response must be carefully measured:
>>>>            too strong would be hypocritical and risks spiraling out of control;
>>>>            too weak risks being mistaken for tacit approval.
>>>>
>>>> On Fri, Jul 2, 2021 at 9:59 AM Stephen Hemminger <stephen@networkplumber.org> wrote:
>>>>
>>>>> On Fri, 2 Jul 2021 09:42:24 -0700
>>>>> Dave Taht <dave.taht@gmail.com> wrote:
>>>>>
>>>>> > "Debunking Bechtolsheim credibly would get a lot of attention to the
>>>>> > bufferbloat cause, I suspect." - dpreed
>>>>> >
>>>>> > "Why Big Data Needs Big Buffer Switches" -
>>>>> > http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf
>>>>>
>>>>> Also, a lot depends on the TCP congestion control algorithm being used.
>>>>> They are using NewReno, which only researchers use in real life.
>>>>>
>>>>> Even TCP Cubic has gone through several revisions. In my experience, the
>>>>> NS-2 models don't correlate well to real-world behavior.
>>>>>
>>>>> In real-world tests, TCP Cubic will consume any buffer it sees at a
>>>>> congested link. Maybe that is what they mean by the capture effect.
>>>>>
>>>>> There is also a weird oscillation effect with multiple streams, where
>>>>> one flow will take the buffer, then see a packet loss and back off, and
>>>>> the other flow will take over the buffer until it sees loss.
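
To make Neal's description of the BBRv2 loss response (near the top of this
message) concrete, here is a rough Python paraphrase of the prose. It is a
sketch of the stated behavior only, not the actual BBRv2 code, and the
function names are made up for illustration:

# Rough paraphrase of the loss response described above (not the actual
# BBRv2 implementation; names are invented for illustration).

def cwnd_cap_after_lossy_round(prior_cwnd: int, delivered_in_round: int) -> int:
    """Cap cwnd at the data delivered in the last round trip with loss,
    but never cut by more than ~30% in one round (floor at 0.7 * prior)."""
    floor = int(0.7 * prior_cwnd)
    return max(delivered_in_round, floor)

def cwnd_after_rto() -> int:
    """On RTO, cwnd is reset to 1 and the flow slow-starts upward again."""
    return 1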
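
Roland's M/D/1 instability remark above can be made concrete with the
Pollaczek-Khinchine result: for an M/D/1 queue the mean backlog is
rho^2 / (2 * (1 - rho)), which diverges as the offered load rho approaches
the bottleneck capacity. Purely rate-based senders have no feedback loop
that keeps rho strictly below 1, whereas a congestion window bounds each
flow's inflight data. A minimal sketch with toy numbers of my own, not
Roland's exact two-sender scenario:

# Mean number of packets queued at an M/D/1 bottleneck (Pollaczek-Khinchine).
# Toy illustration only; it shows divergence as offered load approaches 1.

def mdl_mean_queue(rho: float) -> float:
    """Mean queue length for M/D/1 at offered load rho (diverges as rho -> 1)."""
    if rho >= 1.0:
        return float("inf")
    return (rho * rho) / (2.0 * (1.0 - rho))

if __name__ == "__main__":
    for rho in (0.5, 0.9, 0.99, 0.999, 1.0):
        print(f"load {rho:>5}: mean queue ~ {mdl_mean_queue(rho):.1f} packets")

With two senders each averaging 50% of the bottleneck rate, the offered load
is already at that boundary, and bursting at 120% of the average rate only
makes the excursions worse; a window-limited sender instead stops injecting
once its inflight cap is reached.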
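
Roland's point above about loss-based CCs needing a standing queue, and
Matt's point that buffer memory has to scale with the link bandwidth, both
come down to the classic single-flow rule of thumb: a Reno-like flow fills
the pipe plus the buffer, halves cwnd on loss, and keeps the link busy across
the halving only if the buffer holds at least one bandwidth-delay product
(B >= C * RTT), so the required memory grows linearly with the link rate.
A small sketch with made-up numbers:

# Classic buffer-sizing rule-of-thumb check (made-up numbers, illustration
# only): a single Reno-like flow keeps the bottleneck busy across a cwnd
# halving only if the buffer is at least one bandwidth-delay product.

def link_stays_busy_after_halving(bdp_pkts: float, buffer_pkts: float) -> bool:
    cwnd_at_loss = bdp_pkts + buffer_pkts    # pipe plus buffer is full at loss
    return cwnd_at_loss / 2.0 >= bdp_pkts    # post-halving inflight still covers the pipe?

if __name__ == "__main__":
    bdp = 100.0  # packets of bandwidth-delay product
    for buf in (10.0, 50.0, 100.0, 200.0):
        busy = link_stays_busy_after_halving(bdp, buf)
        print(f"buffer = {buf:5.0f} pkts (BDP = {bdp:.0f}): stays busy = {busy}")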