From: "Bless, Roland (TM)" <roland.bless@kit.edu>
To: Matt Mathis <mattmathis@google.com>
Cc: Dave Taht <dave.taht@gmail.com>,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] Abandoning Window-based CC Considered Harmful (was Re: [Bloat] Bechtolschiem)
Date: Thu, 8 Jul 2021 13:24:51 +0200 [thread overview]
Message-ID: <393e9ca6-f9f3-1826-9fbc-6d36871223d8@kit.edu> (raw)
In-Reply-To: <CAH56bmAwCSc82VkZQxdf2DW=NvUVjh+utV5aLGtN=WyUqG=QLg@mail.gmail.com>
Hi Matt,
On 08.07.21 at 00:38 Matt Mathis wrote:
> Actually BBR does have a window based backup, which normally only
> comes into play during load spikes and at very short RTTs. It
> defaults to 2*minRTT*maxBW, which is twice the steady state window in
> its normal paced mode.
So yes, BBR follows option b), but I guess that you are referring to
BBRv1 here.
We have shown in [1, Sec. III] that BBRv1 flows will *always* run
(conceptually) toward the above-quoted inflight cap of
2*minRTT*maxBW if more than one BBR flow is present at the bottleneck.
So strictly speaking, "which *normally only* comes
into play during load spikes and at very short RTTs" isn't true for
multiple BBRv1 flows.
It seems that BBRv2 adds many more mechanisms that try to control
the amount of inflight data more tightly, and the new "cap" is at
1.25 BDP.
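The consequence of every flow driving toward its 2*minRTT*maxBW cap can be sketched with back-of-the-envelope arithmetic. This is a simplified reading of [1] (each flow's bandwidth estimate is assumed to converge to its fair share), and the link numbers are made up for illustration:

```python
# Hypothetical numbers, not from the thread: a 100 Mbit/s bottleneck
# shared by 4 BBRv1 flows, minRTT = 40 ms.
BOTTLENECK_BPS = 100e6 / 8      # bottleneck rate in bytes/s
MIN_RTT = 0.040                 # seconds
N_FLOWS = 4

bdp = BOTTLENECK_BPS * MIN_RTT  # path BDP in bytes (500 kB here)

# Assume each flow estimates roughly its fair share of the bandwidth,
# so its inflight cap is 2 * minRTT * (maxBW / N).
cap_per_flow = 2 * MIN_RTT * (BOTTLENECK_BPS / N_FLOWS)
aggregate_inflight = N_FLOWS * cap_per_flow   # sums to 2 * BDP

# Inflight beyond one BDP cannot be "on the wire" and sits in the
# bottleneck queue instead:
standing_queue = aggregate_inflight - bdp

print(f"BDP:            {bdp / 1e3:.0f} kB")
print(f"aggregate cap:  {aggregate_inflight / 1e3:.0f} kB")
print(f"standing queue: {standing_queue / 1e3:.0f} kB (= one full BDP)")
```

Note the standing queue is independent of N under this simplification: the per-flow caps shrink with the fair share, but they always sum to 2 BDP.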
> This is too large for short queue routers in the Internet core, but it
> helps a lot with cross traffic on large queue edge routers.
Best regards,
Roland
[1] https://ieeexplore.ieee.org/document/8117540
>
> On Wed, Jul 7, 2021 at 3:19 PM Bless, Roland (TM)
> <roland.bless@kit.edu <mailto:roland.bless@kit.edu>> wrote:
>
> Hi Matt,
>
> [sorry for the late reply, overlooked this one]
>
> please, see comments inline.
>
> On 02.07.21 at 21:46 Matt Mathis via Bloat wrote:
>> The argument is absolutely correct for Reno, CUBIC and all
>> other self-clocked protocols. One of the core assumptions in
>> Jacobson88, was that the clock for the entire system comes from
>> packets draining through the bottleneck queue. In this world,
>> the clock is intrinsically brittle if the buffers are too small.
>> The drain time needs to be a substantial fraction of the RTT.
> I'd like to separate the functions here a bit:
>
> 1) "automatic pacing" by ACK clocking
>
> 2) congestion-window-based operation
>
> I agree that the automatic pacing generated by the ACK clock
> (function 1) is increasingly
> distorted these days and may consequently cause micro bursts.
> This can be mitigated by using paced sending, which I consider
> very useful.
> However, I consider abandoning the (congestion) window-based
> approaches
> with ACK feedback (function 2) as harmful:
> a congestion window has an automatic self-stabilizing property,
> since the ACK feedback also reflects the queuing delay, and the
> congestion window limits the amount of inflight data.
> In contrast, rate-based senders risk instability: two senders in
> an M/D/1 setting, each sending at 50% of the bottleneck rate on
> average, both using paced sending at 120% of the average rate,
> suffice to cause instability (the queue grows without bound).
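Roland's instability claim can be sketched numerically. The toy model below is a slotted-queue simplification rather than a true M/D/1 system (two on/off senders whose on-rate is 120% of their 0.5*C average, so the offered load averages exactly C); all parameters are illustrative:

```python
import random

random.seed(1)

C = 1.0          # bottleneck service per time slot (normalized)
DUTY = 5 / 6     # fraction of slots each sender is "on"
PEAK = 0.6 * C   # on-rate: 120% of the 0.5*C per-sender average
SLOTS = 200_000

queue = 0.0
peak_queue = 0.0
for _ in range(SLOTS):
    # Each of the two senders independently transmits this slot
    # with probability DUTY, paced at PEAK while on.
    arrivals = sum(PEAK for _ in range(2) if random.random() < DUTY)
    queue = max(0.0, queue + arrivals - C)   # serve C per slot, no drops
    peak_queue = max(peak_queue, queue)

print(f"queue after {SLOTS} slots: {queue:.1f}, peak: {peak_queue:.1f}")
```

Because the offered load averages exactly C and nothing feeds back to the senders, the queue performs a reflected zero-drift random walk: its excursions grow on the order of sqrt(t) without limit. A congestion window (or an inflight cap, option b) would bound the backlog by construction.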
>
> IMHO, two approaches seem to be useful:
> a) congestion-window-based operation with paced sending
> b) rate-based/paced sending with limiting the amount of inflight data
>
>>
>> However, we have reached the point where we need to discard that
>> requirement. One of the side points of BBR is that in many
>> environments it is cheaper to burn serving CPU to pace into short
>> queue networks than it is to "right size" the network queues.
>>
>> The fundamental problem with the old way is that in some contexts
>> the buffer memory has to beat Moore's law, because to maintain
>> constant drain time the memory size and BW both have to scale
>> with the link (laser) BW.
>>
>> See the slides I gave at the Stanford Buffer Sizing workshop
>> december 2019: Buffer Sizing: Position Paper
>> <https://docs.google.com/presentation/d/1VyBlYQJqWvPuGnQpxW4S46asHMmiA-OeMbewxo_r3Cc/edit#slide=id.g791555f04c_0_5>
>>
>>
> Thanks for the pointer. I don't quite get the point that the
> buffer must have a certain size to keep the ACK clock stable:
> in the case of a non-application-limited sender, a very small
> buffer suffices to let the ACK clock run steadily. The large
> buffers were mainly required for loss-based CCs to build up the
> standing queue that keeps the bottleneck busy during CWnd
> reduction after packet loss, thereby keeping the (bottleneck
> link) utilization high.
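Roland's point about large buffers serving loss-based CCs is the classic single-flow buffer-sizing rule of thumb, which can be made concrete with a short calculation (the link numbers here are made up, chosen only to show why buffer demand scales with link bandwidth, as in Matt's Moore's-law remark):

```python
# Rule-of-thumb sketch for a single Reno/CUBIC-style flow.
C = 10e9 / 8          # 10 Gbit/s bottleneck, in bytes/s
RTT = 0.080           # 80 ms round-trip time
bdp = C * RTT         # bandwidth-delay product: 100 MB

# The flow grows cwnd until the buffer overflows, i.e. until
# cwnd = BDP + buffer, then halves cwnd on loss. The link stays
# busy through the reduction only if the halved window still
# covers one BDP:
#   (BDP + buffer) / 2 >= BDP   =>   buffer >= BDP
buffer_needed = bdp

print(f"buffer needed for full utilization: {buffer_needed / 1e6:.0f} MB")
```

Since the required buffer equals the BDP, doubling the link rate at fixed RTT doubles both the memory size and the memory bandwidth the buffer must provide, which is the scaling pressure Matt's slides describe.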
>
> Regards,
>
> Roland
>
>
>> Note that we are talking about DC and Internet core. At the
>> edge, BW is low enough where memory is relatively cheap. In
>> some sense BB came about because memory is too cheap in these
>> environments.
>>
>> Thanks,
>> --MM--
>> The best way to predict the future is to create it. - Alan Kay
>>
>> We must not tolerate intolerance;
>> however our response must be carefully measured:
>> too strong would be hypocritical and risks spiraling
>> out of control;
>> too weak risks being mistaken for tacit approval.
>>
>>
>> On Fri, Jul 2, 2021 at 9:59 AM Stephen Hemminger
>> <stephen@networkplumber.org <mailto:stephen@networkplumber.org>>
>> wrote:
>>
>> On Fri, 2 Jul 2021 09:42:24 -0700
>> Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>>
>> wrote:
>>
>> > "Debunking Bechtolsheim credibly would get a lot of
>> attention to the
>> > bufferbloat cause, I suspect." - dpreed
>> >
>> > "Why Big Data Needs Big Buffer Switches" -
>> >
>> http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf
>> <http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf>
>> >
>>
>> Also, a lot depends on the TCP congestion control algorithm
>> being used.
>> They are using NewReno, which only researchers use in real life.
>>
>> Even TCP Cubic has gone through several revisions. In my
>> experience, the
>> NS-2 models don't correlate well to real world behavior.
>>
>> In real world tests, TCP Cubic will consume any buffer it
>> sees at a
>> congested link. Maybe that is what they mean by capture effect.
>>
>> There is also a weird oscillation effect with multiple
>> streams, where one
>> flow will take the buffer, then see a packet loss and back
>> off, the
>> other flow will take over the buffer until it sees loss.
>>
>
Thread overview: 26+ messages
2016-06-06 15:29 [Cerowrt-devel] trying to make sense of what switch vendors say wrt buffer bloat Eric Johansson
2016-06-06 16:53 ` Toke Høiland-Jørgensen
2016-06-06 17:46 ` Jonathan Morton
2016-06-06 18:37 ` Mikael Abrahamsson
2016-06-06 21:16 ` Ketan Kulkarni
2016-06-07 2:52 ` dpreed
2016-06-07 2:58 ` dpreed
2016-06-07 10:46 ` Mikael Abrahamsson
2016-06-07 14:46 ` Dave Taht
2016-06-07 17:51 ` Eric Johansson
2016-06-10 21:45 ` dpreed
2016-06-11 1:36 ` Jonathan Morton
2016-06-11 8:25 ` Sebastian Moeller
2021-07-02 16:42 ` [Cerowrt-devel] Bechtolschiem Dave Taht
2021-07-02 16:59 ` [Cerowrt-devel] [Bloat] Bechtolschiem Stephen Hemminger
2021-07-02 19:46 ` Matt Mathis
2021-07-07 22:19 ` [Cerowrt-devel] Abandoning Window-based CC Considered Harmful (was Re: [Bloat] Bechtolschiem) Bless, Roland (TM)
2021-07-07 22:38 ` Matt Mathis
2021-07-08 11:24 ` Bless, Roland (TM) [this message]
2021-07-08 13:29 ` Matt Mathis
2021-07-08 14:05 ` [Cerowrt-devel] " Bless, Roland (TM)
2021-07-08 14:40 ` [Cerowrt-devel] [Bloat] Abandoning Window-based CC Considered Harmful (was Bechtolschiem) Jonathan Morton
2021-07-08 20:14 ` David P. Reed
2021-07-08 13:29 ` Neal Cardwell
2021-07-02 20:28 ` [Cerowrt-devel] [Bloat] Bechtolschiem Jonathan Morton
2016-06-07 22:31 ` [Cerowrt-devel] trying to make sense of what switch vendors say wrt buffer bloat Valdis.Kletnieks