<div dir="ltr">I think there is something missing from your model. I just scanned your paper and noticed that you made no mention of rounding errors, nor some details around the drain phase timing, The implementation guarantees that the actual average rate across the combined BW probe and drain is strictly less than the measured maxBW and that the flight size comes back down to minRTT*maxBW before returning to unity pacing gain. In some sense these checks are redundant, but If you don't do them, it is absolutely true that you are at risk of seeing divergent behaviors.<div><br></div><div>That said, it is also true that multi-stream BBR behavior is quite complicated and needs more queue space than single stream. This complicates the story around the traditional workaround of using multiple streams to compensate for Reno & CUBIC lameness at larger scales (ordinary scales today). Multi-stream does not help BBR throughput and raises the queue occupancy, to the detriment of other users.</div><div><br></div><div><div>And yes, in my presentation, I described the core BBR algorithms as a framework, which might be extended to incorporate many additional algorithms if they provide optimal control in some settings. And yes, several are present in BBRv2.</div><div> </div></div><div><div>Thanks,<div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr">--MM--<br>The best way to predict the future is to create it. - Alan Kay<br><br>We must not tolerate intolerance;</div><div dir="ltr"> however our response must be carefully measured: </div><div> too strong would be hypocritical and risks spiraling out of control;</div><div> too weak risks being mistaken for tacit approval.</div></div></div></div></div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jul 8, 2021 at 4:24 AM Bless, Roland (TM) <<a href="mailto:roland.bless@kit.edu">roland.bless@kit.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div>Hi Matt,</div>
<div><br>
</div>
<div>On 08.07.21 at 00:38 Matt Mathis wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Actually BBR does have a window based backup, which
normally only comes into play during load spikes and at very
short RTTs. It defaults to 2*minRTT*maxBW, which is twice the
steady state window in it's normal paced mode.</div>
</blockquote>
<p>So yes, BBR follows option b), but I guess that you are referring
to BBRv1 here. <br>
We have shown in [1, Sec. III] that BBRv1 flows will <b>always</b>
run (conceptually) toward their above-quoted inflight cap of<br>
2*minRTT*maxBW if more than one BBR flow is present at the
bottleneck. So strictly speaking, "which <b>normally only</b>
comes <br>
into play during load spikes and at very short RTTs" isn't true
for multiple BBRv1 flows.<br>
</p>
<p>It seems that BBRv2 has many more mechanisms that <br>
try to control the amount of inflight data more tightly, and
the new "cap"<br>
is at 1.25 BDP.<br>
</p>
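<p>(To illustrate how such a cap acts as a window-based backstop to
pacing; this is a sketch of mine, not code from either BBR version:)</p>
<pre>
# Sketch (mine, not actual BBR code): pacing sets the send rate, while
# a hard inflight cap in units of BDP = minRTT * maxBW bounds the queue
# that rate mis-estimation can build up. Gains from the discussion
# above: 2.0 * BDP for BBRv1; BBRv2's tighter cap is reportedly 1.25.

def can_send(inflight, max_bw, min_rtt, cap_gain=2.0):
    bdp = max_bw * min_rtt             # steady-state pipe size
    return inflight < cap_gain * bdp   # window-based backup check
</pre>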
<blockquote type="cite">
<div dir="ltr">
<div>This is too large for short-queue routers in the Internet
core, but it helps a lot with cross traffic on large-queue
edge routers.<br>
</div>
</div>
</blockquote>
<p>Best regards,<br>
Roland<br>
</p>
<p>[1] <a href="https://ieeexplore.ieee.org/document/8117540" target="_blank">https://ieeexplore.ieee.org/document/8117540</a></p>
<blockquote type="cite"><br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Jul 7, 2021 at 3:19 PM
Bless, Roland (TM) <<a href="mailto:roland.bless@kit.edu" target="_blank">roland.bless@kit.edu</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div>Hi Matt,<br>
<br>
[sorry for the late reply, overlooked this one]</div>
<div><br>
</div>
<div>Please see comments inline.<br>
</div>
<div><br>
</div>
<div>On 02.07.21 at 21:46 Matt Mathis via Bloat wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">The argument is absolutely correct for
Reno, CUBIC and all other self-clocked protocols. One
of the core assumptions in Jacobson88, was that the
clock for the entire system comes from packets draining
through the bottleneck queue. In this world, the clock
is intrinsically brittle if the buffers are too small.
The drain time needs to be a substantial fraction of the
RTT.</div>
</blockquote>
I'd like to separate the functions here a bit:<br>
<p>1) "automatic pacing" by ACK clocking</p>
<p>2) congestion-window-based operation</p>
<p>I agree that the automatic pacing generated by the ACK
clock (function 1) is increasingly <br>
distorted these days and may consequently cause micro-bursts.<br>
This can be mitigated by using paced sending, which I
consider very useful. <br>
However, I consider abandoning the (congestion)
window-based approaches <br>
with ACK feedback (function 2) harmful:<br>
a congestion window has an automatic self-stabilizing
property, since the ACK feedback also reflects<br>
the queueing delay and the congestion window limits
the amount of inflight data.<br>
In contrast, rate-based senders risk instability: two
senders in an M/D/1 setting, each sending at 50% of the<br>
bottleneck rate on average, both using paced sending at
120% of their average rate, suffice to cause<br>
instability (the queue grows without bound; see the toy
simulation below).<br>
<br>
IMHO, two approaches seem to be useful:<br>
a) congestion-window-based operation with paced sending<br>
b) rate-based/paced sending with a limit on the amount of
inflight data<br>
</p>
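<p>(A toy discrete-time illustration of that instability claim, mine
rather than from [1]: each sender is "on" 5/6 of the time and paces at
0.6C while on, i.e., 120% of its 0.5C average, so offered load averages
exactly C but arrives in bursts, and the queue never settles:)</p>
<pre>
# Toy illustration (mine): two rate-based senders with no inflight
# limit, each averaging 0.5 * C but pacing at 0.6 * C (120% of its
# average) while "on". Offered load averages exactly C, yet with no
# feedback the queue performs a zero-drift random walk and grows
# roughly like sqrt(t) instead of settling.
import random

C = 1.0              # bottleneck service rate (packets per tick)
ON_PROB = 5.0 / 6.0  # on 5/6 of the time -> 0.5 * C average per sender
PACE = 0.6 * C       # 120% of the 0.5 * C average rate

queue = 0.0
for t in range(1, 1_000_001):
    arrivals = sum(PACE for _ in range(2) if random.random() < ON_PROB)
    queue = max(0.0, queue + arrivals - C)  # bottleneck serves C/tick
    if t % 200_000 == 0:
        print(f"t={t:>9}  queue={queue:8.1f}")  # keeps drifting upward
</pre>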
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>However, we have reached the point where we need to
discard that requirement. One of the side points of
BBR is that in many environments it is cheaper to burn
serving CPU to pace into short-queue networks than it
is to "right-size" the network queues.</div>
<div><br>
</div>
<div>The fundamental problem with the old way is that in
some contexts the buffer memory has to beat Moore's
law: to maintain a constant drain time, the memory
size and memory BW both have to scale with the link
(laser) BW.</div>
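<p>(To make that scaling concrete, an illustrative calculation with
numbers of my choosing, not taken from the slides:)</p>
<pre>
# Illustrative arithmetic (my numbers): holding the drain time
# T = B / C constant forces the buffer B to scale linearly with the
# link rate C, and the memory must also absorb writes at full link BW.
T = 0.1  # target drain time: 100 ms

for gbps in (10, 100, 400):
    C = gbps * 1e9 / 8  # link rate in bytes/sec
    B = T * C           # required buffer in bytes
    print(f"{gbps:>4} Gb/s link -> buffer {B / 1e9:6.3f} GB "
          f"+ {gbps} Gb/s of memory write BW")
# 10 Gb/s -> 0.125 GB; 100 Gb/s -> 1.25 GB; 400 Gb/s -> 5 GB
</pre>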
<div><br>
</div>
<div>See the slides I gave at the Stanford Buffer Sizing
workshop in December 2019: <a href="https://docs.google.com/presentation/d/1VyBlYQJqWvPuGnQpxW4S46asHMmiA-OeMbewxo_r3Cc/edit#slide=id.g791555f04c_0_5" target="_blank">Buffer
Sizing: Position Paper</a> </div>
<div><br>
</div>
</div>
</blockquote>
<p>Thanks for the pointer. I don't quite get the point that
the buffer must have a certain size to keep the ACK clock
stable:<br>
in the case of a non-application-limited sender, a very small
buffer suffices to keep the ACK clock <br>
running steadily. The large buffers were mainly required by
loss-based CCs to let the standing queue <br>
build up that keeps the bottleneck busy during CWnd
reduction after packet loss, thereby <br>
keeping the (bottleneck link) utilization high.<br>
</p>
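<p>(The classic rule of thumb behind that large-buffer requirement, as
a quick worked example with numbers of my own:)</p>
<pre>
# Standard textbook sizing for loss-based CC (numbers mine): Reno
# halves CWnd on loss, so keeping the bottleneck busy while the sender
# rebuilds takes roughly one BDP of buffer: B = C * RTT.
C = 10e9 / 8  # 10 Gb/s bottleneck, in bytes/sec
RTT = 0.05    # 50 ms round-trip time

B = C * RTT   # buffer that absorbs the post-loss rate dip
print(f"required buffer ~ one BDP = {B / 1e6:.1f} MB")  # 62.5 MB
# A tiny buffer keeps the ACK clock ticking (packets still drain),
# but Reno/CUBIC would then leave the link idle after every loss.
</pre>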
<p>Regards,</p>
<p> Roland<br>
</p>
<p><br>
</p>
<blockquote type="cite">
<div dir="ltr">
<div>Note that we are talking about the DC and Internet
core. At the edge, BW is low enough that memory is
relatively cheap. In some sense bufferbloat came about
because memory is too cheap in these environments.</div>
<div><br>
</div>
<div>
<div>
<div dir="ltr">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>Thanks,</div>
--MM--<br>
The best way to predict the future is to
create it. - Alan Kay<br>
<br>
We must not tolerate intolerance;</div>
<div dir="ltr"> however our response
must be carefully measured: </div>
<div> too strong would be
hypocritical and risks spiraling out of
control;</div>
<div> too weak risks being
mistaken for tacit approval.</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, Jul 2, 2021 at
9:59 AM Stephen Hemminger <<a href="mailto:stephen@networkplumber.org" target="_blank">stephen@networkplumber.org</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Fri, 2 Jul 2021
09:42:24 -0700<br>
Dave Taht <<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>>
wrote:<br>
<br>
> "Debunking Bechtolsheim credibly would get a lot
of attention to the<br>
> bufferbloat cause, I suspect." - dpreed<br>
> <br>
> "Why Big Data Needs Big Buffer Switches" -<br>
> <a href="http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf" rel="noreferrer" target="_blank">http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf</a><br>
> <br>
<br>
Also, a lot depends on the TCP congestion control
algorithm being used.<br>
They are using NewReno, which in real life only
researchers use.<br>
<br>
Even TCP Cubic has gone through several revisions. In
my experience, the<br>
ns-2 models don't correlate well with real-world
behavior.<br>
<br>
In real-world tests, TCP Cubic will consume any buffer
it sees at a<br>
congested link. Maybe that is what they mean by the
"capture effect".<br>
<br>
There is also a weird oscillation effect with multiple
streams, where one<br>
flow will take the buffer, then see a packet loss and
back off, and the<br>
other flow will take over the buffer until it sees
loss.<br>
<br>
</blockquote>
</div>
</blockquote>
<br>
</div>
</blockquote>
</div>
</blockquote>
<p><br>
</p>
</div>
</blockquote></div>