<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jul 8, 2021 at 7:25 AM Bless, Roland (TM) <<a href="mailto:roland.bless@kit.edu">roland.bless@kit.edu</a>> wrote:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
    <p>It seems that in BBRv2 there are many more mechanisms <br>
      that try to control the amount of inflight data more tightly, and
      the new "cap"<br>
      is at 1.25 BDP.<br></p></div></blockquote><div>To clarify, the BBRv2 cwnd cap is not 1.25*BDP. If there is no packet loss or ECN, the BBRv2 cwnd cap is the same as in BBRv1. But if there has been packet loss, then conceptually the cwnd cap is the maximum amount of data delivered in a single round trip since the last packet loss (with a floor to ensure that the cwnd does not decrease by more than 30% per round trip with packet loss, similar to CUBIC's 30% reduction in a round trip with packet loss). (And upon RTO, the BBR (v1 or v2) cwnd is reset to 1 and slow-starts upward from there.)</div>
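<div><br></div><div>A minimal sketch of that conceptual cap (illustrative Python; the names and structure are assumptions for exposition, not the actual tcp_bbr2 code):</div>
<pre>
# Hedged sketch of the conceptual BBRv2 cwnd cap described above; the
# names and structure are illustrative, not the real implementation.
def bbr2_cwnd_cap(prior_cwnd, max_delivered_in_round_since_loss,
                  loss_or_ecn_in_round, bbr1_cap):
    if not loss_or_ecn_in_round:
        return bbr1_cap   # no loss/ECN: same cwnd cap as in BBRv1
    # With loss: cap at the most data delivered in a single round trip
    # since the last loss, floored so cwnd drops by at most 30% per
    # round with loss (similar to CUBIC's 30% reduction).
    return max(max_delivered_in_round_since_loss, int(0.7 * prior_cwnd))

# Upon RTO, BBR (v1 or v2) resets cwnd to 1 and slow-starts upward.
</pre>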
<div><br></div><div>There is an overview of the BBRv2 response to packet loss here:</div><div>  <a href="https://datatracker.ietf.org/meeting/104/materials/slides-104-iccrg-an-update-on-bbr-00#page=18">https://datatracker.ietf.org/meeting/104/materials/slides-104-iccrg-an-update-on-bbr-00#page=18</a><br></div><div><br></div><div>best,</div><div>neal</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><p>
    </p>
    <blockquote type="cite">
      <div dir="ltr">
        <div>This is too large for short-queue routers in the Internet
          core, but it helps a lot with cross traffic on large-queue
          edge routers.<br>
        </div>
      </div>
    </blockquote>
    <p>Best regards,<br>
       Roland<br>
    </p>
    <blockquote type="cite"><br>
      <div class="gmail_quote">
        <div dir="ltr" class="gmail_attr">On Wed, Jul 7, 2021 at 3:19 PM
          Bless, Roland (TM) <<a href="mailto:roland.bless@kit.edu" target="_blank">roland.bless@kit.edu</a>> wrote:<br>
        </div>
        <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
          <div>
            <div>Hi Matt,<br>
              <br>
              [sorry for the late reply, overlooked this one]</div>
            <div><br>
            </div>
            <div>please, see comments inline.<br>
            </div>
            <div><br>
            </div>
            <div>On 02.07.21 at 21:46 Matt Mathis via Bloat wrote:<br>
            </div>
            <blockquote type="cite">
              <div dir="ltr">The argument is absolutely correct for
                Reno, CUBIC and all other self-clocked protocols.  One
                of the core assumptions in Jacobson88 was that the
                clock for the entire system comes from packets draining
                through the bottleneck queue.  In this world, the clock
                is intrinsically brittle if the buffers are too small. 
                The drain time needs to be a substantial fraction of the
                RTT.</div>
            </blockquote>
            I'd like to separate the functions here a bit:<br>
            <p>1) "automatic pacing" by ACK clocking</p>
            <p>2) congestion-window-based operation</p>
            <p>I agree that the automatic pacing generated by the ACK
              clock (function 1) is increasingly <br>
              distorted these days and may consequently cause
              microbursts.<br>
              This can be mitigated by using paced sending, which I
              consider very useful. <br>
              However, I consider abandoning the (congestion)
              window-based approaches <br>
              with ACK feedback (function 2) harmful:<br>
              a congestion window has an automatic self-stabilizing
              property, since the ACK feedback also reflects<br>
              the queuing delay and the congestion window limits
              the amount of inflight data.<br>
              In contrast, rate-based senders risk instability: two
              senders in an M/D/1 setting, each sending at 50% of the<br>
              bottleneck rate on average, both using paced sending at
              120% of the average rate, suffice to cause<br>
              instability (the queue grows without bound); see the
              sketch below.<br>
              <br>
              IMHO, two approaches seem to be useful:<br>
              a) congestion-window-based operation with paced sending<br>
              b) rate-based/paced sending with a limit on the amount of
              inflight data<br>
            </p>
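            <div>A crude fluid sketch of that instability (illustrative Python; the on/off pacing model, seed, and parameters are assumptions simplifying the M/D/1 example): each sender averages 0.5C but paces its bursts at 0.6C, so the offered load averages exactly C and the queue, with no inflight limit to rein it in, performs a zero-drift reflected random walk that grows without bound.</div>
<pre>
import random

C = 1.0                      # bottleneck rate (normalized)
PACE = 1.2 * 0.5 * C         # each sender paces at 120% of its 0.5C average
ON_PROB = (0.5 * C) / PACE   # duty cycle ~0.833 yields the 0.5C average
random.seed(1)

queue = 0.0
for t in range(1, 400001):
    arrivals = sum(PACE for _ in range(2) if random.random() < ON_PROB)
    queue = max(0.0, queue + arrivals - C)   # bottleneck serves at rate C
    if t % 100000 == 0:
        print(t, round(queue, 1))            # queue keeps growing with t
</pre>
            <div>A congestion window would stop this: once the inflight data hits the window, the sender stalls until ACKs return, which bounds the queue.</div>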
            <blockquote type="cite">
              <div dir="ltr">
                <div><br>
                </div>
                <div>However, we have reached the point where we need to
                  discard that requirement.  One of the side points of
                  BBR is that in many environments it is cheaper to burn
                  serving CPU to pace into short-queue networks than it
                  is to "right-size" the network queues.</div>
                <div><br>
                </div>
                <div>The fundamental problem with the old way is that in
                  some contexts the buffer memory has to beat Moore's
                  law, because to maintain a constant drain time the
                  memory size and BW both have to scale with the link
                  (laser) BW.</div>
                <div><br>
                </div>
                <div>See the slides I gave at the Stanford Buffer Sizing
                  workshop in December 2019: <a href="https://docs.google.com/presentation/d/1VyBlYQJqWvPuGnQpxW4S46asHMmiA-OeMbewxo_r3Cc/edit#slide=id.g791555f04c_0_5" target="_blank">Buffer
                    Sizing: Position Paper</a> </div>
                <div><br>
                </div>
              </div>
            </blockquote>
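            <div>The scaling in the quoted argument can be made concrete (a back-of-the-envelope sketch; the 10 ms drain time is an assumed example value, not a number from the slides):</div>
<pre>
# A buffer sized for a constant drain time T holds C * T bits, so both
# its size and its read/write bandwidth must scale linearly with the
# link rate C.
T = 10e-3                                  # assumed drain time: 10 ms
for gbps in (10, 100, 400, 800):
    buf_mb = gbps * 1e9 * T / 8 / 1e6      # bits -> megabytes
    print(f"{gbps:3d} Gb/s link -> {buf_mb:6.1f} MB buffer, "
          f"written and read at {gbps} Gb/s")
</pre>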
            <p>Thanks for the pointer. I don't quite get the point that
              the buffer must have a certain size to keep the ACK clock
              stable:<br>
              in the case of a non-application-limited sender, a very small
              buffer suffices to let the ACK clock <br>
              run steadily. The large buffers were mainly required for
              loss-based CCs to let the standing queue <br>
              build up that keeps the bottleneck busy during the cwnd
              reduction after packet loss, thereby <br>
              keeping the (bottleneck link) utilization high.<br>
            </p>
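            <div>The classic rule of thumb behind that (a sketch for a single Reno-style flow; the link rate and RTT are assumed example values): at a loss the window is cwnd = C*RTT + B with the queue full, and after halving the link stays busy only if (C*RTT + B)/2 &gt;= C*RTT, i.e. B &gt;= C*RTT, one BDP.</div>
<pre>
# Rule of thumb: a loss-based (cwnd-halving) sender needs about one
# bandwidth-delay product of buffer to stay at full utilization.
# At a loss, cwnd = C*RTT + B (queue full); after halving, the link
# stays busy only if (C*RTT + B)/2 >= C*RTT, i.e. B >= C*RTT.
C_bps = 1e9        # assumed 1 Gb/s bottleneck
rtt_s = 0.05       # assumed 50 ms base RTT
bdp_bytes = C_bps * rtt_s / 8
print(f"BDP = {bdp_bytes / 1e6:.2f} MB; buffer >= one BDP keeps the "
      "link busy across a 50% cwnd reduction")
# A much smaller buffer still keeps the ACK clock ticking; it just
# leaves the link partly idle while a loss-based sender's window recovers.
</pre>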
            <p>Regards,</p>
            <p> Roland<br>
            </p>
            <p><br>
            </p>
            <blockquote type="cite">
              <div dir="ltr">
                <div>Note that we are talking about the DC and Internet
                  core.  At the edge, BW is low enough that memory is
                  relatively cheap.   In some sense BB came about
                  because memory is too cheap in these environments.</div>
                <div><br>
                </div>
                <div>
                  <div>
                    <div dir="ltr">
                      <div dir="ltr">
                        <div>
                          <div dir="ltr">
                            <div>
                              <div dir="ltr">
                                <div>Thanks,</div>
                                --MM--<br>
                                The best way to predict the future is to
                                create it.  - Alan Kay<br>
                                <br>
                                We must not tolerate intolerance;</div>
                              <div dir="ltr">       however our response
                                must be carefully measured: </div>
                              <div>            too strong would be
                                hypocritical and risks spiraling out of
                                control;</div>
                              <div>            too weak risks being
                                mistaken for tacit approval.</div>
                            </div>
                          </div>
                        </div>
                      </div>
                    </div>
                  </div>
                  <br>
                </div>
              </div>
              <br>
              <div class="gmail_quote">
                <div dir="ltr" class="gmail_attr">On Fri, Jul 2, 2021 at
                  9:59 AM Stephen Hemminger <<a href="mailto:stephen@networkplumber.org" target="_blank">stephen@networkplumber.org</a>>
                  wrote:<br>
                </div>
                <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Fri, 2 Jul 2021
                  09:42:24 -0700<br>
                  Dave Taht <<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>>
                  wrote:<br>
                  <br>
                  > "Debunking Bechtolsheim credibly would get a lot
                  of attention to the<br>
                  > bufferbloat cause, I suspect." - dpreed<br>
                  > <br>
                  > "Why Big Data Needs Big Buffer Switches" -<br>
                  > <a href="http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf" rel="noreferrer" target="_blank">http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf</a><br>
                  > <br>
                  <br>
                  Also, a lot depends on the TCP congestion control
                  algorithm being used.<br>
                  They are using NewReno, which only researchers use in<br>
                  real life.<br>
                  <br>
                  Even TCP Cubic has gone through several revisions. In
                  my experience, the<br>
                  NS-2 models don't correlate well with real-world
                  behavior.<br>
                  <br>
                  In real-world tests, TCP Cubic will consume any buffer
                  it sees at a<br>
                  congested link. Maybe that is what they mean by
                  the "capture effect".<br>
                  <br>
                  There is also a weird oscillation effect with multiple
                  streams, where one<br>
                  flow will take the buffer, then see a packet loss and
                  back off, and the<br>
                  other flow will take over the buffer until it sees
                  loss.<br>
                  <br>
</blockquote>
              </div>
            </blockquote>
            <br>
          </div>
        </blockquote>
      </div>
    </blockquote>
    <p><br>
    </p>
  </div>

_______________________________________________<br>
Bloat mailing list<br>
<a href="mailto:Bloat@lists.bufferbloat.net" target="_blank">Bloat@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/bloat" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/bloat</a><br>
</blockquote></div></div>