<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <div class="moz-cite-prefix">Hi Neal,</div>
    <div class="moz-cite-prefix"><br>
    </div>
    <div class="moz-cite-prefix">On 08.07.21 at 15:29 Neal Cardwell
      wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:CADVnQy=SyxdOXCrUnE45x_r3vZi7mM0OyeVo6btJcyZ+qnT_1Q@mail.gmail.com">
      <meta http-equiv="content-type" content="text/html; charset=UTF-8">
      <div dir="ltr">
        <div class="gmail_quote">
          <div dir="ltr" class="gmail_attr">On Thu, Jul 8, 2021 at 7:25
            AM Bless, Roland (TM) <<a
              href="mailto:roland.bless@kit.edu" moz-do-not-send="true">roland.bless@kit.edu</a>>
            wrote:</div>
          <blockquote class="gmail_quote" style="margin:0px 0px 0px
            0.8ex;border-left:1px solid
            rgb(204,204,204);padding-left:1ex">
            <div>
              <p>It seems that in BBRv2 there are many more mechanisms
                present <br>
                that try to control the amount of inflight data more
                tightly and the new "cap"<br>
                is at 1.25 BDP.<br>
              </p>
            </div>
          </blockquote>
          <div>To clarify, the BBRv2 cwnd cap is not 1.25*BDP. If there
            is no packet loss or ECN, the BBRv2 cwnd cap is the same as
            BBRv1. But if there has been packet loss then conceptually
            the cwnd cap is the maximum amount of data delivered in a
            single round trip since the last packet loss (with a floor
            to ensure that the cwnd does not decrease by more than 30%
            per round trip with packet loss, similar to CUBIC's 30%
            reduction in a round trip with packet loss). (And upon RTO
            the BBR (v1 or v2) cwnd is reset to 1, and slow-starts
            upward from there.)</div>
        </div>
      </div>
    </blockquote>
    Thanks for the clarification. I'm patiently waiting to see the BBRv2
    mechanisms coherently written up<br>
    in that new BBR Internet-Draft version ;-) Piecing this together
    from the "diffs" on the IETF slides or the source code<br>
    is somewhat tedious, so I'd be very grateful to have that single
    write-up.
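    <p>In the meantime, here is a rough C sketch of how I currently read
      the loss response you describe above. It is only my interpretation
      for this mail: all names are my own shorthand, and the 2*BDP value
      for the loss-free case is just my recollection of the BBRv1 cap
      (cwnd_gain * BDP), not something taken from the BBRv2 sources:</p>
    <pre>
/* Rough sketch of my reading of the BBRv2 cwnd cap described above;
 * names are my own shorthand, not from the actual code. */
#include <stdio.h>

struct bbr2_cap_sketch {
    unsigned cwnd_at_round_start; /* cwnd at the start of this round        */
    unsigned est_bdp;             /* estimated BDP = max_bw * min_rtt       */
    unsigned max_delivered;       /* max data delivered in a single round
                                     trip since the last packet loss        */
    int      loss_seen;           /* packet loss (or ECN mark) experienced? */
};

static unsigned cwnd_cap(const struct bbr2_cap_sketch *b)
{
    if (!b->loss_seen)                 /* no loss/ECN: BBRv1-style cap,     */
        return 2 * b->est_bdp;         /* i.e. cwnd_gain * BDP (IIRC)       */

    unsigned cap = b->max_delivered;   /* delivered in one round since loss */
    unsigned floor = b->cwnd_at_round_start * 7 / 10; /* at most -30%/round */
    return cap > floor ? cap : floor;
}

int main(void)
{
    struct bbr2_cap_sketch b = { .cwnd_at_round_start = 100, .est_bdp = 50,
                                 .max_delivered = 60, .loss_seen = 1 };
    printf("cap with loss:    %u\n", cwnd_cap(&b)); /* 70: the 30% floor wins */
    b.max_delivered = 90;
    printf("cap with loss:    %u\n", cwnd_cap(&b)); /* 90 */
    b.loss_seen = 0;
    printf("cap without loss: %u\n", cwnd_cap(&b)); /* 100 = 2 * est_bdp */
    printf("cwnd after RTO:   %u\n", 1u); /* RTO: reset to 1, then slow start */
    return 0;
}
</pre>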
    <blockquote type="cite"
cite="mid:CADVnQy=SyxdOXCrUnE45x_r3vZi7mM0OyeVo6btJcyZ+qnT_1Q@mail.gmail.com">
      <div dir="ltr">
        <div class="gmail_quote">
          <div>There is an overview of the BBRv2 response to packet loss
            here:</div>
          <div>  <a
href="https://datatracker.ietf.org/meeting/104/materials/slides-104-iccrg-an-update-on-bbr-00#page=18"
              moz-do-not-send="true">https://datatracker.ietf.org/meeting/104/materials/slides-104-iccrg-an-update-on-bbr-00#page=18</a><br>
          </div>
        </div>
      </div>
    </blockquote>
    My assumption came from slide 25 of this slide set:<br>
    the probing is terminated if inflight > 1.25*estimated_bdp (or a
    "hard ceiling" is seen).<br>
    So without experiencing more than 2% packet loss, inflight may end up
    beyond 1.25*estimated_bdp,<br>
    but would it often end up at 2*estimated_bdp?
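    <p>In pseudo-C, my reading of that exit condition is roughly the
      following (the names and the exact 2% threshold are just my own
      rendering of the slide, not taken from the sources). Since only the
      inflight check stops the probing while loss stays low, inflight can
      already be past 1.25*estimated_bdp when the check fires:</p>
    <pre>
/* Minimal sketch of how I read the probing exit on slide 25;
 * all names and the 2% threshold are my own rendering of the slide. */
#include <stdio.h>

static int stop_probing_up(double inflight, double estimated_bdp,
                           double loss_rate)
{
    const double loss_thresh = 0.02;           /* ~2% as on the slide   */
    return inflight > 1.25 * estimated_bdp     /* inflight cap exceeded */
        || loss_rate > loss_thresh;            /* "hard ceiling" seen   */
}

int main(void)
{
    printf("%d\n", stop_probing_up(130.0, 100.0, 0.01)); /* 1: inflight check */
    printf("%d\n", stop_probing_up(120.0, 100.0, 0.01)); /* 0: keep probing   */
    printf("%d\n", stop_probing_up(120.0, 100.0, 0.03)); /* 1: "hard ceiling" */
    return 0;
}
</pre>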
    <p>Best regards,</p>
    <p> Roland</p>
    <blockquote type="cite"
cite="mid:CADVnQy=SyxdOXCrUnE45x_r3vZi7mM0OyeVo6btJcyZ+qnT_1Q@mail.gmail.com">
      <div dir="ltr">
        <div class="gmail_quote"><br>
          <div> </div>
          <blockquote class="gmail_quote" style="margin:0px 0px 0px
            0.8ex;border-left:1px solid
            rgb(204,204,204);padding-left:1ex">
            <div>
              <p> </p>
              <blockquote type="cite">
                <div dir="ltr">
                  <div>This is too large for short queue routers in the
                    Internet core, but it helps a lot with cross traffic
                    on large queue edge routers.<br>
                  </div>
                </div>
              </blockquote>
              <p>Best regards,<br>
                 Roland<br>
              </p>
              <p>[1] <a
                  href="https://ieeexplore.ieee.org/document/8117540"
                  target="_blank" moz-do-not-send="true">https://ieeexplore.ieee.org/document/8117540</a></p>
              <blockquote type="cite"><br>
                <div class="gmail_quote">
                  <div dir="ltr" class="gmail_attr">On Wed, Jul 7, 2021
                    at 3:19 PM Bless, Roland (TM) <<a
                      href="mailto:roland.bless@kit.edu" target="_blank"
                      moz-do-not-send="true">roland.bless@kit.edu</a>>
                    wrote:<br>
                  </div>
                  <blockquote class="gmail_quote" style="margin:0px 0px
                    0px 0.8ex;border-left:1px solid
                    rgb(204,204,204);padding-left:1ex">
                    <div>
                      <div>Hi Matt,<br>
                        <br>
                        [sorry for the late reply, overlooked this one]</div>
                      <div><br>
                      </div>
                      <div>please, see comments inline.<br>
                      </div>
                      <div><br>
                      </div>
                      <div>On 02.07.21 at 21:46 Matt Mathis via Bloat
                        wrote:<br>
                      </div>
                      <blockquote type="cite">
                        <div dir="ltr">The argument is absolutely
                          correct for Reno, CUBIC and all
                          other self-clocked protocols.  One of the core
                          assumptions in Jacobson88 was that the
                          clock for the entire system comes from packets
                          draining through the bottleneck queue.  In
                          this world, the clock is intrinsically brittle
                          if the buffers are too small.  The drain time
                          needs to be a substantial fraction of the RTT.</div>
                      </blockquote>
                      I'd like to separate the functions here a bit:<br>
                      <p>1) "automatic pacing" by ACK clocking</p>
                      <p>2) congestion-window-based operation</p>
                      <p>I agree that the automatic pacing generated by
                        the ACK clock (function 1) is increasingly <br>
                        distorted these days and may consequently cause
                        micro-bursts.<br>
                        This can be mitigated by using paced sending,
                        which I consider very useful. <br>
                        However, I consider abandoning the (congestion)
                        window-based approaches <br>
                        with ACK feedback (function 2) harmful:<br>
                        a congestion window has an automatic
                        self-stabilizing property, since the ACK feedback
                        also reflects<br>
                        the queuing delay and the congestion window
                        limits the amount of inflight data.<br>
                        In contrast, rate-based senders risk
                        instability: two senders in an M/D/1 setting,
                        each sending at 50% of the<br>
                        bottleneck rate on average, both using paced
                        sending at 120% of their average rate, suffice to
                        cause<br>
                        instability (the queue grows without bound).<br>
                        <br>
                        IMHO, two approaches seem to be useful:<br>
                        a) congestion-window-based operation with paced
                        sending<br>
                        b) rate-based/paced sending with a limit on the
                        amount of inflight data<br>
                      </p>
                      <blockquote type="cite">
                        <div dir="ltr">
                          <div><br>
                          </div>
                          <div>However, we have reached the point
                            where we need to discard that requirement. 
                            One of the side points of BBR is that in
                            many environments it is cheaper to burn
                            serving CPU to pace into short queue
                            networks than it is to "right size" the
                            network queues.</div>
                          <div><br>
                          </div>
                          <div>The fundamental problem with the old way
                            is that in some contexts the buffer memory
                            has to beat Moore's law, because to maintain
                            constant drain time the memory size and BW
                            both have to scale with the link (laser) BW.</div>
                          <div><br>
                          </div>
                          <div>See the slides I gave at the Stanford
                            Buffer Sizing workshop december 2019: <a
href="https://docs.google.com/presentation/d/1VyBlYQJqWvPuGnQpxW4S46asHMmiA-OeMbewxo_r3Cc/edit#slide=id.g791555f04c_0_5"
                              target="_blank" moz-do-not-send="true">Buffer
                              Sizing: Position Paper</a> </div>
                          <div><br>
                          </div>
                        </div>
                      </blockquote>
                      <p>Thanks for the pointer. I don't quite get the
                        point that the buffer must have a certain size
                        to keep the ACK clock stable:<br>
                        in the case of a non-application-limited sender, a
                        very small buffer suffices to keep the ACK clock
                        <br>
                        running steadily. The large buffers were mainly
                        required for loss-based CCs to build up the
                        standing queue <br>
                        that keeps the bottleneck busy during
                        CWnd reduction after packet loss, thereby <br>
                        keeping the (bottleneck link) utilization high.<br>
                      </p>
                      <p>Regards,</p>
                      <p> Roland<br>
                      </p>
                      <p><br>
                      </p>
                      <blockquote type="cite">
                        <div dir="ltr">
                          <div>Note that we are talking about DC and
                            Internet core.  At the edge, BW is low
                            enough that memory is relatively cheap. 
                             In some sense BB came about because memory
                            is too cheap in these environments.</div>
                          <div><br>
                          </div>
                          <div>
                            <div>
                              <div dir="ltr">
                                <div dir="ltr">
                                  <div>
                                    <div dir="ltr">
                                      <div>
                                        <div dir="ltr">
                                          <div>Thanks,</div>
                                          --MM--<br>
                                          The best way to predict the
                                          future is to create it.  -
                                          Alan Kay<br>
                                          <br>
                                          We must not tolerate
                                          intolerance;</div>
                                        <div dir="ltr">       however
                                          our response must be carefully
                                          measured: </div>
                                        <div>            too strong
                                          would be hypocritical and
                                          risks spiraling out of
                                          control;</div>
                                        <div>            too weak risks
                                          being mistaken for tacit
                                          approval.</div>
                                      </div>
                                    </div>
                                  </div>
                                </div>
                              </div>
                            </div>
                            <br>
                          </div>
                        </div>
                        <br>
                        <div class="gmail_quote">
                          <div dir="ltr" class="gmail_attr">On Fri, Jul
                            2, 2021 at 9:59 AM Stephen Hemminger <<a
                              href="mailto:stephen@networkplumber.org"
                              target="_blank" moz-do-not-send="true">stephen@networkplumber.org</a>>
                            wrote:<br>
                          </div>
                          <blockquote class="gmail_quote"
                            style="margin:0px 0px 0px
                            0.8ex;border-left:1px solid
                            rgb(204,204,204);padding-left:1ex">On Fri, 2
                            Jul 2021 09:42:24 -0700<br>
                            Dave Taht <<a
                              href="mailto:dave.taht@gmail.com"
                              target="_blank" moz-do-not-send="true">dave.taht@gmail.com</a>>
                            wrote:<br>
                            <br>
                            > "Debunking Bechtolsheim credibly would
                            get a lot of attention to the<br>
                            > bufferbloat cause, I suspect." - dpreed<br>
                            > <br>
                            > "Why Big Data Needs Big Buffer
                            Switches" -<br>
                            > <a
href="http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf"
                              rel="noreferrer" target="_blank"
                              moz-do-not-send="true">http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf</a><br>
                            > <br>
                            <br>
                            Also, a lot depends on the TCP congestion
                            control algorithm being used.<br>
                            They are using NewReno which only
                            researchers use in real life.<br>
                            <br>
                            Even TCP Cubic has gone through several
                            revisions. In my experience, the<br>
                            NS-2 models don't correlate well to real
                            world behavior.<br>
                            <br>
                            In real world tests, TCP Cubic will consume
                            any buffer it sees at a<br>
                            congested link. Maybe that is what they mean
                            by capture effect.<br>
                            <br>
                            There is also a weird oscillation effect
                            with multiple streams, where one<br>
                            flow will take the buffer, then see a packet
                            loss and back off, the<br>
                            other flow will take over the buffer until
                            it sees loss.<br>
                            <br>
</blockquote>
                        </div>
                      </blockquote>
                      <br>
                    </div>
                  </blockquote>
                </div>
              </blockquote>
            </div>
          </blockquote>
        </div>
      </div>
    </blockquote>
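    <p>PS: The two-sender instability example from my earlier mail quoted
      above can be illustrated with a tiny toy simulation. This is only a
      crude slotted-time caricature (not a faithful M/D/1 model), with
      parameters chosen purely for illustration:</p>
    <pre>
/* Toy illustration of the two-sender example quoted above: each sender
 * averages 0.5*C, but while it has data it paces at 120% of that average
 * (0.6*C), open loop, with no feedback from the queue. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double C = 1.0;           /* bottleneck capacity per slot        */
    const double peak = 0.6 * C;    /* per-sender paced rate (120% of avg) */
    const double p_on = 0.5 / 0.6;  /* ON probability so the avg is 0.5*C  */
    double q = 0.0, q_max = 0.0;

    srand(1);
    for (long t = 1; t <= 1000000; t++) {
        double arrivals = 0.0;
        for (int s = 0; s < 2; s++)                 /* two senders          */
            if ((double)rand() / RAND_MAX < p_on)
                arrivals += peak;
        q += arrivals - C;                          /* serve C per slot     */
        if (q < 0.0) q = 0.0;
        if (q > q_max) q_max = q;
        if (t % 200000 == 0)
            printf("t=%7ld  queue=%8.1f  max=%8.1f\n", t, q, q_max);
    }
    /* The offered load averages exactly C, but without closed-loop
     * feedback the queue behaves like a reflected random walk: it has no
     * stationary bound and its typical size keeps growing over time.     */
    return 0;
}
</pre>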
    <br>
  </body>
</html>