<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 28, 2023 at 5:36 AM Ayush Mishra <<a href="mailto:ayumishra.95@gmail.com">ayumishra.95@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hey Neal,</div><div><br></div><div>I was revisiting this thread before presenting this paper at iccrg tomorrow, and I was particularly intrigued by one of the motivations you mentioned for BBR:</div><div><br></div><div>"<span>BBR</span> is not trying to maintain a higher throughput than CUBIC in these kinds of scenarios with steady-state bulk flows. <span>BBR</span>
is trying to be robust to the kinds of random packet loss that happen
in the real world when there are flows dynamically entering/leaving a
bottleneck."</div><div><br></div><div>BBRv1 essentially tried to deal with this problem by doing away with packet loss as a congestion signal and adopting an entirely different philosophy of congestion control. However, if we set aside the issue of bufferbloat, I would imagine packet loss is a bad congestion signal in this situation because most loss-based congestion control algorithms use it as a binary signal with a binary response (back-off or no back-off). In other words, I feel the blame lies not just with the congestion signal itself, but also with how most algorithms respond to it.</div></div></blockquote><div><br></div><div>I would even go a little further, and say we don't need to "blame" loss as a congestion signal: usually it's telling us something useful and important.</div><div><br></div><div>AFAICT the problem is in the combination of:</div><div> (a) only using loss as a signal</div><div> (b) only reacting to whether or not there was any packet loss in a round trip</div><div> (c) only using a single multiplicative decrease as a response to loss detected in fast recovery</div><div><br></div><div>AFAICT any algorithm with those properties (like Reno and CUBIC) simply can't scale to large BDPs if there are typical levels of loss, or if the traffic or available bandwidth is dynamic. At large BDPs and typically achievable loss rates, there will be packet loss in every round trip, so the connection will always be decreasing rather than increasing, and will starve. For example, with a BDP of 10 Gbps * 100ms, an MTU of 1500 bytes, and a loss rate of 0.0012%, we'd expect a packet loss in every round trip, and thus starvation.
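</div><div><br></div><div>These figures can be double-checked with a short script. This is just a rough sketch of the arithmetic: C=0.4 and beta=0.7 are CUBIC's default constants from RFC 8312, and the inversion below follows that document's deterministic-loss response function.</div>

```python
# Rough sanity check of the figures above: a 10 Gbps x 100ms path with a
# 1500-byte MTU, using CUBIC's deterministic-loss response function from
# RFC 8312 (C=0.4 and beta=0.7 are CUBIC's default constants).

link_bps = 10e9   # bottleneck bandwidth: 10 Gbit/sec
rtt = 0.100       # round-trip time: 100 ms
mtu_bits = 1500 * 8

# BDP in packets = packets delivered per round trip at full utilization.
bdp_pkts = link_bps * rtt / mtu_bits
print(f"BDP: {bdp_pkts:.0f} packets/RTT")             # ~83,333

# Loss rate that produces, on average, one loss per round trip.
loss_per_rtt = 1 / bdp_pkts
print(f"one loss per RTT at p = {loss_per_rtt:.2e}")  # ~1.2e-5, i.e. ~0.0012%

# RFC 8312 response function: AvgW = (C*(3+b)/(4*(1-b)) * RTT^3 / p^3)^(1/4).
# Inverting for the loss rate p that sustains AvgW = bdp_pkts:
C, beta = 0.4, 0.7
p_max = (C * (3 + beta) / (4 * (1 - beta)) * rtt**3 / bdp_pkts**4) ** (1 / 3)
print(f"max tolerable loss rate: {p_max:.1e}")        # ~2.9e-8

# Length of one cubic epoch: the time to grow back from beta*Wmax to Wmax,
# i.e. the minimum spacing between losses that CUBIC needs at this rate.
w_max = bdp_pkts / ((3 + beta) / 4)                   # AvgW = Wmax*(3+beta)/4
epoch_secs = ((1 - beta) * w_max / C) ** (1 / 3)
print(f"needed time between losses: {epoch_secs:.1f} s")  # ~40.7
```

<div><br></div><div>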
In particular, a single CUBIC flow over such a path needs more than 40 seconds between loss events, or equivalently a loss rate below 0.0000029% (2.9e-8) [ <a href="https://tools.ietf.org/html/rfc8312#section-5.2">https://tools.ietf.org/html/rfc8312#section-5.2</a> ].</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>On a per-packet basis, packet loss is a binary signal. But over a window, the loss percentage and distribution, for example, can be a rich signal. There is probably scope for differentiating between different kinds of packet losses (and deciding how to react to them) when packet loss is coupled with the most recent delay measurement too. Now that BBRv2 reacts to packet loss, are you taking any of these considerations into account too?</div></div></blockquote><div><br></div><div>Yes, I agree there is useful information there, and BBRv2 does look explicitly and indirectly at the loss rate when making decisions. BBRv2 does not look at coupling the loss signal with the most recent delay measurement, but I agree that seems like a fruitful direction, and we have been considering that as a component of future CC algorithms.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>This is not something I plan to present in iccrg tomorrow, just something I was curious about :)</div></div></blockquote><div><br></div><div>Thanks for posting! I agree these are interesting topics. 
:-)</div><div><br></div><div>best regards,</div><div>neal</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Warmest regards,</div><div>Ayush<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Aug 26, 2022 at 9:36 PM 'Neal Cardwell' via BBR Development <<a href="mailto:bbr-dev@googlegroups.com" target="_blank">bbr-dev@googlegroups.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Yes, I agree the assumptions are key here. One key aspect of this paper is that it focuses on the steady-state behavior of bulk flows.<div><br></div><div>Once you allow for short flows (like web pages, RPCs, etc) to dynamically enter and leave a bottleneck, the considerations become different. As is well-known, Reno/CUBIC will starve themselves if new flows enter and cause loss too frequently. For CUBIC on a somewhat typical 30ms broadband path with a flow fair share of 25 Mbit/sec, if new flows enter and cause loss more frequently than roughly every 2 seconds, then CUBIC will not be able to utilize its fair share. For a high-speed WAN path with a 100ms RTT and a fair share of 10 Gbit/sec, if new flows enter and cause loss more frequently than roughly every 40 seconds, then CUBIC will not be able to utilize its fair share. Basically, loss-based CC can starve itself in some very typical kinds of dynamic scenarios that happen in the real world.</div><div><br></div><div>BBR is not trying to maintain a higher throughput than CUBIC in these kinds of scenarios with steady-state bulk flows. 
BBR is trying to be robust to the kinds of random packet loss that happen in the real world when there are flows dynamically entering/leaving a bottleneck.</div><div><br></div><div>cheers,</div><div>neal</div><div><br></div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Aug 25, 2022 at 8:01 PM Dave Taht via Bloat <<a href="mailto:bloat@lists.bufferbloat.net" target="_blank">bloat@lists.bufferbloat.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I rather enjoyed this one. I can't help but wonder what would happen<br>
if we plugged some different assumptions into their model.<br>
<br>
<a href="https://www.comp.nus.edu.sg/~bleong/publications/imc2022-nash.pdf" rel="noreferrer" target="_blank">https://www.comp.nus.edu.sg/~bleong/publications/imc2022-nash.pdf</a><br>
<br>
-- <br>
FQ World Domination pending: <a href="https://blog.cerowrt.org/post/state_of_fq_codel/" rel="noreferrer" target="_blank">https://blog.cerowrt.org/post/state_of_fq_codel/</a><br>
Dave Täht CEO, TekLibre, LLC<br>
</blockquote></div>
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/bbr-dev/CADVnQykKbnxpNcpuZATug_4VLhV1%3DaoTTQE2263o8HF9ye_TQg%40mail.gmail.com?utm_medium=email&utm_source=footer" target="_blank">https://groups.google.com/d/msgid/bbr-dev/CADVnQykKbnxpNcpuZATug_4VLhV1%3DaoTTQE2263o8HF9ye_TQg%40mail.gmail.com</a>.<br>
</blockquote></div>
</blockquote></div></div>