<div dir="ltr">On Wednesday, 21 September 2016 20:25:32 UTC+1, Dave Taht wrote:<blockquote class="gmail_quote" style="margin: 0;margin-left: 0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;">> Looking at cake_flowblind_noecn, BBR1 and BBR4 just kills both CUBIC flows.
<br>> Same with PIE.
<br>
<br>Yep. The single queue AQMs expect their induced drops to matter to all
<br>flows. BBR disregards them as noise. I think there's hope though, if
<br>BBR can treat ECN CE as a clear indication of congestion and not
<br>filter it as it does drops.<br></blockquote><div><br>Extra credit assignment: get the next version of DOCSIS PIE to turn on ECN?<br><br>https://tools.ietf.org/html/draft-ietf-aqm-docsis-pie-02#section-4.7<br> </div><blockquote class="gmail_quote" style="margin: 0;margin-left: 0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;">
But cake/fq_codel is just fine with different cc's in the mix, and I'm
<br>dying to look at the captures for what happens there.<br></blockquote><div><br>Very glad to see that; I can keep using fq_codel :).<br><br></div><blockquote class="gmail_quote" style="margin: 0;margin-left: 0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;">> So it seems my intuition was wrong, at least for these scenarios. It wasn't
<br>> CUBIC that would kill BBR, it's the other way around.</blockquote><div><br>So (from the other thread) BBR is designed to use the traditional
recommendation of 1 BDP's worth of buffer. In the absence of other CCs,
it would limit itself to that. That's understandable for bottlenecks at end-site modems or wifi.<br><br>Shallower buffers cause somewhat increased packet loss (given multiple competing BBR streams). BBR is designed to survive this without difficulty (incurring only retransmit latency); competing loss-based CCs will suffer badly.<br><br>The patch says it's designed to improve throughput "on today's high-speed long-haul links using commodity switches with shallow buffers" by not "[over-reacting] to losses caused by transient traffic bursts".<br><br>If there is systemic congestion at those switches[1]...<br><br>[1] e.g.<br>https://www.ncta.com/sites/prod/files/MIT-Congestion-DC.pdf<br>http://groups.csail.mit.edu/ana/Measurement-and-Analysis-of-Internet-Interconnection-and-Congestion-September2014.pdf<br><br>...I wait with interest to see what the ACM article says.<br><br></div><blockquote class="gmail_quote" style="margin: 0;margin-left: 0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;">
My intuition was that "delay based TCPs can't work on the internet!" -
<br>and was wrong, also.
<br></blockquote><blockquote class="gmail_quote" style="margin: 0;margin-left: 0.8ex;border-left: 1px #ccc solid;padding-left: 1ex;">
<br>> Great to have testing
<br>> tools! Thanks Flent!
<br>
<br>Thx, toke! I try not to remember just how hard it was to do this sort
<br>of analysis on complex network flows when we started.<br></blockquote><div><br>And thanks for the matrix of test results!<br><br>It shows what a powerful tool Flent is, when points like these can be raised and checked so quickly.<br><br>If I'm reading the drops graph right, I can see multi-second periods of >= 10% packet loss when the buffer is limited to ~25ms (bfifo_64k, bw=20Mbit-rtt=48ms-flows=2-noecn-bbr). That clearly explains why normal CUBIC gets crushed :).<br><br>Alan<br></div></div>
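<div><br>P.S. For a rough sense of why sustained 10% loss crushes a loss-based CC, the classic Mathis et al. model (rate ≈ MSS / (RTT * sqrt(2p/3)), derived for Reno-style TCP, so only indicative for CUBIC) gives:<br><br>

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. steady-state model for Reno-style TCP, in bits/s."""
    return (mss_bytes * 8) / (rtt_s * sqrt(2 * loss_rate / 3))

# Roughly the drops-graph scenario: 1448-byte MSS, 48ms RTT, 10% loss.
rate = mathis_throughput_bps(1448, 0.048, 0.10)
print(f"{rate / 1e6:.2f} Mbit/s")  # well under 1 Mbit/s of the 20 Mbit link
```

<br></div>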