<div dir="ltr"><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, May 3, 2017 at 5:50 AM, Andy Furniss <span dir="ltr"><<a href="mailto:adf.lists@gmail.com" target="_blank">adf.lists@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-">Andy Furniss wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Andy Furniss wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
>>>> b) it reacts to increases in RTT. In an experiment with a 10 Mbps
>>>> bottleneck, 40 ms RTT and a typical 1000-packet buffer, the increase
>>>> in RTT with BBR is ~3 ms, while with cubic it is over 1000 ms.
>>>
>>> That is a nice aspect (though at 60 Mbit hfsc + an 80 ms bfifo,
>>> tested with 5 TCPs, it was IIRC 20 ms vs 80 ms for cubic). I
>>> deliberately test using ifb on my PC because I want to pretend to be
>>> a router - IME (OK, it was a while ago) testing on eth directly gives
>>> different results, as the locally generated TCP backs off.
>>
>> I retested this with 40 ms latency (netem) with hfsc + a 1000-packet
>> pfifo on ifb.
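
FWIW, a rig like that can be rebuilt with something along these lines -
a minimal, untested sketch; eth0, the 10 Mbit rate and the netem
placement are assumptions, since they aren't spelled out above:

    modprobe ifb numifbs=1
    ip link set ifb0 up
    # redirect ingress of the real interface through ifb0
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
        action mirred egress redirect dev ifb0
    # shape on ifb0: hfsc with a 1000-packet pfifo underneath
    tc qdisc add dev ifb0 root handle 1: hfsc default 1
    tc class add dev ifb0 parent 1: classid 1:1 hfsc \
        sc rate 10mbit ul rate 10mbit
    tc qdisc add dev ifb0 parent 1:1 pfifo limit 1000
    # add the 40 ms of simulated path latency
    tc qdisc add dev eth0 root netem delay 40ms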

> So, as Jonathan pointed out to me in another thread, BBR needs fq, and
> it seems fq only works on the root of a real eth, which means those are
> invalid tests.

Specifically, BBR needs packet pacing to work properly: the algorithm
depends on the packets being properly paced.

Today, fq is the only qdisc supporting pacing.

The right answer would be to add packet pacing to cake/fq_codel
directly. Until that is done, we don't know how BBR will work in our
world.
 - Jim
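
A minimal sketch of that setup, for anyone wanting to try it - eth0 is a
placeholder and this is untested here:

    # fq at the root of the real interface provides the pacing
    tc qdisc replace dev eth0 root fq
    # switch the congestion control to BBR
    sysctl -w net.ipv4.tcp_congestion_control=bbr
    # fq's "throttled" counter is a quick way to confirm pacing is active
    tc -s qdisc show dev eth0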

> I will soon (need to find a crossover cable!) be able to see, using a
> third sender, how cake varies shaping BBR in simulated ingress.
>
> I can test now how BBR fills buffers - some slightly strange results:
> one netperf ends up being "good" = the buffer is only a few ms.
>
> 5 netperfs started together are not so good, but nothing like cubic.
>
> 5 netperfs started with a gap of a second or two are initially
> terrible, filling the buffer for about 30 seconds, then eventually
> falling back to lower occupancy.
>
> TODO - maybe this is a netperf artifact, e.g. bbr/fq thinks it is app
> limited.
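
One way to check that theory: on recent kernels, ss reports BBR state
and an "app_limited" flag per flow, so something like this while the
netperfs run should show it (the address is a placeholder):

    # "app_limited" in the output means the stack believes the flow
    # is application limited rather than cwnd/pacing limited
    ss -tin dst 192.0.2.1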

> The worst thing about BBR + longer RTT I see so far is that its design
> seems to deliberately bork latency by 2x RTT during the initial
> bandwidth probe. It does drain afterwards, but for something like DASH,
> generating a regular spike is not very game friendly, and the spec
> "boasts" that, unlike cubic, a loss in the exponential phase is
> ignored, making ingress shaping somewhat less effective.
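
FWIW, the staggered 5-flow test reads like something along these lines -
a guess at the invocation, since the exact netperf options aren't given
(HOST, the 60 s length and the 2 s gap are assumptions):

    # start 5 bulk TCP streams, one every 2 seconds
    for i in 1 2 3 4 5; do
        netperf -H "$HOST" -t TCP_STREAM -l 60 &
        sleep 2
    done
    wait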