<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Apr 6, 2019 at 7:49 AM Dave Taht <<a href="mailto:dave.taht@gmail.com">dave.taht@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> With fq_codel and the same ECN marking threshold (fq_codel ce_threshold 242us), we see slightly smoother fairness properties (not surprising) but with slightly higher latency.<br>
><br>
> The basic summary:<br>
><br>
> retransmits: 0<br>
> flow throughput: [46.77 .. 51.48]<br>
> RTT samples at various percentiles:<br>
> % | RTT (ms)<br>
> ------+---------<br>
> 0 1.009<br>
> 50 1.334<br>
> 60 1.416<br>
> 70 1.493<br>
> 80 1.569<br>
> 90 1.655<br>
> 95 1.725<br>
> 99 1.902<br>
> 99.9 2.328<br>
> 100 6.414<br>
<br>
I am still trying to decode the output of this tool. Perhaps you can<br>
check my math?<br>
<br>
At 0, I figure this is the bare minimum RTT latency measurement from<br>
the first flow started, basically a syn/syn ack pair, yielding 9us, which<br>
gives a tiny baseline covering the to/from-the-wire delay, the overhead of a<br>
tcp connection getting going, and the routing overheads (if you are<br>
using a route table?) of (last I looked) ~25ns per virtual hop (so *8)<br>
for ipv4 and something much larger if you are using ipv6. This is<br>
using ipv4?<br></blockquote><div><br></div><div>This is using IPv6.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
A perfect level of interleave of 20 flows, 2 large packets each, would<br>
yield a RTT measurement of around 532 extra us, but you are only<br>
seeing that at the 80th percentile...<br></blockquote><div><br></div><div>The flows would only get ~500us extra delay above the two-way propagation delay if all of those packets ended up in the queue at the same time. But the BDP here is 1Gbps*1ms = 82 packets, and there are 20 flows, so for periods where the flows keep their steady-state inflight around 4 packets or smaller, their aggregate inflight will be around 4*20 = 80 packets, which is below the BDP, so there is no queue. In a steady-state test like this the ECN signal allows the flows to keep their inflight close to this level most of the time, so the larger queues and queuing delays only happen when some or all of the flows are pushing up their inflight to probe for bandwidth around the same time. Those dynamics are not unique to BBRv2; at a high level, similar reasoning would apply to DCTCP or TCP Prague in a scenario like this.</div>
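<div><br></div><div>As a rough back-of-the-envelope sketch of that arithmetic (the ~1514-byte wire size is an assumption, and the numbers are approximate, just for illustration):</div><div><br></div><div><font face="monospace"># Rough sketch of the queue arithmetic above (1514-byte wire size is assumed).<br>LINK_BPS = 1e9  # 1 Gbps bottleneck<br>RTPROP_S = 1e-3  # 1 ms two-way propagation delay<br>PKT_BYTES = 1514  # assumed wire size of an MTU-sized packet<br>NUM_FLOWS = 20<br>INFLIGHT_PER_FLOW = 4  # approximate steady-state inflight per flow, in packets<br>bdp_pkts = LINK_BPS * RTPROP_S / (PKT_BYTES * 8)  # ~82 packets<br>agg_inflight = NUM_FLOWS * INFLIGHT_PER_FLOW  # 80 packets<br>queue_pkts = max(0, agg_inflight - bdp_pkts)  # ~0 packets: no standing queue<br>queue_delay_us = queue_pkts * PKT_BYTES * 8 / LINK_BPS * 1e6<br>print(bdp_pkts, agg_inflight, queue_delay_us)  # -> ~82.6, 80, 0.0</font></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">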
100: The almost-worst case basic scenario is that 20 flows of 64k<br>
GRO/GSO bytes each are served in order. That's 13us * 42 * 20 =<br>
10.920ms. It's potentially worse than that due to the laws of<br>
probability one flow could get scheduled more than once.<br></blockquote><div><br></div><div>Keep in mind that there's no reason why the flows would use maximally-sized 64KByte GSO packets in this test. The typical pacing rate for the flows is close to the per-flow throughput of ~50Mbps, and the TCP/BBR TSO autosizing code will tend to choose GSO skbs holding around 1ms of data (in the 24-540Mbps range), and at 50Mbps this 1ms allotment is about 4*MSS.</div>
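<div><br></div><div>To make that sizing concrete, here is a rough sketch of the GSO-size arithmetic (an illustrative approximation of the autosizing behavior, not the actual kernel code; the 1448-byte MSS and the 2-segment floor are assumptions):</div><div><br></div><div><font face="monospace"># Illustrative approximation of the GSO autosizing arithmetic described above.<br>MSS_BYTES = 1448  # assumed MSS for this path<br>GSO_MAX_BYTES = 65536  # 64KByte GSO limit<br>PACING_RATE_BPS = 50e6  # roughly the per-flow throughput in this test<br>ALLOTMENT_S = 1e-3  # ~1ms of data per GSO skb<br>budget_bytes = min(PACING_RATE_BPS / 8 * ALLOTMENT_S, GSO_MAX_BYTES)<br>gso_segs = max(2, int(budget_bytes // MSS_BYTES))  # assumed floor of ~2 segments<br>print(gso_segs)  # -> 4, i.e. roughly 4*MSS per GSO skb at 50Mbps</font></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">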
<br>
1) What is the qdisc on the server and client side? sch_fq? pfifo_fast?<br></blockquote><div><br></div><div>Sender and receiver qdiscs are pfifo_fast.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
2) What happens when you run the same test with gso/gro disabled?<br></blockquote><div><br></div><div>Disabling GSO and GRO is not a realistic config, and we have limited developer resources on the BBR project, so I'm not going to have time to run that kind of test (sorry!). Perhaps you can run that test after we open-source BBRv2.</div><div><br></div><div>best,</div><div>neal</div><div><br></div></div></div>