<div dir="ltr"><span style="font-size:13px">This is how I've changed the graph of latency under load per input from you guys.</span><div style="font-size:13px"><br><div>Taken away log axis.</div><div><br></div><div>Put in two bands. Yellow starts at double the idle latency, and goes to 4x the idle latency</div><div>red starts there, and goes to the top. No red shows if no bars reach into it.</div><div>And no yellow band shows if no bars get into that zone.</div><div><br></div><div>Is it more descriptive?</div><div><br></div><div>(sorry to the list moderator, gmail keeps sending under the wrong email and I get a moderator message)</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 23, 2015 at 8:05 PM, jb <span dir="ltr"><<a href="mailto:justinbeech@gmail.com" target="_blank">justinbeech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">This is how I've changed the graph of latency under load per input from you guys.<div><br><div>Taken away log axis.</div><div><br></div><div>Put in two bands. Yellow starts at double the idle latency, and goes to 4x the idle latency</div><div>red starts there, and goes to the top. No red shows if no bars reach into it.</div><div>And no yellow band shows if no bars get into that zone.</div><div><br></div><div>Is it more descriptive?</div><div><br></div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <span dir="ltr"><<a href="mailto:eric.dumazet@gmail.com" target="_blank">eric.dumazet@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Wait, this is a 15 years old experiment using Reno and a single test<br>
bed, using ns simulator.<br>

Naive TCP pacing implementations were tried, and probably failed.

Pacing individual packets is quite bad; this is the first lesson one
learns when implementing TCP pacing, especially if you try to drive a
40Gbps NIC.

https://lwn.net/Articles/564978/

Also note we use usec-based RTT samples, and nanosecond high-resolution
timers in fq. I suspect the ns simulator experiment had sync issues
because of low-resolution timers or a simulation artifact, without any
jitter source.

Billions of flows are now 'paced', but keep in mind most packets are not
paced. We do not pace in slow start, and we do not pace when TCP is
ACK-clocked.

Only when someone sets SO_MAX_PACING_RATE below the TCP rate can all
packets eventually end up paced, using TSO 'clusters' for TCP.
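
To make that knob concrete, here is a minimal userspace sketch
(illustration only, not the kernel implementation; it assumes fd is an
already-connected Linux TCP socket). The option takes a value in bytes
per second:

/* Cap this socket's pacing rate so fq (or TCP's internal pacing)
 * spreads its packets out over time.
 */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47   /* asm-generic value, in case headers predate it */
#endif

static int cap_pacing_rate(int fd, unsigned int bytes_per_sec)
{
        if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                       &bytes_per_sec, sizeof(bytes_per_sec)) < 0) {
                perror("setsockopt(SO_MAX_PACING_RATE)");
                return -1;
        }
        return 0;
}

/* e.g. cap_pacing_rate(fd, 1000 * 1000); limits the socket to ~1 MB/s */

Once the requested rate is below what TCP would otherwise achieve, every
packet on that socket ends up paced, which is the case described above.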

On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> one reference with pdf publicly available. On the website there are
> various papers on this topic. Others might be more relevant but I did
> not check all of them.
>
> Understanding the Performance of TCP Pacing,
> Amit Aggarwal, Stefan Savage, and Tom Anderson,
> IEEE INFOCOM 2000, Tel-Aviv, Israel, March 2000, pages 1157-1165.
>
> http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
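
PS: here is the rough sketch of the band logic mentioned at the top of
this message. It is only an illustration (the real test page is not
written in C, and the function and variable names are made up); the
2x/4x thresholds are the ones described above:

/* Derive the yellow/red bands from the measured idle latency and decide
 * whether each band should be drawn. Yellow covers 2x..4x the idle
 * latency, red covers 4x and above, and a band is only drawn if at
 * least one bar reaches into it.
 */
#include <stdbool.h>
#include <stddef.h>

struct latency_bands {
        double yellow_lo, yellow_hi;   /* 2x .. 4x idle latency (ms) */
        double red_lo;                 /* 4x idle latency .. top of chart */
        bool show_yellow, show_red;
};

static struct latency_bands compute_bands(double idle_ms,
                                          const double *bars, size_t n)
{
        struct latency_bands b = {
                .yellow_lo = 2.0 * idle_ms,
                .yellow_hi = 4.0 * idle_ms,
                .red_lo    = 4.0 * idle_ms,
        };

        for (size_t i = 0; i < n; i++) {
                if (bars[i] >= b.yellow_lo)
                        b.show_yellow = true;   /* some bar reaches the yellow zone */
                if (bars[i] >= b.red_lo)
                        b.show_red = true;      /* some bar reaches the red zone */
        }
        return b;
}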