This is how I've changed the graph of latency under load per input from you guys. Taken away the log axis. Put in two bands: yellow starts at double the idle latency and goes to 4x the idle latency; red starts there and goes to the top. No red band shows if no bars reach into it, and no yellow band shows if no bars get into that zone. Is it more descriptive?
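As a rough illustration of the band logic described above, here is a minimal matplotlib sketch; the function name and example data are hypothetical, not from the thread:

    import matplotlib.pyplot as plt

    def plot_latency_bands(ax, latencies_ms, idle_ms, labels):
        """Bar chart of latency under load, with bands relative to idle latency.

        Yellow band: 2x to 4x idle latency; red band: 4x idle and up.
        A band is drawn only if at least one bar actually reaches into it.
        """
        ax.bar(labels, latencies_ms, color="steelblue")
        top = max(latencies_ms)
        if any(l > 2 * idle_ms for l in latencies_ms):
            ax.axhspan(2 * idle_ms, min(4 * idle_ms, top), color="yellow", alpha=0.3)
        if any(l > 4 * idle_ms for l in latencies_ms):
            ax.axhspan(4 * idle_ms, top, color="red", alpha=0.3)
        ax.set_ylabel("latency (ms)")  # linear axis: the log scale was removed

    # Hypothetical example: idle latency 10 ms, three loaded measurements.
    fig, ax = plt.subplots()
    plot_latency_bands(ax, [15, 35, 80], idle_ms=10, labels=["run a", "run b", "run c"])
    plt.show()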
On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:

Wait, this is a 15-year-old experiment using Reno and a single test
bed, using the ns simulator.
Naive TCP pacing implementations were tried, and probably failed.
Pacing individual packets is quite bad; this is the first lesson one
learns when implementing TCP pacing, especially if you try to drive a
40Gbps NIC.
https://lwn.net/Articles/564978/
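Some back-of-the-envelope arithmetic (mine, not from the thread) shows why per-packet pacing is so punishing at these speeds, and why pacing in TSO-sized chunks helps:

    def pacing_events_per_sec(rate_bps, chunk_bytes):
        """Timer events per second needed to release one chunk at a time."""
        return rate_bps / (chunk_bytes * 8)

    rate = 40e9  # a 40Gbps NIC
    per_packet = pacing_events_per_sec(rate, 1500)   # ~3.3M events/s, ~300 ns apart
    per_tso = pacing_events_per_sec(rate, 65536)     # ~76k events/s, ~13 us apart
    print(f"per packet: {per_packet:,.0f}/s, per 64KB TSO burst: {per_tso:,.0f}/s")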
Also note we use usec-based RTT samples, and nanosecond high-resolution
timers in fq. I suspect the ns simulator experiment had sync issues
caused by low-resolution timers or a simulation artifact, without any
jitter source.
Billions of flows are now 'paced', but keep in mind most packets are not
paced. We do not pace in slow start, and we do not pace when TCP is
ACK-clocked.
Only when someone sets SO_MAX_PACING_RATE below the TCP rate can we
eventually have all packets paced, using TSO 'clusters' for TCP.
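For reference, a minimal sketch of setting SO_MAX_PACING_RATE from user space. Python's socket module does not export this constant, so its Linux value (47, from include/uapi/asm-generic/socket.h) is hard-coded here; the rate is in bytes per second, and on a 2015-era kernel it takes effect when the fq qdisc is installed on the egress device:

    import socket

    # SO_MAX_PACING_RATE is not exported by Python's socket module;
    # 47 is its value in Linux include/uapi/asm-generic/socket.h.
    SO_MAX_PACING_RATE = 47

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Cap this socket at ~1 MB/s; the option value is bytes per second.
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, 1_000_000)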
On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> one reference with a publicly available PDF. On the website there are
> various papers on this topic. Others might be more relevant, but I did
> not check all of them.
> Understanding the Performance of TCP Pacing,
> Amit Aggarwal, Stefan Savage, and Tom Anderson,
> IEEE INFOCOM 2000, Tel-Aviv, Israel, March 2000, pages 1157-1165.
>
> http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf