Hi,

On 2015-5-15, at 13:27, Bill Ver Steeg (versteb) wrote:
> But the TCP timestamps are impacted by packet loss. You will sometimes get an accurate RTT reading, and you will sometimes get multiples of the RTT due to packet loss and retransmissions.

Right. But those inflated samples will be transient, so you can identify them against the baseline.

> You really need to use a packet oriented protocol (ICMP/UDP) to get a true measure of RTT at the application layer.

I disagree. You can use ICMP/UDP to establish a lower bound on the delay an application over TCP will see, but not an accurate estimate of it (because the socket buffers are not included in the measurement). And you rely on the network to not prioritize ICMP/UDP, but otherwise keep it in the same queues as the TCP traffic.

> If you can instrument TCP in the kernel to make instantaneous RTT available to the application, that might work. I am not sure how you would roll that out in a timely manner, though.

That's already part of the TCP info struct, I think. At least on Linux.

Lars

> I think I actually wrote some code to do this on BSD many years ago, and it gave pretty good results. I was building a terminal server (remember those?) and needed to have ~50ms +- 20ms echo times.
>
> Bvs
>
> From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Eggert, Lars
> Sent: Friday, May 15, 2015 4:18 AM
> To: Aaron Wood
> Cc: cake@lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net; bloat
> Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems
>
> On 2015-5-15, at 06:44, Aaron Wood wrote:
> ICMP prioritization over TCP?
>
> Probably.
>
> Ping in parallel to TCP is a hacky way to measure latencies; not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large; auto-tuning is not so great).
>
> You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent.
>
> Lars
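
For reference, a minimal sketch of reading the kernel's RTT estimate from the TCP info struct on Linux (assuming a connected TCP socket "fd"; error handling trimmed):

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Query the kernel's RTT estimator for a connected TCP socket.
       tcpi_rtt and tcpi_rttvar are the smoothed RTT and its variance,
       in microseconds, as maintained by the Linux TCP stack. */
    static int print_tcp_rtt(int fd)
    {
            struct tcp_info info;
            socklen_t len = sizeof(info);

            if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) < 0)
                    return -1;
            printf("srtt: %u us, rttvar: %u us\n",
                   info.tcpi_rtt, info.tcpi_rttvar);
            return 0;
    }

Note this reports the transport-level RTT the stack has measured; it still excludes time the data spends in the send/receive socket buffers, which is what embedding timestamps in the bytestream (as in the netperf patch mentioned above) is meant to capture.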