[Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems
Bill Ver Steeg (versteb)
versteb at cisco.com
Fri May 15 07:27:47 EDT 2015
But the TCP timestamps are impacted by packet loss. You will sometimes get an accurate RTT reading, and you will sometimes get multiples of the RTT due to packet loss and retransmissions. I would hate to see a line classified as bloated when the real problem is simple packet loss. Head-of-line blocking, cumulative ACKs, yada, yada, yada.
You really need to use a packet-oriented protocol (ICMP/UDP) to get a true measure of RTT at the application layer. If you can instrument TCP in the kernel to make the instantaneous RTT available to the application, that might work. I am not sure how you would roll that out in a timely manner, though. I think I actually wrote some code to do this on BSD many years ago, and it gave pretty good results. I was building a terminal server (remember those?) and needed to have ~50 ms ± 20 ms echo times.
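On Linux, something close to "instrument TCP in the kernel" already exists: the kernel exposes its smoothed RTT estimate per socket through the TCP_INFO getsockopt. A minimal sketch is below; note that the struct layout (8 leading one-byte fields, then 32-bit fields with tcpi_rtt as the 16th) is an assumption matching the `struct tcp_info` in `<linux/tcp.h>` and is Linux-only, not something from this thread.

```python
# Sketch: reading the kernel's smoothed RTT for a TCP socket via
# getsockopt(TCP_INFO). Linux-only. The unpack offsets below assume
# the <linux/tcp.h> layout of `struct tcp_info`: 8 leading u8 fields,
# then u32 fields, with tcpi_rtt (microseconds) as the 16th u32.
import socket
import struct

def kernel_srtt_us(sock: socket.socket) -> int:
    """Return the kernel's smoothed RTT estimate for `sock`, in microseconds."""
    buf = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    fields = struct.unpack_from("<8B16I", buf)
    return fields[8 + 15]  # tcpi_rtt (assumed offset, see lead-in)

if __name__ == "__main__":
    # Loopback demo: connect a socket pair, move a little data,
    # then read the RTT estimate from the client side.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    cli.sendall(b"ping")
    conn.recv(4)
    print("srtt:", kernel_srtt_us(cli), "us")
    for s in (cli, conn, srv):
        s.close()
```

Because this reads the kernel's own estimator rather than timing an out-of-band probe, it reflects what TCP itself believes the path RTT is, including retransmission effects.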
From: bloat-bounces at lists.bufferbloat.net [mailto:bloat-bounces at lists.bufferbloat.net] On Behalf Of Eggert, Lars
Sent: Friday, May 15, 2015 4:18 AM
To: Aaron Wood
Cc: cake at lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel at lists.bufferbloat.net; bloat
Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems
On 2015-5-15, at 06:44, Aaron Wood <woody77 at gmail.com> wrote:
ICMP prioritization over TCP?
Ping in parallel to TCP is a hacky way to measure latencies, not only because of prioritization, but also because you don't measure TCP send/receive buffer latencies (and they can be large; buffer auto-tuning is not so great).
You really need to embed timestamps in the TCP bytestream and echo them back. See the recent netperf patch I sent.
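The in-band approach Lars describes can be sketched as follows: the client writes a timestamp into the TCP bytestream, the server echoes those bytes back unchanged, and the client computes the RTT on receipt. The framing and names here are my own illustration, not the netperf patch; a real tool would also handle partial reads.

```python
# Sketch of in-band RTT measurement over TCP: the client embeds a
# timestamp in the bytestream, an echo server returns it verbatim,
# and the client times the round trip. Unlike a parallel ICMP ping,
# this sample includes time spent in the TCP send/receive buffers.
# (Framing and helper names are illustrative, not the netperf patch.)
import socket
import struct
import threading
import time

STAMP = struct.Struct("!d")  # one big-endian double per probe

def echo_server(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(STAMP.size):
            conn.sendall(data)  # echo the timestamp bytes back verbatim

def measure_rtt(sock: socket.socket) -> float:
    """Send one in-band timestamp and return the measured RTT in seconds."""
    sock.sendall(STAMP.pack(time.monotonic()))
    (sent,) = STAMP.unpack(sock.recv(STAMP.size))
    return time.monotonic() - sent

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    with socket.create_connection(srv.getsockname()) as cli:
        for _ in range(3):
            print(f"rtt: {measure_rtt(cli) * 1e6:.0f} us")
```

Because the probe rides inside the same bytestream as the data, it is delayed by exactly the queues (socket buffers, retransmissions, head-of-line blocking) that an out-of-band ICMP probe would bypass.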