Hi,

On 2013-11-29, at 10:42, Toke Høiland-Jørgensen wrote:
> Well, what the LINCS people have done (the link in my previous mail) is
> basically this: sniff TCP packets that have timestamps on them (i.e.,
> with the TCP timestamp option enabled), and compute the delta between
> the timestamps as a latency measure.

We tried this too. The TCP timestamps are too coarse-grained for
datacenter latency measurements; I think on at least Linux and FreeBSD
they get rounded up to 1 ms or something. (Midori, do you remember the
exact value?)

> Putting timestamps into the TCP stream and reading them out at the other
> end might work; but is there a way to force each timestamp to be in a
> separate packet?

No, but the sender and receiver can agree to embed them every X bytes in
the stream. Yes, a timestamp may occasionally be split across two
segments, but that should be OK, since the receiver parses the
reassembled byte stream rather than individual packets.

> Do you know how that worked more specifically and/or do you have a link
> to the source code?

http://e2epi.internet2.edu/thrulay/ is the original. There are several
variants, but I think they have also been abandoned:

http://thrulay-hd.sourceforge.net/
http://thrulay-ng.sourceforge.net/

Lars
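
P.S. The embed-every-X-bytes idea above can be sketched roughly like
this. The framing (an 8-byte wall-clock stamp before every chunk) and
all the names (RECORD_BYTES, embed_timestamps, extract_timestamps) are
my own assumptions for illustration, not what thrulay actually does:

```python
import socket
import struct
import time

# Assumed framing for this sketch: before every RECORD_BYTES bytes of
# payload, the sender inserts an 8-byte timestamp (seconds since the
# epoch, as a network-order double).
RECORD_BYTES = 1024
STAMP = struct.Struct("!d")

def embed_timestamps(payload: bytes) -> bytes:
    """Split payload into RECORD_BYTES chunks, prefixing each with a stamp."""
    out = bytearray()
    for i in range(0, len(payload), RECORD_BYTES):
        out += STAMP.pack(time.time())
        out += payload[i:i + RECORD_BYTES]
    return bytes(out)

def extract_timestamps(stream: bytes):
    """Yield (send_time, chunk) pairs from the reassembled stream.

    Because this parses the byte stream (not packets), a stamp split
    across two TCP segments is handled for free.
    """
    pos = 0
    while pos + STAMP.size <= len(stream):
        (ts,) = STAMP.unpack_from(stream, pos)
        pos += STAMP.size
        chunk = stream[pos:pos + RECORD_BYTES]
        pos += len(chunk)
        yield ts, chunk

# Loopback demo over a socketpair; the "latency" seen here is of course
# just local copy overhead, not network delay.
a, b = socket.socketpair()
a.sendall(embed_timestamps(b"x" * 4000))
a.close()
data = b"".join(iter(lambda: b.recv(65536), b""))
b.close()
for ts, chunk in extract_timestamps(data):
    delay_us = (time.time() - ts) * 1e6
    print(f"{len(chunk)}-byte chunk, sender-to-reader delay ~{delay_us:.0f} us")
```

Note this measures one-way delay only if the clocks on both ends are
synchronized; for round-trip numbers the receiver would echo the stamps
back, the way thrulay does.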