[Bloat] Detecting bufferbloat from outside a node

Sebastian Moeller moeller0 at gmx.de
Tue Apr 28 05:58:37 EDT 2015


Hi Neil,


On Apr 28, 2015, at 09:17 , Neil Davies <neil.davies at pnsol.com> wrote:

> Jonathan
> 
> The timestamps don't change very quickly - dozens (or more) of packets can carry the same timestamp, so they don't give you the appropriate discrimination power. Timed observations at key points give you all you need (actually, appropriately gathered, they give you all you can possibly know - by observation).

	But this has two issues:
1) “timed observations”: relatively easy if all nodes are under your control, otherwise hard. I know about the CERN paper, but they had all nodes under their control, symmetric bandwidth and a shipload of samples; over the wild internet “timed observations” remain hard (and get harder as the required temporal precision goes up). A toy sketch after point 2) illustrates the clock-offset part of the problem.

2) “key points”: to know the key points you must already have a decent understanding of the effective topology of the network, which again is much harder over the wider internet than when all nodes are under your control.
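
To make point 1) concrete, here is a toy illustration (a Python sketch of mine, with invented numbers) of how an unknown clock offset contaminates a one-way delay measurement:

    # A minimal sketch (not from this thread) of why one-way "timed
    # observations" across nodes you do not control are hard: the measured
    # one-way delay is the true delay plus the unknown clock offset between
    # the two observation points, and that offset drifts over time.
    # All numbers are invented for illustration.

    true_delay_ms   = 5.0    # actual propagation + queueing delay we want
    clock_offset_ms = 40.0   # sender/receiver clock disagreement (assumed, NTP-ish)

    t_send_local  = 1000.0                                          # sender's clock (ms)
    t_recv_remote = t_send_local + true_delay_ms + clock_offset_ms  # receiver's clock (ms)

    measured = t_recv_remote - t_send_local
    print(f"measured one-way delay: {measured:.1f} ms (true: {true_delay_ms:.1f} ms)")
    # -> 45.0 ms: the offset swamps millisecond-scale queueing delay unless
    #    both endpoints are disciplined (PTP/GPS), i.e. under your control.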


I am not sure how Paolo’s “no-touching” problem fits into the requirements for your deltaQ (meta-)math ;)

Best Regards
	Sebastian

> 
> Neil
> 
> On 28 Apr 2015, at 00:11, Jonathan Morton <chromatix99 at gmail.com> wrote:
> 
>> On 27 Apr 2015 23:31, "Neil Davies" <neil.davies at pnsol.com> wrote:
>> >
>> > Hi Jonathan
>> >
>> > On 27 Apr 2015, at 16:25, Jonathan Morton <chromatix99 at gmail.com> wrote:
>> >
>> >> One thing that might help you here is the TCP Timestamps option. The timestamps thus produced are opaque, but you can observe them and measure the time intervals between their production and echo. You should be able to infer something from that, with care.
>> >>
>> >> To determine the difference between loaded and unloaded states, you may need to observe for an extended period of time. Eventually you'll observe some sort of bulk flow, even if it's just a software update cycle. It's not quite so certain that you'll observe an idle state, but it is sufficient to observe an instance of the link not being completely saturated, which is likely to occur at least occasionally.
>> >>
>> >> - Jonathan Morton
>> >
>> > We looked at using TCP timestamps early on in our work. The problem is that they don't really help extract the fine-grained information needed. The timestamps can move in very large steps, and the accuracy (and precision) can vary widely from implementation to implementation.
>> 
>> Well, that's why you have to treat them as opaque, just like I said. Ignore whatever meaning the end host producing them might embed in them, and simply watch which ones get echoed back and when. You only have to rely on the resolution of your own clocks.
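
For concreteness, that opaque treatment could be mocked up roughly as below. This is just a sketch of mine in Python; the packet records are assumed to be pre-parsed (e.g. out of a pcap) and the field names are hypothetical.

    # Ignore what a TSval means to the host that produced it; just note *our
    # own* capture time when a TSval first appears in one direction and again
    # when the other direction echoes it back as TSecr.

    from collections import namedtuple

    Pkt = namedtuple("Pkt", "t_local src dst tsval tsecr")  # t_local = our capture clock (s)

    def echo_delays(packets):
        """Yield (direction, delay) pairs: local time between first sighting
        of a TSval and the first packet echoing it back as TSecr. Each delay
        upper-bounds the RTT of the path segment beyond the capture point."""
        first_seen = {}                      # (src, dst, tsval) -> first local capture time
        for p in packets:
            # Many packets share a TSval (as Neil notes); keeping only the
            # first sighting keeps the bound conservative.
            first_seen.setdefault((p.src, p.dst, p.tsval), p.t_local)
            echo_key = (p.dst, p.src, p.tsecr)   # the value this packet echoes
            if echo_key in first_seen:
                yield (p.dst, p.src), p.t_local - first_seen.pop(echo_key)

    # Invented example: data A->B captured at t=0.000 s, echoed by B at t=0.012 s
    trace = [Pkt(0.000, "A", "B", tsval=100, tsecr=50),
             Pkt(0.012, "B", "A", tsval=51,  tsecr=100)]
    for direction, d in echo_delays(trace):
        print(direction, f"{d * 1000:.1f} ms")   # ('A', 'B') 12.0 ms

Only the observer's own capture clock enters the arithmetic, which is the point.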
>> 
>> > The timestamps are there to try to get a gross (if my memory serves me right, ~100ms) approximation to the RTT - not good enough for reasoning about TCP-based interactive/"real time" apps.
>> 
>> On the contrary, these timestamps can indicate much better precision than that; in particular they indicate an upper bound on the instantaneous RTT which can be quite tight under favourable circumstances. On a LAN, you could reliably determine that the RTT was below 1ms this way.
>> 
>> Now, what it doesn't give you is a strict lower bound. But you can often look at what's going on in that TCP stream and determine that favourable circumstances exist, such that the upper bound RTT estimate is probably reasonably tight. Or you could observe that the stream is mostly idle, and thus probably influenced by delayed acks and Nagle's algorithm, and discount that measurement accordingly.
>> 
>> - Jonathan Morton
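
The discounting step could be as crude as the following (again just a sketch of mine, with an assumed packets-in-flight heuristic and invented numbers):

    # Treat each echo delay as an *upper bound* on the RTT, and discount
    # samples probably inflated by delayed ACKs / Nagle on a mostly idle
    # stream. "Busy" is approximated here by how many data packets of the
    # flow were in flight when the TSval went out - a crude, assumed
    # stand-in for "bulk flow in progress".

    def tight_upper_bounds(samples, min_pkts_in_flight=4):
        """samples: iterable of (delay_s, pkts_in_flight) tuples.
        Keep the delays taken while the flow was busy, i.e. those most
        likely to be tight upper bounds on the instantaneous RTT."""
        return [d for d, in_flight in samples if in_flight >= min_pkts_in_flight]

    samples = [(0.180, 1),   # idle stream: likely a delayed-ACK artefact, drop
               (0.013, 12),  # bulk flow, queues empty: keep, close to the base RTT
               (0.041, 9)]   # bulk flow under load: keep - the extra ~28 ms hints at queueing
    kept = tight_upper_bounds(samples)
    print(min(kept), max(kept))   # -> 0.013 0.041
    # The spread between the best and the loaded upper bounds is roughly the
    # induced-latency signal this thread is after.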
>> 
> 



