[Bloat] Bufferbloat test measurements

Alan Jenkins alan.christopher.jenkins at gmail.com
Sat Aug 27 15:18:16 EDT 2016


On 27/08/16 18:37, Kathleen Nichols wrote:
> In-line below. Only for geeks.
Present.

> On 8/27/16 9:03 AM, Alan Jenkins wrote:
>> That's the simplest measure of bufferbloat though :).
> Don't I know! :)  Have spent a couple of years figuring out how to measure
> experienced delay...
>> Do you have a criticism in terms of dslreports.com?  I think it's fairly
>> transparent, showing idle vs. download vs. upload.  The headline
>> figures are an average, and you can look at all the data points.  (You
>> can increase the measurement frequency if you're specifically interested
>> in that).
> "Criticism" seems too harsh. The uplink and downlink speed stuff is great
> and agrees with all other measures. The "bufferbloat" grade is, I think,
> sort
> of confusing. Also, it's not clear where the queue the test builds up is
> located.
> It could be in ISP or home. So, I ran the test while I was also streaming a
> Netflix video. Under the column "RTT/jitter Avg" the test lists values that
> range from 654 to 702 with +/- 5.2 to 20.8 ms (for the four servers). I
> couldn't
> figure out what that means.

My assumption is the RTT is just read out from the TCP socket, i.e. it's 
one of the kernel statistics.

http://stackoverflow.com/questions/16231600/fetching-the-tcp-rtt-in-linux/16232250#16232250

Looking in `ss.c` as suggested there, the second figure shown by `ss` is 
`rttvar`, the kernel's measure of RTT variation.  If my assumption is 
right, that would also tell us where the "jitter" figure comes from.
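
For what it's worth, here is a minimal sketch of reading those two figures 
directly, assuming Linux and using `getsockopt(TCP_INFO)` (`ss` reads the 
same `struct tcp_info`, just via netlink); the example.com target is only a 
placeholder:

    /* Sketch: print the kernel's smoothed RTT and RTT variance for a
     * TCP connection.  Assumes Linux; host/port are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("example.com", "80", &hints, &res) != 0)
            return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
            return 1;

        struct tcp_info info;
        socklen_t len = sizeof(info);
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0) {
            /* tcpi_rtt is the smoothed RTT and tcpi_rttvar its variation,
             * both in microseconds -- the pair `ss -i` prints as
             * rtt:<rtt>/<rttvar>. */
            printf("srtt %.1f ms  rttvar %.1f ms\n",
                   info.tcpi_rtt / 1000.0, info.tcpi_rttvar / 1000.0);
        }

        close(fd);
        freeaddrinfo(res);
        return 0;
    }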

>   I can see the delay ramp up over the test (the video stream also follows
> that ramp, though its normal delay ranges from about 1 ms to about 40 ms).
> If I took the RT delay experienced by the packets to/from those servers, I
> got median values between 391 and 429 ms. The IQRs were about 240 ms to
> 536-616 ms. The maximum values were all just over 700 ms, agreeing with the
> dslreports number. But that number is listed as an average, so I don't
> understand. Also, what is the jitter? I looked around for info on the
> numbers but didn't find it. Probably I just totally missed some obvious
> thing to click on.
>
> But the thing is, I've been doing a lot of monitoring of my link and I
> don't normally see those kinds of delays. In the note I put out a bit ago,
> there is definitely this bursting behavior that is particularly indulged
> in by Netflix and Google, but when our downstream bandwidth went up, those
> bursts no longer caused such long transient delays (okay, duh). So, I'm
> not sure who should be tagged as the "responsible party" for the grade
> that the test gives. Nor am I convinced that users have to assume that
> means they are going to see those kinds of delays. But this is a general
> issue with active measurement.
>
> It's not a criticism of the test, but maybe of the presentation of the
> result.
>
> 	Kathie

Thanks.  I wasn't clear on what you meant the first time, particularly 
about the 40 ms figure.  That's a very comprehensive explanation of your 
point.

It's easy to rant about the ubiquitous dumb FIFOs in consumer equipment 
:).  The queue on the ISP side of the link ("download") can be more 
complex and varied, though.  Mine gets good results on dslreports.com, but 
the latency isn't always so good during torrent downloads.

I do have doubts about the highly "multi-threaded" test method, i.e. 
running many parallel TCP streams; dslreports lets you dial it down 
manually (Preferences).  As you say, a better real-world test is to just 
use your connection normally and run smokeping... or an online "line 
monitor" like http://www.thinkbroadband.com/ping :).
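
In the same spirit, a rough sketch of that kind of line monitor, assuming 
that timing a plain TCP connect() is an acceptable stand-in for the ICMP 
probes smokeping and thinkbroadband actually use (host/port are again 
placeholders):

    /* Sketch: print one latency sample per second by timing TCP connect().
     * Assumes Linux/POSIX; example.com:80 is a placeholder target. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <time.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static double connect_ms(const struct addrinfo *ai)
    {
        struct timespec t0, t1;
        int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            return -1.0;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        int rc = connect(fd, ai->ai_addr, ai->ai_addrlen);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);

        if (rc != 0)
            return -1.0;
        return (t1.tv_sec - t0.tv_sec) * 1000.0 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("example.com", "80", &hints, &res) != 0)
            return 1;

        for (;;) {                      /* one sample per second */
            printf("%.1f ms\n", connect_ms(res));
            sleep(1);
        }
    }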

Alan