General list for discussing Bufferbloat
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
@ 2015-04-19  5:26 jb
  2015-04-19  7:36 ` David Lang
                   ` (3 more replies)
  0 siblings, 4 replies; 127+ messages in thread
From: jb @ 2015-04-19  5:26 UTC (permalink / raw)
  To: Dave Taht, bloat


[-- Attachment #1.1: Type: text/plain, Size: 3070 bytes --]

The graph below the upload and download is what is new.
(unfortunately you do have to be logged into the site to see this)
it shows the latency during the upload and download, color coded. (see
attached image).

In your case during the upload it spiked to ~200ms from ~50ms but it was
not so bad. During download, there were no issues with latency.

I don't want to force anyone to sign up, just was making sure not to
confuse anonymous users with more information than they knew what to do
with. When I'm clear how to present the information, I'll make it available
by default, to anyone member or otherwise.

Also, regarding your download: it stalled out completely for 5 seconds,
hence the low figure reported for your actual speed. It picked up to full
speed again at the end. It basically went
40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
which explains why the latency measurements in blue are not all high.
A TCP stall? You may want to re-run, or re-run with Chrome or Safari, to see
if it is reproducible. Normally users on your ISP have flat downloads with
no stalls.

thanks
-Justin



> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> What I see here is the same old latency, upload, download series, not
>> latency and bandwidth at the same time.
>>
>> http://www.dslreports.com/speedtest/319616
>>
>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>> wrote:
>> > Folks,
>> >
>> > I am delighted to pass along the news that Justin has added latency
>> measurements into the Speed Test at DSLReports.com.
>> >
>> > Go to: https://www.dslreports.com/speedtest and click the button for
>> your Internet link. This controls the number of simultaneous connections
>> that get established between your browser and the speedtest server. After
>> you run the test, click the green "Results + Share" button to see detailed
>> info. For the moment, you need to be logged in to see the latency results.
>> There's a "register" link on each page.
>> >
>> > The speed test measures latency using websocket pings: Justin says that
>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just
>> ran a test and got 48 msec latency from DSLReports, while ping
>> gstatic.com gave 38-40 msec, so they're pretty fast.
>> >
>> > You can leave feedback on this page -
>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>> >
>> > Enjoy!
>> >
>> > Rich
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>

[-- Attachment #1.2: Type: text/html, Size: 4778 bytes --]

[-- Attachment #2: Screen Shot 2015-04-19 at 3.08.56 pm.png --]
[-- Type: image/png, Size: 14545 bytes --]


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19  5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
@ 2015-04-19  7:36 ` David Lang
  2015-04-19  7:48   ` David Lang
  2015-04-19  9:33   ` jb
  2015-04-19  8:28 ` Alex Burr
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 127+ messages in thread
From: David Lang @ 2015-04-19  7:36 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/Plain, Size: 3973 bytes --]

As a start, the ping time during the test that shows up in the results page is 
good, but you should show that on the main screen, not just in the results+share 
tab.

It looked like the main test was showing when the upload was stalled (white
under the line instead of color), but this didn't show up in the report tab.

I also think that the retransmit stats are probably worth watching and doing
something with. You are trying to drive the line to full capacity, so some
drops/retransmits are expected. How many are expected vs how many are showing up?

http://www.dslreports.com/speedtest/320230 (and now you see my pathetic link)
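One way to ballpark the "expected" retransmit rate is to invert the Mathis
et al. model, rate ~ MSS*sqrt(3/2)/(RTT*sqrt(p)). A rough Python sketch; the
5 Mbit/s and 50 ms figures are made-up example inputs, and the model ignores
timeouts and modern congestion controls, so treat it as an order-of-magnitude
baseline only:

    from math import sqrt

    def expected_loss_rate(rate_bps, rtt_s, mss_bytes=1448):
        # Invert the Mathis model: rate = MSS * sqrt(3/2) / (RTT * sqrt(p)).
        # Returns the loss/retransmit fraction p at which a single
        # Reno-like flow would settle at the given rate over the given RTT.
        return (mss_bytes * 8 * sqrt(1.5) / (rate_bps * rtt_s)) ** 2

    # Assumed example: a 5 Mbit/s upload at 50 ms RTT -> ~0.3%
    print("%.2f%%" % (100 * expected_loss_rate(5e6, 0.050)))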

David Lang


On Sun, 19 Apr 2015, jb wrote:

> Date: Sun, 19 Apr 2015 15:26:51 +1000
> From: jb <justin@dslr.net>
> To: Dave Taht <dave.taht@gmail.com>, bloat@lists.bufferbloat.net
> Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
> 
> The graph below the upload and download is what is new.
> (unfortunately you do have to be logged into the site to see this)
> it shows the latency during the upload and download, color coded. (see
> attached image).
>
> In your case during the upload it spiked to ~200ms from ~50ms but it was
> not so bad. During upload, there were no issues with latency.
>
> I don't want to force anyone to sign up, just was making sure not to
> confuse anonymous users with more information than they knew what to do
> with. When I'm clear how to present the information, I'll make it available
> by default, to anyone member or otherwise.
>
> Also, regarding your download, it stalled out completely for 5 seconds..
> Hence the low conclusion as to your actual speed. It picked up to full
> speed again at the end. It basically went
> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
> which explains why the Latency measurements in blue are not all high.
> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see
> if it is reproducible. Normally users on your ISP have flat downloads with
> no stalls.
>
> thanks
> -Justin
>
>
>
>> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>
>>> What I see here is the same old latency, upload, download series, not
>>> latency and bandwidth at the same time.
>>>
>>> http://www.dslreports.com/speedtest/319616
>>>
>>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>>> wrote:
>>>> Folks,
>>>>
>>>> I am delighted to pass along the news that Justin has added latency
>>> measurements into the Speed Test at DSLReports.com.
>>>>
>>>> Go to: https://www.dslreports.com/speedtest and click the button for
>>> your Internet link. This controls the number of simultaneous connections
>>> that get established between your browser and the speedtest server. After
>>> you run the test, click the green "Results + Share" button to see detailed
>>> info. For the moment, you need to be logged in to see the latency results.
>>> There's a "register" link on each page.
>>>>
>>>> The speed test measures latency using websocket pings: Justin says that
>>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just
>>> ran a test and got 48 msec latency from DSLReports, while ping
>>> gstatic.com gave 38-40 msec, so they're pretty fast.
>>>>
>>>> You can leave feedback on this page -
>>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>>>
>>>> Enjoy!
>>>>
>>>> Rich
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>>
>


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19  7:36 ` David Lang
@ 2015-04-19  7:48   ` David Lang
  2015-04-19  9:33   ` jb
  1 sibling, 0 replies; 127+ messages in thread
From: David Lang @ 2015-04-19  7:48 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 4322 bytes --]

On Sun, 19 Apr 2015, David Lang wrote:

> As a start, the ping time during the test that shows up in the results page 
> is good, but you should show that on the main screen, not just in the 
> results+share tab.

Thinking about it, how about a graph showing the ratio of latency under test to 
the initial idle latency along with the bandwidth number?
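Something like this, as a minimal sketch (the sample lists are made up and
the function name is hypothetical):

    from statistics import median

    def load_ratio(idle_ms, loaded_ms):
        # Latency under load relative to idle latency; 1.0 means no bloat.
        return median(loaded_ms) / median(idle_ms)

    print(load_ratio([48, 50, 52], [180, 200, 220]))  # -> 4.0, i.e. 4x worse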

David Lang

> it looked like the main test was showing when the upload was stalled, (white 
> under the line instead of color), but this didn't show up in the report tab.
>
> I also think that the retransmit stats are probably worth watching and doing 
> something with. You are trying to drive the line to full capacity, so some 
> drops/retransmts are expected. How many re expected vs how many are showing 
> up?
>
> http://www.dslreports.com/speedtest/320230 (and now you see my pathetic link)
>
> David Lang
>
>
> On Sun, 19 Apr 2015, jb wrote:
>
>> Date: Sun, 19 Apr 2015 15:26:51 +1000
>> From: jb <justin@dslr.net>
>> To: Dave Taht <dave.taht@gmail.com>, bloat@lists.bufferbloat.net
>> Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
>> 
>> The graph below the upload and download is what is new.
>> (unfortunately you do have to be logged into the site to see this)
>> it shows the latency during the upload and download, color coded. (see
>> attached image).
>> 
>> In your case during the upload it spiked to ~200ms from ~50ms but it was
>> not so bad. During upload, there were no issues with latency.
>> 
>> I don't want to force anyone to sign up, just was making sure not to
>> confuse anonymous users with more information than they knew what to do
>> with. When I'm clear how to present the information, I'll make it available
>> by default, to anyone member or otherwise.
>> 
>> Also, regarding your download, it stalled out completely for 5 seconds..
>> Hence the low conclusion as to your actual speed. It picked up to full
>> speed again at the end. It basically went
>> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
>> which explains why the Latency measurements in blue are not all high.
>> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see
>> if it is reproducible. Normally users on your ISP have flat downloads with
>> no stalls.
>> 
>> thanks
>> -Justin
>> 
>> 
>> 
>>> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>> 
>>>> What I see here is the same old latency, upload, download series, not
>>>> latency and bandwidth at the same time.
>>>> 
>>>> http://www.dslreports.com/speedtest/319616
>>>> 
>>>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>>>> wrote:
>>>>> Folks,
>>>>> 
>>>>> I am delighted to pass along the news that Justin has added latency
>>>> measurements into the Speed Test at DSLReports.com.
>>>>> 
>>>>> Go to: https://www.dslreports.com/speedtest and click the button for
>>>> your Internet link. This controls the number of simultaneous connections
>>>> that get established between your browser and the speedtest server. After
>>>> you run the test, click the green "Results + Share" button to see 
>>>> detailed
>>>> info. For the moment, you need to be logged in to see the latency 
>>>> results.
>>>> There's a "register" link on each page.
>>>>> 
>>>>> The speed test measures latency using websocket pings: Justin says that
>>>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I 
>>>> just
>>>> ran a test and got 48 msec latency from DSLReports, while ping
>>>> gstatic.com gave 38-40 msec, so they're pretty fast.
>>>>> 
>>>>> You can leave feedback on this page -
>>>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>>>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>>>> 
>>>>> Enjoy!
>>>>> 
>>>>> Rich
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>> 
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>> 
>>> 
>>> 
>

[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19  5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
  2015-04-19  7:36 ` David Lang
@ 2015-04-19  8:28 ` Alex Burr
  2015-04-19 10:20 ` Sebastian Moeller
  2015-04-19 12:14 ` [Bloat] " Toke Høiland-Jørgensen
  3 siblings, 0 replies; 127+ messages in thread
From: Alex Burr @ 2015-04-19  8:28 UTC (permalink / raw)
  To: jb, bloat

[-- Attachment #1: Type: text/plain, Size: 1597 bytes --]

Justin,
This looks really useful. On the subject of presenting the information in a
clear way, we have always struggled with how to present a 'larger is worse'
number. A while back I posted a graphic which attempts to overcome this for
latency: https://imgrush.com/0oPGJ8VHluFy.png
Feel free to use any aspect of that. The example there compares fictional
ISPs, but it could easily be used to compare typical latency with latency
under load.
(I used the word 'delay' as it is more familiar than latency. The number is
illustrated by a picture of a physical queue; hopefully everyone can identify
it instantly, and knows that a longer one is worse. The eye is supposed to be
drawn to the figure at the back of the queue to emphasise this.)
Best,
Alex

      From: jb <justin@dslr.net>
 To: Dave Taht <dave.taht@gmail.com>; bloat@lists.bufferbloat.net 
 Sent: Sunday, April 19, 2015 6:26 AM
 Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
   
The graph below the upload and download is what is new. (unfortunately you do
have to be logged into the site to see this) it shows the latency during the
upload and download, color coded. (see attached image).

In your case during the upload it spiked to ~200ms from ~50ms but it was not so bad. During upload, there were no issues with latency.

I don't want to force anyone to sign up, just was making sure not to confuse anonymous users with more information than they knew what to do with. When I'm clear how to present the information, I'll make it available by default, to anyone member or otherwise.

   

[-- Attachment #2: Type: text/html, Size: 4283 bytes --]


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19  7:36 ` David Lang
  2015-04-19  7:48   ` David Lang
@ 2015-04-19  9:33   ` jb
  2015-04-19 10:45     ` David Lang
  1 sibling, 1 reply; 127+ messages in thread
From: jb @ 2015-04-19  9:33 UTC (permalink / raw)
  To: David Lang, bloat

[-- Attachment #1: Type: text/plain, Size: 6403 bytes --]

Hey, there is nothing wrong with Megapath! They were my second ISP after
Northpoint went bust.
If I had a time machine and went back to 2001, I'd be very happy with
Megapath. :)

Anyway, Rich has some good ideas for how to display the latency during the
test progress, and yes, that is the plan.

The results page, for the sake of not confusing random people, just shows the
smoothed line and not the instant download speeds. I'm not confident that the
instant speeds relate 1:1 with what is going on at your interface, as
unfortunately modern browsers don't give nearly enough feedback on how
uploads are going compared to how they instrument downloads, which is almost
at a packet-by-packet level. You would hope they would pass back events
regularly, but unless the line is pretty fast, they don't. They go quiet and
catch up, and meanwhile the actual upload might be steady.

The retransmit stats, congestion window and other things come from the Linux
TCP structures on the server side. Retransmits are not necessarily lost
packets; it can just be TCP getting confused by a highly variable RTT and
re-sending too soon. But on a very good connection (Google Fiber) that column
is 0%. On some bad connections it can be 20%+. Often it is between 1 and 3%.

Yes, the speed test is trying to drive the line to capacity, but it is not
force-feeding; it is just TCP after all, and all streams should find their
relative place in the available bandwidth, and lost packets should be rare
when the underlying network is good quality. At least, that is what I've
seen. I'm far from understanding the congestion algorithms etc. However, I
did find it somewhat surprising that whether one stream or 20, it all sort of
works out with roughly the same efficiency.

With more data I think the retransmits and other indicators can show real
problems, even though the test tends to (hopefully) find your last-mile sync
speed and drive it to capacity.

-justin

On Sun, Apr 19, 2015 at 5:36 PM, David Lang <david@lang.hm> wrote:

> As a start, the ping time during the test that shows up in the results
> page is good, but you should show that on the main screen, not just in the
> results+share tab.
>
> it looked like the main test was showing when the upload was stalled,
> (white under the line instead of color), but this didn't show up in the
> report tab.
>
> I also think that the retransmit stats are probably worth watching and
> doing something with. You are trying to drive the line to full capacity, so
> some drops/retransmts are expected. How many re expected vs how many are
> showing up?
>
> http://www.dslreports.com/speedtest/320230 (and now you see my pathetic
> link)
>
> David Lang
>
>
> On Sun, 19 Apr 2015, jb wrote:
>
>  Date: Sun, 19 Apr 2015 15:26:51 +1000
>> From: jb <justin@dslr.net>
>> To: Dave Taht <dave.taht@gmail.com>, bloat@lists.bufferbloat.net
>> Subject: Re: [Bloat] DSLReports Speed Test has latency measurement
>> built-in
>>
>>
>> The graph below the upload and download is what is new.
>> (unfortunately you do have to be logged into the site to see this)
>> it shows the latency during the upload and download, color coded. (see
>> attached image).
>>
>> In your case during the upload it spiked to ~200ms from ~50ms but it was
>> not so bad. During upload, there were no issues with latency.
>>
>> I don't want to force anyone to sign up, just was making sure not to
>> confuse anonymous users with more information than they knew what to do
>> with. When I'm clear how to present the information, I'll make it
>> available
>> by default, to anyone member or otherwise.
>>
>> Also, regarding your download, it stalled out completely for 5 seconds..
>> Hence the low conclusion as to your actual speed. It picked up to full
>> speed again at the end. It basically went
>> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
>> which explains why the Latency measurements in blue are not all high.
>> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see
>> if it is reproducible. Normally users on your ISP have flat downloads with
>> no stalls.
>>
>> thanks
>> -Justin
>>
>>
>>
>>  On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>>  What I see here is the same old latency, upload, download series, not
>>>> latency and bandwidth at the same time.
>>>>
>>>> http://www.dslreports.com/speedtest/319616
>>>>
>>>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>>>> wrote:
>>>>
>>>>> Folks,
>>>>>
>>>>> I am delighted to pass along the news that Justin has added latency
>>>>>
>>>> measurements into the Speed Test at DSLReports.com.
>>>>
>>>>>
>>>>> Go to: https://www.dslreports.com/speedtest and click the button for
>>>>>
>>>> your Internet link. This controls the number of simultaneous connections
>>>> that get established between your browser and the speedtest server.
>>>> After
>>>> you run the test, click the green "Results + Share" button to see
>>>> detailed
>>>> info. For the moment, you need to be logged in to see the latency
>>>> results.
>>>> There's a "register" link on each page.
>>>>
>>>>>
>>>>> The speed test measures latency using websocket pings: Justin says that
>>>>>
>>>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I
>>>> just
>>>> ran a test and got 48 msec latency from DSLReports, while ping
>>>> gstatic.com gave 38-40 msec, so they're pretty fast.
>>>>
>>>>>
>>>>> You can leave feedback on this page -
>>>>>
>>>>
>>>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>>>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>>>
>>>>>
>>>>> Enjoy!
>>>>>
>>>>> Rich
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>>
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>
>>>

[-- Attachment #2: Type: text/html, Size: 8952 bytes --]


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19  5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
  2015-04-19  7:36 ` David Lang
  2015-04-19  8:28 ` Alex Burr
@ 2015-04-19 10:20 ` Sebastian Moeller
  2015-04-19 10:46   ` Jonathan Morton
  2015-04-19 12:14 ` [Bloat] " Toke Høiland-Jørgensen
  3 siblings, 1 reply; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-19 10:20 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 5674 bytes --]

Hi Justin,


On Apr 19, 2015, at 07:26 , jb <justin@dslr.net> wrote:

> The graph below the upload and download is what is new.
> (unfortunately you do have to be logged into the site to see this)
> it shows the latency during the upload and download, color coded. (see attached image).

	This looks really good! The whole new test is great, and reporting the latency numbers is the cherry on top.

	If there were a fairy around granting me three wishes restricted to your speedtest’s latency portion (I know, sort of the short end as far as wish-fairies go) I would ask for:
1) show the mean baseline latency as a line crossing all the bars; after all, that is the best case and what we need to compare against to get the “under load” part of latency under load.

2) To be able to assess the variability in the baseline I would ask for 95% confidence intervals around the baseline line. (Sure, latencies are not normally distributed and hence neither the arithmetic mean nor the confidence interval is the right thing to calculate from a statistics point of view, but at least they are relatively easy to understand, should be known to the users, and should still capture the gist of what is happening.) The beauty of confidence intervals is that they allow one to eye-ball the significance of the latency deviations under the two load conditions: if the bar does not fall into the 95% confidence interval, testing this value against the baseline distribution will turn out significant with p <= 0.05 in a t-test. (A rough sketch of this check appears below, after wish 4.)

3) I would ask to never use a log scale, as this makes extreme outliers look better than they are, so a linear scale starting at 0 would be my wish here. People starting out from a high-latency link will not be able or willing to tolerate more latency increase under load than people on low-latency links; rather, they can only tolerate less latency increase if they still want a decent VoIP or gaming experience, so reporting the latency under load as a ratio of the unloaded latency would be counterproductive. Reporting the latency under load as frequency (inverse of delay time) would be nice in that higher numbers denote a “better” link, but has the issue that it is going to be hard to quickly add different latency sources/components...

4) I know I only had three wishes, but measuring the latency while simultaneously saturating up- and download would be nice to test the worst case latency under load increase...
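A rough sketch of the check in wish 2, assuming the idle-phase and loaded-phase ping samples are available as plain lists; the 1.96 factor is the usual normal approximation, so this is illustrative only:

    from math import sqrt
    from statistics import mean, stdev

    def baseline_ci(idle_ms, z=1.96):
        # Mean baseline latency and ~95% confidence interval (normal approx.).
        m = mean(idle_ms)
        half = z * stdev(idle_ms) / sqrt(len(idle_ms))
        return m, m - half, m + half

    def outside_baseline(loaded_ms, idle_ms):
        # Loaded-phase samples falling outside the baseline 95% CI.
        _, lo, hi = baseline_ci(idle_ms)
        return [x for x in loaded_ms if x < lo or x > hi]

    idle = [48, 50, 49, 52, 51, 50, 47, 53]    # assumed idle-phase pings (ms)
    loaded = [55, 120, 180, 210, 60, 49]       # assumed upload-phase pings (ms)
    print(baseline_ci(idle))
    print(outside_baseline(loaded, idle))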

	I wonder: is the latency test running against a different host than the bandwidth tests? If so, are they using the same connection/port? (I just wonder whether fq_codel will hash the latency probe packets into different bins than the bandwidth packets.)

Best Regards
	Sebastian


> 
> In your case during the upload it spiked to ~200ms from ~50ms but it was not so bad. During upload, there were no issues with latency.
> 
> I don't want to force anyone to sign up, just was making sure not to confuse anonymous users with more information than they knew what to do with. When I'm clear how to present the information, I'll make it available by default, to anyone member or otherwise.
> 
> Also, regarding your download, it stalled out completely for 5 seconds.. Hence the low conclusion as to your actual speed. It picked up to full speed again at the end. It basically went 
> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
> which explains why the Latency measurements in blue are not all high.
> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see if it is reproducible. Normally users on your ISP have flat downloads with no stalls.
> 
> thanks
> -Justin
> 
>  
> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
> What I see here is the same old latency, upload, download series, not
> latency and bandwidth at the same time.
> 
> http://www.dslreports.com/speedtest/319616
> 
> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com> wrote:
> > Folks,
> >
> > I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
> >
> > Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
> >
> > The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
> >
> > You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
> >
> > Enjoy!
> >
> > Rich
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> 
> 
> 
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
> 
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


[-- Attachment #2.1: Type: text/html, Size: 6784 bytes --]

[-- Attachment #2.2: Screen Shot 2015-04-19 at 3.08.56 pm.png --]
[-- Type: image/png, Size: 14545 bytes --]


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19  9:33   ` jb
@ 2015-04-19 10:45     ` David Lang
  0 siblings, 0 replies; 127+ messages in thread
From: David Lang @ 2015-04-19 10:45 UTC (permalink / raw)
  To: jb; +Cc: bloat

On Sun, 19 Apr 2015, jb wrote:

> Hey there is nothing wrong with Megapath ! they were my second  ISP after
> Northpoint went bust.
> If I had a time machine, and went back to 2001, I'd be very happy with
> Megapath.. :)

nothing's wrong with megapath, just with the phone lines in the area limiting 
the best speed I can get.


In any case, this looks like a good test.

David Lang



* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19 10:20 ` Sebastian Moeller
@ 2015-04-19 10:46   ` Jonathan Morton
  2015-04-19 16:30     ` Sebastian Moeller
  0 siblings, 1 reply; 127+ messages in thread
From: Jonathan Morton @ 2015-04-19 10:46 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat


> On 19 Apr, 2015, at 13:20, Sebastian Moeller <moeller0@gmx.de> wrote:
> 
> Reporting the latency under load as frequency (inverse of delay time) would be nice in that higher numbers denote a "better” link, but has the issue that it is going to be hard to quickly add different latency sources/components...

Personally I’d say that this disadvantage matters more to us scientists and engineers than to end-users.  Frequency readouts are probably more accessible to the latter.

 - Jonathan Morton



* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19  5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
                   ` (2 preceding siblings ...)
  2015-04-19 10:20 ` Sebastian Moeller
@ 2015-04-19 12:14 ` Toke Høiland-Jørgensen
  3 siblings, 0 replies; 127+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-19 12:14 UTC (permalink / raw)
  To: jb; +Cc: bloat

jb <justin@dslr.net> writes:

> The graph below the upload and download is what is new. (unfortunately
> you do have to be logged into the site to see this) it shows the
> latency during the upload and download, color coded. (see attached
> image).

So where is that graph? I only see the regular up- and down graphs.

http://www.dslreports.com/speedtest/320936

It shows up for this result, though...

http://www.dslreports.com/speedtest/319616

-Toke


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19 10:46   ` Jonathan Morton
@ 2015-04-19 16:30     ` Sebastian Moeller
  2015-04-19 17:41       ` Jonathan Morton
  0 siblings, 1 reply; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-19 16:30 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

Hi Jonathan,

On Apr 19, 2015, at 12:46 , Jonathan Morton <chromatix99@gmail.com> wrote:

> 
>> On 19 Apr, 2015, at 13:20, Sebastian Moeller <moeller0@gmx.de> wrote:
>> 
>> Reporting the latency under load as frequency (inverse of delay time) would be nice in that higher numbers denote a "better” link, but has the issue that it is going to be hard to quickly add different latency sources/components...
> 
> Personally I’d say that this disadvantage matters more to us scientists and engineers than to end-users.  Frequency readouts are probably more accessible to the latter.

	The frequency domain more accessible to laypersons? I have my doubts ;) I like your responsiveness frequency report, as I tend to call it myself, but I more and more think calling the whole thing latency cost or latency tax will make everybody understand that it should be minimized, plus it allows for easier calculations…. ;)

Best Regards
	Sebastian


> 
> - Jonathan Morton
> 



* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19 16:30     ` Sebastian Moeller
@ 2015-04-19 17:41       ` Jonathan Morton
  2015-04-19 19:40         ` Sebastian Moeller
  0 siblings, 1 reply; 127+ messages in thread
From: Jonathan Morton @ 2015-04-19 17:41 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat


> On 19 Apr, 2015, at 19:30, Sebastian Moeller <moeller0@gmx.de> wrote:
> 
>> Frequency readouts are probably more accessible to the latter.
> 
> 	The frequency domain more accessible to laypersons? I have my doubts ;)

Gamers, at least, are familiar with “frames per second” and how that corresponds to their monitor’s refresh rate.  The desirable range of latencies, when converted to Hz, happens to be roughly the same as the range of desirable framerates.
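For concreteness, the conversion is just 1000 / latency_ms; a tiny sketch with illustrative values:

    def to_hz(latency_ms):
        # Express a round-trip time as a "responsiveness" frequency.
        return 1000.0 / latency_ms

    for ms in (16.7, 50, 200):   # roughly: great, decent, bloated
        print("%6.1f ms -> %5.1f Hz" % (ms, to_hz(ms)))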

 - Jonathan Morton



* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19 17:41       ` Jonathan Morton
@ 2015-04-19 19:40         ` Sebastian Moeller
  2015-04-19 20:53           ` Jonathan Morton
  0 siblings, 1 reply; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-19 19:40 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

Hi Jonathan,

On Apr 19, 2015, at 19:41 , Jonathan Morton <chromatix99@gmail.com> wrote:

> 
>> On 19 Apr, 2015, at 19:30, Sebastian Moeller <moeller0@gmx.de> wrote:
>> 
>>> Frequency readouts are probably more accessible to the latter.
>> 
>> 	The frequency domain more accessible to laypersons? I have my doubts ;)
> 
> Gamers, at least, are familiar with “frames per second” and how that corresponds to their monitor’s refresh rate.  

	I am sure they can easily transform back into the time domain to get the frame period ;). I am partly kidding; I think your idea is great in that it is a truly positive value which could lend itself to being used in ISP/router manufacturer advertising, and hence might work in the real world; on the other hand I like to keep data as “raw” as possible (not that ^(-1) is a transformation worthy of being called data massage).

> The desirable range of latencies, when converted to Hz, happens to be roughly the same as the range of desirable frame rates.

	Just to play devil's advocate: the interesting part is time, or saving time, so seconds or milliseconds are also intuitively understandable and can be easily added ;)

Best Regards
	Sebastian

> 
> - Jonathan Morton
> 



* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19 19:40         ` Sebastian Moeller
@ 2015-04-19 20:53           ` Jonathan Morton
  2015-04-21  2:56             ` Simon Barber
  0 siblings, 1 reply; 127+ messages in thread
From: Jonathan Morton @ 2015-04-19 20:53 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

>>>> Frequency readouts are probably more accessible to the latter.
>>> 
>>> 	The frequency domain more accessible to laypersons? I have my doubts ;)
>> 
>> Gamers, at least, are familiar with “frames per second” and how that corresponds to their monitor’s refresh rate.  
> 
> 	I am sure they can easily transform back into time domain to get the frame period ;) .  I am partly kidding, I think your idea is great in that it is a truly positive value which could lend itself to being used in ISP/router manufacturer advertising, and hence might work in the real work; on the other hand I like to keep data as “raw” as possible (not that ^(-1) is a transformation worthy of being called data massage).
> 
>> The desirable range of latencies, when converted to Hz, happens to be roughly the same as the range of desirable frame rates.
> 
> 	Just to play devils advocate, the interesting part is time or saving time so seconds or milliseconds are also intuitively understandable and can be easily added ;)

Such readouts are certainly interesting to people like us.  I have no objection to them being reported alongside a frequency readout.  But I think most people are not interested in “time savings” measured in milliseconds; they’re much more aware of the minute- and hour-level time savings associated with greater bandwidth.

 - Jonathan Morton



* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-19 20:53           ` Jonathan Morton
@ 2015-04-21  2:56             ` Simon Barber
  2015-04-21  4:15               ` jb
  0 siblings, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-04-21  2:56 UTC (permalink / raw)
  To: Jonathan Morton, Sebastian Moeller; +Cc: bloat

One thing users understand is slow web access.  Perhaps translating the 
latency measurement into 'a typical web page will take X seconds longer to 
load', or even stating the impact as 'this latency causes a typical web 
page to load slower, as if your connection was only YY% of the measured speed.'
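A back-of-the-envelope sketch of that translation; the number of serialized
round trips per page is a loudly assumed figure (real pages vary a lot):

    def extra_page_load_s(extra_latency_ms, round_trips=30):
        # Added page-load time if a page needs `round_trips` serialized
        # round trips; 30 is a made-up, illustrative figure.
        return round_trips * extra_latency_ms / 1000.0

    # e.g. bloat adding 150 ms to each round trip
    print("a typical page loads ~%.1f s slower" % extra_page_load_s(150))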

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com> wrote:

> >>>> Frequency readouts are probably more accessible to the latter.
> >>>
> >>> 	The frequency domain more accessible to laypersons? I have my doubts ;)
> >>
> >> Gamers, at least, are familiar with “frames per second” and how that 
> corresponds to their monitor’s refresh rate.
> >
> > 	I am sure they can easily transform back into time domain to get the 
> frame period ;) .  I am partly kidding, I think your idea is great in that 
> it is a truly positive value which could lend itself to being used in 
> ISP/router manufacturer advertising, and hence might work in the real work; 
> on the other hand I like to keep data as “raw” as possible (not that ^(-1) 
> is a transformation worthy of being called data massage).
> >
> >> The desirable range of latencies, when converted to Hz, happens to be 
> roughly the same as the range of desirable frame rates.
> >
> > 	Just to play devils advocate, the interesting part is time or saving 
> time so seconds or milliseconds are also intuitively understandable and can 
> be easily added ;)
>
> Such readouts are certainly interesting to people like us.  I have no 
> objection to them being reported alongside a frequency readout.  But I 
> think most people are not interested in “time savings” measured in 
> milliseconds; they’re much more aware of the minute- and hour-level time 
> savings associated with greater bandwidth.
>
>  - Jonathan Morton
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat




* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21  2:56             ` Simon Barber
@ 2015-04-21  4:15               ` jb
  2015-04-21  4:47                 ` David Lang
  2015-04-21  9:37                 ` Jonathan Morton
  0 siblings, 2 replies; 127+ messages in thread
From: jb @ 2015-04-21  4:15 UTC (permalink / raw)
  To: bloat


[-- Attachment #1.1: Type: text/plain, Size: 4960 bytes --]

I've discovered something; perhaps you guys can explain it better or shed
some light.
It isn't specifically to do with buffer bloat but it is to do with TCP
tuning.

Attached are two pictures of my upload to the New York speed test server with
1 stream.
It doesn't make any difference if it is 1 stream or 8 streams, the picture
and behaviour remains the same.
I am 200ms from New York, so it qualifies as a fairly long (but not very
fat) pipe.

The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
the server)
The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
the server)

It actually doesn't matter what that last huge number is; once it goes much
above 65k, e.g. 128k or 256k or beyond, things get bumpy and ugly on the
upload speed.

Now as I understand this setting, it is the tcp receive window that Linux
advertises, and the last number sets the maximum size it can get to (for
one TCP stream).

For users with very fast upload speeds, they do not see an ugly bumpy
upload graph, it is smooth and sustained.
But for the majority of users (like me) with uploads less than 5 to 10mbit,
we frequently see the ugly graph.

The second tcp_rmem setting is how I have been running the speed test
servers.

Up to now I thought this was just the distance of the speedtest from the
interface: perhaps the browser was buffering a lot, and didn't feed back
progress but now I realise the bumpy one is actually being influenced by
the server receive window.

I guess my question is this: Why does ALLOWING a large receive window
appear to encourage problems with upload smoothness??

This implies that setting the receive window should be done on a connection
by connection basis: small for slow connections, large, for high speed,
long distance connections.

In addition, if I cap it to 65k, for reasons of smoothness,
that means the bandwidth delay product will keep maximum speed per upload
stream quite low. So a symmetric or gigabit connection is going to need a
ton of parallel streams to see full speed.
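The arithmetic behind that, as a small sketch (the RTT and rates are just
example values):

    def max_rate_mbps(window_bytes, rtt_s):
        # Per-stream throughput ceiling imposed by the receive window.
        return window_bytes * 8 / rtt_s / 1e6

    def window_bytes_needed(rate_mbps, rtt_s):
        # Receive window needed to sustain the rate over this RTT (the BDP).
        return rate_mbps * 1e6 * rtt_s / 8

    print(max_rate_mbps(65535, 0.200))       # ~2.6 Mbit/s with a 64 KB cap at 200 ms
    print(window_bytes_needed(1000, 0.200))  # ~25 MB needed for 1 Gbit/s at 200 ms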

Most puzzling is why anything special would be required on the Client -->
Server side of the equation, when nothing much appears wrong with the Server
--> Client side, whether speeds are very low (GPRS) or very high (gigabit).

Note also that I am not yet sure whether smoothness == better throughput. I
have noticed upload speeds for some people often being under their claimed
sync rate by 10 or 20%, but I've no logs showing that the bumpy graph means
inefficiency. Maybe.

help!


On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net> wrote:

> One thing users understand is slow web access.  Perhaps translating the
> latency measurement into 'a typical web page will take X seconds longer to
> load', or even stating the impact as 'this latency causes a typical web
> page to load slower, as if your connection was only YY% of the measured
> speed.'
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
>
> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
> wrote:
>
>  >>>> Frequency readouts are probably more accessible to the latter.
>> >>>
>> >>>     The frequency domain more accessible to laypersons? I have my
>> doubts ;)
>> >>
>> >> Gamers, at least, are familiar with “frames per second” and how that
>> corresponds to their monitor’s refresh rate.
>> >
>> >       I am sure they can easily transform back into time domain to get
>> the frame period ;) .  I am partly kidding, I think your idea is great in
>> that it is a truly positive value which could lend itself to being used in
>> ISP/router manufacturer advertising, and hence might work in the real work;
>> on the other hand I like to keep data as “raw” as possible (not that ^(-1)
>> is a transformation worthy of being called data massage).
>> >
>> >> The desirable range of latencies, when converted to Hz, happens to be
>> roughly the same as the range of desirable frame rates.
>> >
>> >       Just to play devils advocate, the interesting part is time or
>> saving time so seconds or milliseconds are also intuitively understandable
>> and can be easily added ;)
>>
>> Such readouts are certainly interesting to people like us.  I have no
>> objection to them being reported alongside a frequency readout.  But I
>> think most people are not interested in “time savings” measured in
>> milliseconds; they’re much more aware of the minute- and hour-level time
>> savings associated with greater bandwidth.
>>
>>  - Jonathan Morton
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #1.2: Type: text/html, Size: 6432 bytes --]

[-- Attachment #2: Screen Shot 2015-04-21 at 2.00.46 pm.png --]
[-- Type: image/png, Size: 10663 bytes --]

[-- Attachment #3: Screen Shot 2015-04-21 at 1.59.25 pm.png --]
[-- Type: image/png, Size: 9279 bytes --]


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21  4:15               ` jb
@ 2015-04-21  4:47                 ` David Lang
  2015-04-21  7:35                   ` jb
  2015-04-21  9:37                 ` Jonathan Morton
  1 sibling, 1 reply; 127+ messages in thread
From: David Lang @ 2015-04-21  4:47 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/Plain, Size: 6628 bytes --]

On Tue, 21 Apr 2015, jb wrote:

> I've discovered something perhaps you guys can explain it better or shed
> some light.
> It isn't specifically to do with buffer bloat but it is to do with TCP
> tuning.
>
> Attached is two pictures of my upload to New York speed test server with 1
> stream.
> It doesn't make any difference if it is 1 stream or 8 streams, the picture
> and behaviour remains the same.
> I am 200ms from new york so it qualifies as a fairly long (but not very
> fat) pipe.
>
> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
> the server)
> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
> the server)
>
> It actually doesn't matter what that last huge number is, once it goes much
> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
> upload speed.
>
> Now as I understand this setting, it is the tcp receive window that Linux
> advertises, and the last number sets the maximum size it can get to (for
> one TCP stream).
>
> For users with very fast upload speeds, they do not see an ugly bumpy
> upload graph, it is smooth and sustained.
> But for the majority of users (like me) with uploads less than 5 to 10mbit,
> we frequently see the ugly graph.
>
> The second tcp_rmem setting is how I have been running the speed test
> servers.
>
> Up to now I thought this was just the distance of the speedtest from the
> interface: perhaps the browser was buffering a lot, and didn't feed back
> progress but now I realise the bumpy one is actually being influenced by
> the server receive window.
>
> I guess my question is this: Why does ALLOWING a large receive window
> appear to encourage problems with upload smoothness??
>
> This implies that setting the receive window should be done on a connection
> by connection basis: small for slow connections, large, for high speed,
> long distance connections.

This is classic bufferbloat.

The receiver advertises a large receive window, so the sender doesn't pause
until there is that much data outstanding, or it gets a packet timeout as a
signal to slow down.

And because you have a gig-E link locally, your machine generates traffic very
rapidly, until all that data is 'in flight'. But it's really sitting in the
buffer of a router trying to get through.

Then when a packet times out, the sender slows down a smidge and retransmits
it. But the old packet is still sitting in a queue, eating bandwidth. The
packets behind it are also going to time out and be retransmitted before your
first retransmitted packet gets through, so you have a large slug of data
that's being retransmitted, and the first of the replacement data can't get
through until the last of the old (timed out) data is transmitted.

Then when data starts flowing again, the sender again tries to fill up the
window with data in flight.
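To put a rough number on how much latency such a buffer adds: it is simply
the queued bytes divided by the uplink rate; a small sketch with illustrative
figures:

    def queue_delay_ms(buffered_bytes, uplink_bps):
        # Delay added by data sitting in an unmanaged buffer ahead of the link.
        return buffered_bytes * 8 / uplink_bps * 1000.0

    # e.g. 256 KB queued in front of a 5 Mbit/s uplink -> ~420 ms
    print("%.0f ms of added latency" % queue_delay_ms(256 * 1024, 5e6))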

> In addition, if I cap it to 65k, for reasons of smoothness,
> that means the bandwidth delay product will keep maximum speed per upload
> stream quite low. So a symmetric or gigabit connection is going to need a
> ton of parallel streams to see full speed.
>
> Most puzzling is why would anything special be required on the Client -->
> Server side of the equation
> but nothing much appears wrong with the Server --> Client side, whether
> speeds are very low (GPRS) or very high (gigabit).

But what window sizes are these clients advertising?


> Note that also I am not yet sure if smoothness == better throughput. I have
> noticed upload speeds for some people often being under their claimed sync
> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
> inefficiency. Maybe.

If you were to do a packet capture on the server side, you would see that you
have a bunch of packets that are arriving multiple times, but the first time
"doesn't count" because the replacement is already on the way.

So your overall throughput is lower for two reasons:

1. it's bursty, and there are times when the connection actually is idle
(after you have a lot of timed-out packets, the sender needs to ramp up its
speed again)

2. you are sending some packets multiple times, consuming more total bandwidth
for the same 'goodput' (effective throughput)

David Lang

> help!
>
>
> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net> wrote:
>
>> One thing users understand is slow web access.  Perhaps translating the
>> latency measurement into 'a typical web page will take X seconds longer to
>> load', or even stating the impact as 'this latency causes a typical web
>> page to load slower, as if your connection was only YY% of the measured
>> speed.'
>>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>>
>>
>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>> wrote:
>>
>> >>>> Frequency readouts are probably more accessible to the latter.
>>>>>>
>>>>>>     The frequency domain more accessible to laypersons? I have my
>>> doubts ;)
>>>>>
>>>>> Gamers, at least, are familiar with “frames per second” and how that
>>> corresponds to their monitor’s refresh rate.
>>>>
>>>>       I am sure they can easily transform back into time domain to get
>>> the frame period ;) .  I am partly kidding, I think your idea is great in
>>> that it is a truly positive value which could lend itself to being used in
>>> ISP/router manufacturer advertising, and hence might work in the real work;
>>> on the other hand I like to keep data as “raw” as possible (not that ^(-1)
>>> is a transformation worthy of being called data massage).
>>>>
>>>>> The desirable range of latencies, when converted to Hz, happens to be
>>> roughly the same as the range of desirable frame rates.
>>>>
>>>>       Just to play devils advocate, the interesting part is time or
>>> saving time so seconds or milliseconds are also intuitively understandable
>>> and can be easily added ;)
>>>
>>> Such readouts are certainly interesting to people like us.  I have no
>>> objection to them being reported alongside a frequency readout.  But I
>>> think most people are not interested in “time savings” measured in
>>> milliseconds; they’re much more aware of the minute- and hour-level time
>>> savings associated with greater bandwidth.
>>>
>>>  - Jonathan Morton
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>

[-- Attachment #2: Type: IMAGE/PNG, Size: 10663 bytes --]

[-- Attachment #3: Type: IMAGE/PNG, Size: 9279 bytes --]

[-- Attachment #4: Type: TEXT/PLAIN, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21  4:47                 ` David Lang
@ 2015-04-21  7:35                   ` jb
  2015-04-21  9:14                     ` Steinar H. Gunderson
  2015-04-21 14:20                     ` David Lang
  0 siblings, 2 replies; 127+ messages in thread
From: jb @ 2015-04-21  7:35 UTC (permalink / raw)
  To: David Lang, bloat

[-- Attachment #1: Type: text/plain, Size: 8849 bytes --]

> the receiver advertises a large receive window, so the sender doesn't pause
> until there is that much data outstanding, or they get a timeout of a packet
> as a signal to slow down.

> and because you have a gig-E link locally, your machine generates traffic
> very rapidly, until all that data is 'in flight'. but it's really sitting in
> the buffer of a router trying to get through.

Hmm, then I have a quandary: I can easily solve the nasty bumpy upload graphs
by keeping the advertised receive window on the server capped low; however,
paradoxically, there is then no more sign of bufferbloat in the result, at
least for the upload phase.

(The graph under the upload/download graphs for my results shows almost no
latency increase during the upload phase, now).

Or, I can crank it back open again, serving people with fiber connections
without having to run heaps of streams in parallel -- and then have people
complain that the upload result is inefficient, or bumpy, vs what they
expect.

And I can't offer an option, because the server receive window (I think)
cannot be set on a case by case basis. You set it for all TCP and forget it.

I suspect you guys are going to say the server should be left with a large
max receive window.. and let people complain to find out what their issue
is.

BTW my setup is wired to a Billion 7800N, which is a DSL modem and router. I
believe it is a Linux-based device (judging from the system log).

cheers,
-Justin

On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:

> On Tue, 21 Apr 2015, jb wrote:
>
>  I've discovered something perhaps you guys can explain it better or shed
>> some light.
>> It isn't specifically to do with buffer bloat but it is to do with TCP
>> tuning.
>>
>> Attached is two pictures of my upload to New York speed test server with 1
>> stream.
>> It doesn't make any difference if it is 1 stream or 8 streams, the picture
>> and behaviour remains the same.
>> I am 200ms from new york so it qualifies as a fairly long (but not very
>> fat) pipe.
>>
>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
>> the server)
>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
>> the server)
>>
>> It actually doesn't matter what that last huge number is, once it goes
>> much
>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
>> upload speed.
>>
>> Now as I understand this setting, it is the tcp receive window that Linux
>> advertises, and the last number sets the maximum size it can get to (for
>> one TCP stream).
>>
>> For users with very fast upload speeds, they do not see an ugly bumpy
>> upload graph, it is smooth and sustained.
>> But for the majority of users (like me) with uploads less than 5 to
>> 10mbit,
>> we frequently see the ugly graph.
>>
>> The second tcp_rmem setting is how I have been running the speed test
>> servers.
>>
>> Up to now I thought this was just the distance of the speedtest from the
>> interface: perhaps the browser was buffering a lot, and didn't feed back
>> progress but now I realise the bumpy one is actually being influenced by
>> the server receive window.
>>
>> I guess my question is this: Why does ALLOWING a large receive window
>> appear to encourage problems with upload smoothness??
>>
>> This implies that setting the receive window should be done on a
>> connection
>> by connection basis: small for slow connections, large, for high speed,
>> long distance connections.
>>
>
> This is classic bufferbloat
>
> the receiver advertizes a large receive window, so the sender doesn't
> pause until there is that much data outstanding, or they get a timeout of a
> packet as a signal to slow down.
>
> and because you have a gig-E link locally, your machine generates traffic
> very rapidly, until all that data is 'in flight'. but it's really sitting
> in the buffer of a router trying to get through.
>
> then when a packet times out, the sender slows down a smidge and
> retransmits it. But the old packet is still sitting in a queue, eating
> bandwidth. the packets behind it are also going to timeout and be
> retransmitted before your first retransmitted packet gets through, so you
> have a large slug of data that's being retransmitted, and the first of the
> replacement data can't get through until the last of the old (timed out)
> data is transmitted.
>
> then when data starts flowing again, the sender again tries to fill up the
> window with data in flight.
>
>  In addition, if I cap it to 65k, for reasons of smoothness,
>> that means the bandwidth delay product will keep maximum speed per upload
>> stream quite low. So a symmetric or gigabit connection is going to need a
>> ton of parallel streams to see full speed.
>>
>> Most puzzling is why would anything special be required on the Client -->
>> Server side of the equation
>> but nothing much appears wrong with the Server --> Client side, whether
>> speeds are very low (GPRS) or very high (gigabit).
>>
>
> but what window sizes are these clients advertising?
>
>
>  Note that also I am not yet sure if smoothness == better throughput. I
>> have
>> noticed upload speeds for some people often being under their claimed sync
>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
>> inefficiency. Maybe.
>>
>
> If you were to do a packet capture on the server side, you would see that
> you have a bunch of packets that are arriving multiple times, but the first
> time "does't count" because the replacement is already on the way.
>
> so your overall throughput is lower for two reasons
>
> 1. it's bursty, and there are times when the connection actually is idle
> (after you have a lot of timed out packets, the sender needs to ramp up
> it's speed again)
>
> 2. you are sending some packets multiple times, consuming more total
> bandwidth for the same 'goodput' (effective throughput)
>
> David Lang
>
>
>  help!
>>
>>
>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
>> wrote:
>>
>>  One thing users understand is slow web access.  Perhaps translating the
>>> latency measurement into 'a typical web page will take X seconds longer
>>> to
>>> load', or even stating the impact as 'this latency causes a typical web
>>> page to load slower, as if your connection was only YY% of the measured
>>> speed.'
>>>
>>> Simon
>>>
>>> Sent with AquaMail for Android
>>> http://www.aqua-mail.com
>>>
>>>
>>>
>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>>> wrote:
>>>
>>> >>>> Frequency readouts are probably more accessible to the latter.
>>>
>>>>
>>>>>>>     The frequency domain more accessible to laypersons? I have my
>>>>>>>
>>>>>> doubts ;)
>>>>
>>>>>
>>>>>> Gamers, at least, are familiar with “frames per second” and how that
>>>>>>
>>>>> corresponds to their monitor’s refresh rate.
>>>>
>>>>>
>>>>>       I am sure they can easily transform back into time domain to get
>>>>>
>>>> the frame period ;) .  I am partly kidding, I think your idea is great
>>>> in
>>>> that it is a truly positive value which could lend itself to being used
>>>> in
>>>> ISP/router manufacturer advertising, and hence might work in the real
>>>> work;
>>>> on the other hand I like to keep data as “raw” as possible (not that
>>>> ^(-1)
>>>> is a transformation worthy of being called data massage).
>>>>
>>>>>
>>>>>  The desirable range of latencies, when converted to Hz, happens to be
>>>>>>
>>>>> roughly the same as the range of desirable frame rates.
>>>>
>>>>>
>>>>>       Just to play devils advocate, the interesting part is time or
>>>>>
>>>> saving time so seconds or milliseconds are also intuitively
>>>> understandable
>>>> and can be easily added ;)
>>>>
>>>> Such readouts are certainly interesting to people like us.  I have no
>>>> objection to them being reported alongside a frequency readout.  But I
>>>> think most people are not interested in “time savings” measured in
>>>> milliseconds; they’re much more aware of the minute- and hour-level time
>>>> savings associated with greater bandwidth.
>>>>
>>>>  - Jonathan Morton
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>

[-- Attachment #2: Type: text/html, Size: 12386 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21  7:35                   ` jb
@ 2015-04-21  9:14                     ` Steinar H. Gunderson
  2015-04-21 14:20                     ` David Lang
  1 sibling, 0 replies; 127+ messages in thread
From: Steinar H. Gunderson @ 2015-04-21  9:14 UTC (permalink / raw)
  To: bloat

On Tue, Apr 21, 2015 at 05:35:32PM +1000, jb wrote:
> And I can't offer an option, because the server receive window (I think)
> cannot be set on a case by case basis. You set it for all TCP and forget it.

You can set both send and receive buffers using a setsockopt() call
(SO_SNDBUF, SO_RCVBUF). I would advise against it, though; hardly anyone
does it (except the ones that did so to _increase_ the buffer 10-15 years
ago, which now is thoroughly superseded by auto-tuning and thus a
pessimization), and if the point of the test is to identify real-world
performance, you shouldn't do such workarounds.
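
If you want to experiment regardless, the call itself is trivial. A minimal
sketch (the 64 KB cap and port are arbitrary examples, not recommendations;
note the buffers should be set before listen()/connect() so they affect the
advertised window, and setting them switches off autotuning for that socket):

    # Minimal sketch: cap the kernel buffers on one listening socket instead
    # of touching the global tcp_rmem/tcp_wmem sysctls.  64 KB is arbitrary.
    import socket

    CAP = 64 * 1024

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, CAP)  # receive side
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, CAP)  # send side
    srv.bind(("0.0.0.0", 8080))   # port is just an example
    srv.listen(128)
    conn, addr = srv.accept()     # accepted sockets inherit the capped buffers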

/* Steinar */
-- 
Homepage: http://www.sesse.net/

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21  4:15               ` jb
  2015-04-21  4:47                 ` David Lang
@ 2015-04-21  9:37                 ` Jonathan Morton
  2015-04-21 10:35                   ` jb
  1 sibling, 1 reply; 127+ messages in thread
From: Jonathan Morton @ 2015-04-21  9:37 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 2969 bytes --]

I would explain it a bit differently to David. There are a lot of
interrelated components and concepts in TCP, and it's sometimes hard to see
which ones are relevant in a given situation.

The key insight though is that there are two windows which are maintained
by the sender and receiver respectively, and data can only be sent if it
fits into BOTH windows. The receive window is effectively set by that
sysctl, and the congestion window (maintained by the sender) is the one
that changes dynamically.

The correct size of both windows is the bandwidth delay product of the path
between the two hosts. However, this size varies, so you can't set a single
size which works in all or even most situations. The general approach that
has the best chance of working is to set the receive window large and rely
on the congestion window to adapt.

Incidentally, 200ms at say 2Mbps gives a BDP of about 50KB.

The problem with that is that in most networks today, there is insufficient
information for the congestion window to find its ideal size. It will grow
until it receives an unambiguous congestion signal, typically a lost packet
or ECN flag. But that will most likely occur on queue overflow at the
bottleneck, and due to the resulting induced delay, the sender will have
been overdosing that queue for a while before it gets the signal to back
off - so probably a whole bunch of packets got lost in the meantime. Then,
after transmitting the lost packets, the sender has to wait for the
receiver to catch up with the smaller congestion window before it can
resume.

Meanwhile, the receiver can't deliver any of the data it's receiving
because the lost packets belong in front of it. If you've ever noticed a
transfer that seems to stall and then suddenly catch up, that's due to a
lost packet and retransmission. The effect is known as "head of line
blocking", and can be used to detect packet loss at the application layer.

Ironically, most hardware designers will tell you that buffers are meant to
smooth data delivery. It's true, but only when it doesn't overflow - and
TCP will always overflow a dumb queue if allowed to.

Reducing the receive window, to a value below the native BDP of the path
plus the bottleneck queue length, can be used as a crude way to prevent the
bottleneck queue from overflowing. Then, the congestion window will grow to
the receive window size and stay there, and TCP will enter a steady state
where every ack results in the next packet(s) being sent. (Most receivers
won't send an ack for every received packet, as long as none are missing.)
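
To put rough numbers on that, here is the arithmetic as a sketch (the 2Mbps,
200ms and 64KB figures are assumed for illustration, not measured):

    # Back-of-the-envelope arithmetic for the sizes discussed above.
    # All figures are assumed examples.
    link_bps    = 2000000        # 2 Mbps bottleneck
    rtt_s       = 0.200          # 200 ms path RTT
    queue_bytes = 64 * 1024      # assumed bottleneck queue depth

    bdp_bytes = link_bps / 8 * rtt_s   # = 50,000 bytes, about 50 KB
    # A receive window of at least bdp_bytes is needed for full single-flow
    # throughput; keeping it below bdp_bytes + queue_bytes stops the
    # bottleneck queue from overflowing.
    print(bdp_bytes, bdp_bytes + queue_bytes)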

However, running multiple flows in parallel using a receive window tuned
for one flow will double the data in flight, and the queue may once again
overflow. If you look only at aggregate throughput, you might not notice
this because parallel TCPs tend to fill in each others' gaps. But the
individual flow throughputs will show the same "head of line blocking"
effect.

- Jonathan Morton

[-- Attachment #2: Type: text/html, Size: 3203 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21  9:37                 ` Jonathan Morton
@ 2015-04-21 10:35                   ` jb
  2015-04-22  4:04                     ` Steinar H. Gunderson
  0 siblings, 1 reply; 127+ messages in thread
From: jb @ 2015-04-21 10:35 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 4378 bytes --]

As I understand it (I thought), SO_SNDBUF and SO_RCVBUF are socket buffers
for the application layer; they do not change the TCP window size, either
send or receive. Which is perhaps why they aren't used much. They don't do
much good in iperf, that's for sure! I might be wrong, but I agree with the
premise - auto-tuning should work.

Regarding my own equipment, I've seen a 2012 topic about the Billion 7800N
I have, complaining it has buffer bloat. The replies to the topic suggested
using QoS to get around the problem of uploads blowing the latency sky
high. Unfortunately it is a very popular and well-regarded DSL modem, at
least in Australia AND cannot be flashed with dd-wrt or anything. So I
think for me personally (and for people who use our speed test and complain
about very choppy results on upload), this is the explanation I'll be
giving: experiment with your gear at home, it'll be the problem.

Currently the servers are running at a low maximum receive window. I'll be
switching them back in a day, after I let this one guy witness the
improvement it makes for his connection. He has been at me for days saying
the test has an issue because the upload on his bonded 5mbit+5mbit channel
is so choppy.

thanks


On Tue, Apr 21, 2015 at 7:37 PM, Jonathan Morton <chromatix99@gmail.com>
wrote:

> I would explain it a bit differently to David. There are a lot of
> interrelated components and concepts in TCP, and its sometimes hard to see
> which ones are relevant in a given situation.
>
> The key insight though is that there are two windows which are maintained
> by the sender and receiver respectively, and data can only be sent if it
> fits into BOTH windows. The receive window is effectively set by that
> sysctl, and the congestion window (maintained by the sender) is the one
> that changes dynamically.
>
> The correct size of both windows is the bandwidth delay product of the
> path between the two hosts. However, this size varies, so you can't set a
> single size which works in all our even most situations. The general
> approach that has the best chance of working is to set the receive window
> large and rely on the congestion window to adapt.
>
> Incidentally, 200ms at say 2Mbps gives a BDP of about 40KB.
>
> The problem with that is that in most networks today, there is
> insufficient information for the congestion window to find its ideal size.
> It will grow until it receives an unambiguous congestion signal, typically
> a lost packet or ECN flag. But that will most likely occur on queue
> overflow at the bottleneck, and due to the resulting induced delay, the
> sender will have been overdosing that queue for a while before it gets the
> signal to back off - so probably a whole bunch of packets got lost in the
> meantime. Then, after transmitting the lost packets, the sender has to wait
> for the receiver to catch up with the smaller congestion window before it
> can resume.
>
> Meanwhile, the receiver can't deliver any of the data it's receiving
> because the lost packets belong in front of it. If you've ever noticed a
> transfer that seems to stall and then suddenly catch up, that's due to a
> lost packet and retransmission. The effect is known as "head of line
> blocking", and can be used to detect packet loss at the application layer.
>
> Ironically, most hardware designers will tell you that buffers are meant
> to smooth data delivery. It's true, but only when it doesn't overflow - and
> TCP will always overflow a dumb queue if allowed to.
>
> Reducing the receive window, to a value below the native BDP of the path
> plus the bottleneck queue length, can be used as a crude way to prevent the
> bottleneck queue from overflowing. Then, the congestion window will grow to
> the receive window size and stay there, and TCP will enter a steady state
> where every ack results in the next packet(s) being sent. (Most receivers
> won't send an ack for every received packet, as long as none are missing.)
>
> However, running multiple flows in parallel using a receive window tuned
> for one flow will double the data in flight, and the queue may once again
> overflow. If you look only at aggregate throughput, you might not notice
> this because parallel TCPs tend to fill in each others' gaps. But the
> individual flow throughputs will show the same "head of line blocking"
> effect.
>
> - Jonathan Morton
>

[-- Attachment #2: Type: text/html, Size: 5087 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21  7:35                   ` jb
  2015-04-21  9:14                     ` Steinar H. Gunderson
@ 2015-04-21 14:20                     ` David Lang
  2015-04-21 14:25                       ` David Lang
  2015-04-22 14:32                       ` Simon Barber
  1 sibling, 2 replies; 127+ messages in thread
From: David Lang @ 2015-04-21 14:20 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 9767 bytes --]

On Tue, 21 Apr 2015, jb wrote:

>> the receiver advertizes a large receive window, so the sender doesn't
>> pause until there is that much data outstanding, or they get a timeout of
>> a packet as a signal to slow down.
>
>> and because you have a gig-E link locally, your machine generates traffic
>> very rapidly, until all that data is 'in flight'. but it's really sitting
>> in the buffer of a router trying to get through.
>
> Hmm, then I have a quandary because I can easily solve the nasty bumpy
> upload graphs by keeping the advertised receive window on the server capped
> low, however then, paradoxically, there is no more sign of buffer bloat in
> the result, at least for the upload phase.
>
> (The graph under the upload/download graphs for my results shows almost no
> latency increase during the upload phase, now).
>
> Or, I can crank it back open again, serving people with fiber connections
> without having to run heaps of streams in parallel -- and then have people
> complain that the upload result is inefficient, or bumpy, vs what they
> expect.

well, many people expect it to be bumpy (I've heard ISPs explain to customers 
that when a link is full it is bumpy, that's just the way things work)

> And I can't offer an option, because the server receive window (I think)
> cannot be set on a case by case basis. You set it for all TCP and forget it.

I think you are right

> I suspect you guys are going to say the server should be left with a large
> max receive window.. and let people complain to find out what their issue
> is.

what is your customer base? how important is it to provide faster service to the 
fiber users? Are they transferring ISO images so the difference is significant 
to them? or are they downloading web pages where it's the difference between a 
half second and a quarter second? remember that you are seeing this on the 
upload side.

in the long run, fixing the problem at the client side is the best thing to do, 
but in the meantime, you sometimes have to work around broken customer stuff.

> BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
> believe it is a linux based (judging from the system log) device.

if it's Linux-based, it would be interesting to learn what sort of settings it 
has. It may be one of the rarer devices that has something in place already to 
do active queue management.

David Lang

> cheers,
> -Justin
>
> On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
>
>> On Tue, 21 Apr 2015, jb wrote:
>>
>>  I've discovered something perhaps you guys can explain it better or shed
>>> some light.
>>> It isn't specifically to do with buffer bloat but it is to do with TCP
>>> tuning.
>>>
>>> Attached is two pictures of my upload to New York speed test server with 1
>>> stream.
>>> It doesn't make any difference if it is 1 stream or 8 streams, the picture
>>> and behaviour remains the same.
>>> I am 200ms from new york so it qualifies as a fairly long (but not very
>>> fat) pipe.
>>>
>>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
>>> the server)
>>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
>>> the server)
>>>
>>> It actually doesn't matter what that last huge number is, once it goes
>>> much
>>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
>>> upload speed.
>>>
>>> Now as I understand this setting, it is the tcp receive window that Linux
>>> advertises, and the last number sets the maximum size it can get to (for
>>> one TCP stream).
>>>
>>> For users with very fast upload speeds, they do not see an ugly bumpy
>>> upload graph, it is smooth and sustained.
>>> But for the majority of users (like me) with uploads less than 5 to
>>> 10mbit,
>>> we frequently see the ugly graph.
>>>
>>> The second tcp_rmem setting is how I have been running the speed test
>>> servers.
>>>
>>> Up to now I thought this was just the distance of the speedtest from the
>>> interface: perhaps the browser was buffering a lot, and didn't feed back
>>> progress but now I realise the bumpy one is actually being influenced by
>>> the server receive window.
>>>
>>> I guess my question is this: Why does ALLOWING a large receive window
>>> appear to encourage problems with upload smoothness??
>>>
>>> This implies that setting the receive window should be done on a
>>> connection
>>> by connection basis: small for slow connections, large, for high speed,
>>> long distance connections.
>>>
>>
>> This is classic bufferbloat
>>
>> the receiver advertizes a large receive window, so the sender doesn't
>> pause until there is that much data outstanding, or they get a timeout of a
>> packet as a signal to slow down.
>>
>> and because you have a gig-E link locally, your machine generates traffic
>> very rapidly, until all that data is 'in flight'. but it's really sitting
>> in the buffer of a router trying to get through.
>>
>> then when a packet times out, the sender slows down a smidge and
>> retransmits it. But the old packet is still sitting in a queue, eating
>> bandwidth. the packets behind it are also going to timeout and be
>> retransmitted before your first retransmitted packet gets through, so you
>> have a large slug of data that's being retransmitted, and the first of the
>> replacement data can't get through until the last of the old (timed out)
>> data is transmitted.
>>
>> then when data starts flowing again, the sender again tries to fill up the
>> window with data in flight.
>>
>>  In addition, if I cap it to 65k, for reasons of smoothness,
>>> that means the bandwidth delay product will keep maximum speed per upload
>>> stream quite low. So a symmetric or gigabit connection is going to need a
>>> ton of parallel streams to see full speed.
>>>
>>> Most puzzling is why would anything special be required on the Client -->
>>> Server side of the equation
>>> but nothing much appears wrong with the Server --> Client side, whether
>>> speeds are very low (GPRS) or very high (gigabit).
>>>
>>
>> but what window sizes are these clients advertising?
>>
>>
>>  Note that also I am not yet sure if smoothness == better throughput. I
>>> have
>>> noticed upload speeds for some people often being under their claimed sync
>>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
>>> inefficiency. Maybe.
>>>
>>
>> If you were to do a packet capture on the server side, you would see that
>> you have a bunch of packets that are arriving multiple times, but the first
>> time "does't count" because the replacement is already on the way.
>>
>> so your overall throughput is lower for two reasons
>>
>> 1. it's bursty, and there are times when the connection actually is idle
>> (after you have a lot of timed out packets, the sender needs to ramp up
>> it's speed again)
>>
>> 2. you are sending some packets multiple times, consuming more total
>> bandwidth for the same 'goodput' (effective throughput)
>>
>> David Lang
>>
>>
>>  help!
>>>
>>>
>>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
>>> wrote:
>>>
>>>  One thing users understand is slow web access.  Perhaps translating the
>>>> latency measurement into 'a typical web page will take X seconds longer
>>>> to
>>>> load', or even stating the impact as 'this latency causes a typical web
>>>> page to load slower, as if your connection was only YY% of the measured
>>>> speed.'
>>>>
>>>> Simon
>>>>
>>>> Sent with AquaMail for Android
>>>> http://www.aqua-mail.com
>>>>
>>>>
>>>>
>>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>>>> wrote:
>>>>
>>>>>>>> Frequency readouts are probably more accessible to the latter.
>>>>
>>>>>
>>>>>>>>     The frequency domain more accessible to laypersons? I have my
>>>>>>>>
>>>>>>> doubts ;)
>>>>>
>>>>>>
>>>>>>> Gamers, at least, are familiar with “frames per second” and how that
>>>>>>>
>>>>>> corresponds to their monitor’s refresh rate.
>>>>>
>>>>>>
>>>>>>       I am sure they can easily transform back into time domain to get
>>>>>>
>>>>> the frame period ;) .  I am partly kidding, I think your idea is great
>>>>> in
>>>>> that it is a truly positive value which could lend itself to being used
>>>>> in
>>>>> ISP/router manufacturer advertising, and hence might work in the real
>>>>> work;
>>>>> on the other hand I like to keep data as “raw” as possible (not that
>>>>> ^(-1)
>>>>> is a transformation worthy of being called data massage).
>>>>>
>>>>>>
>>>>>>  The desirable range of latencies, when converted to Hz, happens to be
>>>>>>>
>>>>>> roughly the same as the range of desirable frame rates.
>>>>>
>>>>>>
>>>>>>       Just to play devils advocate, the interesting part is time or
>>>>>>
>>>>> saving time so seconds or milliseconds are also intuitively
>>>>> understandable
>>>>> and can be easily added ;)
>>>>>
>>>>> Such readouts are certainly interesting to people like us.  I have no
>>>>> objection to them being reported alongside a frequency readout.  But I
>>>>> think most people are not interested in “time savings” measured in
>>>>> milliseconds; they’re much more aware of the minute- and hour-level time
>>>>> savings associated with greater bandwidth.
>>>>>
>>>>>  - Jonathan Morton
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 14:20                     ` David Lang
@ 2015-04-21 14:25                       ` David Lang
  2015-04-21 14:28                         ` David Lang
  2015-04-22 14:32                       ` Simon Barber
  1 sibling, 1 reply; 127+ messages in thread
From: David Lang @ 2015-04-21 14:25 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1056 bytes --]

On Tue, 21 Apr 2015, David Lang wrote:

>> I suspect you guys are going to say the server should be left with a large
>> max receive window.. and let people complain to find out what their issue
>> is.
>
> what is your customer base? how important is it to provide faster service to 
> teh fiber users? Are they transferring ISO images so the difference is 
> significant to them? or are they downloading web pages where it's the 
> difference between a half second and a quarter second? remember that you are 
> seeing this on the upload side.
>
> in the long run, fixing the problem at the client side is the best thing to 
> do, but in the meantime, you sometimes have to work around broken customer 
> stuff.

for the speedtest servers, it should be set large; the purpose is to test the 
quality of the customer stuff, so you don't want to do anything on your end that 
papers over the problem, only to have the customer think things are good and then 
experience problems when connecting to another server that doesn't implement 
work-arounds.

David Lang

[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 14:25                       ` David Lang
@ 2015-04-21 14:28                         ` David Lang
  2015-04-21 22:13                           ` jb
  0 siblings, 1 reply; 127+ messages in thread
From: David Lang @ 2015-04-21 14:28 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1420 bytes --]

On Tue, 21 Apr 2015, David Lang wrote:

> On Tue, 21 Apr 2015, David Lang wrote:
>
>>> I suspect you guys are going to say the server should be left with a large
>>> max receive window.. and let people complain to find out what their issue
>>> is.
>> 
>> what is your customer base? how important is it to provide faster service 
>> to teh fiber users? Are they transferring ISO images so the difference is 
>> significant to them? or are they downloading web pages where it's the 
>> difference between a half second and a quarter second? remember that you 
>> are seeing this on the upload side.
>> 
>> in the long run, fixing the problem at the client side is the best thing to 
>> do, but in the meantime, you sometimes have to work around broken customer 
>> stuff.
>
> for the speedtest servers, it should be set large, the purpose is to test the 
> quality of the customer stuff, so you don't want to do anything on your end 
> that papers over the problem, only to have the customer think things are good 
> and experience problems when connecting to another server that doesn't 
> implement work-arounds.

Just after hitting send it occurred to me that it may be the right thing to have 
the server that's being hit by the test play with these settings. If the user 
works well at lower settings, but has problems at higher settings, the point 
where they start having problems may be useful to know.

David Lang

[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 14:28                         ` David Lang
@ 2015-04-21 22:13                           ` jb
  2015-04-21 22:39                             ` Aaron Wood
  2015-04-21 23:17                             ` jb
  0 siblings, 2 replies; 127+ messages in thread
From: jb @ 2015-04-21 22:13 UTC (permalink / raw)
  To: David Lang; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 2620 bytes --]

Today I've switched it back to large receive window max.

The customer base is everything from GPRS to gigabit. But I know from
experience that if a test doesn't flatten someone's gigabit connection they
will immediately assume "oh congested servers, insufficient capacity" and
the early adopters of fiber to the home and faster cable products are the
most visible in tech forums and so on.

It would be interesting to set one or a few servers with a small receive
window, take them from the pool, and allow an option to select those,
otherwise they would not participate in any default run. Then as you point
out, the test can suggest trying those as an option for results with
chaotic upload speeds and probable bloat. The person would notice the
beauty of the more intimate connection between their kernel and a server,
and work harder to eliminate the problematic equipment. Or. They'd stop
telling me the test was bugged.

thanks


On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:

> On Tue, 21 Apr 2015, David Lang wrote:
>
>  On Tue, 21 Apr 2015, David Lang wrote:
>>
>>  I suspect you guys are going to say the server should be left with a
>>>> large
>>>> max receive window.. and let people complain to find out what their
>>>> issue
>>>> is.
>>>>
>>>
>>> what is your customer base? how important is it to provide faster
>>> service to teh fiber users? Are they transferring ISO images so the
>>> difference is significant to them? or are they downloading web pages where
>>> it's the difference between a half second and a quarter second? remember
>>> that you are seeing this on the upload side.
>>>
>>> in the long run, fixing the problem at the client side is the best thing
>>> to do, but in the meantime, you sometimes have to work around broken
>>> customer stuff.
>>>
>>
>> for the speedtest servers, it should be set large, the purpose is to test
>> the quality of the customer stuff, so you don't want to do anything on your
>> end that papers over the problem, only to have the customer think things
>> are good and experience problems when connecting to another server that
>> doesn't implement work-arounds.
>>
>
> Just after hitting send it occured to me that it may be the right thing to
> have the server that's being hit by the test play with these settings. If
> the user works well at lower settings, but has problems at higher settings,
> the point where they start having problems may be useful to know.
>
> David Lang
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>

[-- Attachment #2: Type: text/html, Size: 3600 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 22:13                           ` jb
@ 2015-04-21 22:39                             ` Aaron Wood
  2015-04-21 23:17                             ` jb
  1 sibling, 0 replies; 127+ messages in thread
From: Aaron Wood @ 2015-04-21 22:39 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1892 bytes --]

On Tue, Apr 21, 2015 at 3:13 PM, jb <justin@dslr.net> wrote:

> Today I've switched it back to large receive window max.
>
> The customer base is everything from GPRS to gigabit. But I know from
> experience that if a test doesn't flatten someones gigabit connection they
> will immediately assume "oh congested servers, insufficient capacity" and
> the early adopters of fiber to the home and faster cable products are the
> most visible in tech forums and so on.
>
> It would be interesting to set one or a few servers with a small receive
> window, take them from the pool, and allow an option to select those,
> otherwise they would not participate in any default run. Then as you point
> out, the test can suggest trying those as an option for results with
> chaotic upload speeds and probable bloat. The person would notice the
> beauty of the more intimate connection between their kernel and a server,
> and work harder to eliminate the problematic equipment. Or. They'd stop
> telling me the test was bugged.
>

Well, the sawtooth pattern that's the classic sign of bufferbloat should be
readily detectable, especially if the pings during the test climb in a
similar fashion.  And from the two sets of numbers, it should be possible
to put a guess on how overbuffered the uplink is.  Then when the test
completes, an analysis that flags to the user that they have a bufferbloat
issue might continue to shed light on this.

Attached are results from my location, which show a couple hundred ms of
bloat.  While my results didn't have congestion collapse, they do clearly
have a bunch of bloat.  That amount of bloat should be easy to spot in an
analysis of the results, and a recommendation to the user that they may
want to look into fixing that if they use their link at the limit with VOIP
or gaming.
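
As a rough sketch of the sort of automated check I mean (the samples,
threshold and function name below are all made up, purely illustrative):

    # Illustrative only: flag probable bloat from pings the test already takes.
    from statistics import median

    def added_delay_under_load(idle_pings_ms, loaded_pings_ms):
        """Median latency increase while the link is saturated."""
        return median(loaded_pings_ms) - median(idle_pings_ms)

    # Made-up samples resembling a couple hundred ms of bloat:
    delta = added_delay_under_load([38, 40, 41, 39], [230, 250, 270, 240])
    if delta > 100:   # threshold is arbitrary
        print("Probable bufferbloat: ~%d ms of added delay under load" % delta)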

I just wish we had a really good un-bloated retail option to recommend.

-Aaron

[-- Attachment #2: Type: text/html, Size: 2368 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 22:13                           ` jb
  2015-04-21 22:39                             ` Aaron Wood
@ 2015-04-21 23:17                             ` jb
  2015-04-22  2:14                               ` Simon Barber
  1 sibling, 1 reply; 127+ messages in thread
From: jb @ 2015-04-21 23:17 UTC (permalink / raw)
  Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 4117 bytes --]

Regarding the low TCP RWIN max setting, and smoothness.

One remark up-thread still bothers me. It was pointed out (and it makes
sense to me) that if you set a low TCP max rwin it is per stream, but if
you do multiple streams you are still going to flood the SOHO buffer.

However my observation with a low server rwin max was that the smooth
upload graph was the same whether I did 1 upload stream or 6 upload
streams, or apparently any number.
I would have thought that with 6 streams, the PC is going to try to flood
6x as much data as 1 stream, and this would put you back to square one.
However this was not what happened. It was puzzling that no matter what,
one server-side setting got rid of the chop.
Anyone got any plausible explanations for this ?

if not, I'll run some more tests with 1, 6 and 12, to a low rwin server,
and post the graphs to the list. I might also have to start to graph the
interface traffic on a sub-second level, rather than the browser traffic,
to make sure the browser isn't lying about the stalls and chop.

This 7800N has settings for priority of traffic, and utilisation (as a
percentage). Utilisation % didn't help, but priority helped. Making web low
priority and SSH high priority smoothed things out a lot without changing
the speed. Perhaps "low" priority means it isn't so eager to fill its
buffers..

thanks


On Wed, Apr 22, 2015 at 8:13 AM, jb <justin@dslr.net> wrote:

> Today I've switched it back to large receive window max.
>
> The customer base is everything from GPRS to gigabit. But I know from
> experience that if a test doesn't flatten someones gigabit connection they
> will immediately assume "oh congested servers, insufficient capacity" and
> the early adopters of fiber to the home and faster cable products are the
> most visible in tech forums and so on.
>
> It would be interesting to set one or a few servers with a small receive
> window, take them from the pool, and allow an option to select those,
> otherwise they would not participate in any default run. Then as you point
> out, the test can suggest trying those as an option for results with
> chaotic upload speeds and probable bloat. The person would notice the
> beauty of the more intimate connection between their kernel and a server,
> and work harder to eliminate the problematic equipment. Or. They'd stop
> telling me the test was bugged.
>
> thanks
>
>
> On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
>
>> On Tue, 21 Apr 2015, David Lang wrote:
>>
>>  On Tue, 21 Apr 2015, David Lang wrote:
>>>
>>>  I suspect you guys are going to say the server should be left with a
>>>>> large
>>>>> max receive window.. and let people complain to find out what their
>>>>> issue
>>>>> is.
>>>>>
>>>>
>>>> what is your customer base? how important is it to provide faster
>>>> service to teh fiber users? Are they transferring ISO images so the
>>>> difference is significant to them? or are they downloading web pages where
>>>> it's the difference between a half second and a quarter second? remember
>>>> that you are seeing this on the upload side.
>>>>
>>>> in the long run, fixing the problem at the client side is the best
>>>> thing to do, but in the meantime, you sometimes have to work around broken
>>>> customer stuff.
>>>>
>>>
>>> for the speedtest servers, it should be set large, the purpose is to
>>> test the quality of the customer stuff, so you don't want to do anything on
>>> your end that papers over the problem, only to have the customer think
>>> things are good and experience problems when connecting to another server
>>> that doesn't implement work-arounds.
>>>
>>
>> Just after hitting send it occured to me that it may be the right thing
>> to have the server that's being hit by the test play with these settings.
>> If the user works well at lower settings, but has problems at higher
>> settings, the point where they start having problems may be useful to know.
>>
>> David Lang
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>

[-- Attachment #2: Type: text/html, Size: 5549 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 23:17                             ` jb
@ 2015-04-22  2:14                               ` Simon Barber
  2015-04-22  2:56                                 ` jb
  0 siblings, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-04-22  2:14 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 4811 bytes --]

If you set the window only a little bit larger than the actual BDP of the 
link then there will only be a little bit of data to fill the buffer, so given 
large buffers it will take many connections to overflow the buffer.
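
A crude way to put numbers on that, treating each upload stream as purely
window-limited and ignoring TCP dynamics (the link, RTT, window and buffer
figures below are all assumed, not measured):

    # Crude steady-state arithmetic; ignores TCP dynamics.  Figures assumed.
    def queue_backlog_bytes(n_flows, rwin_bytes, link_bps, rtt_s):
        bdp = link_bps / 8.0 * rtt_s        # what the pipe itself holds
        in_flight = n_flows * rwin_bytes    # each flow capped at its rwin
        return max(0, in_flight - bdp)      # the remainder sits in the buffer

    # e.g. a 2 Mbps / 200 ms path (BDP ~50 KB), 64 KB per-stream cap,
    # and an assumed 512 KB modem buffer:
    buffer_bytes = 512 * 1024
    for n in (1, 6, 12):
        backlog = queue_backlog_bytes(n, 64 * 1024, 2000000, 0.2)
        print(n, "streams:", int(backlog), "bytes queued,",
              "overflow" if backlog > buffer_bytes else "fits")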

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 21, 2015 4:18:10 PM jb <justin@dslr.net> wrote:

> Regarding the low TCP RWIN max setting, and smoothness.
>
> One remark up-thread still bothers me. It was pointed out (and it makes
> sense to me) that if you set a low TCP max rwin it is per stream, but if
> you do multiple streams you are still going to rush the soho buffer.
>
> However my observation with a low server rwin max was that the smooth
> upload graph was the same whether I did 1 upload stream or 6 upload
> streams, or apparently any number.
> I would have thought that with 6 streams, the PC is going to try to flood
> 6x as much data as 1 stream, and this would put you back to square one.
> However this was not what happened. It was puzzling that no matter what,
> one setting server side got rid of the chop.
> Anyone got any plausible explanations for this ?
>
> if not, I'll run some more tests with 1, 6 and 12, to a low rwin server,
> and post the graphs to the list. I might also have to start to graph the
> interface traffic on a sub-second level, rather than the browser traffic,
> to make sure the browser isn't lying about the stalls and chop.
>
> This 7800N has setting for priority of traffic, and utilisation (as a
> percentage). Utilisation % didn't help, but priority helped. Making web low
> priority and SSH high priority smoothed things out a lot without changing
> the speed. Perhaps "low" priority means it isn't so eager to fill its
> buffers..
>
> thanks
>
>
> On Wed, Apr 22, 2015 at 8:13 AM, jb <justin@dslr.net> wrote:
>
> > Today I've switched it back to large receive window max.
> >
> > The customer base is everything from GPRS to gigabit. But I know from
> > experience that if a test doesn't flatten someones gigabit connection they
> > will immediately assume "oh congested servers, insufficient capacity" and
> > the early adopters of fiber to the home and faster cable products are the
> > most visible in tech forums and so on.
> >
> > It would be interesting to set one or a few servers with a small receive
> > window, take them from the pool, and allow an option to select those,
> > otherwise they would not participate in any default run. Then as you point
> > out, the test can suggest trying those as an option for results with
> > chaotic upload speeds and probable bloat. The person would notice the
> > beauty of the more intimate connection between their kernel and a server,
> > and work harder to eliminate the problematic equipment. Or. They'd stop
> > telling me the test was bugged.
> >
> > thanks
> >
> >
> > On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
> >
> >> On Tue, 21 Apr 2015, David Lang wrote:
> >>
> >>  On Tue, 21 Apr 2015, David Lang wrote:
> >>>
> >>>  I suspect you guys are going to say the server should be left with a
> >>>>> large
> >>>>> max receive window.. and let people complain to find out what their
> >>>>> issue
> >>>>> is.
> >>>>>
> >>>>
> >>>> what is your customer base? how important is it to provide faster
> >>>> service to teh fiber users? Are they transferring ISO images so the
> >>>> difference is significant to them? or are they downloading web pages where
> >>>> it's the difference between a half second and a quarter second? remember
> >>>> that you are seeing this on the upload side.
> >>>>
> >>>> in the long run, fixing the problem at the client side is the best
> >>>> thing to do, but in the meantime, you sometimes have to work around broken
> >>>> customer stuff.
> >>>>
> >>>
> >>> for the speedtest servers, it should be set large, the purpose is to
> >>> test the quality of the customer stuff, so you don't want to do anything on
> >>> your end that papers over the problem, only to have the customer think
> >>> things are good and experience problems when connecting to another server
> >>> that doesn't implement work-arounds.
> >>>
> >>
> >> Just after hitting send it occured to me that it may be the right thing
> >> to have the server that's being hit by the test play with these settings.
> >> If the user works well at lower settings, but has problems at higher
> >> settings, the point where they start having problems may be useful to know.
> >>
> >> David Lang
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >>
> >
>
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 6695 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22  2:14                               ` Simon Barber
@ 2015-04-22  2:56                                 ` jb
  0 siblings, 0 replies; 127+ messages in thread
From: jb @ 2015-04-22  2:56 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 4949 bytes --]

That makes sense. Ok.

On Wed, Apr 22, 2015 at 12:14 PM, Simon Barber <simon@superduper.net> wrote:

>   If you set the window only a little bit larger than the actual BDP of
> the link then there will only be a little bit of data to fill buffer, so
> given large buffers it will take many connections to overflow the buffer.
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
> On April 21, 2015 4:18:10 PM jb <justin@dslr.net> wrote:
>
>> Regarding the low TCP RWIN max setting, and smoothness.
>>
>> One remark up-thread still bothers me. It was pointed out (and it makes
>> sense to me) that if you set a low TCP max rwin it is per stream, but if
>> you do multiple streams you are still going to rush the soho buffer.
>>
>> However my observation with a low server rwin max was that the smooth
>> upload graph was the same whether I did 1 upload stream or 6 upload
>> streams, or apparently any number.
>> I would have thought that with 6 streams, the PC is going to try to flood
>> 6x as much data as 1 stream, and this would put you back to square one.
>> However this was not what happened. It was puzzling that no matter what,
>> one setting server side got rid of the chop.
>> Anyone got any plausible explanations for this ?
>>
>> if not, I'll run some more tests with 1, 6 and 12, to a low rwin server,
>> and post the graphs to the list. I might also have to start to graph the
>> interface traffic on a sub-second level, rather than the browser traffic,
>> to make sure the browser isn't lying about the stalls and chop.
>>
>> This 7800N has setting for priority of traffic, and utilisation (as a
>> percentage). Utilisation % didn't help, but priority helped. Making web low
>> priority and SSH high priority smoothed things out a lot without changing
>> the speed. Perhaps "low" priority means it isn't so eager to fill its
>> buffers..
>>
>> thanks
>>
>>
>> On Wed, Apr 22, 2015 at 8:13 AM, jb <justin@dslr.net> wrote:
>>
>>> Today I've switched it back to large receive window max.
>>>
>>> The customer base is everything from GPRS to gigabit. But I know from
>>> experience that if a test doesn't flatten someones gigabit connection they
>>> will immediately assume "oh congested servers, insufficient capacity" and
>>> the early adopters of fiber to the home and faster cable products are the
>>> most visible in tech forums and so on.
>>>
>>> It would be interesting to set one or a few servers with a small receive
>>> window, take them from the pool, and allow an option to select those,
>>> otherwise they would not participate in any default run. Then as you point
>>> out, the test can suggest trying those as an option for results with
>>> chaotic upload speeds and probable bloat. The person would notice the
>>> beauty of the more intimate connection between their kernel and a server,
>>> and work harder to eliminate the problematic equipment. Or. They'd stop
>>> telling me the test was bugged.
>>>
>>> thanks
>>>
>>>
>>> On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
>>>
>>>> On Tue, 21 Apr 2015, David Lang wrote:
>>>>
>>>>  On Tue, 21 Apr 2015, David Lang wrote:
>>>>>
>>>>>  I suspect you guys are going to say the server should be left with a
>>>>>>> large
>>>>>>> max receive window.. and let people complain to find out what their
>>>>>>> issue
>>>>>>> is.
>>>>>>>
>>>>>>
>>>>>> what is your customer base? how important is it to provide faster
>>>>>> service to teh fiber users? Are they transferring ISO images so the
>>>>>> difference is significant to them? or are they downloading web pages where
>>>>>> it's the difference between a half second and a quarter second? remember
>>>>>> that you are seeing this on the upload side.
>>>>>>
>>>>>> in the long run, fixing the problem at the client side is the best
>>>>>> thing to do, but in the meantime, you sometimes have to work around broken
>>>>>> customer stuff.
>>>>>>
>>>>>
>>>>> for the speedtest servers, it should be set large, the purpose is to
>>>>> test the quality of the customer stuff, so you don't want to do anything on
>>>>> your end that papers over the problem, only to have the customer think
>>>>> things are good and experience problems when connecting to another server
>>>>> that doesn't implement work-arounds.
>>>>>
>>>>
>>>> Just after hitting send it occured to me that it may be the right thing
>>>> to have the server that's being hit by the test play with these settings.
>>>> If the user works well at lower settings, but has problems at higher
>>>> settings, the point where they start having problems may be useful to know.
>>>>
>>>> David Lang
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>

[-- Attachment #2: Type: text/html, Size: 7278 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 10:35                   ` jb
@ 2015-04-22  4:04                     ` Steinar H. Gunderson
  2015-04-22  4:28                       ` Eric Dumazet
  0 siblings, 1 reply; 127+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22  4:04 UTC (permalink / raw)
  To: jb; +Cc: bloat

On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> for the application layer, they do not change the TCP window size either
> send or receive.

I haven't gone into the code and checked, but from practical experience I
think you're wrong. I've certainly seen positive effects (and verified with
tcpdump) from reducing SO_SNDBUF on a server that should have no problems at
all sending data really fast to the kernel.

Then again, this kind of manual tuning trickery got obsolete for me the
moment sch_fq became available.

/* Steinar */
-- 
Homepage: http://www.sesse.net/

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22  4:04                     ` Steinar H. Gunderson
@ 2015-04-22  4:28                       ` Eric Dumazet
  2015-04-22  8:51                         ` [Bloat] RE : " luca.muscariello
  0 siblings, 1 reply; 127+ messages in thread
From: Eric Dumazet @ 2015-04-22  4:28 UTC (permalink / raw)
  To: Steinar H. Gunderson; +Cc: bloat

On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> > for the application layer, they do not change the TCP window size either
> > send or receive.
> 
> I haven't gone into the code and checked, but from practical experience I
> think you're wrong. I've certainly seen positive effects (and verified with
> tcpdump) from reducing SO_SNDBUF on a server that should have no problems at
> all sending data really fast to the kernel.

Well, using SO_SNDBUF disables TCP autotuning.

Doing so :

Pros:

autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
With small enough SO_SNDBUF, you limit this cwnd increase.

Cons:

Long rtt sessions might not have enough packets to utilize bandwidth.


> 
> Then again, this kind of manual tuning trickery got obsolete for me the
> moment sch_fq became available.

Note that I suppose SO_MAX_PACING_RATE is really helping you.

Without it, TCP cubic is still allowed to 'fill the pipes' until packet
losses.
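
For completeness, a sketch of the two knobs in question (values are arbitrary
examples; SO_MAX_PACING_RATE is not exposed by every Python build, hence the
raw Linux value as a fallback):

    # Illustrative sketch of the two sender-side knobs discussed above.
    import socket

    SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)  # 47 = Linux value

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Knob 1: a small SO_SNDBUF.  Disables send-side autotuning and caps how
    # much cwnd can actually be used -- the pros/cons above.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)

    # Knob 2: leave autotuning alone and pace the socket instead (needs
    # sch_fq on the sending host).  1250000 bytes/s is roughly 10 Mbit/s.
    s.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, 1250000)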




^ permalink raw reply	[flat|nested] 127+ messages in thread

* [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22  4:28                       ` Eric Dumazet
@ 2015-04-22  8:51                         ` luca.muscariello
  2015-04-22 12:02                           ` jb
  2015-04-22 13:50                           ` [Bloat] " Eric Dumazet
  0 siblings, 2 replies; 127+ messages in thread
From: luca.muscariello @ 2015-04-22  8:51 UTC (permalink / raw)
  To: Eric Dumazet, Steinar H. Gunderson; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 2833 bytes --]

cons: large BDP in general would be negatively affected.
A Gbps access vs a DSL access to the same server would require very different tuning.

sch_fq would probably make the whole thing less of a problem.
But running it in a VM does not sound like a good idea, and would not reflect usual server settings BTW.









-------- Message d'origine --------
De : Eric Dumazet
Date :2015/04/22 12:29 (GMT+08:00)
À : "Steinar H. Gunderson"
Cc : bloat
Objet : Re: [Bloat] DSLReports Speed Test has latency measurement built-in

On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> > for the application layer, they do not change the TCP window size either
> > send or receive.
>
> I haven't gone into the code and checked, but from practical experience I
> think you're wrong. I've certainly seen positive effects (and verified with
> tcpdump) from reducing SO_SNDBUF on a server that should have no problems at
> all sending data really fast to the kernel.

Well, using SO_SNDBUF disables TCP autotuning.

Doing so :

Pros:

autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
With small enough SO_SNDBUF, you limit this cwnd increase.

Cons:

Long rtt sessions might not have enough packets to utilize bandwidth.


>
> Then again, this kind of manual tuning trickery got obsolete for me the
> moment sch_fq became available.

Note that I suppose the SO_MAX_PACING rate is really helping you.

Without it, TCP cubic is still allowed to 'fill the pipes' until packet
losses.



_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


[-- Attachment #2: Type: text/html, Size: 3848 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22  8:51                         ` [Bloat] RE : " luca.muscariello
@ 2015-04-22 12:02                           ` jb
  2015-04-22 13:08                             ` Jonathan Morton
       [not found]                             ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
  2015-04-22 13:50                           ` [Bloat] " Eric Dumazet
  1 sibling, 2 replies; 127+ messages in thread
From: jb @ 2015-04-22 12:02 UTC (permalink / raw)
  Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 4281 bytes --]

So I found a page claiming that SO_RCVBUF is the most poorly implemented on
Linux, vs Windows or OSX, mainly because the value you start with is the cap:
you can go lower, but not higher, and data has to flow for the window to
shrink to a new setting, instead of it being slammed shut immediately by
setsockopt.

Nevertheless, it is good enough that if I set it at TCP connect time I can at
least offer the option to run pretty much the same test to the same set of
servers, but with a selectable cap on the sender's rate.

By the way, is there a selectable congestion control algorithm available
that is sensitive to an RTT that increases dramatically? In other words, one
that does the best job of avoiding buffer-size issues on the remote side of
the slowest link? I know heuristics always sound better in theory than in
practice, but surely if an algorithm picks up the idle RTT of a link, it can
then pump up the window until an RTT increase indicates it should back off,
instead of (encouraged by no loss) concluding that the end-user must be
accelerating towards the moon.
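
(If such an algorithm exists, I gather it can at least be selected per socket
on Linux via TCP_CONGESTION, which would only help in the direction where the
server is the sender. A rough sketch, assuming the chosen module -- e.g. the
delay-based "vegas" -- is loaded and allowed on the host; the hostname is
made up:)

    # Rough sketch: pick a delay-based congestion control for one socket only,
    # leaving the system default untouched.  Assumes tcp_vegas is loaded and
    # listed in net.ipv4.tcp_allowed_congestion_control.
    import socket

    TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)  # 13 = Linux value

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"vegas")
    s.connect(("speedtest.example.net", 80))  # hypothetical server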

thanks.

On Wed, Apr 22, 2015 at 6:51 PM, <luca.muscariello@orange.com> wrote:

>  cons: large BDP in general would be negatively affected.
> A Gbps access vs a DSL access to the same server would require very
> different tuning.
>
>  sch_fq would probably make the whole thing less of a problem.
> But running it in a VM does not sound a good idea and would not reflect
> usual servers setting BTW
>
>
>
>
>
>
>
>
>
> -------- Message d'origine --------
> De : Eric Dumazet
> Date :2015/04/22 12:29 (GMT+08:00)
> À : "Steinar H. Gunderson"
> Cc : bloat
> Objet : Re: [Bloat] DSLReports Speed Test has latency measurement built-in
>
>   On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> > On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket
> buffers
> > > for the application layer, they do not change the TCP window size
> either
> > > send or receive.
> >
> > I haven't gone into the code and checked, but from practical experience I
> > think you're wrong. I've certainly seen positive effects (and verified
> with
> > tcpdump) from reducing SO_SNDBUF on a server that should have no
> problems at
> > all sending data really fast to the kernel.
>
> Well, using SO_SNDBUF disables TCP autotuning.
>
> Doing so :
>
> Pros:
>
> autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
> With small enough SO_SNDBUF, you limit this cwnd increase.
>
> Cons:
>
> Long rtt sessions might not have enough packets to utilize bandwidth.
>
>
> >
> > Then again, this kind of manual tuning trickery got obsolete for me the
> > moment sch_fq became available.
>
> Note that I suppose the SO_MAX_PACING rate is really helping you.
>
> Without it, TCP cubic is still allowed to 'fill the pipes' until packet
> losses.
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>

[-- Attachment #2: Type: text/html, Size: 5461 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22 12:02                           ` jb
@ 2015-04-22 13:08                             ` Jonathan Morton
       [not found]                             ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
  1 sibling, 0 replies; 127+ messages in thread
From: Jonathan Morton @ 2015-04-22 13:08 UTC (permalink / raw)
  To: jb; +Cc: bloat


> On 22 Apr, 2015, at 15:02, jb <justin@dslr.net> wrote:
> 
> ...data is needed to shrink the window to a new setting, instead of slamming it shut by setsockopt

I believe that is RFC-compliant behaviour; one is not supposed to renege on an advertised TCP receive window.  So Linux holds the rwin pointer in place until the window has shrunk to the new setting.

> By the way, is there a selectable congestion control algorithm available that is sensitive to an RTT that increases dramatically? 

Vegas and LEDBAT do this explicitly; Vegas is old, and LEDBAT isn’t yet upstream but can be built against an existing kernel.  Some other TCPs incorporate RTT into their control law (eg. Westwood+, Illinois and Microsoft’s CompoundTCP), but won’t actually stop growing the congestion window based on that; Westwood+ uses RTT and bandwidth to determine what congestion window size to set *after* receiving a conventional congestion signal, while Illinois uses increasing RTT as a signal to *slow* the increase of cwnd, thus spending more time *near* the BDP.

Both Vegas and LEDBAT, however, compete very unfavourably with conventional senders (for Vegas, there’s a contemporary paper showing this against Reno) sharing the same link, which is why they aren’t widely deployed.  LEDBAT is however used as part of uTP (ie. BitTorrent’s UDP protocol) specifically for its “background traffic” properties.

Westwood+ does compete fairly with conventional TCPs and works well with AQM, since it avoids the sawtooth of under-utilisation that Reno shows, but it has a tendency to underestimate the cwnd on exiting the slow-start phase.  On extreme LFNs, this can result in an extremely long time to converge on the correct BDP.

Illinois is also potentially interesting, because it does make an effort to avoid filling buffers quite as quickly as most.  By contrast, CUBIC sets its inflection point at the cwnd where the previous congestion signal was received.

CompoundTCP is described roughly as using a cwnd that is the sum of the results of running Reno and Vegas.  So there is a region of operation where the Reno part is increasing its cwnd and Vegas is decreasing it at the same time, resulting in a roughly constant overall cwnd in the vicinity of the BDP.  I don’t know offhand how well it works in practice.

The fact remains, though, that most servers use conventional TCPs, usually CUBIC (if Linux based) or Compound (if Microsoft).

One interesting theory is that it’s possible to detect whether FQ is in use on a link, by observing whether Vegas competes on equal terms with a conventional TCP or not.
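
For anyone wanting to experiment with these on the sending side, the
congestion control can be selected per socket. A rough sketch follows;
which names are accepted depends on which modules the kernel has loaded
and, for unprivileged processes, on net.ipv4.tcp_allowed_congestion_control:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    /* Pick a congestion control module for this socket only.  The name
       must appear in /proc/sys/net/ipv4/tcp_available_congestion_control;
       vegas and westwood are usually built as modules, not defaults. */
    const char *cc = "westwood";
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc)) < 0)
        perror("setsockopt TCP_CONGESTION");

    char name[16];
    socklen_t len = sizeof(name);
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, name, &len) == 0)
        printf("in use: %.*s\n", (int)len, name);

    return 0;
}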

 - Jonathan Morton


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22  8:51                         ` [Bloat] RE : " luca.muscariello
  2015-04-22 12:02                           ` jb
@ 2015-04-22 13:50                           ` Eric Dumazet
  2015-04-22 14:09                             ` Steinar H. Gunderson
  2015-04-22 15:26                             ` [Bloat] RE : " luca.muscariello
  1 sibling, 2 replies; 127+ messages in thread
From: Eric Dumazet @ 2015-04-22 13:50 UTC (permalink / raw)
  To: luca.muscariello; +Cc: bloat

On Wed, 2015-04-22 at 08:51 +0000, luca.muscariello@orange.com wrote:
> cons: large BDP in general would be negatively affected.  
> A Gbps access vs a DSL access to the same server would require very
> different tuning. 
> 
Yep. This is what I mentioned with 'long rtt'. This was relative to BDP.
> 
> sch_fq would probably make the whole thing less of a problem. 
> But running it in a VM does not sound a good idea and would not
> reflect usual servers setting BTW 
> 
No idea why it should matter. Have you got any experimental data?

You know, 'usual servers' used to run pfifo_fast, they now run sch_fq.

(All Google fleet at least)

So this kind of argument does not sound like it is based on experiments.



^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 13:50                           ` [Bloat] " Eric Dumazet
@ 2015-04-22 14:09                             ` Steinar H. Gunderson
  2015-04-22 15:26                             ` [Bloat] RE : " luca.muscariello
  1 sibling, 0 replies; 127+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22 14:09 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

On Wed, Apr 22, 2015 at 06:50:57AM -0700, Eric Dumazet wrote:
> You know, 'usual servers' used to run pfifo_fast, they now run sch_fq.
> 
> (All Google fleet at least)

I think Google is a bit ahead of the curve here :-) Does any distribution
ship sch_fq by default yet?

/* Steinar */
-- 
Homepage: http://www.sesse.net/

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
       [not found]                             ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
@ 2015-04-22 14:15                               ` Simon Barber
  0 siblings, 0 replies; 127+ messages in thread
From: Simon Barber @ 2015-04-22 14:15 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 4740 bytes --]

Yes - the classic one is TCP Vegas.

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 22, 2015 5:03:26 AM jb <justin@dslr.net> wrote:

> So I find a page that explains SO_RCVBUF is allegedly the most poorly
> implemented on Linux, vs Windows or OSX, mainly because the one you start
> with is the cap, you can go lower, but not higher, and data is needed to
> shrink the window to a new setting, instead of slamming it shut by
> setsockopt
>
> Nevertheless! it is good enough that if I set it on tcp connect I can at
> least offer the option to run pretty much the same test to the same set of
> servers, but with a selectable cap on sender rate.
>
> By the way, is there a selectable congestion control algorithm available
> that is sensitive to an RTT that increases dramatically? in other words,
> one that does the best at avoiding buffer size issues on the remote side of
> the slowest link? I know heuristics always sound better in theory than
> practice but surely if an algorithm picks up the idle RTT of a link, it can
> then pump up the window until an RTT increase indicates it should back off,
> Instead of (encouraged by no loss) thinking the end-user must be
> accelerating towards the moon..
>
> thanks.
>
> On Wed, Apr 22, 2015 at 6:51 PM, <luca.muscariello@orange.com> wrote:
>
> >  cons: large BDP in general would be negatively affected.
> > A Gbps access vs a DSL access to the same server would require very
> > different tuning.
> >
> >  sch_fq would probably make the whole thing less of a problem.
> > But running it in a VM does not sound a good idea and would not reflect
> > usual servers setting BTW
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > -------- Message d'origine --------
> > De : Eric Dumazet
> > Date :2015/04/22 12:29 (GMT+08:00)
> > À : "Steinar H. Gunderson"
> > Cc : bloat
> > Objet : Re: [Bloat] DSLReports Speed Test has latency measurement built-in
> >
> >   On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> > > On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > > > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket
> > buffers
> > > > for the application layer, they do not change the TCP window size
> > either
> > > > send or receive.
> > >
> > > I haven't gone into the code and checked, but from practical experience I
> > > think you're wrong. I've certainly seen positive effects (and verified
> > with
> > > tcpdump) from reducing SO_SNDBUF on a server that should have no
> > problems at
> > > all sending data really fast to the kernel.
> >
> > Well, using SO_SNDBUF disables TCP autotuning.
> >
> > Doing so :
> >
> > Pros:
> >
> > autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
> > With small enough SO_SNDBUF, you limit this cwnd increase.
> >
> > Cons:
> >
> > Long rtt sessions might not have enough packets to utilize bandwidth.
> >
> >
> > >
> > > Then again, this kind of manual tuning trickery got obsolete for me the
> > > moment sch_fq became available.
> >
> > Note that I suppose the SO_MAX_PACING rate is really helping you.
> >
> > Without it, TCP cubic is still allowed to 'fill the pipes' until packet
> > losses.
> >
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> >
> > 
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> >
> >
>
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 6353 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-21 14:20                     ` David Lang
  2015-04-21 14:25                       ` David Lang
@ 2015-04-22 14:32                       ` Simon Barber
  2015-04-22 17:35                         ` David Lang
  1 sibling, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-04-22 14:32 UTC (permalink / raw)
  To: David Lang, jb; +Cc: bloat

The bumps are due to packet loss causing head-of-line blocking. Until the 
lost packet is retransmitted, the receiver can't release any subsequent 
received packets to the application, because of the requirement for 
in-order delivery. If you counted received bytes with a packet counter, 
rather than looking at the application level, you would be able to 
illustrate that data was being received smoothly (even though out of order).

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:

> On Tue, 21 Apr 2015, jb wrote:
>
> >> the receiver advertizes a large receive window, so the sender doesn't
> > pause > until there is that much data outstanding, or they get a timeout of
> > a packet as > a signal to slow down.
> >
> >> and because you have a gig-E link locally, your machine generates traffic
> > \
> >> very rapidly, until all that data is 'in flight'. but it's really sitting
> > in the buffer of
> >> router trying to get through.
> >
> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
> > upload graphs by keeping the advertised receive window on the server capped
> > low, however then, paradoxically, there is no more sign of buffer bloat in
> > the result, at least for the upload phase.
> >
> > (The graph under the upload/download graphs for my results shows almost no
> > latency increase during the upload phase, now).
> >
> > Or, I can crank it back open again, serving people with fiber connections
> > without having to run heaps of streams in parallel -- and then have people
> > complain that the upload result is inefficient, or bumpy, vs what they
> > expect.
>
> well, many people expect it to be bumpy (I've heard ISPs explain to customers
> that when a link is full it is bumpy, that's just the way things work)
>
> > And I can't offer an option, because the server receive window (I think)
> > cannot be set on a case by case basis. You set it for all TCP and forget it.
>
> I think you are right
>
> > I suspect you guys are going to say the server should be left with a large
> > max receive window.. and let people complain to find out what their issue
> > is.
>
> what is your customer base? how important is it to provide faster service 
> to teh
> fiber users? Are they transferring ISO images so the difference is significant
> to them? or are they downloading web pages where it's the difference between a
> half second and a quarter second? remember that you are seeing this on the
> upload side.
>
> in the long run, fixing the problem at the client side is the best thing to do,
> but in the meantime, you sometimes have to work around broken customer stuff.
>
> > BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
> > believe it is a linux based (judging from the system log) device.
>
> if it's linux based, it would be interesting to learn what sort of settings it
> has. It may be one of the rarer devices that has something in place already to
> do active queue management.
>
> David Lang
>
> > cheers,
> > -Justin
> >
> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
> >
> >> On Tue, 21 Apr 2015, jb wrote:
> >>
> >>  I've discovered something perhaps you guys can explain it better or shed
> >>> some light.
> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
> >>> tuning.
> >>>
> >>> Attached is two pictures of my upload to New York speed test server with 1
> >>> stream.
> >>> It doesn't make any difference if it is 1 stream or 8 streams, the picture
> >>> and behaviour remains the same.
> >>> I am 200ms from new york so it qualifies as a fairly long (but not very
> >>> fat) pipe.
> >>>
> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
> >>> the server)
> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
> >>> the server)
> >>>
> >>> It actually doesn't matter what that last huge number is, once it goes
> >>> much
> >>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
> >>> upload speed.
> >>>
> >>> Now as I understand this setting, it is the tcp receive window that Linux
> >>> advertises, and the last number sets the maximum size it can get to (for
> >>> one TCP stream).
> >>>
> >>> For users with very fast upload speeds, they do not see an ugly bumpy
> >>> upload graph, it is smooth and sustained.
> >>> But for the majority of users (like me) with uploads less than 5 to
> >>> 10mbit,
> >>> we frequently see the ugly graph.
> >>>
> >>> The second tcp_rmem setting is how I have been running the speed test
> >>> servers.
> >>>
> >>> Up to now I thought this was just the distance of the speedtest from the
> >>> interface: perhaps the browser was buffering a lot, and didn't feed back
> >>> progress but now I realise the bumpy one is actually being influenced by
> >>> the server receive window.
> >>>
> >>> I guess my question is this: Why does ALLOWING a large receive window
> >>> appear to encourage problems with upload smoothness??
> >>>
> >>> This implies that setting the receive window should be done on a
> >>> connection
> >>> by connection basis: small for slow connections, large, for high speed,
> >>> long distance connections.
> >>>
> >>
> >> This is classic bufferbloat
> >>
> >> the receiver advertizes a large receive window, so the sender doesn't
> >> pause until there is that much data outstanding, or they get a timeout of a
> >> packet as a signal to slow down.
> >>
> >> and because you have a gig-E link locally, your machine generates traffic
> >> very rapidly, until all that data is 'in flight'. but it's really sitting
> >> in the buffer of a router trying to get through.
> >>
> >> then when a packet times out, the sender slows down a smidge and
> >> retransmits it. But the old packet is still sitting in a queue, eating
> >> bandwidth. the packets behind it are also going to timeout and be
> >> retransmitted before your first retransmitted packet gets through, so you
> >> have a large slug of data that's being retransmitted, and the first of the
> >> replacement data can't get through until the last of the old (timed out)
> >> data is transmitted.
> >>
> >> then when data starts flowing again, the sender again tries to fill up the
> >> window with data in flight.
> >>
> >>  In addition, if I cap it to 65k, for reasons of smoothness,
> >>> that means the bandwidth delay product will keep maximum speed per upload
> >>> stream quite low. So a symmetric or gigabit connection is going to need a
> >>> ton of parallel streams to see full speed.
> >>>
> >>> Most puzzling is why would anything special be required on the Client -->
> >>> Server side of the equation
> >>> but nothing much appears wrong with the Server --> Client side, whether
> >>> speeds are very low (GPRS) or very high (gigabit).
> >>>
> >>
> >> but what window sizes are these clients advertising?
> >>
> >>
> >>  Note that also I am not yet sure if smoothness == better throughput. I
> >>> have
> >>> noticed upload speeds for some people often being under their claimed sync
> >>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
> >>> inefficiency. Maybe.
> >>>
> >>
> >> If you were to do a packet capture on the server side, you would see that
> >> you have a bunch of packets that are arriving multiple times, but the first
> >> time "does't count" because the replacement is already on the way.
> >>
> >> so your overall throughput is lower for two reasons
> >>
> >> 1. it's bursty, and there are times when the connection actually is idle
> >> (after you have a lot of timed out packets, the sender needs to ramp up
> >> it's speed again)
> >>
> >> 2. you are sending some packets multiple times, consuming more total
> >> bandwidth for the same 'goodput' (effective throughput)
> >>
> >> David Lang
> >>
> >>
> >>  help!
> >>>
> >>>
> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
> >>> wrote:
> >>>
> >>>  One thing users understand is slow web access.  Perhaps translating the
> >>>> latency measurement into 'a typical web page will take X seconds longer
> >>>> to
> >>>> load', or even stating the impact as 'this latency causes a typical web
> >>>> page to load slower, as if your connection was only YY% of the measured
> >>>> speed.'
> >>>>
> >>>> Simon
> >>>>
> >>>> Sent with AquaMail for Android
> >>>> http://www.aqua-mail.com
> >>>>
> >>>>
> >>>>
> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
> >>>> wrote:
> >>>>
> >>>>>>>> Frequency readouts are probably more accessible to the latter.
> >>>>
> >>>>>
> >>>>>>>>     The frequency domain more accessible to laypersons? I have my
> >>>>>>>>
> >>>>>>> doubts ;)
> >>>>>
> >>>>>>
> >>>>>>> Gamers, at least, are familiar with “frames per second” and how that
> >>>>>>>
> >>>>>> corresponds to their monitor’s refresh rate.
> >>>>>
> >>>>>>
> >>>>>>       I am sure they can easily transform back into time domain to get
> >>>>>>
> >>>>> the frame period ;) .  I am partly kidding, I think your idea is great
> >>>>> in
> >>>>> that it is a truly positive value which could lend itself to being used
> >>>>> in
> >>>>> ISP/router manufacturer advertising, and hence might work in the real
> >>>>> work;
> >>>>> on the other hand I like to keep data as “raw” as possible (not that
> >>>>> ^(-1)
> >>>>> is a transformation worthy of being called data massage).
> >>>>>
> >>>>>>
> >>>>>>  The desirable range of latencies, when converted to Hz, happens to be
> >>>>>>>
> >>>>>> roughly the same as the range of desirable frame rates.
> >>>>>
> >>>>>>
> >>>>>>       Just to play devils advocate, the interesting part is time or
> >>>>>>
> >>>>> saving time so seconds or milliseconds are also intuitively
> >>>>> understandable
> >>>>> and can be easily added ;)
> >>>>>
> >>>>> Such readouts are certainly interesting to people like us.  I have no
> >>>>> objection to them being reported alongside a frequency readout.  But I
> >>>>> think most people are not interested in “time savings” measured in
> >>>>> milliseconds; they’re much more aware of the minute- and hour-level time
> >>>>> savings associated with greater bandwidth.
> >>>>>
> >>>>>  - Jonathan Morton
> >>>>>
> >>>>> _______________________________________________
> >>>>> Bloat mailing list
> >>>>> Bloat@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>>
> >>>>>
> >>>>
> >>>> _______________________________________________
> >>>> Bloat mailing list
> >>>> Bloat@lists.bufferbloat.net
> >>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>
> >>>>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >>
> >
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>



^ permalink raw reply	[flat|nested] 127+ messages in thread

* [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22 13:50                           ` [Bloat] " Eric Dumazet
  2015-04-22 14:09                             ` Steinar H. Gunderson
@ 2015-04-22 15:26                             ` luca.muscariello
  2015-04-22 15:44                               ` [Bloat] " Eric Dumazet
  2015-04-22 15:59                               ` [Bloat] RE : " Steinar H. Gunderson
  1 sibling, 2 replies; 127+ messages in thread
From: luca.muscariello @ 2015-04-22 15:26 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 2312 bytes --]

Do I need to read this as all Google servers == all servers :)

BTW if a paced flow from Google shares a bloated buffer with a non-paced flow from a non-Google server, doesn't this turn out to be a performance penalty for the paced flow?

fq_codel gives incentives to do pacing, but if it's not deployed, what's the performance gain of using pacing?


-------- Message d'origine --------
De : Eric Dumazet
Date :2015/04/22 21:51 (GMT+08:00)
À : MUSCARIELLO Luca IMT/OLN
Cc : "Steinar H. Gunderson" , bloat
Objet : Re: [Bloat] DSLReports Speed Test has latency measurement built-in

On Wed, 2015-04-22 at 08:51 +0000, luca.muscariello@orange.com wrote:
> cons: large BDP in general would be negatively affected.
> A Gbps access vs a DSL access to the same server would require very
> different tuning.
>
Yep. This is what I mentioned with 'long rtt'. This was relative to BDP.
>
> sch_fq would probably make the whole thing less of a problem.
> But running it in a VM does not sound a good idea and would not
> reflect usual servers setting BTW
>
No idea why it should mater. Have you got some experimental data ?

You know, 'usual servers' used to run pfifo_fast, they now run sch_fq.

(All Google fleet at least)

So this kind of argument sounds not based on experiments.





[-- Attachment #2: Type: text/html, Size: 3063 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 15:26                             ` [Bloat] RE : " luca.muscariello
@ 2015-04-22 15:44                               ` Eric Dumazet
  2015-04-22 16:35                                 ` MUSCARIELLO Luca IMT/OLN
  2015-04-22 15:59                               ` [Bloat] RE : " Steinar H. Gunderson
  1 sibling, 1 reply; 127+ messages in thread
From: Eric Dumazet @ 2015-04-22 15:44 UTC (permalink / raw)
  To: luca.muscariello; +Cc: bloat

On Wed, 2015-04-22 at 15:26 +0000, luca.muscariello@orange.com wrote:
> Do I need to read this as all Google servers == all servers :)  
> 
> 
Read again what I wrote. Don't play with my words.

> BTW if a paced flow from Google shares a bloated buffer with a non
> paced flow from a non Google server,  doesn't this turn out to be a
> performance penalty for the paced flow? 
> 
> 
What do you think? Do you think Google would still use sch_fq if this
were a potential problem?

> fq_codel gives incentives to do pacing but if it's not deployed what's
> the performance gain of using pacing? 

1) fq_codel has nothing to do with pacing.

2) sch_fq doesn't depend on fq_codel or codel being used anywhere.

It seems you are quite confused, and unfortunately I won't take the time to
explain anything.

Run experiments, then draw your own conclusions.




^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22 15:26                             ` [Bloat] RE : " luca.muscariello
  2015-04-22 15:44                               ` [Bloat] " Eric Dumazet
@ 2015-04-22 15:59                               ` Steinar H. Gunderson
  2015-04-22 16:16                                 ` Eric Dumazet
  2015-04-22 16:19                                 ` Dave Taht
  1 sibling, 2 replies; 127+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22 15:59 UTC (permalink / raw)
  To: luca.muscariello; +Cc: bloat

On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
> BTW if a paced flow from Google shares a bloated buffer with a non paced
> flow from a non Google server,  doesn't this turn out to be a performance
> penalty for the paced flow?

Nope. The paced flow puts less strain on the buffer (and hooray for that),
which is a win no matter if the buffer is contended or not.

> fq_codel gives incentives to do pacing but if it's not deployed what's the
> performance gain of using pacing?

fq_codel doesn't give any specific incentive to do pacing. In fact, if
absolutely all devices on your path used fq_codel and had adequate
buffers, I believe pacing would be largely a no-op.

/* Steinar */
-- 
Homepage: http://www.sesse.net/

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22 15:59                               ` [Bloat] RE : " Steinar H. Gunderson
@ 2015-04-22 16:16                                 ` Eric Dumazet
  2015-04-22 16:19                                 ` Dave Taht
  1 sibling, 0 replies; 127+ messages in thread
From: Eric Dumazet @ 2015-04-22 16:16 UTC (permalink / raw)
  To: Steinar H. Gunderson; +Cc: bloat

On Wed, 2015-04-22 at 17:59 +0200, Steinar H. Gunderson wrote:
> On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
> > BTW if a paced flow from Google shares a bloated buffer with a non paced
> > flow from a non Google server,  doesn't this turn out to be a performance
> > penalty for the paced flow?
> 
> Nope. The paced flow puts less strain on the buffer (and hooray for that),
> which is a win no matter if the buffer is contended or not.
> 
> > fq_codel gives incentives to do pacing but if it's not deployed what's the
> > performance gain of using pacing?
> 
> fq_codel doesn't give any specific incentive to do pacing. In fact, if
> absolutely all devices on your path would use fq_codel and have adequate
> buffers, I believe pacing would be largely a no-op.

While this might be true for stationary flows (ACK driven, no pacing is
enforced in sch_fq), sch_fq/pacing is still nice after an idle period.

Say a flow delivers chunks of data. With pacing, you no longer have to
slow-start after idle.



^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22 15:59                               ` [Bloat] RE : " Steinar H. Gunderson
  2015-04-22 16:16                                 ` Eric Dumazet
@ 2015-04-22 16:19                                 ` Dave Taht
  2015-04-22 17:15                                   ` Rick Jones
  1 sibling, 1 reply; 127+ messages in thread
From: Dave Taht @ 2015-04-22 16:19 UTC (permalink / raw)
  To: Steinar H. Gunderson; +Cc: bloat

On Wed, Apr 22, 2015 at 8:59 AM, Steinar H. Gunderson
<sgunderson@bigfoot.com> wrote:
> On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
>> BTW if a paced flow from Google shares a bloated buffer with a non paced
>> flow from a non Google server,  doesn't this turn out to be a performance
>> penalty for the paced flow?
>
> Nope. The paced flow puts less strain on the buffer (and hooray for that),
> which is a win no matter if the buffer is contended or not.

I just posted some test results for 450 simultaneous flows on a new thread.
sch_fq has a fixed per-flow packet limit of 100 packets, which shows up here.

Cake did surprisingly well; I have no idea why. I suspect my kernel is
broken, actually. I am getting on a plane in a bit, and have done too much
work this "vacation" already.

Has anyone added pacing to netperf yet? (I can do so, but would need
guidance as to what getopt option to add)

>> fq_codel gives incentives to do pacing but if it's not deployed what's the
>> performance gain of using pacing?
>
> fq_codel doesn't give any specific incentive to do pacing.

Concur, except that in the case where there is no queue for that flow,
fq_codel gives a boost.

> In fact, if
> absolutely all devices on your path would use fq_codel and have adequate
> buffers, I believe pacing would be largely a no-op.

Concur.

> /* Steinar */
> --
> Homepage: http://www.sesse.net/
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 15:44                               ` [Bloat] " Eric Dumazet
@ 2015-04-22 16:35                                 ` MUSCARIELLO Luca IMT/OLN
  2015-04-22 17:16                                   ` Eric Dumazet
  0 siblings, 1 reply; 127+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 16:35 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

On 04/22/2015 05:44 PM, Eric Dumazet wrote:
> On Wed, 2015-04-22 at 15:26 +0000, luca.muscariello@orange.com wrote:
>> Do I need to read this as all Google servers == all servers :)
>>
>>
> Read again what I wrote. Don't play with my words.

Let the stupid guy ask questions.
In the worst case, don't answer; but there is no reason to get nervous.


>
>> BTW if a paced flow from Google shares a bloated buffer with a non
>> paced flow from a non Google server,  doesn't this turn out to be a
>> performance penalty for the paced flow?
>>
>>
> What do you think ? Do you think Google would still use sch_fq if this
> was a potential problem ?

Frankly, I do not understand this statement, as it seems you are telling me
that it would be the right choice because Google does it.
I believe that Google does it for technical reasons, and that those reasons
would be part of your possible answers. You have the right not to share 
them with the list, of course.

>
>> fq_codel gives incentives to do pacing but if it's not deployed what's
>> the performance gain of using pacing?
> 1) fq_codel has nothing to do with pacing.

FQ gives you flow isolation.
Extreme example: if you share the link with a flow that saturates the 
buffer (for whatever reason), flow isolation gives you the comfort of doing 
whatever works best for your application.
Without flow isolation, how can you benefit from the good features of 
pacing if the buffer gets screwed by a competing flow?

This is what I meant by incentives to do pacing in the presence of flow 
isolation: you get rewarded if the sender behaves better, no matter what 
others do.


>
> 2) sch_fq doesn't depend on fq_codel or codel being used anywhere.
That's clear to me.


>


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
  2015-04-22 16:19                                 ` Dave Taht
@ 2015-04-22 17:15                                   ` Rick Jones
  0 siblings, 0 replies; 127+ messages in thread
From: Rick Jones @ 2015-04-22 17:15 UTC (permalink / raw)
  To: bloat

On 04/22/2015 09:19 AM, Dave Taht wrote:
>
> Has anyone added pacing to netperf yet? (I can do so, but would need
> guidance as to what getopt option to add)

./configure --enable-intervals
Recompile netperf, and then you can use:

netperf ... -b <NumSendsInBurst> -w <InterBurstInterval>

If you want to be able to specify an interval shorter than the minimum 
itimer resolution, you also need to add --enable-spin to the ./configure; 
netperf will then burn CPU time like there was no tomorrow, but you'll get 
finer control over the burst interval.

happy benchmarking,

rick

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 16:35                                 ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 17:16                                   ` Eric Dumazet
  2015-04-22 17:24                                     ` Steinar H. Gunderson
  2015-04-22 17:28                                     ` MUSCARIELLO Luca IMT/OLN
  0 siblings, 2 replies; 127+ messages in thread
From: Eric Dumazet @ 2015-04-22 17:16 UTC (permalink / raw)
  To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat

On Wed, 2015-04-22 at 18:35 +0200, MUSCARIELLO Luca IMT/OLN wrote:

> FQ gives you flow isolation.

So does fq_codel.

sch_fq adds *pacing*, which in itself has benefits regardless of fair
queues: smaller bursts, fewer self-inflicted drops.

If flows are competing, that is the role of the congestion control module,
not of packet schedulers / AQM.

Packet schedulers help to produce smaller bursts, and help CC modules to
see a better signal (packet drops or RTT variations).




^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 17:16                                   ` Eric Dumazet
@ 2015-04-22 17:24                                     ` Steinar H. Gunderson
  2015-04-22 17:28                                     ` MUSCARIELLO Luca IMT/OLN
  1 sibling, 0 replies; 127+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22 17:24 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

On Wed, Apr 22, 2015 at 10:16:19AM -0700, Eric Dumazet wrote:
> sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> queues : Smaller bursts, less self inflicted drops.

Somehow I think sch_fq should just have been named sch_pacing :-)

/* Steinar */
-- 
Homepage: http://www.sesse.net/

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 17:16                                   ` Eric Dumazet
  2015-04-22 17:24                                     ` Steinar H. Gunderson
@ 2015-04-22 17:28                                     ` MUSCARIELLO Luca IMT/OLN
  2015-04-22 17:45                                       ` MUSCARIELLO Luca IMT/OLN
  2015-04-22 18:22                                       ` Eric Dumazet
  1 sibling, 2 replies; 127+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 17:28 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

On 04/22/2015 07:16 PM, Eric Dumazet wrote:
> On Wed, 2015-04-22 at 18:35 +0200, MUSCARIELLO Luca IMT/OLN wrote:
>
>> FQ gives you flow isolation.
> So does fq_codel.

Yes, the FQ part of fq_codel. That's what I meant, not the FQ part of 
sch_fq.


>
> sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> queues : Smaller bursts, less self inflicted drops.

This I understand. But it can't protect against drops that are not self-inflicted.

>
> If flows are competing, this is the role of Congestion Control module,
> not packet schedulers / AQM.

Exactly. Two identical CC modules competing on the same link, one with 
pacing, the other without. The latter will have a negative impact on the 
former under FIFO, but not under FQ (fq_codel, to clarify).
And that's my incentive argument, which comes from the flow-isolation 
feature of FQ (_codel).



^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 14:32                       ` Simon Barber
@ 2015-04-22 17:35                         ` David Lang
  2015-04-23  1:37                           ` Simon Barber
  0 siblings, 1 reply; 127+ messages in thread
From: David Lang @ 2015-04-22 17:35 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 11865 bytes --]

Data that's received and not used doesn't really matter (a tree falls in the 
woods type of thing).

The head of line blocking can cause a chunk of packets to be retransmitted, even 
though the receiving machine got them the first time. So looking at the received 
bytes gives you a false picture of what is going on.

David Lang

On Wed, 22 Apr 2015, Simon Barber wrote:

> The bumps are due to packet loss causing head of line blocking. Until the 
> lost packet is retransmitted the receiver can't release any subsequent 
> received packets to the application due to the requirement for in order 
> delivery. If you counted received bytes with a packet counter rather than 
> looking at application level you would be able to illustrate that data was 
> being received smoothly (even though out of order).
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
> On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
>
>> On Tue, 21 Apr 2015, jb wrote:
>> 
>> >> the receiver advertizes a large receive window, so the sender doesn't
>> > pause > until there is that much data outstanding, or they get a timeout 
>> of
>> > a packet as > a signal to slow down.
>> >
>> >> and because you have a gig-E link locally, your machine generates 
>> traffic
>> > \
>> >> very rapidly, until all that data is 'in flight'. but it's really 
>> sitting
>> > in the buffer of
>> >> router trying to get through.
>> >
>> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
>> > upload graphs by keeping the advertised receive window on the server 
>> capped
>> > low, however then, paradoxically, there is no more sign of buffer bloat 
>> in
>> > the result, at least for the upload phase.
>> >
>> > (The graph under the upload/download graphs for my results shows almost 
>> no
>> > latency increase during the upload phase, now).
>> >
>> > Or, I can crank it back open again, serving people with fiber connections
>> > without having to run heaps of streams in parallel -- and then have 
>> people
>> > complain that the upload result is inefficient, or bumpy, vs what they
>> > expect.
>> 
>> well, many people expect it to be bumpy (I've heard ISPs explain to 
>> customers
>> that when a link is full it is bumpy, that's just the way things work)
>> 
>> > And I can't offer an option, because the server receive window (I think)
>> > cannot be set on a case by case basis. You set it for all TCP and forget 
>> it.
>> 
>> I think you are right
>> 
>> > I suspect you guys are going to say the server should be left with a 
>> large
>> > max receive window.. and let people complain to find out what their issue
>> > is.
>> 
>> what is your customer base? how important is it to provide faster service 
>> to teh
>> fiber users? Are they transferring ISO images so the difference is 
>> significant
>> to them? or are they downloading web pages where it's the difference 
>> between a
>> half second and a quarter second? remember that you are seeing this on the
>> upload side.
>> 
>> in the long run, fixing the problem at the client side is the best thing to 
>> do,
>> but in the meantime, you sometimes have to work around broken customer 
>> stuff.
>> 
>> > BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
>> > believe it is a linux based (judging from the system log) device.
>> 
>> if it's linux based, it would be interesting to learn what sort of settings 
>> it
>> has. It may be one of the rarer devices that has something in place already 
>> to
>> do active queue management.
>> 
>> David Lang
>> 
>> > cheers,
>> > -Justin
>> >
>> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
>> >
>> >> On Tue, 21 Apr 2015, jb wrote:
>> >>
>> >>  I've discovered something perhaps you guys can explain it better or 
>> shed
>> >>> some light.
>> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
>> >>> tuning.
>> >>>
>> >>> Attached is two pictures of my upload to New York speed test server 
>> with 1
>> >>> stream.
>> >>> It doesn't make any difference if it is 1 stream or 8 streams, the 
>> picture
>> >>> and behaviour remains the same.
>> >>> I am 200ms from new york so it qualifies as a fairly long (but not very
>> >>> fat) pipe.
>> >>>
>> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' 
>> (on
>> >>> the server)
>> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' 
>> (on
>> >>> the server)
>> >>>
>> >>> It actually doesn't matter what that last huge number is, once it goes
>> >>> much
>> >>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
>> >>> upload speed.
>> >>>
>> >>> Now as I understand this setting, it is the tcp receive window that 
>> Linux
>> >>> advertises, and the last number sets the maximum size it can get to 
>> (for
>> >>> one TCP stream).
>> >>>
>> >>> For users with very fast upload speeds, they do not see an ugly bumpy
>> >>> upload graph, it is smooth and sustained.
>> >>> But for the majority of users (like me) with uploads less than 5 to
>> >>> 10mbit,
>> >>> we frequently see the ugly graph.
>> >>>
>> >>> The second tcp_rmem setting is how I have been running the speed test
>> >>> servers.
>> >>>
>> >>> Up to now I thought this was just the distance of the speedtest from 
>> the
>> >>> interface: perhaps the browser was buffering a lot, and didn't feed 
>> back
>> >>> progress but now I realise the bumpy one is actually being influenced 
>> by
>> >>> the server receive window.
>> >>>
>> >>> I guess my question is this: Why does ALLOWING a large receive window
>> >>> appear to encourage problems with upload smoothness??
>> >>>
>> >>> This implies that setting the receive window should be done on a
>> >>> connection
>> >>> by connection basis: small for slow connections, large, for high speed,
>> >>> long distance connections.
>> >>>
>> >>
>> >> This is classic bufferbloat
>> >>
>> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> pause until there is that much data outstanding, or they get a timeout 
>> of a
>> >> packet as a signal to slow down.
>> >>
>> >> and because you have a gig-E link locally, your machine generates 
>> traffic
>> >> very rapidly, until all that data is 'in flight'. but it's really 
>> sitting
>> >> in the buffer of a router trying to get through.
>> >>
>> >> then when a packet times out, the sender slows down a smidge and
>> >> retransmits it. But the old packet is still sitting in a queue, eating
>> >> bandwidth. the packets behind it are also going to timeout and be
>> >> retransmitted before your first retransmitted packet gets through, so 
>> you
>> >> have a large slug of data that's being retransmitted, and the first of 
>> the
>> >> replacement data can't get through until the last of the old (timed out)
>> >> data is transmitted.
>> >>
>> >> then when data starts flowing again, the sender again tries to fill up 
>> the
>> >> window with data in flight.
>> >>
>> >>  In addition, if I cap it to 65k, for reasons of smoothness,
>> >>> that means the bandwidth delay product will keep maximum speed per 
>> upload
>> >>> stream quite low. So a symmetric or gigabit connection is going to need 
>> a
>> >>> ton of parallel streams to see full speed.
>> >>>
>> >>> Most puzzling is why would anything special be required on the Client 
>> -->
>> >>> Server side of the equation
>> >>> but nothing much appears wrong with the Server --> Client side, whether
>> >>> speeds are very low (GPRS) or very high (gigabit).
>> >>>
>> >>
>> >> but what window sizes are these clients advertising?
>> >>
>> >>
>> >>  Note that also I am not yet sure if smoothness == better throughput. I
>> >>> have
>> >>> noticed upload speeds for some people often being under their claimed 
>> sync
>> >>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
>> >>> inefficiency. Maybe.
>> >>>
>> >>
>> >> If you were to do a packet capture on the server side, you would see 
>> that
>> >> you have a bunch of packets that are arriving multiple times, but the 
>> first
>> >> time "does't count" because the replacement is already on the way.
>> >>
>> >> so your overall throughput is lower for two reasons
>> >>
>> >> 1. it's bursty, and there are times when the connection actually is idle
>> >> (after you have a lot of timed out packets, the sender needs to ramp up
>> >> it's speed again)
>> >>
>> >> 2. you are sending some packets multiple times, consuming more total
>> >> bandwidth for the same 'goodput' (effective throughput)
>> >>
>> >> David Lang
>> >>
>> >>
>> >>  help!
>> >>>
>> >>>
>> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
>> >>> wrote:
>> >>>
>> >>>  One thing users understand is slow web access.  Perhaps translating 
>> the
>> >>>> latency measurement into 'a typical web page will take X seconds 
>> longer
>> >>>> to
>> >>>> load', or even stating the impact as 'this latency causes a typical 
>> web
>> >>>> page to load slower, as if your connection was only YY% of the 
>> measured
>> >>>> speed.'
>> >>>>
>> >>>> Simon
>> >>>>
>> >>>> Sent with AquaMail for Android
>> >>>> http://www.aqua-mail.com
>> >>>>
>> >>>>
>> >>>>
>> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>> >>>> wrote:
>> >>>>
>> >>>>>>>> Frequency readouts are probably more accessible to the latter.
>> >>>>
>> >>>>>
>> >>>>>>>>     The frequency domain more accessible to laypersons? I have my
>> >>>>>>>>
>> >>>>>>> doubts ;)
>> >>>>>
>> >>>>>>
>> >>>>>>> Gamers, at least, are familiar with “frames per second” and how 
>> that
>> >>>>>>>
>> >>>>>> corresponds to their monitor’s refresh rate.
>> >>>>>
>> >>>>>>
>> >>>>>>       I am sure they can easily transform back into time domain to 
>> get
>> >>>>>>
>> >>>>> the frame period ;) .  I am partly kidding, I think your idea is 
>> great
>> >>>>> in
>> >>>>> that it is a truly positive value which could lend itself to being 
>> used
>> >>>>> in
>> >>>>> ISP/router manufacturer advertising, and hence might work in the real
>> >>>>> work;
>> >>>>> on the other hand I like to keep data as “raw” as possible (not that
>> >>>>> ^(-1)
>> >>>>> is a transformation worthy of being called data massage).
>> >>>>>
>> >>>>>>
>> >>>>>>  The desirable range of latencies, when converted to Hz, happens to 
>> be
>> >>>>>>>
>> >>>>>> roughly the same as the range of desirable frame rates.
>> >>>>>
>> >>>>>>
>> >>>>>>       Just to play devils advocate, the interesting part is time or
>> >>>>>>
>> >>>>> saving time so seconds or milliseconds are also intuitively
>> >>>>> understandable
>> >>>>> and can be easily added ;)
>> >>>>>
>> >>>>> Such readouts are certainly interesting to people like us.  I have no
>> >>>>> objection to them being reported alongside a frequency readout.  But 
>> I
>> >>>>> think most people are not interested in “time savings” measured in
>> >>>>> milliseconds; they’re much more aware of the minute- and hour-level 
>> time
>> >>>>> savings associated with greater bandwidth.
>> >>>>>
>> >>>>>  - Jonathan Morton
>> >>>>>
>> >>>>> _______________________________________________
>> >>>>> Bloat mailing list
>> >>>>> Bloat@lists.bufferbloat.net
>> >>>>> https://lists.bufferbloat.net/listinfo/bloat
>> >>>>>
>> >>>>>
>> >>>>
>> >>>> _______________________________________________
>> >>>> Bloat mailing list
>> >>>> Bloat@lists.bufferbloat.net
>> >>>> https://lists.bufferbloat.net/listinfo/bloat
>> >>>>
>> >>>>
>> >> _______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >>
>> >>
>> >
>> 
>> 
>> ----------
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>> 
>
>
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 17:28                                     ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 17:45                                       ` MUSCARIELLO Luca IMT/OLN
  2015-04-23  5:27                                         ` MUSCARIELLO Luca IMT/OLN
  2015-04-22 18:22                                       ` Eric Dumazet
  1 sibling, 1 reply; 127+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 17:45 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

I remember a paper by Stefan Savage from about 15 years ago in which he 
substantiates this in clearer terms.
If I find the paper I'll send the reference to the list.


On 04/22/2015 07:28 PM, MUSCARIELLO Luca IMT/OLN wrote:
> Exactly. Two same CC modules competing on the same link, one w pacing 
> the other one w/o pacing.
> The latter will have negative impact on the former in FIFO. Not in FQ 
> (fq_codel to clarify).
> And that's my incentive argument which comes from the flow isolation 
> feature of FQ (_codel). 


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 17:28                                     ` MUSCARIELLO Luca IMT/OLN
  2015-04-22 17:45                                       ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 18:22                                       ` Eric Dumazet
  2015-04-22 18:39                                         ` [Bloat] Pacing --- was " MUSCARIELLO Luca IMT/OLN
  1 sibling, 1 reply; 127+ messages in thread
From: Eric Dumazet @ 2015-04-22 18:22 UTC (permalink / raw)
  To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat

On Wed, 2015-04-22 at 19:28 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> On 04/22/2015 07:16 PM, Eric Dumazet wrote:

> >
> > sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> > queues : Smaller bursts, less self inflicted drops.
> 
> This I understand. But it can't protect from non self inflicted drops.

It really does.

This is why we deployed sch_fq and let our competitors find this later.

> 
> >
> > If flows are competing, this is the role of Congestion Control module,
> > not packet schedulers / AQM.
> 
> Exactly. Two same CC modules competing on the same link, one w pacing 
> the other one w/o pacing.
> The latter will have negative impact on the former in FIFO. Not in FQ 
> (fq_codel to clarify).

Not on modern Linux kernels, thanks to TCP Small Queues.

> And that's my incentive argument which comes from the flow isolation 
> feature of FQ (_codel).

fq_codel is not something you can deploy on backbone routers, for known
reasons.

sch_fq is something you can deploy on hosts, where the codel part is
irrelevant anyway (because of TCP Small Queues in modern Linux kernels).




^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] Pacing --- was DSLReports Speed Test has latency measurement built-in
  2015-04-22 18:22                                       ` Eric Dumazet
@ 2015-04-22 18:39                                         ` MUSCARIELLO Luca IMT/OLN
  2015-04-22 19:05                                           ` Jonathan Morton
  0 siblings, 1 reply; 127+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 18:39 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

This is not clear to me in general.
I can understand that for the initial burst of IW >= 10, pacing is always 
a winning strategy no matter what queuing system you have,
because it reduces the loss probability in that window. Still, fq_codel 
would reduce that probability even more.

But for long flows going far beyond that phase, it isn't clear to me why the 
paced flow is not penalized by the non-paced flow in FIFO.
TCP will start filling the pipe in bursts at some point, and that would hurt.

Now, I forgot how the sch_fq pacing rate is initialized to be effective from 
the very first window.

On 04/22/2015 08:22 PM, Eric Dumazet wrote:
> It really does.
>
> This is why we deployed sch_fq and let our competitors find this later.


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] Pacing --- was DSLReports Speed Test has latency measurement built-in
  2015-04-22 18:39                                         ` [Bloat] Pacing --- was " MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 19:05                                           ` Jonathan Morton
  0 siblings, 0 replies; 127+ messages in thread
From: Jonathan Morton @ 2015-04-22 19:05 UTC (permalink / raw)
  To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat


> On 22 Apr, 2015, at 21:39, MUSCARIELLO Luca IMT/OLN <luca.muscariello@orange.com> wrote:
> 
> Now, I forgot how sch_fq pacing rate is initialized to be effective from the very first window.

IIRC, it’s basically a measurement of the RTT during the handshake, and you then pace to deliver the congestion window during that RTT (or, in practice, some large fraction of it).  Subsequent changes in cwnd and RTT alter the pacing rate accordingly.
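
As a rough sketch of that rule (not the actual kernel code; the scaling
ratios here are assumptions standing in for whatever headroom factor the
real implementation applies):

def pacing_rate(cwnd_packets, mss_bytes, srtt_s, in_slow_start,
                ss_ratio=2.0, ca_ratio=1.2):
    """Bytes/sec needed to deliver one cwnd per smoothed RTT, scaled up
    a little so pacing itself never becomes the limiting factor."""
    base = cwnd_packets * mss_bytes / srtt_s
    return base * (ss_ratio if in_slow_start else ca_ratio)

# Example: IW10, 1448-byte MSS, 200 ms handshake RTT -> about 145 KB/s.
print(pacing_rate(10, 1448, 0.200, in_slow_start=True))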

 - Jonathan Morton


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 17:35                         ` David Lang
@ 2015-04-23  1:37                           ` Simon Barber
  2015-04-24 16:54                             ` David Lang
  0 siblings, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-04-23  1:37 UTC (permalink / raw)
  To: David Lang; +Cc: bloat

Does this happen even with SACK?

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 22, 2015 10:36:11 AM David Lang <david@lang.hm> wrote:

> Data that's received and not used doesn't really matter (a tree falls in the
> woods type of thing).
>
> The head of line blocking can cause a chunk of packets to be retransmitted, 
> even
> though the receiving machine got them the first time. So looking at the 
> received
> bytes gives you a false picture of what is going on.
>
> David Lang
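
A toy sketch of that difference (made-up segment numbers, not a model of any
real stack): bytes seen at the packet level keep climbing, while the bytes an
in-order receiver can release to the application stall until the missing
segment shows up.

received = set()
delivered = 0       # bytes released to the application, in order
SEG = 1448          # assumed segment size

def on_segment(seq):
    """Record arrival of segment `seq`, then release any newly
    contiguous data to the application."""
    global delivered
    received.add(seq)
    while delivered // SEG in received:
        delivered += SEG

# Segments 0-9 arrive, but segment 2 is lost and only retransmitted last.
for seq in [0, 1, 3, 4, 5, 6, 7, 8, 9, 2]:
    on_segment(seq)
    print(f"wire bytes: {len(received) * SEG:6d}   app bytes: {delivered:6d}")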
>
> On Wed, 22 Apr 2015, Simon Barber wrote:
>
> > The bumps are due to packet loss causing head of line blocking. Until the
> > lost packet is retransmitted the receiver can't release any subsequent
> > received packets to the application due to the requirement for in order
> > delivery. If you counted received bytes with a packet counter rather than
> > looking at application level you would be able to illustrate that data was
> > being received smoothly (even though out of order).
> >
> > Simon
> >
> > Sent with AquaMail for Android
> > http://www.aqua-mail.com
> >
> >
> > On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
> >
> >> On Tue, 21 Apr 2015, jb wrote:
> >>
> >> >> the receiver advertizes a large receive window, so the sender doesn't
> >> > pause > until there is that much data outstanding, or they get a timeout
> >> of
> >> > a packet as > a signal to slow down.
> >> >
> >> >> and because you have a gig-E link locally, your machine generates
> >> traffic
> >> > \
> >> >> very rapidly, until all that data is 'in flight'. but it's really
> >> sitting
> >> > in the buffer of
> >> >> router trying to get through.
> >> >
> >> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
> >> > upload graphs by keeping the advertised receive window on the server
> >> capped
> >> > low, however then, paradoxically, there is no more sign of buffer bloat
> >> in
> >> > the result, at least for the upload phase.
> >> >
> >> > (The graph under the upload/download graphs for my results shows almost
> >> no
> >> > latency increase during the upload phase, now).
> >> >
> >> > Or, I can crank it back open again, serving people with fiber connections
> >> > without having to run heaps of streams in parallel -- and then have
> >> people
> >> > complain that the upload result is inefficient, or bumpy, vs what they
> >> > expect.
> >>
> >> well, many people expect it to be bumpy (I've heard ISPs explain to
> >> customers
> >> that when a link is full it is bumpy, that's just the way things work)
> >>
> >> > And I can't offer an option, because the server receive window (I think)
> >> > cannot be set on a case by case basis. You set it for all TCP and forget
> >> it.
> >>
> >> I think you are right
> >>
> >> > I suspect you guys are going to say the server should be left with a
> >> large
> >> > max receive window.. and let people complain to find out what their issue
> >> > is.
> >>
> >> what is your customer base? how important is it to provide faster service
> >> to teh
> >> fiber users? Are they transferring ISO images so the difference is
> >> significant
> >> to them? or are they downloading web pages where it's the difference
> >> between a
> >> half second and a quarter second? remember that you are seeing this on the
> >> upload side.
> >>
> >> in the long run, fixing the problem at the client side is the best thing to
> >> do,
> >> but in the meantime, you sometimes have to work around broken customer
> >> stuff.
> >>
> >> > BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
> >> > believe it is a linux based (judging from the system log) device.
> >>
> >> if it's linux based, it would be interesting to learn what sort of settings
> >> it
> >> has. It may be one of the rarer devices that has something in place already
> >> to
> >> do active queue management.
> >>
> >> David Lang
> >>
> >> > cheers,
> >> > -Justin
> >> >
> >> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
> >> >
> >> >> On Tue, 21 Apr 2015, jb wrote:
> >> >>
> >> >>  I've discovered something perhaps you guys can explain it better or
> >> shed
> >> >>> some light.
> >> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
> >> >>> tuning.
> >> >>>
> >> >>> Attached is two pictures of my upload to New York speed test server
> >> with 1
> >> >>> stream.
> >> >>> It doesn't make any difference if it is 1 stream or 8 streams, the
> >> picture
> >> >>> and behaviour remains the same.
> >> >>> I am 200ms from new york so it qualifies as a fairly long (but not very
> >> >>> fat) pipe.
> >> >>>
> >> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535'
> >> (on
> >> >>> the server)
> >> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864'
> >> (on
> >> >>> the server)
> >> >>>
> >> >>> It actually doesn't matter what that last huge number is, once it goes
> >> >>> much
> >> >>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
> >> >>> upload speed.
> >> >>>
> >> >>> Now as I understand this setting, it is the tcp receive window that
> >> Linux
> >> >>> advertises, and the last number sets the maximum size it can get to
> >> (for
> >> >>> one TCP stream).
> >> >>>
> >> >>> For users with very fast upload speeds, they do not see an ugly bumpy
> >> >>> upload graph, it is smooth and sustained.
> >> >>> But for the majority of users (like me) with uploads less than 5 to
> >> >>> 10mbit,
> >> >>> we frequently see the ugly graph.
> >> >>>
> >> >>> The second tcp_rmem setting is how I have been running the speed test
> >> >>> servers.
> >> >>>
> >> >>> Up to now I thought this was just the distance of the speedtest from
> >> the
> >> >>> interface: perhaps the browser was buffering a lot, and didn't feed
> >> back
> >> >>> progress but now I realise the bumpy one is actually being influenced
> >> by
> >> >>> the server receive window.
> >> >>>
> >> >>> I guess my question is this: Why does ALLOWING a large receive window
> >> >>> appear to encourage problems with upload smoothness??
> >> >>>
> >> >>> This implies that setting the receive window should be done on a
> >> >>> connection
> >> >>> by connection basis: small for slow connections, large, for high speed,
> >> >>> long distance connections.
> >> >>>
> >> >>
> >> >> This is classic bufferbloat
> >> >>
> >> >> the receiver advertizes a large receive window, so the sender doesn't
> >> >> pause until there is that much data outstanding, or they get a timeout
> >> of a
> >> >> packet as a signal to slow down.
> >> >>
> >> >> and because you have a gig-E link locally, your machine generates
> >> traffic
> >> >> very rapidly, until all that data is 'in flight'. but it's really
> >> sitting
> >> >> in the buffer of a router trying to get through.
> >> >>
> >> >> then when a packet times out, the sender slows down a smidge and
> >> >> retransmits it. But the old packet is still sitting in a queue, eating
> >> >> bandwidth. the packets behind it are also going to timeout and be
> >> >> retransmitted before your first retransmitted packet gets through, so
> >> you
> >> >> have a large slug of data that's being retransmitted, and the first of
> >> the
> >> >> replacement data can't get through until the last of the old (timed out)
> >> >> data is transmitted.
> >> >>
> >> >> then when data starts flowing again, the sender again tries to fill up
> >> the
> >> >> window with data in flight.
> >> >>
> >> >>  In addition, if I cap it to 65k, for reasons of smoothness,
> >> >>> that means the bandwidth delay product will keep maximum speed per
> >> upload
> >> >>> stream quite low. So a symmetric or gigabit connection is going to need
> >> a
> >> >>> ton of parallel streams to see full speed.
> >> >>>
> >> >>> Most puzzling is why would anything special be required on the Client
> >> -->
> >> >>> Server side of the equation
> >> >>> but nothing much appears wrong with the Server --> Client side, whether
> >> >>> speeds are very low (GPRS) or very high (gigabit).
> >> >>>
> >> >>
> >> >> but what window sizes are these clients advertising?
> >> >>
> >> >>
> >> >>  Note that also I am not yet sure if smoothness == better throughput. I
> >> >>> have
> >> >>> noticed upload speeds for some people often being under their claimed
> >> sync
> >> >>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
> >> >>> inefficiency. Maybe.
> >> >>>
> >> >>
> >> >> If you were to do a packet capture on the server side, you would see
> >> that
> >> >> you have a bunch of packets that are arriving multiple times, but the
> >> first
> >> >> time "does't count" because the replacement is already on the way.
> >> >>
> >> >> so your overall throughput is lower for two reasons
> >> >>
> >> >> 1. it's bursty, and there are times when the connection actually is idle
> >> >> (after you have a lot of timed out packets, the sender needs to ramp up
> >> >> it's speed again)
> >> >>
> >> >> 2. you are sending some packets multiple times, consuming more total
> >> >> bandwidth for the same 'goodput' (effective throughput)
> >> >>
> >> >> David Lang
> >> >>
> >> >>
> >> >>  help!
> >> >>>
> >> >>>
> >> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
> >> >>> wrote:
> >> >>>
> >> >>>  One thing users understand is slow web access.  Perhaps translating
> >> the
> >> >>>> latency measurement into 'a typical web page will take X seconds
> >> longer
> >> >>>> to
> >> >>>> load', or even stating the impact as 'this latency causes a typical
> >> web
> >> >>>> page to load slower, as if your connection was only YY% of the
> >> measured
> >> >>>> speed.'
> >> >>>>
> >> >>>> Simon
> >> >>>>
> >> >>>> Sent with AquaMail for Android
> >> >>>> http://www.aqua-mail.com
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
> >> >>>> wrote:
> >> >>>>
> >> >>>>>>>> Frequency readouts are probably more accessible to the latter.
> >> >>>>
> >> >>>>>
> >> >>>>>>>>     The frequency domain more accessible to laypersons? I have my
> >> >>>>>>>>
> >> >>>>>>> doubts ;)
> >> >>>>>
> >> >>>>>>
> >> >>>>>>> Gamers, at least, are familiar with “frames per second” and how
> >> that
> >> >>>>>>>
> >> >>>>>> corresponds to their monitor’s refresh rate.
> >> >>>>>
> >> >>>>>>
> >> >>>>>>       I am sure they can easily transform back into time domain to
> >> get
> >> >>>>>>
> >> >>>>> the frame period ;) .  I am partly kidding, I think your idea is
> >> great
> >> >>>>> in
> >> >>>>> that it is a truly positive value which could lend itself to being
> >> used
> >> >>>>> in
> >> >>>>> ISP/router manufacturer advertising, and hence might work in the real
> >> >>>>> work;
> >> >>>>> on the other hand I like to keep data as “raw” as possible (not that
> >> >>>>> ^(-1)
> >> >>>>> is a transformation worthy of being called data massage).
> >> >>>>>
> >> >>>>>>
> >> >>>>>>  The desirable range of latencies, when converted to Hz, happens to
> >> be
> >> >>>>>>>
> >> >>>>>> roughly the same as the range of desirable frame rates.
> >> >>>>>
> >> >>>>>>
> >> >>>>>>       Just to play devils advocate, the interesting part is time or
> >> >>>>>>
> >> >>>>> saving time so seconds or milliseconds are also intuitively
> >> >>>>> understandable
> >> >>>>> and can be easily added ;)
> >> >>>>>
> >> >>>>> Such readouts are certainly interesting to people like us.  I have no
> >> >>>>> objection to them being reported alongside a frequency readout.  But
> >> I
> >> >>>>> think most people are not interested in “time savings” measured in
> >> >>>>> milliseconds; they’re much more aware of the minute- and hour-level
> >> time
> >> >>>>> savings associated with greater bandwidth.
> >> >>>>>
> >> >>>>>  - Jonathan Morton
> >> >>>>>
> >> >>>>> _______________________________________________
> >> >>>>> Bloat mailing list
> >> >>>>> Bloat@lists.bufferbloat.net
> >> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >> >>>>>
> >> >>>>>
> >> >>>>
> >> >>>> _______________________________________________
> >> >>>> Bloat mailing list
> >> >>>> Bloat@lists.bufferbloat.net
> >> >>>> https://lists.bufferbloat.net/listinfo/bloat
> >> >>>>
> >> >>>>
> >> >> _______________________________________________
> >> >> Bloat mailing list
> >> >> Bloat@lists.bufferbloat.net
> >> >> https://lists.bufferbloat.net/listinfo/bloat
> >> >>
> >> >>
> >> >
> >>
> >>
> >> ----------
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >
> >
> >



^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-22 17:45                                       ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-23  5:27                                         ` MUSCARIELLO Luca IMT/OLN
  2015-04-23  6:48                                           ` Eric Dumazet
  0 siblings, 1 reply; 127+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-23  5:27 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

One reference with the PDF publicly available. On the website there are 
various papers on this topic. Others might be more relevant, but I did not 
check all of them.

Understanding the Performance of TCP Pacing,
Amit Aggarwal, Stefan Savage, and Tom Anderson,
IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.

http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf

On 04/22/2015 07:45 PM, MUSCARIELLO Luca IMT/OLN wrote:
> I remember a paper by Stefan Savage of about 15 years ago where he 
> substantiates this in clearer terms.
> If I find the paper I'll send the reference to the list.
>
>
> On 04/22/2015 07:28 PM, MUSCARIELLO Luca IMT/OLN wrote:
>> Exactly. Two same CC modules competing on the same link, one w pacing 
>> the other one w/o pacing.
>> The latter will have negative impact on the former in FIFO. Not in FQ 
>> (fq_codel to clarify).
>> And that's my incentive argument which comes from the flow isolation 
>> feature of FQ (_codel). 
>


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23  5:27                                         ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-23  6:48                                           ` Eric Dumazet
       [not found]                                             ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
                                                               ` (2 more replies)
  0 siblings, 3 replies; 127+ messages in thread
From: Eric Dumazet @ 2015-04-23  6:48 UTC (permalink / raw)
  To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat

Wait, this is a 15-year-old experiment using Reno and a single test
bed, using the ns simulator.

Naive TCP pacing implementations were tried, and probably failed.

Pacing individual packets is quite bad; this is the first lesson one
learns when implementing TCP pacing, especially if you try to drive a
40Gbps NIC.

https://lwn.net/Articles/564978/

Also note we use usec-based RTT samples, and nanosecond high-resolution
timers in fq. I suspect the ns simulator experiment had sync issues
because of using low-resolution timers or simulation artifacts, without
any jitter source.

Billions of flows are now 'paced', but keep in mind most packets are not
paced. We do not pace in slow start, and we do not pace when TCP is ACK
clocked.

Only when someone sets SO_MAX_PACING_RATE below the TCP rate can we
eventually have all packets being paced, using TSO 'clusters' for TCP.
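
For reference, a minimal sketch of what that knob looks like from user space
(assuming Linux, where SO_MAX_PACING_RATE is socket option 47; the 1 MB/s
cap is just an example value):

import socket

# SO_MAX_PACING_RATE is 47 in Linux asm-generic/socket.h; fall back to
# that value if the Python socket module doesn't expose the constant.
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Cap this flow at 1 MB/s. With the fq qdisc on the egress interface,
# every packet of the flow is then paced at or below this rate.
s.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, 1000000)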



On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> one reference with pdf publicly available. On the website there are 
> various papers
> on this topic. Others might me more relevant but I did not check all of 
> them.

> Understanding the Performance of TCP Pacing,
> Amit Aggarwal, Stefan Savage, and Tom Anderson,
> IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
> 
> http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf



^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
       [not found]                                             ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
@ 2015-04-23 10:08                                               ` jb
  2015-04-24  8:18                                                 ` Sebastian Moeller
  0 siblings, 1 reply; 127+ messages in thread
From: jb @ 2015-04-23 10:08 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 2596 bytes --]

This is how I've changed the graph of latency under load, based on the input
from you guys.

Taken away the log axis.

Put in two bands. Yellow starts at double the idle latency and goes to 4x
the idle latency; red starts there and goes to the top. No red shows if no
bars reach into it, and no yellow band shows if no bars get into that zone.

Is it more descriptive?
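
In code terms the banding is roughly this (a sketch of the rule just
described, not the actual site code):

def band(sample_ms, idle_ms):
    """Colour band for one latency-under-load sample, relative to idle."""
    if sample_ms < 2 * idle_ms:
        return "none"      # below the yellow band
    elif sample_ms < 4 * idle_ms:
        return "yellow"    # between 2x and 4x the idle latency
    else:
        return "red"       # 4x idle and above

print(band(120, 50))   # -> "yellow"
print(band(220, 50))   # -> "red"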

(sorry to the list moderator, gmail keeps sending under the wrong email and
I get a moderator message)

On Thu, Apr 23, 2015 at 8:05 PM, jb <justinbeech@gmail.com> wrote:

> This is how I've changed the graph of latency under load per input from
> you guys.
>
> Taken away log axis.
>
> Put in two bands. Yellow starts at double the idle latency, and goes to 4x
> the idle latency
> red starts there, and goes to the top. No red shows if no bars reach into
> it.
> And no yellow band shows if no bars get into that zone.
>
> Is it more descriptive?
>
>
> On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet@gmail.com>
> wrote:
>
>> Wait, this is a 15 years old experiment using Reno and a single test
>> bed, using ns simulator.
>>
>> Naive TCP pacing implementations were tried, and probably failed.
>>
>> Pacing individual packet is quite bad, this is the first lesson one
>> learns when implementing TCP pacing, especially if you try to drive a
>> 40Gbps NIC.
>>
>> https://lwn.net/Articles/564978/
>>
>> Also note we use usec based rtt samples, and nanosec high resolution
>> timers in fq. I suspect the ns simulator experiment had sync issues
>> because of using low resolution timers or simulation artifact, without
>> any jitter source.
>>
>> Billions of flows are now 'paced', but keep in mind most packets are not
>> paced. We do not pace in slow start, and we do not pace when tcp is ACK
>> clocked.
>>
>> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
>> eventually have all packets being paced, using TSO 'clusters' for TCP.
>>
>>
>>
>> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
>> > one reference with pdf publicly available. On the website there are
>> > various papers
>> > on this topic. Others might me more relevant but I did not check all of
>> > them.
>>
>> > Understanding the Performance of TCP Pacing,
>> > Amit Aggarwal, Stefan Savage, and Tom Anderson,
>> > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
>> >
>> > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>

[-- Attachment #2: Type: text/html, Size: 3958 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23  6:48                                           ` Eric Dumazet
       [not found]                                             ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
@ 2015-04-23 10:17                                             ` renaud sallantin
  2015-04-23 14:10                                               ` Eric Dumazet
  2015-04-23 13:17                                             ` MUSCARIELLO Luca IMT/OLN
  2 siblings, 1 reply; 127+ messages in thread
From: renaud sallantin @ 2015-04-23 10:17 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 2472 bytes --]

Hi,

2015-04-23 8:48 GMT+02:00 Eric Dumazet <eric.dumazet@gmail.com>:

> Wait, this is a 15 years old experiment using Reno and a single test
> bed, using ns simulator.
>
> Naive TCP pacing implementations were tried, and probably failed.
>
> Pacing individual packet is quite bad, this is the first lesson one
> learns when implementing TCP pacing, especially if you try to drive a
> 40Gbps NIC.
>
> https://lwn.net/Articles/564978/
>
> Also note we use usec based rtt samples, and nanosec high resolution
> timers in fq. I suspect the ns simulator experiment had sync issues
> because of using low resolution timers or simulation artifact, without
> any jitter source.
>
> Billions of flows are now 'paced', but keep in mind most packets are not
> paced. We do not pace in slow start, and we do not pace when tcp is ACK
> clocked.
>

We did extensive work on pacing in slow start, notably during a large IW
transmission.
The benefits are really outstanding! Our last implementation is just a slight
modification of FQ/pacing:

   - Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.; Beylot,
     A.-L., "Initial spreading: A fast Start-Up TCP mechanism," Local
     Computer Networks (LCN), 2013 IEEE 38th Conference on, pp. 492-499,
     21-24 Oct. 2013

   - Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.; Beylot,
     A.-L., "A TCP model for short-lived flows to validate initial
     spreading," Local Computer Networks (LCN), 2014 IEEE 39th Conference on,
     pp. 177-184, 8-11 Sept. 2014

   - draft-sallantin-tcpm-initial-spreading, safe increase of the TCP's IW


Did you consider using it or something similar?



>
> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> eventually have all packets being paced, using TSO 'clusters' for TCP.
>
>
>
> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > one reference with pdf publicly available. On the website there are
> > various papers
> > on this topic. Others might me more relevant but I did not check all of
> > them.
>
> > Understanding the Performance of TCP Pacing,
> > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
> >
> > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 3762 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23  6:48                                           ` Eric Dumazet
       [not found]                                             ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
  2015-04-23 10:17                                             ` renaud sallantin
@ 2015-04-23 13:17                                             ` MUSCARIELLO Luca IMT/OLN
  2 siblings, 0 replies; 127+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-23 13:17 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

On 04/23/2015 08:48 AM, Eric Dumazet wrote:
> Wait, this is a 15 years old experiment using Reno and a single test
> bed, using ns simulator.

From that paper to now, several other studies have been made
and have confirmed those first results. I did not check all the literature, 
though.

>
> Naive TCP pacing implementations were tried, and probably failed.

Except for the scaling levels that sch_fq is being pushed to nowadays,
the concept was well analyzed in the past.

>
> Pacing individual packet is quite bad, this is the first lesson one
> learns when implementing TCP pacing, especially if you try to drive a
> 40Gbps NIC.

this is the main difference, I think, between 2000 and 2015, and the main 
source of misunderstanding.

>
> https://lwn.net/Articles/564978/

is there any other documentation other than this article?

>
> Also note we use usec based rtt samples, and nanosec high resolution
> timers in fq. I suspect the ns simulator experiment had sync issues
> because of using low resolution timers or simulation artifact, without
> any jitter source.

I suspect that the main difference was that all packets were paced.
The first experiments were run at very low rates compared to now,
so timer resolution was not supposed to be a problem.

> Billions of flows are now 'paced', but keep in mind most packets are not
> paced. We do not pace in slow start, and we do not pace when tcp is ACK
> clocked.
All right. I think this clarifies a lot for me. I did not find this 
information anywhere, though.
I guess I need to go through the internals to find all the active 
features and possible working configurations.

In short, leaving aside the slow-start and ACK-clocked phases, the mechanism 
avoids the cwnd-sized line-rate burst of packets, which has a high 
probability of experiencing a big loss somewhere along the path, and maybe 
at the local NIC itself, not necessarily at the user access bottleneck.

This is something that happens these days because of hardware-assisted 
framing and very high speed NICs like the ones you mention. But 15 years 
ago none of those things existed and TCP did not push such huge bursts.
In some cases I suspect no buffer today could accommodate such bursts 
and the loss would be almost certain.
Then I wonder why hardware-assisted framing implementations did not take 
that into account.

I personally don't have the equipment to test in such cases but I see 
the phenomenon.

Still, I believe that Savage's approach would have the merit of producing 
very small queues in the network (and all the benefits that come from 
that), but it would be fragile, as reported, and would require fq(_codel) 
in the network, at least in the access, to create incentives to do that 
pacing.



>
> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> eventually have all packets being paced, using TSO 'clusters' for TCP.
>
>
>
> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
>> one reference with pdf publicly available. On the website there are
>> various papers
>> on this topic. Others might me more relevant but I did not check all of
>> them.
>> Understanding the Performance of TCP Pacing,
>> Amit Aggarwal, Stefan Savage, and Tom Anderson,
>> IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
>>
>> http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
>
> .
>


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23 10:17                                             ` renaud sallantin
@ 2015-04-23 14:10                                               ` Eric Dumazet
  2015-04-23 14:38                                                 ` renaud sallantin
  0 siblings, 1 reply; 127+ messages in thread
From: Eric Dumazet @ 2015-04-23 14:10 UTC (permalink / raw)
  To: renaud sallantin; +Cc: bloat

On Thu, 2015-04-23 at 12:17 +0200, renaud sallantin wrote:
> Hi, 

> ...
> 
> We did an extensive work on the Pacing in slow start and notably
> during a large IW transmission. 
> 
> Benefits are really outstanding! Our last implementation is just a
> slight modification of FQ/pacing 
>       * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
>         Beylot, A.-L., "Initial spreading: A fast Start-Up TCP
>         mechanism," Local Computer Networks (LCN), 2013 IEEE 38th
>         Conference on , vol., no., pp.492,499, 21-24 Oct. 2013
>       * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
>         Beylot, A.-L., "A TCP model for short-lived flows to validate
>         initial spreading," Local Computer Networks (LCN), 2014 IEEE
>         39th Conference on , vol., no., pp.177,184, 8-11 Sept. 2014
>         draft-sallantin-tcpm-initial-spreading, safe increase of the TCP's IW
> Did you consider using it or something similar? 


Absolutely. We play a lot with these parameters, but the real work is on
the CC front now that we have correct host queues and packet scheduler
control.

Drops are no longer directly correlated to congestion on modern
networks; cubic has to be replaced.





^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23 14:10                                               ` Eric Dumazet
@ 2015-04-23 14:38                                                 ` renaud sallantin
  2015-04-23 15:52                                                   ` Jonathan Morton
  0 siblings, 1 reply; 127+ messages in thread
From: renaud sallantin @ 2015-04-23 14:38 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1820 bytes --]

On 23 Apr 2015 at 16:10, "Eric Dumazet" <eric.dumazet@gmail.com> wrote:
>
> On Thu, 2015-04-23 at 12:17 +0200, renaud sallantin wrote:
> > Hi,
>
> > ...
> >
> > We did an extensive work on the Pacing in slow start and notably
> > during a large IW transmission.
> >
> > Benefits are really outstanding! Our last implementation is just a
> > slight modification of FQ/pacing
> >       * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
> >         Beylot, A.-L., "Initial spreading: A fast Start-Up TCP
> >         mechanism," Local Computer Networks (LCN), 2013 IEEE 38th
> >         Conference on , vol., no., pp.492,499, 21-24 Oct. 2013
> >       * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
> >         Beylot, A.-L., "A TCP model for short-lived flows to validate
> >         initial spreading," Local Computer Networks (LCN), 2014 IEEE
> >         39th Conference on , vol., no., pp.177,184, 8-11 Sept. 2014
> >         draft-sallantin-tcpm-initial-spreading, safe increase of the
TCP's IW
> > Did you consider using it or something similar?
>
>
> Absolutely. We play a lot with these parameters, but the real work is on
> CC front now we have correct host queues and packet scheduler control.
>

Do you really consider slow start efficient?
I may be missing something, but RFC 6928 was pushed by Google because there
was a real need to update it. Results are very good, except when one
bottleneck link is shared by several connections. We demonstrated that an
appropriate spreading of the IW solves this and improves RFC 6928
performance.

> Drops are no longer directly correlated to congestion on modern
> networks, cubic has to be replaced.
>
>
>

By curiosity, what is now responsible for the drops if not the congestion?

[-- Attachment #2: Type: text/html, Size: 2288 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23 14:38                                                 ` renaud sallantin
@ 2015-04-23 15:52                                                   ` Jonathan Morton
  2015-04-23 16:00                                                     ` Simon Barber
  0 siblings, 1 reply; 127+ messages in thread
From: Jonathan Morton @ 2015-04-23 15:52 UTC (permalink / raw)
  To: renaud sallantin; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 603 bytes --]

> By curiosity, what is now responsible for the drops if not the congestion?

I think the point was not that observed drops are not caused by congestion,
but that congestion doesn't reliably cause drops. Correlation is not
causation.

There are also cases when drops are in fact caused by something other than
congestion, including faulty ADSL phone lines. Some local loop providers
have been known to explicitly consider several percent of packet loss due
to line conditions as "not a fault", to the consternation of the actual ISP
who was trying to provide a decent service over it.

- Jonathan Morton

[-- Attachment #2: Type: text/html, Size: 696 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23 15:52                                                   ` Jonathan Morton
@ 2015-04-23 16:00                                                     ` Simon Barber
  0 siblings, 0 replies; 127+ messages in thread
From: Simon Barber @ 2015-04-23 16:00 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 1094 bytes --]

Same thing applies for WiFi - oftentimes WiFi with poor signal levels 
will cause drops, without congestion. This is something I'm working to 
fix from the WiFi / L2 side. What are the solutions in L3? Some kind of 
hybrid delay & drop based CC?

Simon

On 4/23/2015 8:52 AM, Jonathan Morton wrote:
>
> > By curiosity, what is now responsible for the drops if not the 
> congestion?
>
> I think the point was not that observed drops are not caused by 
> congestion, but that congestion doesn't reliably cause drops. 
> Correlation is not causation.
>
> There are also cases when drops are in fact caused by something other 
> than congestion, including faulty ADSL phone lines. Some local loop 
> providers have been known to explicitly consider several percent of 
> packet loss due to line conditions as "not a fault", to the 
> consternation of the actual ISP who was trying to provide a decent 
> device over it.
>
> - Jonathan Morton
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


[-- Attachment #2: Type: text/html, Size: 1896 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23 10:08                                               ` jb
@ 2015-04-24  8:18                                                 ` Sebastian Moeller
  2015-04-24  8:29                                                   ` Toke Høiland-Jørgensen
  2015-04-25  2:24                                                   ` Simon Barber
  0 siblings, 2 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-24  8:18 UTC (permalink / raw)
  To: jb; +Cc: bloat

Hi jb,

this looks great!

On Apr 23, 2015, at 12:08 , jb <justin@dslr.net> wrote:

> This is how I've changed the graph of latency under load per input from you guys.
> 
> Taken away log axis.
> 
> Put in two bands. Yellow starts at double the idle latency, and goes to 4x the idle latency
> red starts there, and goes to the top. No red shows if no bars reach into it.
> And no yellow band shows if no bars get into that zone.
> 
> Is it more descriptive?

	Mmmh, so the delay we see consists of the delay caused by the distance to the server and the delay of the access technology, meaning the unloaded latency can range from a few milliseconds to several hundred milliseconds (for the poor sods behind a satellite link…). Any further latency developing under load should be independent of distance and access technology, as those are already factored into the base latency. In both extreme cases multiples of the base latency do not seem to be relevant measures of bloat, so I would like to argue that the yellow and the red zones should be based on fixed increments and not on a ratio of the base latency. This is relevant because people on a slow/high-access-latency link have a much smaller tolerance for additional latency than people on a fast link if certain latency guarantees need to be met, and thresholds as a function of base latency do not reflect this.
	Now ideally the colors should not be based on the base latency at all but should sit at fixed total values, like 200 to 300 ms in yellow for VoIP (according to ITU-T G.114, a one-way delay of <= 150 ms is recommended for VoIP), and say 400 to 600 ms in orange (400 ms is the upper bound for good VoIP and 600 ms for decent VoIP; according to ITU-T G.114, users are very satisfied up to 200 ms one-way delay and satisfied up to roughly 300 ms), so anything above 600 ms in deep red?
	I know this is not perfect and the numbers will probably require severe "bike-shedding" (and I am not sure that ITU-T G.114 really is a good source for the thresholds), but to get a discussion started here are the numbers again:
   0 to  100 ms      no color
 101 to  200 ms      green
 201 to  400 ms      yellow
 401 to  600 ms      orange
 601 to 1000 ms      red
1001 ms to infinity  purple (or better, marina red?)

Best Regards
	Sebastian


> 
> (sorry to the list moderator, gmail keeps sending under the wrong email and I get a moderator message)
> 
> On Thu, Apr 23, 2015 at 8:05 PM, jb <justinbeech@gmail.com> wrote:
> This is how I've changed the graph of latency under load per input from you guys.
> 
> Taken away log axis.
> 
> Put in two bands. Yellow starts at double the idle latency, and goes to 4x the idle latency
> red starts there, and goes to the top. No red shows if no bars reach into it.
> And no yellow band shows if no bars get into that zone.
> 
> Is it more descriptive?
> 
> 
> On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> Wait, this is a 15 years old experiment using Reno and a single test
> bed, using ns simulator.
> 
> Naive TCP pacing implementations were tried, and probably failed.
> 
> Pacing individual packet is quite bad, this is the first lesson one
> learns when implementing TCP pacing, especially if you try to drive a
> 40Gbps NIC.
> 
> https://lwn.net/Articles/564978/
> 
> Also note we use usec based rtt samples, and nanosec high resolution
> timers in fq. I suspect the ns simulator experiment had sync issues
> because of using low resolution timers or simulation artifact, without
> any jitter source.
> 
> Billions of flows are now 'paced', but keep in mind most packets are not
> paced. We do not pace in slow start, and we do not pace when tcp is ACK
> clocked.
> 
> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> eventually have all packets being paced, using TSO 'clusters' for TCP.
> 
> 
> 
> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > one reference with pdf publicly available. On the website there are
> > various papers
> > on this topic. Others might me more relevant but I did not check all of
> > them.
> 
> > Understanding the Performance of TCP Pacing,
> > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
> >
> > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
> 
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  8:18                                                 ` Sebastian Moeller
@ 2015-04-24  8:29                                                   ` Toke Høiland-Jørgensen
  2015-04-24  8:55                                                     ` Sebastian Moeller
  2015-04-24 15:20                                                     ` Bill Ver Steeg (versteb)
  2015-04-25  2:24                                                   ` Simon Barber
  1 sibling, 2 replies; 127+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-24  8:29 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

Sebastian Moeller <moeller0@gmx.de> writes:

> I know this is not perfect and the numbers will probably require
> severe "bike-shedding”

Since you're literally asking for it... ;)


In this case we're talking about *added* latency. So the ambition should
be zero, or so close to it as to be indiscernible. Furthermore, we know
that proper application of a good queue management algorithm can keep it
pretty close to this. Certainly under 20-30 ms of added latency. So from
this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.

The other increments I have fewer opinions about, but 100 ms does seem to
be a nice round number, so do yellow from 30-100 ms, then start with the
reds somewhere above that, and range up into the deep red / purple /
black with skulls and fiery death as we go nearer to and above one second?


I very much think that raising people's expectations and being quite
ambitious about what to expect is an important part of this. Of course
the base latency is going to vary, but the added latency shouldn't. And
since we have the technology to make sure it doesn't, calling out bad
results when we see them is reasonable!

-Toke

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  8:29                                                   ` Toke Høiland-Jørgensen
@ 2015-04-24  8:55                                                     ` Sebastian Moeller
  2015-04-24  9:02                                                       ` Toke Høiland-Jørgensen
                                                                         ` (2 more replies)
  2015-04-24 15:20                                                     ` Bill Ver Steeg (versteb)
  1 sibling, 3 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-24  8:55 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: bloat

Hi Toke,

On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> Sebastian Moeller <moeller0@gmx.de> writes:
> 
>> I know this is not perfect and the numbers will probably require
>> severe "bike-shedding”
> 
> Since you're literally asking for it... ;)
> 
> 
> In this case we're talking about *added* latency. So the ambition should
> be zero, or so close to it as to be indiscernible. Furthermore, we know
> that proper application of a good queue management algorithm can keep it
> pretty close to this. Certainly under 20-30 ms of added latency. So from
> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.

	Oh, I can get behind that easily, I just thought basing the limits on externally relevant total latency thresholds would directly tell the user which applications might run well on his link. Sure, this means that people on a satellite link will most likely miss the acceptable VoIP threshold on their base latency alone, but guess what, telephony via satellite leaves something to be desired. That said, if the alternative is no telephony I would take 1 second of one-way delay any day ;). 
	What I liked about fixed thresholds is that the test would give a good indication of what kinds of uses are going to work well on the link under load, given that during load both base and induced latency come into play. I agree that 300 ms as a first threshold is rather unambiguous though (and I am certain that remote X11 will require a massively lower RTT, unless one likes to think of remote desktop as an oil tanker simulator ;) )

> 
> The other increments I have less opinions about, but 100 ms does seem to
> be a nice round number, so do yellow from 30-100 ms, then start with the
> reds somewhere above that, and range up into the deep red / purple /
> black with skulls and fiery death as we go nearer and above one second?
> 
> 
> I very much think that raising peoples expectations and being quite
> ambitious about what to expect is an important part of this. Of course
> the base latency is going to vary, but the added latency shouldn't. And
> sine we have the technology to make sure it doesn't, calling out bad
> results when we see them is reasonable!

	Okay so this would turn into:

base latency to base latency + 30 ms:				green
base latency + 31 ms to base latency + 100 ms:		yellow
base latency + 101 ms to base latency + 200 ms:		orange?
base latency + 201 ms to base latency + 500 ms:		red
base latency + 501 ms to base latency + 1000 ms:	fire
base latency + 1001 ms to infinity:					fire & brimstone

correct?
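
Or, to make the proposal concrete, a little sketch (labels taken from the
table above):

def grade(load_ms, base_ms):
    """Grade a latency-under-load sample by the latency added over base."""
    added = load_ms - base_ms
    for limit, label in [(30, "green"), (100, "yellow"), (200, "orange"),
                         (500, "red"), (1000, "fire")]:
        if added <= limit:
            return label
    return "fire & brimstone"

print(grade(75, 40))    # 35 ms added  -> "yellow"
print(grade(640, 40))   # 600 ms added -> "fire"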


> 
> -Toke


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  8:55                                                     ` Sebastian Moeller
@ 2015-04-24  9:02                                                       ` Toke Høiland-Jørgensen
  2015-04-24 13:32                                                         ` jb
  2015-04-25  3:15                                                       ` Simon Barber
  2015-04-25  3:23                                                       ` Simon Barber
  2 siblings, 1 reply; 127+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-24  9:02 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

Sebastian Moeller <moeller0@gmx.de> writes:

> 	Oh, I can get behind that easily, I just thought basing the
> limits on externally relevant total latency thresholds would directly
> tell the user which applications might run well on his link. Sure this
> means that people on a satellite link most likely will miss out the
> acceptable voip threshold by their base-latency alone, but guess what
> telephony via satellite leaves something to be desired. That said if
> the alternative is no telephony I would take 1 second one-way delay
> any day ;).

Well I agree that this is relevant information in relation to the total
link latency. But keeping the issues separate has value, I think,
because you can potentially fix your bufferbloat, but increasing the
speed of light to get better base latency on your satellite link is
probably out of scope for now (or at least for a couple of hundred more
years: http://theinfosphere.org/Speed_of_light).

> 	What I liked about fixed thresholds is that the test would give
> a good indication what kind of uses are going to work well on the link
> under load, given that during load both base and induced latency come
> into play. I agree that 300ms as first threshold is rather unambiguous
> though (and I am certain that remote X11 will require a massively
> lower RTT unless one likes to think of remote desktop as an oil tanker
> simulator ;) )

Oh, I'm all for fixed thresholds! As I said, the goal should be (close
to) zero added latency...

> 	Okay so this would turn into:
>
> base latency to base latency + 30 ms:				green
> base latency + 31 ms to base latency + 100 ms:		yellow
> base latency + 101 ms to base latency + 200 ms:		orange?
> base latency + 201 ms to base latency + 500 ms:		red
> base latency + 501 ms to base latency + 1000 ms:	fire
> base latency + 1001 ms to infinity:					fire & brimstone
>
> correct?

Yup, something like that :)

-Toke

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  9:02                                                       ` Toke Høiland-Jørgensen
@ 2015-04-24 13:32                                                         ` jb
  2015-04-24 13:58                                                           ` Toke Høiland-Jørgensen
  2015-04-24 16:51                                                           ` David Lang
  0 siblings, 2 replies; 127+ messages in thread
From: jb @ 2015-04-24 13:32 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 3523 bytes --]

Don't you want to accuse the size of the buffer, rather than the latency?

For example, say someone has some hardware and their line is fairly slow.
It might be RED on the graph because the buffer is quite big relative to the
bandwidth-delay product of the line. A test is telling them they have
bloated buffers.

Then they upgrade to a much faster product, and suddenly that buffer is
fairly small, the incremental latency is low, and it no longer shows RED
on a test.

What changed? The hardware didn't change. Just the speed changed. So the
test is saying that for your particular speed, the buffers are too big.
But for a higher speed, they may be quite OK.

If you add 100ms to a 1 gigabit product the buffer has to be what, ~10MB?
But adding 100ms to my feeble line is quite easy: the Billion router can
have a buffer of just 100KB and it is too high. Yet that same Billion in
front of a gigabit modem is only going to add at most 1ms to latency and
nobody would complain.

Ok, I think I talked myself around in a complete circle: a buffer is only
bad IF it increases latency under load, not because of its size. It might
explain why these fiber connection tests don't show much latency change,
because their buffers are really inconsequential at those higher speeds?
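
The arithmetic behind that intuition, as a quick sketch (the same buffer
produces wildly different added delay depending on the rate it drains at):

def queueing_delay_ms(buffer_bytes, rate_bps):
    """Added delay when a FIFO of `buffer_bytes` stays full at `rate_bps`."""
    return buffer_bytes * 8 / rate_bps * 1000

print(queueing_delay_ms(100000, 5e6))      # 100 KB at 5 Mbit/s      -> 160 ms
print(queueing_delay_ms(100000, 1e9))      # same buffer at 1 Gbit/s -> 0.8 ms
print(queueing_delay_ms(12500000, 1e9))    # ~12.5 MB is what it takes to
                                           # add 100 ms at 1 Gbit/s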


On Fri, Apr 24, 2015 at 7:02 PM, Toke Høiland-Jørgensen <toke@toke.dk>
wrote:

> Sebastian Moeller <moeller0@gmx.de> writes:
>
> >       Oh, I can get behind that easily, I just thought basing the
> > limits on externally relevant total latency thresholds would directly
> > tell the user which applications might run well on his link. Sure this
> > means that people on a satellite link most likely will miss out the
> > acceptable voip threshold by their base-latency alone, but guess what
> > telephony via satellite leaves something to be desired. That said if
> > the alternative is no telephony I would take 1 second one-way delay
> > any day ;).
>
> Well I agree that this is relevant information in relation to the total
> link latency. But keeping the issues separate has value, I think,
> because you can potentially fix your bufferbloat, but increasing the
> speed of light to get better base latency on your satellite link is
> probably out of scope for now (or at least for a couple of hundred more
> years: http://theinfosphere.org/Speed_of_light).
>
> >       What I liked about fixed thresholds is that the test would give
> > a good indication what kind of uses are going to work well on the link
> > under load, given that during load both base and induced latency come
> > into play. I agree that 300ms as first threshold is rather unambiguous
> > though (and I am certain that remote X11 will require a massively
> > lower RTT unless one likes to think of remote desktop as an oil tanker
> > simulator ;) )
>
> Oh, I'm all for fixed thresholds! As I said, the goal should be (close
> to) zero added latency...
>
> >       Okay so this would turn into:
> >
> > base latency to base latency + 30 ms:                         green
> > base latency + 31 ms to base latency + 100 ms:                yellow
> > base latency + 101 ms to base latency + 200 ms:               orange?
> > base latency + 201 ms to base latency + 500 ms:               red
> > base latency + 501 ms to base latency + 1000 ms:      fire
> > base latency + 1001 ms to infinity:
>  fire & brimstone
> >
> > correct?
>
> Yup, something like that :)
>
> -Toke
>

[-- Attachment #2: Type: text/html, Size: 4573 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24 13:32                                                         ` jb
@ 2015-04-24 13:58                                                           ` Toke Høiland-Jørgensen
  2015-04-24 16:51                                                           ` David Lang
  1 sibling, 0 replies; 127+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-24 13:58 UTC (permalink / raw)
  To: jb; +Cc: bloat

jb <justin@dslr.net> writes:

> Ok I think I talked myself around in a complete circle: a buffer is
> only bad IF it increases latency under load. Not because of its size.

Exactly! :)

Some buffering is actually needed to absorb transient bursts. This is
also the reason why smart queue management is needed instead of just
adjusting the size of the buffer (setting aside that you don't always
know which speed to size it for).

> It might explain why these fiber connection tests don't show much
> latency change, because their buffers are really inconsequential at
> those higher speeds?

Well, bufferbloat certainly tends to be *worse* at lower speeds. But it
can occur at gigabit speeds as well. For instance, running two ports
into one on a gigabit switch can add quite a bit of latency.

For some devices, *driving* a fast link can be challenging, though. So
for fibre connections you may not actually bottleneck on the bloated
link, but on the CPU, some other link that's not as bloated as the
access link, etc...

-Toke

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  8:29                                                   ` Toke Høiland-Jørgensen
  2015-04-24  8:55                                                     ` Sebastian Moeller
@ 2015-04-24 15:20                                                     ` Bill Ver Steeg (versteb)
  1 sibling, 0 replies; 127+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-04-24 15:20 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen, Sebastian Moeller; +Cc: bloat

For a very low speed link, I suggest that 100ms is not the right target. At 1 Mbps (which is a downstream number that I occasionally see in an ISP), 100ms is only nine 1400 byte packets. 

Non-paced IW10 would suggest that you need to have at least a 10-deep target. Concurrent flows probably drive the target a bit higher. An FQ_AQM solution would have a different target than an AQM solution.

Does anybody have data that quantifies the best target delay for FQ_Codel and Codel/PIE?

Bvs


-----Original Message-----
From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Toke Høiland-Jørgensen
Sent: Friday, April 24, 2015 4:29 AM
To: Sebastian Moeller
Cc: bloat
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in

Sebastian Moeller <moeller0@gmx.de> writes:

> I know this is not perfect and the numbers will probably require 
> severe "bike-shedding”

Since you're literally asking for it... ;)


In this case we're talking about *added* latency. So the ambition should be zero, or so close to it as to be indiscernible. Furthermore, we know that proper application of a good queue management algorithm can keep it pretty close to this. Certainly under 20-30 ms of added latency. So from this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.

The other increments I have less opinions about, but 100 ms does seem to be a nice round number, so do yellow from 30-100 ms, then start with the reds somewhere above that, and range up into the deep red / purple / black with skulls and fiery death as we go nearer and above one second?


I very much think that raising people's expectations and being quite ambitious about what to expect is an important part of this. Of course the base latency is going to vary, but the added latency shouldn't. And since we have the technology to make sure it doesn't, calling out bad results when we see them is reasonable!

-Toke
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24 13:32                                                         ` jb
  2015-04-24 13:58                                                           ` Toke Høiland-Jørgensen
@ 2015-04-24 16:51                                                           ` David Lang
  1 sibling, 0 replies; 127+ messages in thread
From: David Lang @ 2015-04-24 16:51 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/Plain, Size: 3936 bytes --]

On Fri, 24 Apr 2015, jb wrote:

> Don't you want to accuse the size of the buffer, rather than the latency?

The size of the buffer really doesn't matter. The latency is what hurts.

in theory, you could have a massive buffer for some low-priority non-TCP bulk 
protocol (non-TCP so that it can do its own retries of lost blocks rather than 
the TCP method) with no problem and no impact on the user experience.

the problem is how the buffer is managed, not that it exists.

David Lang

> For example, say someone has some hardware and their line is fairly slow.
> it might be RED on the graph because the buffer is quite big relative to the
> bandwidth delay product of the line. A test is telling them they have
> bloated buffers.
>
> Then they upgrade their product speed to a much faster product, and suddenly
> that buffer is fairly small, the incremental latency is low, and no longer
> shows
> RED on a test.
>
> What changed? the hardware didn't change. Just the speed changed. So the
> test is saying that for your particular speed, the buffers are too big. But
> for a
> higher speed, they may be quite ok.
>
> If you add 100ms to a 1gigabit product the buffer has to be what, ~10mb?
> but adding 100ms to my feeble line is quite easy, the billion router can
> have
> a buffer of just 100kb and it is too high. But that same billion in front
> of a
> gigabit modem is only going to add at most 1ms to latency and nobody
> would complain.
>
> Ok I think I talked myself around in a complete circle: a buffer is only
> bad IF
> it increases latency under load. Not because of its size. It might explain
> why
> these fiber connection tests don't show much latency change, because
> their buffers are really inconsequential at those higher speeds?
>
>
> On Fri, Apr 24, 2015 at 7:02 PM, Toke Høiland-Jørgensen <toke@toke.dk>
> wrote:
>
>> Sebastian Moeller <moeller0@gmx.de> writes:
>>
>>>       Oh, I can get behind that easily, I just thought basing the
>>> limits on externally relevant total latency thresholds would directly
>>> tell the user which applications might run well on his link. Sure this
>>> means that people on a satellite link most likely will miss out the
>>> acceptable voip threshold by their base-latency alone, but guess what
>>> telephony via satellite leaves something to be desired. That said if
>>> the alternative is no telephony I would take 1 second one-way delay
>>> any day ;).
>>
>> Well I agree that this is relevant information in relation to the total
>> link latency. But keeping the issues separate has value, I think,
>> because you can potentially fix your bufferbloat, but increasing the
>> speed of light to get better base latency on your satellite link is
>> probably out of scope for now (or at least for a couple of hundred more
>> years: http://theinfosphere.org/Speed_of_light).
>>
>>>       What I liked about fixed thresholds is that the test would give
>>> a good indication what kind of uses are going to work well on the link
>>> under load, given that during load both base and induced latency come
>>> into play. I agree that 300ms as first threshold is rather unambiguous
>>> though (and I am certain that remote X11 will require a massively
>>> lower RTT unless one likes to think of remote desktop as an oil tanker
>>> simulator ;) )
>>
>> Oh, I'm all for fixed thresholds! As I said, the goal should be (close
>> to) zero added latency...
>>
>>>       Okay so this would turn into:
>>>
>>> base latency to base latency + 30 ms:                         green
>>> base latency + 31 ms to base latency + 100 ms:                yellow
>>> base latency + 101 ms to base latency + 200 ms:               orange?
>>> base latency + 201 ms to base latency + 500 ms:               red
>>> base latency + 501 ms to base latency + 1000 ms:      fire
>> >>> base latency + 1001 ms to infinity:      fire & brimstone
>>>
>>> correct?
>>
>> Yup, something like that :)
>>
>> -Toke
>>
>

[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-23  1:37                           ` Simon Barber
@ 2015-04-24 16:54                             ` David Lang
  2015-04-24 17:00                               ` Rick Jones
  0 siblings, 1 reply; 127+ messages in thread
From: David Lang @ 2015-04-24 16:54 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 13587 bytes --]

Good question. I don't know.

However, it seems to me that if the receiver starts accepting and acking data 
out of order, all sorts of other issues come up (what does this do to sequence 
number randomization and the ability for an attacker to spew random data that 
will show up somewhere in the window for example)

David Lang


On Wed, 22 Apr 2015, Simon Barber wrote:

> Does this happen even with Sack?
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
> On April 22, 2015 10:36:11 AM David Lang <david@lang.hm> wrote:
>
>> Data that's received and not used doesn't really matter (a tree falls in 
>> the
>> woods type of thing).
>> 
>> The head of line blocking can cause a chunk of packets to be retransmitted, 
>> even
>> though the receiving machine got them the first time. So looking at the 
>> received
>> bytes gives you a false picture of what is going on.
>> 
>> David Lang
>> 
>> On Wed, 22 Apr 2015, Simon Barber wrote:
>> 
>> > The bumps are due to packet loss causing head of line blocking. Until the
>> > lost packet is retransmitted the receiver can't release any subsequent
>> > received packets to the application due to the requirement for in order
>> > delivery. If you counted received bytes with a packet counter rather than
>> > looking at application level you would be able to illustrate that data 
>> was
>> > being received smoothly (even though out of order).
>> >
>> > Simon
>> >
>> > Sent with AquaMail for Android
>> > http://www.aqua-mail.com
>> >
>> >
>> > On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
>> >
>> >> On Tue, 21 Apr 2015, jb wrote:
>> >>
>> >> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> > pause > until there is that much data outstanding, or they get a 
>> timeout
>> >> of
>> >> > a packet as > a signal to slow down.
>> >> >
>> >> >> and because you have a gig-E link locally, your machine generates
>> >> traffic
>> >> > \
>> >> >> very rapidly, until all that data is 'in flight'. but it's really
>> >> sitting
>> >> > in the buffer of
>> >> >> router trying to get through.
>> >> >
>> >> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
>> >> > upload graphs by keeping the advertised receive window on the server
>> >> capped
>> >> > low, however then, paradoxically, there is no more sign of buffer 
>> bloat
>> >> in
>> >> > the result, at least for the upload phase.
>> >> >
>> >> > (The graph under the upload/download graphs for my results shows 
>> almost
>> >> no
>> >> > latency increase during the upload phase, now).
>> >> >
>> >> > Or, I can crank it back open again, serving people with fiber 
>> connections
>> >> > without having to run heaps of streams in parallel -- and then have
>> >> people
>> >> > complain that the upload result is inefficient, or bumpy, vs what they
>> >> > expect.
>> >>
>> >> well, many people expect it to be bumpy (I've heard ISPs explain to
>> >> customers
>> >> that when a link is full it is bumpy, that's just the way things work)
>> >>
>> >> > And I can't offer an option, because the server receive window (I 
>> think)
>> >> > cannot be set on a case by case basis. You set it for all TCP and 
>> forget
>> >> it.
>> >>
>> >> I think you are right
>> >>
>> >> > I suspect you guys are going to say the server should be left with a
>> >> large
>> >> > max receive window.. and let people complain to find out what their 
>> issue
>> >> > is.
>> >>
>> >> what is your customer base? how important is it to provide faster 
>> service
>> >> to teh
>> >> fiber users? Are they transferring ISO images so the difference is
>> >> significant
>> >> to them? or are they downloading web pages where it's the difference
>> >> between a
>> >> half second and a quarter second? remember that you are seeing this on 
>> the
>> >> upload side.
>> >>
>> >> in the long run, fixing the problem at the client side is the best thing 
>> to
>> >> do,
>> >> but in the meantime, you sometimes have to work around broken customer
>> >> stuff.
>> >>
>> >> > BTW my setup is wire to billion 7800N, which is a DSL modem and 
>> router. I
>> >> > believe it is a linux based (judging from the system log) device.
>> >>
>> >> if it's linux based, it would be interesting to learn what sort of 
>> settings
>> >> it
>> >> has. It may be one of the rarer devices that has something in place 
>> already
>> >> to
>> >> do active queue management.
>> >>
>> >> David Lang
>> >>
>> >> > cheers,
>> >> > -Justin
>> >> >
>> >> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
>> >> >
>> >> >> On Tue, 21 Apr 2015, jb wrote:
>> >> >>
>> >> >>  I've discovered something perhaps you guys can explain it better or
>> >> shed
>> >> >>> some light.
>> >> >>> It isn't specifically to do with buffer bloat but it is to do with 
>> TCP
>> >> >>> tuning.
>> >> >>>
>> >> >>> Attached is two pictures of my upload to New York speed test server
>> >> with 1
>> >> >>> stream.
>> >> >>> It doesn't make any difference if it is 1 stream or 8 streams, the
>> >> picture
>> >> >>> and behaviour remains the same.
>> >> >>> I am 200ms from new york so it qualifies as a fairly long (but not 
>> very
>> >> >>> fat) pipe.
>> >> >>>
>> >> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535'
>> >> (on
>> >> >>> the server)
>> >> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 
>> 67108864'
>> >> (on
>> >> >>> the server)
>> >> >>>
>> >> >>> It actually doesn't matter what that last huge number is, once it 
>> goes
>> >> >>> much
>> >> >>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on 
>> the
>> >> >>> upload speed.
>> >> >>>
>> >> >>> Now as I understand this setting, it is the tcp receive window that
>> >> Linux
>> >> >>> advertises, and the last number sets the maximum size it can get to
>> >> (for
>> >> >>> one TCP stream).
>> >> >>>
>> >> >>> For users with very fast upload speeds, they do not see an ugly 
>> bumpy
>> >> >>> upload graph, it is smooth and sustained.
>> >> >>> But for the majority of users (like me) with uploads less than 5 to
>> >> >>> 10mbit,
>> >> >>> we frequently see the ugly graph.
>> >> >>>
>> >> >>> The second tcp_rmem setting is how I have been running the speed 
>> test
>> >> >>> servers.
>> >> >>>
>> >> >>> Up to now I thought this was just the distance of the speedtest from
>> >> the
>> >> >>> interface: perhaps the browser was buffering a lot, and didn't feed
>> >> back
>> >> >>> progress but now I realise the bumpy one is actually being 
>> influenced
>> >> by
>> >> >>> the server receive window.
>> >> >>>
>> >> >>> I guess my question is this: Why does ALLOWING a large receive 
>> window
>> >> >>> appear to encourage problems with upload smoothness??
>> >> >>>
>> >> >>> This implies that setting the receive window should be done on a
>> >> >>> connection
>> >> >>> by connection basis: small for slow connections, large, for high 
>> speed,
>> >> >>> long distance connections.
>> >> >>>
>> >> >>
>> >> >> This is classic bufferbloat
>> >> >>
>> >> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> >> pause until there is that much data outstanding, or they get a 
>> timeout
>> >> of a
>> >> >> packet as a signal to slow down.
>> >> >>
>> >> >> and because you have a gig-E link locally, your machine generates
>> >> traffic
>> >> >> very rapidly, until all that data is 'in flight'. but it's really
>> >> sitting
>> >> >> in the buffer of a router trying to get through.
>> >> >>
>> >> >> then when a packet times out, the sender slows down a smidge and
>> >> >> retransmits it. But the old packet is still sitting in a queue, 
>> eating
>> >> >> bandwidth. the packets behind it are also going to timeout and be
>> >> >> retransmitted before your first retransmitted packet gets through, so
>> >> you
>> >> >> have a large slug of data that's being retransmitted, and the first 
>> of
>> >> the
>> >> >> replacement data can't get through until the last of the old (timed 
>> out)
>> >> >> data is transmitted.
>> >> >>
>> >> >> then when data starts flowing again, the sender again tries to fill 
>> up
>> >> the
>> >> >> window with data in flight.
>> >> >>
>> >> >>  In addition, if I cap it to 65k, for reasons of smoothness,
>> >> >>> that means the bandwidth delay product will keep maximum speed per
>> >> upload
>> >> >>> stream quite low. So a symmetric or gigabit connection is going to 
>> need
>> >> a
>> >> >>> ton of parallel streams to see full speed.
>> >> >>>
>> >> >>> Most puzzling is why would anything special be required on the 
>> Client
>> >> -->
>> >> >>> Server side of the equation
>> >> >>> but nothing much appears wrong with the Server --> Client side, 
>> whether
>> >> >>> speeds are very low (GPRS) or very high (gigabit).
>> >> >>>
>> >> >>
>> >> >> but what window sizes are these clients advertising?
>> >> >>
>> >> >>
>> >> >>  Note that also I am not yet sure if smoothness == better throughput. 
>> I
>> >> >>> have
>> >> >>> noticed upload speeds for some people often being under their 
>> claimed
>> >> sync
>> >> >>> rate by 10 or 20% but I've no logs that show the bumpy graph is 
>> showing
>> >> >>> inefficiency. Maybe.
>> >> >>>
>> >> >>
>> >> >> If you were to do a packet capture on the server side, you would see
>> >> that
>> >> >> you have a bunch of packets that are arriving multiple times, but the
>> >> first
>> >> >> time "does't count" because the replacement is already on the way.
>> >> >>
>> >> >> so your overall throughput is lower for two reasons
>> >> >>
>> >> >> 1. it's bursty, and there are times when the connection actually is 
>> idle
>> >> >> (after you have a lot of timed out packets, the sender needs to ramp 
>> up
>> >> >> it's speed again)
>> >> >>
>> >> >> 2. you are sending some packets multiple times, consuming more total
>> >> >> bandwidth for the same 'goodput' (effective throughput)
>> >> >>
>> >> >> David Lang
>> >> >>
>> >> >>
>> >> >>  help!
>> >> >>>
>> >> >>>
>> >> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber 
>> <simon@superduper.net>
>> >> >>> wrote:
>> >> >>>
>> >> >>>  One thing users understand is slow web access.  Perhaps translating
>> >> the
>> >> >>>> latency measurement into 'a typical web page will take X seconds
>> >> longer
>> >> >>>> to
>> >> >>>> load', or even stating the impact as 'this latency causes a typical
>> >> web
>> >> >>>> page to load slower, as if your connection was only YY% of the
>> >> measured
>> >> >>>> speed.'
>> >> >>>>
>> >> >>>> Simon
>> >> >>>>
>> >> >>>> Sent with AquaMail for Android
>> >> >>>> http://www.aqua-mail.com
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton 
>> <chromatix99@gmail.com>
>> >> >>>> wrote:
>> >> >>>>
>> >> >>>>>>>> Frequency readouts are probably more accessible to the latter.
>> >> >>>>
>> >> >>>>>
>> >> >>>>>>>>     The frequency domain more accessible to laypersons? I have 
>> my
>> >> >>>>>>>>
>> >> >>>>>>> doubts ;)
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>>> Gamers, at least, are familiar with “frames per second” and how
>> >> that
>> >> >>>>>>>
>> >> >>>>>> corresponds to their monitor’s refresh rate.
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>>       I am sure they can easily transform back into time domain 
>> to
>> >> get
>> >> >>>>>>
>> >> >>>>> the frame period ;) .  I am partly kidding, I think your idea is
>> >> great
>> >> >>>>> in
>> >> >>>>> that it is a truly positive value which could lend itself to being
>> >> used
>> >> >>>>> in
>> >> >>>>> ISP/router manufacturer advertising, and hence might work in the 
>> real
>> >> >>>>> work;
>> >> >>>>> on the other hand I like to keep data as “raw” as possible (not 
>> that
>> >> >>>>> ^(-1)
>> >> >>>>> is a transformation worthy of being called data massage).
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>>  The desirable range of latencies, when converted to Hz, happens 
>> to
>> >> be
>> >> >>>>>>>
>> >> >>>>>> roughly the same as the range of desirable frame rates.
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>>       Just to play devils advocate, the interesting part is time 
>> or
>> >> >>>>>>
>> >> >>>>> saving time so seconds or milliseconds are also intuitively
>> >> >>>>> understandable
>> >> >>>>> and can be easily added ;)
>> >> >>>>>
>> >> >>>>> Such readouts are certainly interesting to people like us.  I have 
>> no
>> >> >>>>> objection to them being reported alongside a frequency readout. 
>> But
>> >> I
>> >> >>>>> think most people are not interested in “time savings” measured in
>> >> >>>>> milliseconds; they’re much more aware of the minute- and 
>> hour-level
>> >> time
>> >> >>>>> savings associated with greater bandwidth.
>> >> >>>>>
>> >> >>>>>  - Jonathan Morton
>> >> >>>>>
>> >> >>>>> _______________________________________________
>> >> >>>>> Bloat mailing list
>> >> >>>>> Bloat@lists.bufferbloat.net
>> >> >>>>> https://lists.bufferbloat.net/listinfo/bloat
>> >> >>>>>
>> >> >>>>>
>> >> >>>>
>> >> >>>> _______________________________________________
>> >> >>>> Bloat mailing list
>> >> >>>> Bloat@lists.bufferbloat.net
>> >> >>>> https://lists.bufferbloat.net/listinfo/bloat
>> >> >>>>
>> >> >>>>
>> >> >> _______________________________________________
>> >> >> Bloat mailing list
>> >> >> Bloat@lists.bufferbloat.net
>> >> >> https://lists.bufferbloat.net/listinfo/bloat
>> >> >>
>> >> >>
>> >> >
>> >>
>> >>
>> >> ----------
>> >> _______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >>
>> >
>> >
>> >
>
>
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24 16:54                             ` David Lang
@ 2015-04-24 17:00                               ` Rick Jones
  0 siblings, 0 replies; 127+ messages in thread
From: Rick Jones @ 2015-04-24 17:00 UTC (permalink / raw)
  To: bloat

Selective ACKnowledgement in TCP does not change the in-order semantics 
of TCP as seen by applications using it.  Data is always presented to 
the receiving application in order.  What SACK does is make it more 
likely that holes in the sequence of data will be filled-in sooner via 
retransmissions, and help avoid retransmitting data already received but 
past the first "hole" in the data sequence.

rick jones

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  8:18                                                 ` Sebastian Moeller
  2015-04-24  8:29                                                   ` Toke Høiland-Jørgensen
@ 2015-04-25  2:24                                                   ` Simon Barber
  1 sibling, 0 replies; 127+ messages in thread
From: Simon Barber @ 2015-04-25  2:24 UTC (permalink / raw)
  To: Sebastian Moeller, jb; +Cc: bloat

Perhaps where the green is should depend on the customer's access type. For 
instance someone on fiber should have a much better ping than someone on 
3G. But I agree this should be a fixed scale, not dependent on idle ping 
time. Although VoIP might be good up to 100ms, gamers would want lower values.

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 24, 2015 1:19:08 AM Sebastian Moeller <moeller0@gmx.de> wrote:

> Hi jb,
>
> this looks great!
>
> On Apr 23, 2015, at 12:08 , jb <justin@dslr.net> wrote:
>
> > This is how I've changed the graph of latency under load per input from 
> you guys.
> >
> > Taken away log axis.
> >
> > Put in two bands. Yellow starts at double the idle latency, and goes to 
> 4x the idle latency
> > red starts there, and goes to the top. No red shows if no bars reach into it.
> > And no yellow band shows if no bars get into that zone.
> >
> > Is it more descriptive?
>
> 	Mmmh, so the delay we see consists out of the delay caused by the distance 
> to the server and the delay of the access technology, meaning the un-loaded 
> latency can range from a few milliseconds to several 100s of milliseconds 
> (for the poor sods behind a satellite link…). Any further latency 
> developing under load should be independent of distance and access 
> technology as those are already factored in the bade latency. In both the 
> extreme cases multiples of the base-latency do not seem to be relevant 
> measures of bloat, so I would like to argue that the yellow and the red 
> zones should be based on fixed increments and not as a ratio of the 
> base-latency. This is relevant as people on a slow/high-access-latency link 
> have a much smaller tolerance for additional latency than people on a fast 
> link if certain latency guarantees need to be met, and thresholds as a 
> function of base-latency do not reflect this.
> 	Now ideally the colors should not be based on the base-latency at all but 
> should be at fixed total values, like 200 to 300 ms for voip (according to 
> ITU-T G.114 for voip one-way delay <= 150 ms is recommended) in yellow, and 
> say 400 to 600 ms in orange, 400ms is upper bound for good voip and 600ms 
> for decent voip (according to ITU-T G.114,users are very satisfied up to 
> 200 ms one way delay and satisfied up to roughly 300ms) so anything above 
> 600 in deep red?
> 	I know this is not perfect and the numbers will probably require severe 
> "bike-shedding” (and I am not sure that ITU-T G.114 really iOS good source 
> for the thresholds), but to get a discussion started here are the numbers 
> again:
> 0 	to 100 ms 	no color
> 101 	to 200 ms		green
> 201	to 400 ms		yellow
> 401	to 600 ms		orange
> 601 	to 1000 ms	red
> 1001 to infinity		purple (or better marina red?)
>
> Best Regards
> 	Sebastian
>
>
> >
> > (sorry to the list moderator, gmail keeps sending under the wrong email 
> and I get a moderator message)
> >
> > On Thu, Apr 23, 2015 at 8:05 PM, jb <justinbeech@gmail.com> wrote:
> > This is how I've changed the graph of latency under load per input from 
> you guys.
> >
> > Taken away log axis.
> >
> > Put in two bands. Yellow starts at double the idle latency, and goes to 
> 4x the idle latency
> > red starts there, and goes to the top. No red shows if no bars reach into it.
> > And no yellow band shows if no bars get into that zone.
> >
> > Is it more descriptive?
> >
> >
> > On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> > Wait, this is a 15 years old experiment using Reno and a single test
> > bed, using ns simulator.
> >
> > Naive TCP pacing implementations were tried, and probably failed.
> >
> > Pacing individual packet is quite bad, this is the first lesson one
> > learns when implementing TCP pacing, especially if you try to drive a
> > 40Gbps NIC.
> >
> > https://lwn.net/Articles/564978/
> >
> > Also note we use usec based rtt samples, and nanosec high resolution
> > timers in fq. I suspect the ns simulator experiment had sync issues
> > because of using low resolution timers or simulation artifact, without
> > any jitter source.
> >
> > Billions of flows are now 'paced', but keep in mind most packets are not
> > paced. We do not pace in slow start, and we do not pace when tcp is ACK
> > clocked.
> >
> > Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> > eventually have all packets being paced, using TSO 'clusters' for TCP.
> >
> >
> >
> > On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > > one reference with pdf publicly available. On the website there are
> > > various papers
> > > on this topic. Others might me more relevant but I did not check all of
> > > them.
> >
> > > Understanding the Performance of TCP Pacing,
> > > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
> > >
> > > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  8:55                                                     ` Sebastian Moeller
  2015-04-24  9:02                                                       ` Toke Høiland-Jørgensen
@ 2015-04-25  3:15                                                       ` Simon Barber
  2015-04-25  4:04                                                         ` Dave Taht
  2015-04-25  3:23                                                       ` Simon Barber
  2 siblings, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-04-25  3:15 UTC (permalink / raw)
  To: bloat, justin

I think it might be useful to have a 'latency guide' for users. It would 
say things like

100ms - VoIP applications work well
250ms - VoIP applications - conversation is not as natural as it could 
be, although users may not notice this.
500ms - VoIP applications begin to have awkward pauses in conversation.
1000ms - VoIP applications have significant annoying pauses in conversation.
2000ms - VoIP unusable for most interactive conversations.

0-50ms - web pages load snappily
250ms - web pages can often take an extra second to appear, even on the 
highest bandwidth links
1000ms - web pages load significantly slower than they should, taking 
several extra seconds to appear, even on the highest bandwidth links
2000ms+ - web browsing is heavily slowed, with many seconds or even 10s 
of seconds of delays for pages to load, even on the highest bandwidth links.

Gaming.... some kind of guide here....
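
For what it's worth, the web numbers above roughly follow from the handful of sequential round trips (DNS, TCP handshake, maybe TLS, then the request) a page fetch needs. A crude sketch, with the round-trip count purely an assumption:

# Crude model: extra page-load time ~= added latency x sequential round trips.
SEQUENTIAL_RTTS = 5   # assumed: DNS + handshake + TLS + request + one dependent fetch

for added_ms in (50, 250, 1000, 2000):
    extra_s = added_ms / 1000 * SEQUENTIAL_RTTS
    print(f"+{added_ms:>4} ms under load -> roughly {extra_s:.2f} s slower page load")

# +250 ms -> ~1.25 s extra, +1000 ms -> ~5 s, +2000 ms -> ~10 s, which is
# roughly where the guide above puts them.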

Simon



On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> Hi Toke,
>
> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
>> Sebastian Moeller <moeller0@gmx.de> writes:
>>
>>> I know this is not perfect and the numbers will probably require
>>> severe "bike-shedding”
>> Since you're literally asking for it... ;)
>>
>>
>> In this case we're talking about *added* latency. So the ambition should
>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>> that proper application of a good queue management algorithm can keep it
>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
> 	Oh, I can get behind that easily, I just thought basing the limits on externally relevant total latency thresholds would directly tell the user which applications might run well on his link. Sure this means that people on a satellite link most likely will miss out the acceptable voip threshold by their base-latency alone, but guess what telephony via satellite leaves something to be desired. That said if the alternative is no telephony I would take 1 second one-way delay any day ;).
> 	What I liked about fixed thresholds is that the test would give a good indication what kind of uses are going to work well on the link under load, given that during load both base and induced latency come into play. I agree that 300ms as first threshold is rather unambiguous though (and I am certain that remote X11 will require a massively lower RTT unless one likes to think of remote desktop as an oil tanker simulator ;) )
>
>> The other increments I have less opinions about, but 100 ms does seem to
>> be a nice round number, so do yellow from 30-100 ms, then start with the
>> reds somewhere above that, and range up into the deep red / purple /
>> black with skulls and fiery death as we go nearer and above one second?
>>
>>
>> I very much think that raising peoples expectations and being quite
>> ambitious about what to expect is an important part of this. Of course
>> the base latency is going to vary, but the added latency shouldn't. And
>> sine we have the technology to make sure it doesn't, calling out bad
>> results when we see them is reasonable!
> 	Okay so this would turn into:
>
> base latency to base latency + 30 ms:				green
> base latency + 31 ms to base latency + 100 ms:		yellow
> base latency + 101 ms to base latency + 200 ms:		orange?
> base latency + 201 ms to base latency + 500 ms:		red
> base latency + 501 ms to base latency + 1000 ms:	fire
> base latency + 1001 ms to infinity:					fire & brimstone
>
> correct?
>
>
>> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-24  8:55                                                     ` Sebastian Moeller
  2015-04-24  9:02                                                       ` Toke Høiland-Jørgensen
  2015-04-25  3:15                                                       ` Simon Barber
@ 2015-04-25  3:23                                                       ` Simon Barber
  2 siblings, 0 replies; 127+ messages in thread
From: Simon Barber @ 2015-04-25  3:23 UTC (permalink / raw)
  To: Sebastian Moeller, Toke Høiland-Jørgensen; +Cc: bloat



On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> Okay so this would turn into:
>
> base latency to base latency + 30 ms:              green
> base latency + 31 ms to base latency + 100 ms:     yellow
> base latency + 101 ms to base latency + 200 ms:    orange?
> base latency + 201 ms to base latency + 500 ms:    red
> base latency + 501 ms to base latency + 1000 ms:   fire
> base latency + 1001 ms to infinity:                fire & brimstone
>
> correct?

I don't think the reference should be a measured 'base latency' - but it 
should be a fixed value that is different for different access types. 
E.G. Satellite access should show green up to about 650 or 700ms, but 
fiber should show green up to 50ms max. Perhaps add in speed of light to 
account for distance from the user to the test server.
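
Something like this, as a sketch -- only the fibre and satellite ceilings come from this thread, and the per-distance allowance is just a speed-of-light-in-fibre guess:

# Sketch: fixed 'green' ceilings per access type instead of a measured base
# latency.  Only the fibre and satellite figures come from this discussion;
# the optional distance allowance assumes roughly 1 ms of RTT per 100 km of
# fibre path to the test server.
GREEN_CEILING_MS = {
    "fibre": 50,
    "satellite": 700,     # geostationary hop dominates
}

def is_green(access_type, measured_rtt_ms, distance_km=0):
    allowance = GREEN_CEILING_MS[access_type] + distance_km / 100.0
    return measured_rtt_ms <= allowance

print(is_green("fibre", 42))         # True
print(is_green("satellite", 640))    # True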

Simon

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-25  3:15                                                       ` Simon Barber
@ 2015-04-25  4:04                                                         ` Dave Taht
  2015-04-25  4:26                                                           ` Simon Barber
  0 siblings, 1 reply; 127+ messages in thread
From: Dave Taht @ 2015-04-25  4:04 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

Simon, all your numbers are too large by at least a factor of 2. I
think you are also thinking about total latency, rather than induced
latency and jitter.

Please see my earlier email laying out the bands. And Gettys' manifesto.

If you are thinking in terms of voip, less than 30ms *jitter* is what
you want, and a latency increase of 30ms is a proxy for also holding
jitter that low.


On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
> I think it might be useful to have a 'latency guide' for users. It would say
> things like
>
> 100ms - VoIP applications work well
> 250ms - VoIP applications - conversation is not as natural as it could be,
> although users may not notice this.
> 500ms - VoIP applications begin to have awkward pauses in conversation.
> 1000ms - VoIP applications have significant annoying pauses in conversation.
> 2000ms - VoIP unusable for most interactive conversations.
>
> 0-50ms - web pages load snappily
> 250ms - web pages can often take an extra second to appear, even on the
> highest bandwidth links
> 1000ms - web pages load significantly slower than they should, taking
> several extra seconds to appear, even on the highest bandwidth links
> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
> seconds of delays for pages to load, even on the highest bandwidth links.
>
> Gaming.... some kind of guide here....
>
> Simon
>
>
>
>
> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>
>> Hi Toke,
>>
>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>
>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>
>>>> I know this is not perfect and the numbers will probably require
>>>> severe "bike-shedding”
>>>
>>> Since you're literally asking for it... ;)
>>>
>>>
>>> In this case we're talking about *added* latency. So the ambition should
>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>> that proper application of a good queue management algorithm can keep it
>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>
>>         Oh, I can get behind that easily, I just thought basing the limits
>> on externally relevant total latency thresholds would directly tell the user
>> which applications might run well on his link. Sure this means that people
>> on a satellite link most likely will miss out the acceptable voip threshold
>> by their base-latency alone, but guess what telephony via satellite leaves
>> something to be desired. That said if the alternative is no telephony I
>> would take 1 second one-way delay any day ;).
>>         What I liked about fixed thresholds is that the test would give a
>> good indication what kind of uses are going to work well on the link under
>> load, given that during load both base and induced latency come into play. I
>> agree that 300ms as first threshold is rather unambiguous though (and I am
>> certain that remote X11 will require a massively lower RTT unless one likes
>> to think of remote desktop as an oil tanker simulator ;) )
>>
>>> The other increments I have less opinions about, but 100 ms does seem to
>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>> reds somewhere above that, and range up into the deep red / purple /
>>> black with skulls and fiery death as we go nearer and above one second?
>>>
>>>
>>> I very much think that raising peoples expectations and being quite
>>> ambitious about what to expect is an important part of this. Of course
>>> the base latency is going to vary, but the added latency shouldn't. And
>>> sine we have the technology to make sure it doesn't, calling out bad
>>> results when we see them is reasonable!
>>
>>         Okay so this would turn into:
>>
>> base latency to base latency + 30 ms:                           green
>> base latency + 31 ms to base latency + 100 ms:          yellow
>> base latency + 101 ms to base latency + 200 ms:         orange?
>> base latency + 201 ms to base latency + 500 ms:         red
>> base latency + 501 ms to base latency + 1000 ms:        fire
>> base latency + 1001 ms to infinity:                          fire & brimstone
>>
>> correct?
>>
>>
>>> -Toke
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-25  4:04                                                         ` Dave Taht
@ 2015-04-25  4:26                                                           ` Simon Barber
  2015-04-25  6:03                                                             ` Sebastian Moeller
  0 siblings, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-04-25  4:26 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

Certainly the VoIP numbers are for peak total latency, and while Justin is 
measuring total latency, he is only taking a few samples, so the true peak 
values will be a little higher.

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:

> simon all your numbers are too large by at least a factor of 2. I
> think also you are thinking about total latency, rather than induced
> latency and jitter.
>
> Please see my earlier email laying out the bands. And gettys' manifesto.
>
> If you are thinking in terms of voip, less than 30ms *jitter* is what
> you want, and a latency increase of 30ms is a proxy for also holding
> jitter that low.
>
>
> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
> > I think it might be useful to have a 'latency guide' for users. It would say
> > things like
> >
> > 100ms - VoIP applications work well
> > 250ms - VoIP applications - conversation is not as natural as it could be,
> > although users may not notice this.
> > 500ms - VoIP applications begin to have awkward pauses in conversation.
> > 1000ms - VoIP applications have significant annoying pauses in conversation.
> > 2000ms - VoIP unusable for most interactive conversations.
> >
> > 0-50ms - web pages load snappily
> > 250ms - web pages can often take an extra second to appear, even on the
> > highest bandwidth links
> > 1000ms - web pages load significantly slower than they should, taking
> > several extra seconds to appear, even on the highest bandwidth links
> > 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
> > seconds of delays for pages to load, even on the highest bandwidth links.
> >
> > Gaming.... some kind of guide here....
> >
> > Simon
> >
> >
> >
> >
> > On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> >>
> >> Hi Toke,
> >>
> >> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> >>
> >>> Sebastian Moeller <moeller0@gmx.de> writes:
> >>>
> >>>> I know this is not perfect and the numbers will probably require
> >>>> severe "bike-shedding”
> >>>
> >>> Since you're literally asking for it... ;)
> >>>
> >>>
> >>> In this case we're talking about *added* latency. So the ambition should
> >>> be zero, or so close to it as to be indiscernible. Furthermore, we know
> >>> that proper application of a good queue management algorithm can keep it
> >>> pretty close to this. Certainly under 20-30 ms of added latency. So from
> >>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
> >>
> >>         Oh, I can get behind that easily, I just thought basing the limits
> >> on externally relevant total latency thresholds would directly tell the user
> >> which applications might run well on his link. Sure this means that people
> >> on a satellite link most likely will miss out the acceptable voip threshold
> >> by their base-latency alone, but guess what telephony via satellite leaves
> >> something to be desired. That said if the alternative is no telephony I
> >> would take 1 second one-way delay any day ;).
> >>         What I liked about fixed thresholds is that the test would give a
> >> good indication what kind of uses are going to work well on the link under
> >> load, given that during load both base and induced latency come into play. I
> >> agree that 300ms as first threshold is rather unambiguous though (and I am
> >> certain that remote X11 will require a massively lower RTT unless one likes
> >> to think of remote desktop as an oil tanker simulator ;) )
> >>
> >>> The other increments I have less opinions about, but 100 ms does seem to
> >>> be a nice round number, so do yellow from 30-100 ms, then start with the
> >>> reds somewhere above that, and range up into the deep red / purple /
> >>> black with skulls and fiery death as we go nearer and above one second?
> >>>
> >>>
> >>> I very much think that raising peoples expectations and being quite
> >>> ambitious about what to expect is an important part of this. Of course
> >>> the base latency is going to vary, but the added latency shouldn't. And
> >>> sine we have the technology to make sure it doesn't, calling out bad
> >>> results when we see them is reasonable!
> >>
> >>         Okay so this would turn into:
> >>
> >> base latency to base latency + 30 ms:                           green
> >> base latency + 31 ms to base latency + 100 ms:          yellow
> >> base latency + 101 ms to base latency + 200 ms:         orange?
> >> base latency + 201 ms to base latency + 500 ms:         red
> >> base latency + 501 ms to base latency + 1000 ms:        fire
> >> base latency + 1001 ms to infinity:                        fire & brimstone
> >>
> >> correct?
> >>
> >>
> >>> -Toke
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67



^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-25  4:26                                                           ` Simon Barber
@ 2015-04-25  6:03                                                             ` Sebastian Moeller
  2015-04-27 16:39                                                               ` Dave Taht
  2015-05-06  5:08                                                               ` Simon Barber
  0 siblings, 2 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-25  6:03 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

Hi Simon, hi List

On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:

> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.

	If your VoIP numbers are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way latency (see ITU-T G.114, Fig. 1). I am not "married" to the ITU numbers, but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don't know where they are coming from ;) )

Best Regards
	Sebastian


> 
> Simon
> 
> Sent with AquaMail for Android
> http://www.aqua-mail.com
> 
> 
> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
> 
>> simon all your numbers are too large by at least a factor of 2. I
>> think also you are thinking about total latency, rather than induced
>> latency and jitter.
>> 
>> Please see my earlier email laying out the bands. And gettys' manifesto.
>> 
>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>> you want, and a latency increase of 30ms is a proxy for also holding
>> jitter that low.
>> 
>> 
>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>> > I think it might be useful to have a 'latency guide' for users. It would say
>> > things like
>> >
>> > 100ms - VoIP applications work well
>> > 250ms - VoIP applications - conversation is not as natural as it could be,
>> > although users may not notice this.

	The only way to detect whether a conversation is natural is if users notice, I would say...

>> > 500ms - VoIP applications begin to have awkward pauses in conversation.
>> > 1000ms - VoIP applications have significant annoying pauses in conversation.
>> > 2000ms - VoIP unusable for most interactive conversations.
>> >
>> > 0-50ms - web pages load snappily
>> > 250ms - web pages can often take an extra second to appear, even on the
>> > highest bandwidth links
>> > 1000ms - web pages load significantly slower than they should, taking
>> > several extra seconds to appear, even on the highest bandwidth links
>> > 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>> > seconds of delays for pages to load, even on the highest bandwidth links.
>> >
>> > Gaming.... some kind of guide here....
>> >
>> > Simon
>> >
>> >
>> >
>> >
>> > On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>> >>
>> >> Hi Toke,
>> >>
>> >> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>> >>
>> >>> Sebastian Moeller <moeller0@gmx.de> writes:
>> >>>
>> >>>> I know this is not perfect and the numbers will probably require
>> >>>> severe "bike-shedding”
>> >>>
>> >>> Since you're literally asking for it... ;)
>> >>>
>> >>>
>> >>> In this case we're talking about *added* latency. So the ambition should
>> >>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>> >>> that proper application of a good queue management algorithm can keep it
>> >>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>> >>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>> >>
>> >>         Oh, I can get behind that easily, I just thought basing the limits
>> >> on externally relevant total latency thresholds would directly tell the user
>> >> which applications might run well on his link. Sure this means that people
>> >> on a satellite link most likely will miss out the acceptable voip threshold
>> >> by their base-latency alone, but guess what telephony via satellite leaves
>> >> something to be desired. That said if the alternative is no telephony I
>> >> would take 1 second one-way delay any day ;).
>> >>         What I liked about fixed thresholds is that the test would give a
>> >> good indication what kind of uses are going to work well on the link under
>> >> load, given that during load both base and induced latency come into play. I
>> >> agree that 300ms as first threshold is rather unambiguous though (and I am
>> >> certain that remote X11 will require a massively lower RTT unless one likes
>> >> to think of remote desktop as an oil tanker simulator ;) )
>> >>
>> >>> The other increments I have less opinions about, but 100 ms does seem to
>> >>> be a nice round number, so do yellow from 30-100 ms, then start with the
>> >>> reds somewhere above that, and range up into the deep red / purple /
>> >>> black with skulls and fiery death as we go nearer and above one second?
>> >>>
>> >>>
>> >>> I very much think that raising peoples expectations and being quite
>> >>> ambitious about what to expect is an important part of this. Of course
>> >>> the base latency is going to vary, but the added latency shouldn't. And
>> >>> sine we have the technology to make sure it doesn't, calling out bad
>> >>> results when we see them is reasonable!
>> >>
>> >>         Okay so this would turn into:
>> >>
>> >> base latency to base latency + 30 ms:                           green
>> >> base latency + 31 ms to base latency + 100 ms:          yellow
>> >> base latency + 101 ms to base latency + 200 ms:         orange?
>> >> base latency + 201 ms to base latency + 500 ms:         red
>> >> base latency + 501 ms to base latency + 1000 ms:        fire
>> >> base latency + 1001 ms to infinity:                       fire & brimstone
>> >>
>> >> correct?
>> >>
>> >>
>> >>> -Toke
>> >>
>> >> _______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >
>> >
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>> 
>> 
>> 
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>> 
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> 
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-25  6:03                                                             ` Sebastian Moeller
@ 2015-04-27 16:39                                                               ` Dave Taht
  2015-04-28  7:18                                                                 ` Sebastian Moeller
  2015-05-06  5:08                                                               ` Simon Barber
  1 sibling, 1 reply; 127+ messages in thread
From: Dave Taht @ 2015-04-27 16:39 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

On Fri, Apr 24, 2015 at 11:03 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Simon, hi List
>
> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>
>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>
>         If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )

At one level I am utterly prepared to set new (and lower) standards
for latency, and not necessarily pay attention to compromise-driven
standards processes established in the 70s and 80s, but to the actual
user experience numbers that Jim cited in the fq+aqm manifesto on his
blog.

I consider induced latencies of 30ms as a "green" band because that is
the outer limit of the range modern aqm technologies can achieve (fq
can get closer to 0). There was a lot of debate about 20ms being the
right figure for induced latency and/or jitter, a year or two back,
and we settled on 30ms for both, so that number is already a
compromise figure.

It is highly likely that folk here are not aware of the extraordinary
amount of debate that went into deciding the ultimate ATM cell size
back in the day. The EU wanted 32 bytes, the US 64 (48 was the eventual
compromise), both because that was basically a good size for the local
continental distance and echo cancellation stuff at the time.

In the case of voip, jitter is actually more important than latency.
Modern codecs and coding techniques can tolerate 30ms of jitter, just
barely, without sound artifacts. >60ms, boom, crackle, hiss.
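
For reference, a sketch of how a test could report both figures from its latency probes -- the sample list is made up, and the smoothing is the RFC 3550 interarrival-jitter estimator applied to latency samples:

# Sketch: derive induced latency and a jitter estimate from per-probe latency
# samples (milliseconds; the sample list below is made up).
samples = [42, 45, 41, 78, 95, 60, 130, 88, 44, 47]

base = min(samples)                        # idle / base latency estimate
worst_induced = max(s - base for s in samples)

jitter = 0.0
for prev, cur in zip(samples, samples[1:]):
    jitter += (abs(cur - prev) - jitter) / 16.0   # RFC 3550-style smoothing

print(f"base {base} ms, worst induced {worst_induced} ms, jitter ~{jitter:.1f} ms")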


> Best Regards
>         Sebastian
>
>
>>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>>
>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>>> simon all your numbers are too large by at least a factor of 2. I
>>> think also you are thinking about total latency, rather than induced
>>> latency and jitter.
>>>
>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>
>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>> you want, and a latency increase of 30ms is a proxy for also holding
>>> jitter that low.
>>>
>>>
>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>> > I think it might be useful to have a 'latency guide' for users. It would say
>>> > things like
>>> >
>>> > 100ms - VoIP applications work well
>>> > 250ms - VoIP applications - conversation is not as natural as it could be,
>>> > although users may not notice this.
>
>         The only way to detect whether a conversation is natural is if users notice, I would say...
>
>>> > 500ms - VoIP applications begin to have awkward pauses in conversation.
>>> > 1000ms - VoIP applications have significant annoying pauses in conversation.
>>> > 2000ms - VoIP unusable for most interactive conversations.
>>> >
>>> > 0-50ms - web pages load snappily
>>> > 250ms - web pages can often take an extra second to appear, even on the
>>> > highest bandwidth links
>>> > 1000ms - web pages load significantly slower than they should, taking
>>> > several extra seconds to appear, even on the highest bandwidth links
>>> > 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>> > seconds of delays for pages to load, even on the highest bandwidth links.
>>> >
>>> > Gaming.... some kind of guide here....
>>> >
>>> > Simon
>>> >
>>> >
>>> >
>>> >
>>> > On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>> >>
>>> >> Hi Toke,
>>> >>
>>> >> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>> >>
>>> >>> Sebastian Moeller <moeller0@gmx.de> writes:
>>> >>>
>>> >>>> I know this is not perfect and the numbers will probably require
>>> >>>> severe "bike-shedding”
>>> >>>
>>> >>> Since you're literally asking for it... ;)
>>> >>>
>>> >>>
>>> >>> In this case we're talking about *added* latency. So the ambition should
>>> >>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>> >>> that proper application of a good queue management algorithm can keep it
>>> >>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>> >>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>> >>
>>> >>         Oh, I can get behind that easily, I just thought basing the limits
>>> >> on externally relevant total latency thresholds would directly tell the user
>>> >> which applications might run well on his link. Sure this means that people
>>> >> on a satellite link most likely will miss out the acceptable voip threshold
>>> >> by their base-latency alone, but guess what telephony via satellite leaves
>>> >> something to be desired. That said if the alternative is no telephony I
>>> >> would take 1 second one-way delay any day ;).
>>> >>         What I liked about fixed thresholds is that the test would give a
>>> >> good indication what kind of uses are going to work well on the link under
>>> >> load, given that during load both base and induced latency come into play. I
>>> >> agree that 300ms as first threshold is rather unambiguous though (and I am
>>> >> certain that remote X11 will require a massively lower RTT unless one likes
>>> >> to think of remote desktop as an oil tanker simulator ;) )
>>> >>
>>> >>> The other increments I have less opinions about, but 100 ms does seem to
>>> >>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>> >>> reds somewhere above that, and range up into the deep red / purple /
>>> >>> black with skulls and fiery death as we go nearer and above one second?
>>> >>>
>>> >>>
>>> >>> I very much think that raising peoples expectations and being quite
>>> >>> ambitious about what to expect is an important part of this. Of course
>>> >>> the base latency is going to vary, but the added latency shouldn't. And
>>> >>> sine we have the technology to make sure it doesn't, calling out bad
>>> >>> results when we see them is reasonable!
>>> >>
>>> >>         Okay so this would turn into:
>>> >>
>>> >> base latency to base latency + 30 ms:                           green
>>> >> base latency + 31 ms to base latency + 100 ms:          yellow
>>> >> base latency + 101 ms to base latency + 200 ms:         orange?
>>> >> base latency + 201 ms to base latency + 500 ms:         red
>>> >> base latency + 501 ms to base latency + 1000 ms:        fire
>>> >> base latency + 1001 ms to infinity:              fire & brimstone
>>> >>
>>> >> correct?
>>> >>
>>> >>
>>> >>> -Toke
>>> >>
>>> >> _______________________________________________
>>> >> Bloat mailing list
>>> >> Bloat@lists.bufferbloat.net
>>> >> https://lists.bufferbloat.net/listinfo/bloat
>>> >
>>> >
>>> > _______________________________________________
>>> > Bloat mailing list
>>> > Bloat@lists.bufferbloat.net
>>> > https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-27 16:39                                                               ` Dave Taht
@ 2015-04-28  7:18                                                                 ` Sebastian Moeller
  2015-04-28  8:01                                                                   ` David Lang
  2015-04-28 14:02                                                                   ` Dave Taht
  0 siblings, 2 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-28  7:18 UTC (permalink / raw)
  To: Dave Täht; +Cc: bloat

Hi Dave,

On Apr 27, 2015, at 18:39 , Dave Taht <dave.taht@gmail.com> wrote:

> On Fri, Apr 24, 2015 at 11:03 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>> Hi Simon, hi List
>> 
>> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>> 
>>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>> 
>>        If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
> 
> At one level I am utterly prepared to set new (And lower) standards
> for latency, and not necessarily pay attention to compromise driven
> standards processes established in the 70s and 80s, but to the actual
> user experience numbers that jim cited in the fq+aqm manefesto on his
> blog.

	I am not sure I got the right one; could you please post a link to the document you are referring to? My personal issue with new standards is that it is going to be harder to convince others that these are real and not simply selected to push our agenda, hence using other people's numbers, preferably numbers backed up by research ;) I also note that the ITU numbers I dragged into the discussion are specified as mouth-to-ear (one-way) delay, so for intermediate buffering the thresholds need to be lower to allow for the sampling interval (I think typically 10ms for the usual codecs G.711 and G.722) plus further sender and receiver processing; so I guess for the ITU thresholds we should subtract, say, 30ms for processing and then double the remainder to go from one-way delay to RTT. Now I am amazed how large the resulting RTTs actually are, so I assume I need to scrutinize the psychophysics experiments that hopefully underlie those numbers...
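
As a back-of-the-envelope sketch of that conversion (the 30ms per-direction processing allowance and the one-way budgets below are illustrative assumptions, not figures taken from the ITU text):

def rtt_budget_ms(mouth_to_ear_ms, processing_ms=30):
    # Subtract an assumed per-direction processing allowance from the one-way
    # mouth-to-ear budget, then double the remainder to get a network RTT budget.
    return 2 * (mouth_to_ear_ms - processing_ms)

for one_way in (150, 250, 400):        # illustrative one-way budgets in ms
    print(one_way, "ms one-way ->", rtt_budget_ms(one_way), "ms network RTT")
# 150 -> 240, 250 -> 440, 400 -> 740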

> 
> I consider induced latencies of 30ms as a "green" band because that is
> the outer limit of the range modern aqm technologies can achieve (fq
> can get closer to 0). There was a lot of debate about 20ms being the
> right figure for induced latency and/or jitter, a year or two back,
> and we settled on 30ms for both, so that number is already a
> compromise figure.

	Ah, I think someone brought this up already: do we need to make allowances for slow links? If a full packet traversal is already 16ms, can we really expect 30ms? And should we even care? I mean, a slow link is a slow link and will have some drawbacks; maybe we should just expose those instead of rationalizing them away. On the other hand, I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm, as long as the link behaves well nothing needs changing).
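
For a sense of where a figure like 16ms per packet comes from, a minimal sketch of the serialization delay of one MTU-sized packet at a few slow uplink rates (the rates are illustrative, and ATM/PPPoE framing overhead is ignored):

MTU_BITS = 1500 * 8                     # one full-size packet

for rate_kbps in (512, 768, 1024, 2048):
    delay_ms = MTU_BITS / (rate_kbps * 1000) * 1000
    print(f"{rate_kbps:5d} kbit/s -> {delay_ms:5.1f} ms per 1500-byte packet")
# 512 -> 23.4 ms, 768 -> 15.6 ms, 1024 -> 11.7 ms, 2048 -> 5.9 ms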

> 
> It is highly likely that folk here are not aware of the extra-ordinary
> amount of debate that went into deciding the ultimate ATM cell size
> back in the day. The eu wanted 32 bytes, the US 48, both because that
> was basically a good size for the local continental distance and echo
> cancellation stuff, at the time.
> 
> In the case of voip, jitter is actually more important than latency.
> Modern codecs and coding techniques can tolerate 30ms of jitter, just
> barely, without sound artifacts. >60ms, boom, crackle, hiss.

	Ah, and here is where I understand why my simplistic model from above fails: induced latency will contribute significantly to jitter and hence is a good proxy for link suitability for real-time applications. So I agree that using the induced latency as the measure to base the color bands on sounds like a good approach.


> 
> 
>> Best Regards
>>        Sebastian
>> 
>> 
>>> 
>>> Simon
>>> 
>>> Sent with AquaMail for Android
>>> http://www.aqua-mail.com
>>> 
>>> 
>>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>> 
>>>> simon all your numbers are too large by at least a factor of 2. I
>>>> think also you are thinking about total latency, rather than induced
>>>> latency and jitter.
>>>> 
>>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>> 
>>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>>> you want, and a latency increase of 30ms is a proxy for also holding
>>>> jitter that low.
>>>> 
>>>> 
>>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>>> things like
>>>>> 
>>>>> 100ms - VoIP applications work well
>>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>>> although users may not notice this.
>> 
>>        The only way to detect whether a conversation is natural is if users notice, I would say...
>> 
>>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>> 
>>>>> 0-50ms - web pages load snappily
>>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>>> highest bandwidth links
>>>>> 1000ms - web pages load significantly slower than they should, taking
>>>>> several extra seconds to appear, even on the highest bandwidth links
>>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>> 
>>>>> Gaming.... some kind of guide here....
>>>>> 
>>>>> Simon
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>>> 
>>>>>> Hi Toke,
>>>>>> 
>>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>> 
>>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>> 
>>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>>> severe "bike-shedding”
>>>>>>> 
>>>>>>> Since you're literally asking for it... ;)
>>>>>>> 
>>>>>>> 
>>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>>> 
>>>>>>        Oh, I can get behind that easily, I just thought basing the limits
>>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>>> which applications might run well on his link. Sure this means that people
>>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>>> something to be desired. That said if the alternative is no telephony I
>>>>>> would take 1 second one-way delay any day ;).
>>>>>>        What I liked about fixed thresholds is that the test would give a
>>>>>> good indication what kind of uses are going to work well on the link under
>>>>>> load, given that during load both base and induced latency come into play. I
>>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>> 
>>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>> 
>>>>>>> 
>>>>>>> I very much think that raising peoples expectations and being quite
>>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>>> results when we see them is reasonable!
>>>>>> 
>>>>>>        Okay so this would turn into:
>>>>>> 
>>>>>> base latency to base latency + 30 ms:                           green
>>>>>> base latency + 31 ms to base latency + 100 ms:          yellow
>>>>>> base latency + 101 ms to base latency + 200 ms:         orange?
>>>>>> base latency + 201 ms to base latency + 500 ms:         red
>>>>>> base latency + 501 ms to base latency + 1000 ms:        fire
>>>>>> base latency + 1001 ms to infinity:              fire & brimstone
>>>>>> 
>>>>>> correct?
>>>>>> 
>>>>>> 
>>>>>>> -Toke
>>>>>> 
>>>>>> _______________________________________________
>>>>>> Bloat mailing list
>>>>>> Bloat@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>> 
>>>>> 
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>> 
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>> 
>>> 
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>> 
> 
> 
> 
> -- 
> Dave Täht
> Open Networking needs **Open Source Hardware**
> 
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  7:18                                                                 ` Sebastian Moeller
@ 2015-04-28  8:01                                                                   ` David Lang
  2015-04-28  8:19                                                                     ` Toke Høiland-Jørgensen
                                                                                       ` (2 more replies)
  2015-04-28 14:02                                                                   ` Dave Taht
  1 sibling, 3 replies; 127+ messages in thread
From: David Lang @ 2015-04-28  8:01 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

On Tue, 28 Apr 2015, Sebastian Moeller wrote:

>>
>> I consider induced latencies of 30ms as a "green" band because that is
>> the outer limit of the range modern aqm technologies can achieve (fq
>> can get closer to 0). There was a lot of debate about 20ms being the
>> right figure for induced latency and/or jitter, a year or two back,
>> and we settled on 30ms for both, so that number is already a
>> compromise figure.
>
> 	Ah, I think someone brought this up already, do we need to make 
> allowances for slow links? If a full packet traversal is already 16ms can we 
> really expect 30ms? And should we even care, I mean, a slow link is a slow 
> link and will have some drawbacks maybe we should just expose those instead of 
> rationalizing them away? On the other hand I tend to think that in the end it 
> is all about the cumulative performance of the link for most users, i.e. if 
> the link allows glitch-free voip while heavy up- and downloads go on, normal 
> users should not care one iota what the induced latency actually is (aqm or no 
> aqm as long as the link behaves well nothing needs changing)
>
>>
>> It is highly likely that folk here are not aware of the extra-ordinary
>> amount of debate that went into deciding the ultimate ATM cell size
>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>> was basically a good size for the local continental distance and echo
>> cancellation stuff, at the time.
>>
>> In the case of voip, jitter is actually more important than latency.
>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>
> 	Ah, and here is were I understand why my simplistic model from above 
> fails; induced latency will contribute significantly to jitter and hence is a 
> good proxy for link-suitability for real-time applications. So I agree using 
> the induced latency as measure to base the color bands from sounds like a good 
> approach.
>

Voice is actually remarkably tolerant of pure latency. While 60ms of jitter 
makes a connection almost unusable, a few hundred ms of consistent latency 
isn't a problem. IIRC (from my college days when ATM was the new, hot 
technology) you have to get up to around a second of latency before pure, 
consistent latency starts to break things.

Gaming and high frequency trading care about the minimum latency a LOT, but most 
other things are far more sensitive to jitter than pure latency. [1]

The trouble with bufferbloat induced latency is that it is highly variable based 
on exactly how much other data is in the queue, so under the wrong conditions, 
all latency caused by buffering shows up as jitter.
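
A minimal sketch of that distinction, with made-up RTT samples: induced latency is 
measured against the idle baseline, while jitter is the sample-to-sample variation 
that a de-jitter buffer has to absorb.

import statistics

idle_rtt_ms = 40.0
loaded_rtts_ms = [45, 180, 95, 250, 60, 210, 120]   # hypothetical samples under load

induced = [r - idle_rtt_ms for r in loaded_rtts_ms]
print("max induced latency:", max(induced), "ms")
print("mean induced latency:", round(statistics.mean(induced), 1), "ms")

# One common jitter proxy: the mean absolute difference between consecutive samples.
jitter = statistics.mean(abs(a - b) for a, b in zip(loaded_rtts_ms, loaded_rtts_ms[1:]))
print("jitter (mean |delta| between samples):", round(jitter, 1), "ms")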

David Lang

[1] pure latency will degrade the experience for many things, but usually in a 
fairly graceful manner.


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  8:01                                                                   ` David Lang
@ 2015-04-28  8:19                                                                     ` Toke Høiland-Jørgensen
  2015-04-28 15:42                                                                       ` David Lang
  2015-04-28  8:38                                                                     ` Sebastian Moeller
  2015-04-28 11:04                                                                     ` Mikael Abrahamsson
  2 siblings, 1 reply; 127+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-28  8:19 UTC (permalink / raw)
  To: David Lang; +Cc: bloat

David Lang <david@lang.hm> writes:

> Voice is actually remarkably tolerant of pure latency. While 60ms of
> jitter makes a connection almost unusalbe, a few hundred ms of
> consistant latency isn't a problem. IIRC (from my college days when
> ATM was the new, hot technology) you have to get up to around a second
> of latency before pure-consistant latency starts to break things.

Well, isn't that more a case of "the human brain will compensate for the
latency"? Sure, you *can* talk to someone with half a second of delay,
but it's bloody *annoying*. :P

That, for me, is the main reason to go with lower figures. I don't want
to just be able to physically talk with someone without the codec
breaking, I want to be able to *enjoy* the experience and not be totally
exhausted by latency fatigue afterwards.

One of the things that really struck a chord with me was hearing the
people from the LoLa project
(http://www.conservatorio.trieste.it/artistica/ricerca/progetto-lola-low-latency/lola-case-study.pdf)
talk about how using their big fancy concert video conferencing system
to just talk to each other, it was like having a real face-to-face
conversation with none of the annoyances of regular video chat.

-Toke

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  8:01                                                                   ` David Lang
  2015-04-28  8:19                                                                     ` Toke Høiland-Jørgensen
@ 2015-04-28  8:38                                                                     ` Sebastian Moeller
  2015-04-28 12:09                                                                       ` Rich Brown
  2015-04-28 15:39                                                                       ` David Lang
  2015-04-28 11:04                                                                     ` Mikael Abrahamsson
  2 siblings, 2 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-28  8:38 UTC (permalink / raw)
  To: David Lang; +Cc: bloat

Hi David,

On Apr 28, 2015, at 10:01 , David Lang <david@lang.hm> wrote:

> On Tue, 28 Apr 2015, Sebastian Moeller wrote:
> 
>>> 
>>> I consider induced latencies of 30ms as a "green" band because that is
>>> the outer limit of the range modern aqm technologies can achieve (fq
>>> can get closer to 0). There was a lot of debate about 20ms being the
>>> right figure for induced latency and/or jitter, a year or two back,
>>> and we settled on 30ms for both, so that number is already a
>>> compromise figure.
>> 
>> 	Ah, I think someone brought this up already, do we need to make allowances for slow links? If a full packet traversal is already 16ms can we really expect 30ms? And should we even care, I mean, a slow link is a slow link and will have some drawbacks maybe we should just expose those instead of rationalizing them away? On the other hand I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm as long as the link behaves well nothing needs changing)
>> 
>>> 
>>> It is highly likely that folk here are not aware of the extra-ordinary
>>> amount of debate that went into deciding the ultimate ATM cell size
>>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>>> was basically a good size for the local continental distance and echo
>>> cancellation stuff, at the time.
>>> 
>>> In the case of voip, jitter is actually more important than latency.
>>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>> 
>> 	Ah, and here is were I understand why my simplistic model from above fails; induced latency will contribute significantly to jitter and hence is a good proxy for link-suitability for real-time applications. So I agree using the induced latency as measure to base the color bands from sounds like a good approach.
>> 
> 
> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter makes a connection almost unusalbe, a few hundred ms of consistant latency isn't a problem. IIRC (from my college days when ATM was the new, hot technology) you have to get up to around a second of latency before pure-consistant latency starts to break things.

	Well, what I want to see is a study, preferably psychophysics rather than modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds of delay if need be, but the goal should be fluent and seamless conversation, not interleaved monologues. Thanks for giving a bound for jitter; do you have any reference for perceptual jitter thresholds or some such?


> 
> Gaming and high frequency trading care about the minimum latency a LOT. but most other things are far more sentitive to jitter than pure latency. [1]

	Sure, but latency is easy to “lose” and impossible to reclaim, so we should aim for the lowest latency ;). Now, as long as jitter has a bound, one can trade jitter for latency by simply buffering more at the receiver, thereby ironing out (part of) the jitter while introducing additional latency. This is one reason why I still think that absolute latency thresholds have some value, as they would allow one to assess how much of a “budget” there is to flatten out jitter, but I digress. I also now think that conflating absolute latency and bufferbloat will not really help (unless everybody understands induced latency by heart ;) )…
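
A toy sketch of that trade (all numbers invented): a fixed playout delay at the receiver converts bounded jitter into a constant amount of extra latency.

def playout_deadline_ms(observed_delays_ms, jitter_bound_ms):
    # Play every packet out at the earliest observed delay plus the jitter bound,
    # so up to jitter_bound_ms of variation is absorbed as fixed extra latency.
    return min(observed_delays_ms) + jitter_bound_ms

arrivals_ms = [52, 61, 55, 78, 50, 70]          # hypothetical one-way delays
deadline = playout_deadline_ms(arrivals_ms, jitter_bound_ms=30)
late = [d for d in arrivals_ms if d > deadline]
print("playout deadline:", deadline, "ms; packets missing it:", late)
# deadline = 50 + 30 = 80 ms; every sample here arrives in time.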

> 
> The trouble with bufferbloat induced latency is that it is highly variable based on exactly how much other data is in the queue, so under the wrong conditions, all latency caused by buffering shows up as jitter.

	That is how I understood Dave’s mail, thanks for confirming that.

Best Regards
	Sebastian

> 
> David Lang
> 
> [1] pure latency will degrade the experience for many things, but usually in a fairly graceful manner.
> 


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  8:01                                                                   ` David Lang
  2015-04-28  8:19                                                                     ` Toke Høiland-Jørgensen
  2015-04-28  8:38                                                                     ` Sebastian Moeller
@ 2015-04-28 11:04                                                                     ` Mikael Abrahamsson
  2015-04-28 11:49                                                                       ` Sebastian Moeller
  2015-04-28 14:06                                                                       ` Dave Taht
  2 siblings, 2 replies; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-04-28 11:04 UTC (permalink / raw)
  To: David Lang; +Cc: bloat

On Tue, 28 Apr 2015, David Lang wrote:

> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter 
> makes a connection almost unusalbe, a few hundred ms of consistant latency 
> isn't a problem. IIRC (from my college days when ATM was the new, hot 
> technology) you have to get up to around a second of latency before 
> pure-consistant latency starts to break things.

I would say most people start to have trouble talking to each other 
when the RTT exceeds around 500-600ms.

I mostly agree with 
http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/5125-delay-details.html 
but an RTT of over 500ms is not fun. You basically can't have a heated 
argument/discussion when the RTT is higher than this :P

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28 11:04                                                                     ` Mikael Abrahamsson
@ 2015-04-28 11:49                                                                       ` Sebastian Moeller
  2015-04-28 12:24                                                                         ` Mikael Abrahamsson
  2015-04-28 14:06                                                                       ` Dave Taht
  1 sibling, 1 reply; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-28 11:49 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: bloat

Hi Mikael,


On Apr 28, 2015, at 13:04 , Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Tue, 28 Apr 2015, David Lang wrote:
> 
>> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter makes a connection almost unusalbe, a few hundred ms of consistant latency isn't a problem. IIRC (from my college days when ATM was the new, hot technology) you have to get up to around a second of latency before pure-consistant latency starts to break things.
> 
> I would say most people start to get trouble when talking to each other when the RTT exceeds around 500-600ms.
> 
> I mostly agree with http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/5125-delay-details.html but RTT of over 500ms is not fun. You basically can't have a heated argument/discussion when the RTT is higher than this :P

	From "Table 4.1 Delay Specifications” of that link we basically have a recapitulation of the ITU-T G.114 source, one-way mouth to ear latency thresholds for acceptable voip performance. The rest of the link discusses additional sources of latency and should allow to come up with a reasonable estimate how much of the latency budget can be spend on the transit. So in my mind an decent thresholds would be (150ms mouth-to-ear-delay - sender-processing - receiver-processing) * 2. Then again I think the discussion turned to relating buffer-bloat inured latency as jitter source, so the thresholds should be framed in a jitter-budget, not pure latency ;).

Best Regards
	Sebastian


> 
> -- 
> Mikael Abrahamsson    email: swmike@swm.pp.se


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  8:38                                                                     ` Sebastian Moeller
@ 2015-04-28 12:09                                                                       ` Rich Brown
  2015-04-28 15:26                                                                         ` David Lang
  2015-04-28 15:39                                                                       ` David Lang
  1 sibling, 1 reply; 127+ messages in thread
From: Rich Brown @ 2015-04-28 12:09 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat


On Apr 28, 2015, at 4:38 AM, Sebastian Moeller <moeller0@gmx.de> wrote:

> 	Well, what I want to see is a study, preferably psychophysics not modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds de;ays if need be, but the goal should be fluent and seamless conversation not interleaved monologues. Thanks for giving a bound for jitter, do you have any reference for perceptional jitter thresholds or some such?

An anecdote (we don't need no stinkin' studies :-)

I frequently listen to the same interview on NPR twice: first at say, 6:20 am when the news is breaking, and then at the 8:20am rebroadcast.

The first interview is live, sometimes with significant satellite delays between the two parties. The sound quality is fine. But the pauses between question and answer (waiting for the satellite propagation) sometimes make the responder seem a little "slow witted" - as if they have to struggle to compose their response.

But the rebroadcast gets "tuned up" by NPR audio folks, and those pauses get edited out. I was amazed how the conversation takes on a completely different flavor: any negative impression goes away without that latency.

So, what lesson do I learn from this? Pure latency *does* affect the nature of the conversation - it may not be fluent and seamless if there's a satellite link's worth of latency involved. 

Although not being exhibited in this case, I can believe that jitter plays worse havoc on a conversation. I'll also bet that induced latency is a good proxy for jitter.

Rich

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28 11:49                                                                       ` Sebastian Moeller
@ 2015-04-28 12:24                                                                         ` Mikael Abrahamsson
  2015-04-28 13:44                                                                           ` Sebastian Moeller
  0 siblings, 1 reply; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-04-28 12:24 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1539 bytes --]

On Tue, 28 Apr 2015, Sebastian Moeller wrote:

> 	From "Table 4.1 Delay Specifications” of that link we basically 
> have a recapitulation of the ITU-T G.114 source, one-way mouth to ear 
> latency thresholds for acceptable voip performance. The rest of the link 
> discusses additional sources of latency and should allow to come up with 
> a reasonable estimate how much of the latency budget can be spend on the 
> transit. So in my mind an decent thresholds would be (150ms 
> mouth-to-ear-delay - sender-processing - receiver-processing) * 2. Then 
> again I think the discussion turned to relating buffer-bloat inured 
> latency as jitter source, so the thresholds should be framed in a 
> jitter-budget, not pure latency ;).

Yes, it's all about mouth-to-ear and then back again. I have historically 
been involved a few times in analyzing end-to-end latency when customer 
complaints came in about delay; it seemed that customers started 
complaining around 450-550 ms RTT (mouth-network-ear-mouth-network-ear).

This usually was a result of multiple PDV (Packet Delay Variation, a.k.a. 
jitter) buffers due to media conversions on the voice path, for instance 
when there was VoIP-TDM-VoIP-ATM-VoIP and potentially even more conversions 
due to VoIP/PSTN/Mobile interaction.

So this is one reason I am interested in the bufferbloat movement: with 
less bufferbloat one can get away with smaller PDV buffers, which means 
less end-to-end delay for realtime applications.
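
A rough sketch of that effect, with purely hypothetical per-conversion buffer 
sizes: each conversion point adds its own PDV buffer, and their delays simply 
add up along the path.

pdv_buffers_ms = {
    "VoIP -> TDM": 40,
    "TDM -> VoIP": 40,
    "VoIP -> ATM": 20,
    "ATM -> VoIP": 20,
}
# Every de-jitter buffer adds its full depth to the one-way mouth-to-ear delay.
print("extra one-way delay from PDV buffers:", sum(pdv_buffers_ms.values()), "ms")
# Less bufferbloat-induced jitter would let each of these buffers be sized smaller.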

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28 12:24                                                                         ` Mikael Abrahamsson
@ 2015-04-28 13:44                                                                           ` Sebastian Moeller
  2015-04-28 19:09                                                                             ` Rick Jones
  0 siblings, 1 reply; 127+ messages in thread
From: Sebastian Moeller @ 2015-04-28 13:44 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: bloat

Hi Mikael,


On Apr 28, 2015, at 14:24 , Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Tue, 28 Apr 2015, Sebastian Moeller wrote:
> 
>> 	From "Table 4.1 Delay Specifications” of that link we basically have a recapitulation of the ITU-T G.114 source, one-way mouth to ear latency thresholds for acceptable voip performance. The rest of the link discusses additional sources of latency and should allow to come up with a reasonable estimate how much of the latency budget can be spend on the transit. So in my mind an decent thresholds would be (150ms mouth-to-ear-delay - sender-processing - receiver-processing) * 2. Then again I think the discussion turned to relating buffer-bloat inured latency as jitter source, so the thresholds should be framed in a jitter-budget, not pure latency ;).
> 
> Yes, it's all about mouth-to-ear and then back again. I have historically been involved a few times in analyzing end-to-end latency when customer complaints came in about delay, it seemed that customers started complaining around 450-550 ms RTT (mouth-network-ear-mouth-network-ear).

	Ah, this fits with the ITU Figure 1 data: at ~250ms one-way delay they switch from “users very satisfied” to “users satisfied”, which also shows that the ITU had very patient subjects in their tests… So if we need to allow for sampling and de-jittering at both ends, costing say 50ms, we end up with an acceptable threshold of ~400ms network RTT for decent voip conversations. Actually, let's assume the sender takes 30ms for sampling and packetizing and the receiver spends the actual jitter in ms on its de-jittering filter/buffer; then we can draw the threshold as a function of the maximum latency increase under load...
	Do you have numbers for acceptable jitter levels?

> 
> This usually was a result of multiple PDV (Packet Delay Variation, a.k.a jitter) buffers due media conversions on the voice path,

	This sucks.

> for instance when there was VoIP-TDM-VoIP-ATM-VoIP and potentially even more conversions due to VoIP/PSTN/Mobile interaction.

	I hope the future will cut this down to at most one transition, or preferably none ;) (with both PSTN and TDM slowly going the way of the Dodo).

Best Regards
	Sebastian

> 
> So this is one reason I am interested in the bufferbloat movement, because with less bufferbloat then one can get away with smaller PDV buffers, which means less end-to-end delay for realtime applications.
> 
> -- 
> Mikael Abrahamsson    email: swmike@swm.pp.se


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  7:18                                                                 ` Sebastian Moeller
  2015-04-28  8:01                                                                   ` David Lang
@ 2015-04-28 14:02                                                                   ` Dave Taht
  1 sibling, 0 replies; 127+ messages in thread
From: Dave Taht @ 2015-04-28 14:02 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

On Tue, Apr 28, 2015 at 12:18 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Dave,
>
> On Apr 27, 2015, at 18:39 , Dave Taht <dave.taht@gmail.com> wrote:
>
>> On Fri, Apr 24, 2015 at 11:03 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>> Hi Simon, hi List
>>>
>>> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>>>
>>>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>>>
>>>        If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
>>
>> At one level I am utterly prepared to set new (And lower) standards
>> for latency, and not necessarily pay attention to compromise driven
>> standards processes established in the 70s and 80s, but to the actual
>> user experience numbers that jim cited in the fq+aqm manefesto on his
>> blog.
>
>         I am not sure I git the right one, could you please post a link to the document you are referring to?

I tend to refer to this as the fq+aqm "manifesto":

https://gettys.wordpress.com/2013/07/10/low-latency-requires-smart-queuing-traditional-aqm-is-not-enough/

Jim takes too long to get to the fq portion of it, though. This was
because the uphill battle with the ietf was all about e2e vs aqm
techniques, with FQ hardly on the table at all when we started.

Also I view many of the numbers he cited as *outer bounds* for
latency. While some might claim a band can make good music with 30ms
latency, I generally have to stay within 8 feet or less of the
drummer....

>My personal issue with new standards is that it is going to be harder to convince others that these are real and not simply selected to push our agenda., hence using other peoples numbers, preferably numbers backed up by research ;)

Sebastian pointed out to me privately about the old ATM dispute:

        "The way I heard the story, it was France pushing for 32 bytes
as this would allow a national net without the need for echo
cancelation, while the US already required echo cancelation and wanted
64 bytes. In true salomonic fashion 48 bytes was selected, pleasing no
one ;) (see http://cntic03.hit.bme.hu/meres/ATMFAQ/d7.htm). Nice
story."

>I also note that in the ITU numbers I dragged into the discussion the measurement pretends to be mouth to ear (one way) delay, so for intermediate buffering the thresholds need to be lower to allow for sampling interval (I think typically 10ms for the usual codecs G.711 and G.722), further sender processing and receiver processing, so I guess for the ITU thresholds we should subtract say 30ms for processing and then doube it to go from one-way delay to RTT. Now I am amazed how large the resulting RTTs actually are, so I assume I need to scrutinize the psycophysics experiments that hopefully underlay those numbers...

The analogy I use when discussing this with people in the real world
goes like this: "Here we are discussing this around a lunch table. A
single millisecond is about a foot, and I am about 3 feet from you, so
the ideal latency for conversation is much, much less than the maximum
laid out by multiple standards for voice. Shall we try to have this
conversation from 30ms/30 feet apart?"

Less latency = more intimacy. How many here have had a whispered
conversation into a lover's ear? Would it have been anywhere near as
good if you were across the hall?

Far be it from me to project internet latency reductions as being key
to achieving world peace and better mutual understanding[1], but this
simple comparison of real-world latencies to established standards
makes a ton of sense to me and everyone I have tried it on.

The existing standards for voice were driven by what was achievable at
the time, more than they were driven by psychoacoustics. I am glad
that the opus voice codec can get as low as 2.7ms latency, and sad
that we have to capture whole frames (~16ms) nowadays for video.
Perhaps we will see scanline video grabbers re-emerge as a viable
videoconferencing tool one day.

>
>>
>> I consider induced latencies of 30ms as a "green" band because that is
>> the outer limit of the range modern aqm technologies can achieve (fq
>> can get closer to 0). There was a lot of debate about 20ms being the
>> right figure for induced latency and/or jitter, a year or two back,
>> and we settled on 30ms for both, so that number is already a
>> compromise figure.
>
>         Ah, I think someone brought this up already, do we need to make allowances for slow links? If a full packet traversal is already 16ms can we really expect 30ms? And should we even care, I mean, a slow link is a slow link and will have some drawbacks maybe we should just expose those instead of rationalizing them away? On the other hand I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm as long as the link behaves well nothing needs changing)
>
>>
>> It is highly likely that folk here are not aware of the extra-ordinary
>> amount of debate that went into deciding the ultimate ATM cell size
>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>> was basically a good size for the local continental distance and echo
>> cancellation stuff, at the time.
>>
>> In the case of voip, jitter is actually more important than latency.
>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>
>         Ah, and here is were I understand why my simplistic model from above fails; induced latency will contribute significantly to jitter and hence is a good proxy for link-suitability for real-time applications. So I agree using the induced latency as measure to base the color bands from sounds like a good approach.

Yea! Let's do that!

[1] http://the-edge.blogspot.com/2003_07_27_the-edge_archive.html#105975402040143728
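
For concreteness, a small sketch of how the scheme above could be encoded (the 
band edges are the ones floated in this thread, not anything the site has 
committed to):

BANDS_MS = [(30, "green"), (100, "yellow"), (200, "orange"),
            (500, "red"), (1000, "fire")]

def grade(base_rtt_ms, loaded_rtt_ms):
    # Grade the *induced* latency: latency under load minus the idle baseline.
    induced = loaded_rtt_ms - base_rtt_ms
    for limit, colour in BANDS_MS:
        if induced <= limit:
            return colour
    return "fire & brimstone"

print(grade(40, 65))     # 25 ms induced  -> green
print(grade(40, 400))    # 360 ms induced -> red
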
>
>
>>
>>
>>> Best Regards
>>>        Sebastian
>>>
>>>
>>>>
>>>> Simon
>>>>
>>>> Sent with AquaMail for Android
>>>> http://www.aqua-mail.com
>>>>
>>>>
>>>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>>>
>>>>> simon all your numbers are too large by at least a factor of 2. I
>>>>> think also you are thinking about total latency, rather than induced
>>>>> latency and jitter.
>>>>>
>>>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>>>
>>>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>>>> you want, and a latency increase of 30ms is a proxy for also holding
>>>>> jitter that low.
>>>>>
>>>>>
>>>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>>>> things like
>>>>>>
>>>>>> 100ms - VoIP applications work well
>>>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>>>> although users may not notice this.
>>>
>>>        The only way to detect whether a conversation is natural is if users notice, I would say...
>>>
>>>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>>>
>>>>>> 0-50ms - web pages load snappily
>>>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>>>> highest bandwidth links
>>>>>> 1000ms - web pages load significantly slower than they should, taking
>>>>>> several extra seconds to appear, even on the highest bandwidth links
>>>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>>>
>>>>>> Gaming.... some kind of guide here....
>>>>>>
>>>>>> Simon
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>>>>
>>>>>>> Hi Toke,
>>>>>>>
>>>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>>>
>>>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>>>
>>>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>>>> severe "bike-shedding”
>>>>>>>>
>>>>>>>> Since you're literally asking for it... ;)
>>>>>>>>
>>>>>>>>
>>>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>>>>
>>>>>>>        Oh, I can get behind that easily, I just thought basing the limits
>>>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>>>> which applications might run well on his link. Sure this means that people
>>>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>>>> something to be desired. That said if the alternative is no telephony I
>>>>>>> would take 1 second one-way delay any day ;).
>>>>>>>        What I liked about fixed thresholds is that the test would give a
>>>>>>> good indication what kind of uses are going to work well on the link under
>>>>>>> load, given that during load both base and induced latency come into play. I
>>>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>>>
>>>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>>>
>>>>>>>>
>>>>>>>> I very much think that raising peoples expectations and being quite
>>>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>>>> results when we see them is reasonable!
>>>>>>>
>>>>>>>        Okay so this would turn into:
>>>>>>>
>>>>>>> base latency to base latency + 30 ms:                           green
>>>>>>> base latency + 31 ms to base latency + 100 ms:          yellow
>>>>>>> base latency + 101 ms to base latency + 200 ms:         orange?
>>>>>>> base latency + 201 ms to base latency + 500 ms:         red
>>>>>>> base latency + 501 ms to base latency + 1000 ms:        fire
>>>>>>> base latency + 1001 ms to infinity:             fire & brimstone
>>>>>>>
>>>>>>> correct?
>>>>>>>
>>>>>>>
>>>>>>>> -Toke
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Bloat mailing list
>>>>>>> Bloat@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Bloat mailing list
>>>>>> Bloat@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Dave Täht
>>>>> Open Networking needs **Open Source Hardware**
>>>>>
>>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>>
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28 11:04                                                                     ` Mikael Abrahamsson
  2015-04-28 11:49                                                                       ` Sebastian Moeller
@ 2015-04-28 14:06                                                                       ` Dave Taht
  1 sibling, 0 replies; 127+ messages in thread
From: Dave Taht @ 2015-04-28 14:06 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: bloat

On Tue, Apr 28, 2015 at 4:04 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Tue, 28 Apr 2015, David Lang wrote:
>
>> Voice is actually remarkably tolerant of pure latency. While 60ms of
>> jitter makes a connection almost unusalbe, a few hundred ms of consistant
>> latency isn't a problem. IIRC (from my college days when ATM was the new,
>> hot technology) you have to get up to around a second of latency before
>> pure-consistant latency starts to break things.
>
>
> I would say most people start to get trouble when talking to each other when
> the RTT exceeds around 500-600ms.
>
> I mostly agree with
> http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/5125-delay-details.html
> but RTT of over 500ms is not fun. You basically can't have a heated
> argument/discussion when the RTT is higher than this :P

Thx for busting me up this morning!

But what you say is not strictly true. When the RTT goes way, way, way
up (as in email) it becomes much more possible to have an unresolvable
heated argument that cannot be shut down without invocation of
Godwin's law.

Short RTTs (as in personal meetings - and perhaps, one day, in a more
bloat-free universe without annoying jitter) make it both more
possible to have a heated argument... and a resolution.

A shared beer helps, too. :)

> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28 12:09                                                                       ` Rich Brown
@ 2015-04-28 15:26                                                                         ` David Lang
  0 siblings, 0 replies; 127+ messages in thread
From: David Lang @ 2015-04-28 15:26 UTC (permalink / raw)
  To: Rich Brown; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 2073 bytes --]

On Tue, 28 Apr 2015, Rich Brown wrote:

> On Apr 28, 2015, at 4:38 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> 	Well, what I want to see is a study, preferably psychophysics not modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds de;ays if need be, but the goal should be fluent and seamless conversation not interleaved monologues. Thanks for giving a bound for jitter, do you have any reference for perceptional jitter thresholds or some such?
>
> An anecdote (we don't need no stinkin' studies :-)
>
> I frequently listen to the same interview on NPR twice: first at say, 6:20 am when the news is breaking, and then at the 8:20am rebroadcast.
>
> The first interview is live, sometimes with significant satellite delays between the two parties. The sound quality is fine. But the pauses between question and answer (waiting for the satellite propagation) sometimes make the responder seem a little "slow witted" - as if they have to struggle to compose their response.
>
> But the rebroadcast gets "tuned up" by NPR audio folks, and those pauses get edited out. I was amazed how the conversation takes on a completely different flavor: any negative impression goes away without that latency.
>
> So, what lesson do I learn from this? Pure latency *does* affect the nature of the conversation - it may not be fluent and seamless if there's a satellite link's worth of latency involved.
>
> Although not being exhibited in this case, I can believe that jitter plays worse havoc on a conversation. I'll also bet that induced latency is a good proxy for jitter.

satellite round trip latency is on the order of 1 second, which is at the far 
end of what can be tolerated for VoIP.
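
A quick propagation check (pure geometry, ignoring coding, framing and 
buffering): a single geostationary hop already costs roughly half a second of 
RTT, and a double hop approaches the one-second figure.

C_KM_PER_MS = 299_792.458 / 1000      # speed of light in km per millisecond
GEO_ALTITUDE_KM = 35_786              # geostationary orbit altitude

one_way_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_MS   # ground -> satellite -> ground
print("one-way propagation:", round(one_way_ms), "ms")    # ~239 ms
print("single-hop RTT:", round(2 * one_way_ms), "ms")      # ~477 ms
print("double-hop RTT:", round(4 * one_way_ms), "ms")      # ~955 ms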

Go back to the '80s and '90s when the phone companies were looking at converting 
from POTS long-distance lines to digital (with ATM) and there was a lot of work 
done at the time about voice communication and what 'sounds good'. This is a lot 
of what drove the ATM design: predictable latency.

David Lang

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  8:38                                                                     ` Sebastian Moeller
  2015-04-28 12:09                                                                       ` Rich Brown
@ 2015-04-28 15:39                                                                       ` David Lang
  1 sibling, 0 replies; 127+ messages in thread
From: David Lang @ 2015-04-28 15:39 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 5376 bytes --]

On Tue, 28 Apr 2015, Sebastian Moeller wrote:

> Hi David,
>
> On Apr 28, 2015, at 10:01 , David Lang <david@lang.hm> wrote:
>
>> On Tue, 28 Apr 2015, Sebastian Moeller wrote:
>>
>>>>
>>>> I consider induced latencies of 30ms as a "green" band because that is
>>>> the outer limit of the range modern aqm technologies can achieve (fq
>>>> can get closer to 0). There was a lot of debate about 20ms being the
>>>> right figure for induced latency and/or jitter, a year or two back,
>>>> and we settled on 30ms for both, so that number is already a
>>>> compromise figure.
>>>
>>> 	Ah, I think someone brought this up already, do we need to make allowances for slow links? If a full packet traversal is already 16ms can we really expect 30ms? And should we even care, I mean, a slow link is a slow link and will have some drawbacks maybe we should just expose those instead of rationalizing them away? On the other hand I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm as long as the link behaves well nothing needs changing)
>>>
>>>>
>>>> It is highly likely that folk here are not aware of the extra-ordinary
>>>> amount of debate that went into deciding the ultimate ATM cell size
>>>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>>>> was basically a good size for the local continental distance and echo
>>>> cancellation stuff, at the time.
>>>>
>>>> In the case of voip, jitter is actually more important than latency.
>>>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>>>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>>>
>>> 	Ah, and here is were I understand why my simplistic model from above fails; induced latency will contribute significantly to jitter and hence is a good proxy for link-suitability for real-time applications. So I agree using the induced latency as measure to base the color bands from sounds like a good approach.
>>>
>>
>> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter makes a connection almost unusalbe, a few hundred ms of consistant latency isn't a problem. IIRC (from my college days when ATM was the new, hot technology) you have to get up to around a second of latency before pure-consistant latency starts to break things.
>
> 	Well, what I want to see is a study, preferably psychophysics not modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds de;ays if need be, but the goal should be fluent and seamless conversation not interleaved monologues. Thanks for giving a bound for jitter, do you have any reference for perceptional jitter thresholds or some such?


Lots of this sort of work was done back in the late '80s, when ATM was being 
developed and long-distance digital networks were first being deployed.

>
>>
>> Gaming and high frequency trading care about the minimum latency a LOT. but most other things are far more sentitive to jitter than pure latency. [1]
>
> 	Sure, but it is easy to “loose” latency but impossible to reclaim, so we should aim for lowest latency ;) . Now as long as jitter has a bound one can trade jitter for latency, by simply buffering more at the receiver thereby ironing out (a part of the) the jitter while introducing additional latency. One reason why I still thing that absolute latency thresholds have some value as they would allow to assess how much of a “budget” one has to flatten out jitter, but I digress. I also think now, that conflating absolute latency and buffer bloat will not really help (unless everybody understands induced latency by heart ;) )….

I agree we should be aiming for the lowest latency we can reasonably maintain, 
but this topic started with the question of where to put the 'green' band on a 
latency test, and of labeling a point as 'VoIP stops working'.

Base latency + 30-60 ms of buffer-induced latency is a reasonable limit, with a 
cap somewhere in the 300-500 ms range (and it can be survived with a cap close 
to 1 sec, as long as the buffer-induced latency that causes jitter remains 
small).

As we are looking at what labels to put on the graphs, we need to decide if we 
want to pick examples based on jitter (i.e. relative to base latency) or 
absolutes (total latency).

Given the wide variation in base latency due to different technologies, I think 
it would be best to try and use relative latency examples if we can.
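
As a rough illustration only (not anyone's actual implementation), a relative
rating could be computed along these lines; the band boundaries are placeholder
values taken from the numbers floated in this thread:

def rate_induced_latency(idle_rtt_ms, loaded_rtt_ms):
    # rate by induced latency, i.e. latency relative to the idle baseline
    induced = max(0.0, loaded_rtt_ms - idle_rtt_ms)
    bands = [
        (30,   "green - excellent, little or no added queueing delay"),
        (100,  "yellow - noticeable added delay"),
        (200,  "orange - interactive apps start to suffer"),
        (500,  "red - VoIP and gaming degrade badly"),
        (1000, "fire - severe bufferbloat"),
    ]
    for limit, label in bands:
        if induced <= limit:
            return induced, label
    return induced, "fire & brimstone - unusable under load"

print(rate_induced_latency(idle_rtt_ms=48, loaded_rtt_ms=210))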


Changing topic slightly, I wonder if it would make sense to have an optional 
second column on the results page that shows a 'known good' or 'best known' 
report, so the user can see what they could be getting from their connection if 
they were using the right equipment.

David Lang

>>
>> The trouble with bufferbloat induced latency is that it is highly variable based on exactly how much other data is in the queue, so under the wrong conditions, all latency caused by buffering shows up as jitter.
>
> 	That is how I understood Dave’s mail, thanks for confirming that.
>
> Best Regards
> 	Sebastian
>
>>
>> David Lang
>>
>> [1] pure latency will degrade the experience for many things, but usually in a fairly graceful manner.
>>
>
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28  8:19                                                                     ` Toke Høiland-Jørgensen
@ 2015-04-28 15:42                                                                       ` David Lang
  0 siblings, 0 replies; 127+ messages in thread
From: David Lang @ 2015-04-28 15:42 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1564 bytes --]

On Tue, 28 Apr 2015, Toke Høiland-Jørgensen wrote:

> David Lang <david@lang.hm> writes:
>
>> Voice is actually remarkably tolerant of pure latency. While 60ms of
>> jitter makes a connection almost unusable, a few hundred ms of
>> consistent latency isn't a problem. IIRC (from my college days when
>> ATM was the new, hot technology) you have to get up to around a second
>> of latency before pure, consistent latency starts to break things.
>
> Well isn't that more a case of "the human brain will compensate for the
> latency". Sure, you *can* talk to someone with half a second of delay,
> but it's bloody *annoying*. :P

We aren't disagreeing here. "a few hundred ms of consistent latency" starts 
to top out around the half second range.

But if we are labeling something "VoIP breaks here", then it needs to be broken, 
not just annoying to some people.

David Lang

> That, for me, is the main reason to go with lower figures. I don't want
> to just be able to physically talk with someone without the codec
> breaking, I want to be able to *enjoy* the experience and not be totally
> exhausted by latency fatigue afterwards.
>
> One of the things that really struck a chord with me was hearing the
> people from the LoLa project
> (http://www.conservatorio.trieste.it/artistica/ricerca/progetto-lola-low-latency/lola-case-study.pdf)
> talk about how using their big fancy concert video conferencing system
> to just talk to each other, it was like having a real face-to-face
> conversation with none of the annoyances of regular video chat.
>
> -Toke
>

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-28 13:44                                                                           ` Sebastian Moeller
@ 2015-04-28 19:09                                                                             ` Rick Jones
  0 siblings, 0 replies; 127+ messages in thread
From: Rick Jones @ 2015-04-28 19:09 UTC (permalink / raw)
  To: bloat

On 04/28/2015 06:44 AM, Sebastian Moeller wrote:
> Ah, this fits with the ITU figure 1 data, at ~250ms one-way delay
> they switch from “users very satisfied” to “users satisfied”, also
> showing that the ITU had very patient subjects in their tests…

And/Or didn't want to upset constituents sending phone calls via GEO 
satellites...

rick jones


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-04-25  6:03                                                             ` Sebastian Moeller
  2015-04-27 16:39                                                               ` Dave Taht
@ 2015-05-06  5:08                                                               ` Simon Barber
  2015-05-06  8:50                                                                 ` Sebastian Moeller
  1 sibling, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-05-06  5:08 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

Hi Sebastian,

My numbers are what I've personally come up with after working for many 
years with VoIP - they have no other basis. One thing is that you have 
to compare apples to apples - the ITU numbers are for acoustic one-way 
delay. The poor state of the jitter buffer implementations that almost 
every VoIP app or device has means that to hit those acoustic delay 
numbers you need significantly lower network delays. Also note that these 
numbers are worst case, which must include a trip halfway around the globe 
- if you can hit the numbers with half-globe propagation then you will 
hit much better numbers for 'local calls'.

Simon


On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
> Hi Simon, hi List
>
> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>
>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
> 	If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
>
> Best Regards
> 	Sebastian
>
>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>>
>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>>> simon all your numbers are too large by at least a factor of 2. I
>>> think also you are thinking about total latency, rather than induced
>>> latency and jitter.
>>>
>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>
>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>> you want, and a latency increase of 30ms is a proxy for also holding
>>> jitter that low.
>>>
>>>
>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>> things like
>>>>
>>>> 100ms - VoIP applications work well
>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>> although users may not notice this.
> 	The only way to detect whether a conversation is natural is if users notice, I would say...
>
>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>
>>>> 0-50ms - web pages load snappily
>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>> highest bandwidth links
>>>> 1000ms - web pages load significantly slower than they should, taking
>>>> several extra seconds to appear, even on the highest bandwidth links
>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>
>>>> Gaming.... some kind of guide here....
>>>>
>>>> Simon
>>>>
>>>>
>>>>
>>>>
>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>> Hi Toke,
>>>>>
>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>
>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>
>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>> severe "bike-shedding”
>>>>>> Since you're literally asking for it... ;)
>>>>>>
>>>>>>
>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>>          Oh, I can get behind that easily, I just thought basing the limits
>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>> which applications might run well on his link. Sure this means that people
>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>> something to be desired. That said if the alternative is no telephony I
>>>>> would take 1 second one-way delay any day ;).
>>>>>          What I liked about fixed thresholds is that the test would give a
>>>>> good indication what kind of uses are going to work well on the link under
>>>>> load, given that during load both base and induced latency come into play. I
>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>
>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>
>>>>>>
>>>>>> I very much think that raising peoples expectations and being quite
>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>> results when we see them is reasonable!
>>>>>          Okay so this would turn into:
>>>>>
>>>>> base latency to base latency + 30 ms:                           green
>>>>> base latency + 31 ms to base latency + 100 ms:          yellow
>>>>> base latency + 101 ms to base latency + 200 ms:         orange?
>>>>> base latency + 201 ms to base latency + 500 ms:         red
>>>>> base latency + 501 ms to base latency + 1000 ms:        fire
>>>>> base latency + 1001 ms to infinity:
>>>>> fire & brimstone
>>>>>
>>>>> correct?
>>>>>
>>>>>
>>>>>> -Toke
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06  5:08                                                               ` Simon Barber
@ 2015-05-06  8:50                                                                 ` Sebastian Moeller
  2015-05-06 15:30                                                                   ` Jim Gettys
  0 siblings, 1 reply; 127+ messages in thread
From: Sebastian Moeller @ 2015-05-06  8:50 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

Hi Simon,

On May 6, 2015, at 07:08 , Simon Barber <simon@superduper.net> wrote:

> Hi Sebastian,
> 
> My numbers are what I've personally come up with after working for many years with VoIP - they have no other basis.

	I did not intend to belittle such numbers at all, I just wanted to propose that we either use generally accepted, scientifically measured numbers or make such measurements ourselves.

> One thing is that you have to compare apples to apples - the ITU numbers are for acoustic one way delay.

True, and this is why we can easily estimate the delay cost of the different stages of the whole voip one-way pipeline, to deduce how much latency budget we have for the network (aka buffer bloat on the way), while still basing our numbers on some reference for mouth-to-ear delay. I think we can conservatively estimate the latency cost of the sampling; sender processing and receiver processing (outside of the de-jitter buffering) seem harder to estimate reliably, to my untrained eye.

> The poor state of jitter buffer implementations that almost every VoIP app or device has means that to hit these acoustic delay numbers you need significantly lower network delays.

	I fully agree, and if we can estimate this I think we can justify deductions from the mouth-to-ear budget. I would, as a first approximation, assume that what we call the latency-under-load increase is tightly correlated with jitter, so we could take our “bloat measurement” in ms and directly deduct it from the budget (or if we want to accept occasional voice degradation we can pick a sufficiently high percentile, but that is an implementation detail).

> Also note that these numbers are worst case, which must include trip halfway around the globe - if you can hit the numbers with half globe propagation then you will hit much better numbers for 'local calls’.

	We could turn this around by estimating out to what distance voip quality will be good/decent/acceptable/laughable…

Best Regards
	Sebastian

> 
> Simon
> 
> 
> On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
>> Hi Simon, hi List
>> 
>> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>> 
>>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>> 	If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
>> 
>> Best Regards
>> 	Sebastian
>> 
>> 
>>> Simon
>>> 
>>> Sent with AquaMail for Android
>>> http://www.aqua-mail.com
>>> 
>>> 
>>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>> 
>>>> simon all your numbers are too large by at least a factor of 2. I
>>>> think also you are thinking about total latency, rather than induced
>>>> latency and jitter.
>>>> 
>>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>> 
>>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>>> you want, and a latency increase of 30ms is a proxy for also holding
>>>> jitter that low.
>>>> 
>>>> 
>>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>>> things like
>>>>> 
>>>>> 100ms - VoIP applications work well
>>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>>> although users may not notice this.
>> 	The only way to detect whether a conversation is natural is if users notice, I would say...
>> 
>>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>> 
>>>>> 0-50ms - web pages load snappily
>>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>>> highest bandwidth links
>>>>> 1000ms - web pages load significantly slower than they should, taking
>>>>> several extra seconds to appear, even on the highest bandwidth links
>>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>> 
>>>>> Gaming.... some kind of guide here....
>>>>> 
>>>>> Simon
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>>> Hi Toke,
>>>>>> 
>>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>> 
>>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>> 
>>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>>> severe "bike-shedding”
>>>>>>> Since you're literally asking for it... ;)
>>>>>>> 
>>>>>>> 
>>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>>>         Oh, I can get behind that easily, I just thought basing the limits
>>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>>> which applications might run well on his link. Sure this means that people
>>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>>> something to be desired. That said if the alternative is no telephony I
>>>>>> would take 1 second one-way delay any day ;).
>>>>>>         What I liked about fixed thresholds is that the test would give a
>>>>>> good indication what kind of uses are going to work well on the link under
>>>>>> load, given that during load both base and induced latency come into play. I
>>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>> 
>>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>> 
>>>>>>> 
>>>>>>> I very much think that raising peoples expectations and being quite
>>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>>> results when we see them is reasonable!
>>>>>>         Okay so this would turn into:
>>>>>> 
>>>>>> base latency to base latency + 30 ms:                           green
>>>>>> base latency + 31 ms to base latency + 100 ms:          yellow
>>>>>> base latency + 101 ms to base latency + 200 ms:         orange?
>>>>>> base latency + 201 ms to base latency + 500 ms:         red
>>>>>> base latency + 501 ms to base latency + 1000 ms:        fire
>>>>>> base latency + 1001 ms to infinity:
>>>>>> fire & brimstone
>>>>>> 
>>>>>> correct?
>>>>>> 
>>>>>> 
>>>>>>> -Toke
>>>>>> _______________________________________________
>>>>>> Bloat mailing list
>>>>>> Bloat@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>> 
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>> 
>>>> 
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>> 
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>> 
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
> 


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06  8:50                                                                 ` Sebastian Moeller
@ 2015-05-06 15:30                                                                   ` Jim Gettys
  2015-05-06 18:03                                                                     ` Sebastian Moeller
  2015-05-06 20:25                                                                     ` Jonathan Morton
  0 siblings, 2 replies; 127+ messages in thread
From: Jim Gettys @ 2015-05-06 15:30 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 9871 bytes --]

On Wed, May 6, 2015 at 4:50 AM, Sebastian Moeller <moeller0@gmx.de> wrote:

> Hi Simon,
>
> On May 6, 2015, at 07:08 , Simon Barber <simon@superduper.net> wrote:
>
> > Hi Sebastian,
> >
> > My numbers are what I've personally come up with after working for many
> years with VoIP - they have no other basis.
>
>         I did not intend to be-little such numbers at all, I just wanted
> to propose that we either use generally accepted scientifically measured
> numbers or make such measurements our self.
>
> > One thing is that you have to compare apples to apples - the ITU numbers
> are for acoustic one way delay.
>
> True, and this is why we easily can estimate the delay cost of different
> stages of the whole voip one-way pipeline to deuce how much latent budget
> we have for the network (aka buffer bloat on the way), but still bases our
> numbers on some reference for mouth-to-ear-delay. I think we can
> conservatively estimate the latency cost of the sampling, sender processing
> and receiver processing (outside of the de-jitter buffering) seem harder to
> estimate reliably, to my untrained eye.
>
> > The poor state of jitter buffer implementations that almost every VoIP
> app or device has means that to hit these acoustic delay numbers you need
> significantly lower network delays.
>
>         I fully agree, and if we can estimate this I think we can justify
> deductions from the mouth-to-ear budget. I would as first approximation
> assume that what we call latency under load increase to be tightly
> correlated with jitter, so we could take our “bloat-measurement” in ms an
> directly deduct it from the budget (or if we want to accept occasional
> voice degradation we can pick a sufficiently high percentile, but that is
> implementation detail).
>
> > Also note that these numbers are worst case, which must include trip
> halfway around the globe - if you can hit the numbers with half globe
> propagation then you will hit much better numbers for 'local calls’.
>
>         We  could turn this around by estimating to what distance voip
> quality will be good/decent/acceptable/lughable…
>
> ​
>

​Mean RTT is almost useless for VOIP and teleconferencing.  What matters is
the RTT + jitter; a VOIP or teleconferencing application cannot function at
a given latency unless the "drop outs" caused by late packets are rare enough
not to be obnoxious to human perception; there are a number of techniques
to hide such late packet dropouts, but all of them (short of FEC) damage the
audio stream.

So ideally, not only do you measure the delay, you also measure the jitter
to be able to figure out a realistic operating point for such applications.
                                          - Jim


​


>
>
>
> >
> > Simon
> >
> >
> > On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
> >> Hi Simon, hi List
> >>
> >> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
> >>
> >>> Certainly the VoIP numbers are for peak total latency, and while
> Justin is measuring total latency because he is only taking a few samples
> the peak values will be a little higher.
> >>      If your voip number are for peak total latency they need
> literature citations to back them up, as they are way shorter than what the
> ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not
> "married” to the ITU numbers but I think we should use generally accepted
> numbers here and not bake our own thresholds (and for all I know your
> numbers are fine, I just don’t know where they are coming from ;) )
> >>
> >> Best Regards
> >>      Sebastian
> >>
> >>
> >>> Simon
> >>>
> >>> Sent with AquaMail for Android
> >>> http://www.aqua-mail.com
> >>>
> >>>
> >>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
> >>>
> >>>> simon all your numbers are too large by at least a factor of 2. I
> >>>> think also you are thinking about total latency, rather than induced
> >>>> latency and jitter.
> >>>>
> >>>> Please see my earlier email laying out the bands. And gettys'
> manifesto.
> >>>>
> >>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
> >>>> you want, and a latency increase of 30ms is a proxy for also holding
> >>>> jitter that low.
> >>>>
> >>>>
> >>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net>
> wrote:
> >>>>> I think it might be useful to have a 'latency guide' for users. It
> would say
> >>>>> things like
> >>>>>
> >>>>> 100ms - VoIP applications work well
> >>>>> 250ms - VoIP applications - conversation is not as natural as it
> could be,
> >>>>> although users may not notice this.
> >>      The only way to detect whether a conversation is natural is if
> users notice, I would say...
> >>
> >>>>> 500ms - VoIP applications begin to have awkward pauses in
> conversation.
> >>>>> 1000ms - VoIP applications have significant annoying pauses in
> conversation.
> >>>>> 2000ms - VoIP unusable for most interactive conversations.
> >>>>>
> >>>>> 0-50ms - web pages load snappily
> >>>>> 250ms - web pages can often take an extra second to appear, even on
> the
> >>>>> highest bandwidth links
> >>>>> 1000ms - web pages load significantly slower than they should, taking
> >>>>> several extra seconds to appear, even on the highest bandwidth links
> >>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even
> 10s of
> >>>>> seconds of delays for pages to load, even on the highest bandwidth
> links.
> >>>>>
> >>>>> Gaming.... some kind of guide here....
> >>>>>
> >>>>> Simon
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> >>>>>> Hi Toke,
> >>>>>>
> >>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk>
> wrote:
> >>>>>>
> >>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
> >>>>>>>
> >>>>>>>> I know this is not perfect and the numbers will probably require
> >>>>>>>> severe "bike-shedding”
> >>>>>>> Since you're literally asking for it... ;)
> >>>>>>>
> >>>>>>>
> >>>>>>> In this case we're talking about *added* latency. So the ambition
> should
> >>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we
> know
> >>>>>>> that proper application of a good queue management algorithm can
> keep it
> >>>>>>> pretty close to this. Certainly under 20-30 ms of added latency.
> So from
> >>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to
> 30 ms.
> >>>>>>         Oh, I can get behind that easily, I just thought basing the
> limits
> >>>>>> on externally relevant total latency thresholds would directly tell
> the user
> >>>>>> which applications might run well on his link. Sure this means that
> people
> >>>>>> on a satellite link most likely will miss out the acceptable voip
> threshold
> >>>>>> by their base-latency alone, but guess what telephony via satellite
> leaves
> >>>>>> something to be desired. That said if the alternative is no
> telephony I
> >>>>>> would take 1 second one-way delay any day ;).
> >>>>>>         What I liked about fixed thresholds is that the test would
> give a
> >>>>>> good indication what kind of uses are going to work well on the
> link under
> >>>>>> load, given that during load both base and induced latency come
> into play. I
> >>>>>> agree that 300ms as first threshold is rather unambiguous though
> (and I am
> >>>>>> certain that remote X11 will require a massively lower RTT unless
> one likes
> >>>>>> to think of remote desktop as an oil tanker simulator ;) )
> >>>>>>
> >>>>>>> The other increments I have less opinions about, but 100 ms does
> seem to
> >>>>>>> be a nice round number, so do yellow from 30-100 ms, then start
> with the
> >>>>>>> reds somewhere above that, and range up into the deep red / purple
> /
> >>>>>>> black with skulls and fiery death as we go nearer and above one
> second?
> >>>>>>>
> >>>>>>>
> >>>>>>> I very much think that raising peoples expectations and being quite
> >>>>>>> ambitious about what to expect is an important part of this. Of
> course
> >>>>>>> the base latency is going to vary, but the added latency
> shouldn't. And
> >>>>>>> sine we have the technology to make sure it doesn't, calling out
> bad
> >>>>>>> results when we see them is reasonable!
> >>>>>>         Okay so this would turn into:
> >>>>>>
> >>>>>> base latency to base latency + 30 ms:
>  green
> >>>>>> base latency + 31 ms to base latency + 100 ms:          yellow
> >>>>>> base latency + 101 ms to base latency + 200 ms:         orange?
> >>>>>> base latency + 201 ms to base latency + 500 ms:         red
> >>>>>> base latency + 501 ms to base latency + 1000 ms:        fire
> >>>>>> base latency + 1001 ms to infinity:
> >>>>>> fire & brimstone
> >>>>>>
> >>>>>> correct?
> >>>>>>
> >>>>>>
> >>>>>>> -Toke
> >>>>>> _______________________________________________
> >>>>>> Bloat mailing list
> >>>>>> Bloat@lists.bufferbloat.net
> >>>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>>
> >>>>> _______________________________________________
> >>>>> Bloat mailing list
> >>>>> Bloat@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>
> >>>>
> >>>> --
> >>>> Dave Täht
> >>>> Open Networking needs **Open Source Hardware**
> >>>>
> >>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> >>>
> >>> _______________________________________________
> >>> Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 14751 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06 15:30                                                                   ` Jim Gettys
@ 2015-05-06 18:03                                                                     ` Sebastian Moeller
  2015-05-06 20:25                                                                     ` Jonathan Morton
  1 sibling, 0 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-05-06 18:03 UTC (permalink / raw)
  To: Jim Gettys; +Cc: bloat

Hi Jim, hi List,


On May 6, 2015, at 17:30 , Jim Gettys <jg@freedesktop.org> wrote:

> 
> 
> On Wed, May 6, 2015 at 4:50 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Simon,
> 
> On May 6, 2015, at 07:08 , Simon Barber <simon@superduper.net> wrote:
> 
> > Hi Sebastian,
> >
> > My numbers are what I've personally come up with after working for many years with VoIP - they have no other basis.
> 
>         I did not intend to be-little such numbers at all, I just wanted to propose that we either use generally accepted scientifically measured numbers or make such measurements our self.
> 
> > One thing is that you have to compare apples to apples - the ITU numbers are for acoustic one way delay.
> 
> True, and this is why we easily can estimate the delay cost of different stages of the whole voip one-way pipeline to deuce how much latent budget we have for the network (aka buffer bloat on the way), but still bases our numbers on some reference for mouth-to-ear-delay. I think we can conservatively estimate the latency cost of the sampling, sender processing and receiver processing (outside of the de-jitter buffering) seem harder to estimate reliably, to my untrained eye.
> 
> > The poor state of jitter buffer implementations that almost every VoIP app or device has means that to hit these acoustic delay numbers you need significantly lower network delays.
> 
>         I fully agree, and if we can estimate this I think we can justify deductions from the mouth-to-ear budget. I would as first approximation assume that what we call latency under load increase to be tightly correlated with jitter, so we could take our “bloat-measurement” in ms an directly deduct it from the budget (or if we want to accept occasional voice degradation we can pick a sufficiently high percentile, but that is implementation detail).
> 
> > Also note that these numbers are worst case, which must include trip halfway around the globe - if you can hit the numbers with half globe propagation then you will hit much better numbers for 'local calls’.
> 
>         We  could turn this around by estimating to what distance voip quality will be good/decent/acceptable/lughable…
> 
> ​
> 
> ​Mean RTT is almost useless for VOIP and teleconferencing.  What matters is the RTT + jitter; a VOIP or teleconferencing application cannot function at a given latency unless the "drop outs" caused by late packets is low enough to not be obnoxious to human perception; there are a number of techniques to hide such late packet dropouts but all of them (short of FEC) damage the audio stream.
> 
> So ideally, not only do you measure the delay, you also measure the jitter to be able to figure out a realistic operating point for such applications.

	Yes, I fully endorse this! What I tried to propose, in a slightly convoluted way, was to use the fact that we have a handle on at least one major jitter source, namely the induced latency under load, so we can take both the mean RTT and a proxy for the jitter into account. We basically can say that, given the jitter (assuming a properly dimensioned de-jitter buffer), we can estimate how far a given one-way mouth-to-ear delay will carry a voip call. E.g. let’s take the 150ms ITU number just for a start, subtract an empirically estimated max induced latency of say 60ms (to account for the required de-jitter buffer), as well as 20ms sample-time-per-voip-packet (and then just ignore all other processing overhead), and we end up with 150-60-20 = 70ms, which at 5ms per 1000km means 70/5 * 1000 = 14000 km. That is still decent, and of course at shorter distances the delay will be lower still. Or, to turn this argument around, the same system with an induced latency/jitter of 10ms will carry (150-10-20)/5 * 1000 = 24000 km.
	TL;DR: with a measured latency under load we have a decent estimator for the jitter, and can for example express the gain of beating buffer bloat as increased reach (be it voip call distance or, for gamers, distance to servers with still-acceptable reactivity, or some such).
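
	As a minimal sketch of that arithmetic (the 150ms budget, 20ms packetization time and 5ms-per-1000km propagation figure are the assumptions used above, not measurements):

def voip_reach_km(induced_latency_ms, budget_ms=150.0,
                  packetization_ms=20.0, ms_per_1000km=5.0):
    # remaining one-way budget after de-jitter buffering and packetization,
    # converted to reach via the assumed propagation delay per 1000 km
    remaining = budget_ms - induced_latency_ms - packetization_ms
    return max(0.0, remaining / ms_per_1000km * 1000.0)

print(voip_reach_km(60))   # 14000.0 km with 60ms of induced latency
print(voip_reach_km(10))   # 24000.0 km with 10ms of induced latency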


Best Regards
	Sebastian

>                                           - Jim
> 
> 
> ​ 
>  
> ​
> 
> >
> > Simon
> >
> >
> > On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
> >> Hi Simon, hi List
> >>
> >> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
> >>
> >>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
> >>      If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
> >>
> >> Best Regards
> >>      Sebastian
> >>
> >>
> >>> Simon
> >>>
> >>> Sent with AquaMail for Android
> >>> http://www.aqua-mail.com
> >>>
> >>>
> >>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
> >>>
> >>>> simon all your numbers are too large by at least a factor of 2. I
> >>>> think also you are thinking about total latency, rather than induced
> >>>> latency and jitter.
> >>>>
> >>>> Please see my earlier email laying out the bands. And gettys' manifesto.
> >>>>
> >>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
> >>>> you want, and a latency increase of 30ms is a proxy for also holding
> >>>> jitter that low.
> >>>>
> >>>>
> >>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
> >>>>> I think it might be useful to have a 'latency guide' for users. It would say
> >>>>> things like
> >>>>>
> >>>>> 100ms - VoIP applications work well
> >>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
> >>>>> although users may not notice this.
> >>      The only way to detect whether a conversation is natural is if users notice, I would say...
> >>
> >>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
> >>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
> >>>>> 2000ms - VoIP unusable for most interactive conversations.
> >>>>>
> >>>>> 0-50ms - web pages load snappily
> >>>>> 250ms - web pages can often take an extra second to appear, even on the
> >>>>> highest bandwidth links
> >>>>> 1000ms - web pages load significantly slower than they should, taking
> >>>>> several extra seconds to appear, even on the highest bandwidth links
> >>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
> >>>>> seconds of delays for pages to load, even on the highest bandwidth links.
> >>>>>
> >>>>> Gaming.... some kind of guide here....
> >>>>>
> >>>>> Simon
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> >>>>>> Hi Toke,
> >>>>>>
> >>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> >>>>>>
> >>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
> >>>>>>>
> >>>>>>>> I know this is not perfect and the numbers will probably require
> >>>>>>>> severe "bike-shedding”
> >>>>>>> Since you're literally asking for it... ;)
> >>>>>>>
> >>>>>>>
> >>>>>>> In this case we're talking about *added* latency. So the ambition should
> >>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
> >>>>>>> that proper application of a good queue management algorithm can keep it
> >>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
> >>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
> >>>>>>         Oh, I can get behind that easily, I just thought basing the limits
> >>>>>> on externally relevant total latency thresholds would directly tell the user
> >>>>>> which applications might run well on his link. Sure this means that people
> >>>>>> on a satellite link most likely will miss out the acceptable voip threshold
> >>>>>> by their base-latency alone, but guess what telephony via satellite leaves
> >>>>>> something to be desired. That said if the alternative is no telephony I
> >>>>>> would take 1 second one-way delay any day ;).
> >>>>>>         What I liked about fixed thresholds is that the test would give a
> >>>>>> good indication what kind of uses are going to work well on the link under
> >>>>>> load, given that during load both base and induced latency come into play. I
> >>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
> >>>>>> certain that remote X11 will require a massively lower RTT unless one likes
> >>>>>> to think of remote desktop as an oil tanker simulator ;) )
> >>>>>>
> >>>>>>> The other increments I have less opinions about, but 100 ms does seem to
> >>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
> >>>>>>> reds somewhere above that, and range up into the deep red / purple /
> >>>>>>> black with skulls and fiery death as we go nearer and above one second?
> >>>>>>>
> >>>>>>>
> >>>>>>> I very much think that raising peoples expectations and being quite
> >>>>>>> ambitious about what to expect is an important part of this. Of course
> >>>>>>> the base latency is going to vary, but the added latency shouldn't. And
> >>>>>>> sine we have the technology to make sure it doesn't, calling out bad
> >>>>>>> results when we see them is reasonable!
> >>>>>>         Okay so this would turn into:
> >>>>>>
> >>>>>> base latency to base latency + 30 ms:                           green
> >>>>>> base latency + 31 ms to base latency + 100 ms:          yellow
> >>>>>> base latency + 101 ms to base latency + 200 ms:         orange?
> >>>>>> base latency + 201 ms to base latency + 500 ms:         red
> >>>>>> base latency + 501 ms to base latency + 1000 ms:        fire
> >>>>>> base latency + 1001 ms to infinity:
> >>>>>> fire & brimstone
> >>>>>>
> >>>>>> correct?
> >>>>>>
> >>>>>>
> >>>>>>> -Toke
> >>>>>> _______________________________________________
> >>>>>> Bloat mailing list
> >>>>>> Bloat@lists.bufferbloat.net
> >>>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>>
> >>>>> _______________________________________________
> >>>>> Bloat mailing list
> >>>>> Bloat@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>
> >>>>
> >>>> --
> >>>> Dave Täht
> >>>> Open Networking needs **Open Source Hardware**
> >>>>
> >>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> >>>
> >>> _______________________________________________
> >>> Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06 15:30                                                                   ` Jim Gettys
  2015-05-06 18:03                                                                     ` Sebastian Moeller
@ 2015-05-06 20:25                                                                     ` Jonathan Morton
  2015-05-06 20:43                                                                       ` Toke Høiland-Jørgensen
                                                                                         ` (2 more replies)
  1 sibling, 3 replies; 127+ messages in thread
From: Jonathan Morton @ 2015-05-06 20:25 UTC (permalink / raw)
  To: Jim Gettys; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 576 bytes --]

So, as a proposed methodology, how does this sound:

Determine a reasonable ballpark figure for typical codec and jitter-buffer
delay (one way). Fix this as a constant value for the benchmark.

Measure the baseline network delays (round trip) to various reference
points worldwide.

Measure the maximum induced delays in each direction.

For each reference point, sum two sets of constant delays, the baseline
network delay, and both directions' induced delays.

Compare these totals to twice the ITU benchmark figures, rate accordingly,
and plot on a map.
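
A hypothetical sketch of the above, with placeholder reference points,
constants and thresholds (none of these numbers are normative):

CODEC_AND_JITTER_BUFFER_MS = 50.0   # assumed one-way constant (placeholder)
ITU_ONE_WAY_BUDGET_MS = 150.0       # ballpark "users satisfied" figure

def rate_reference_points(baseline_rtts_ms, induced_up_ms, induced_down_ms):
    """baseline_rtts_ms: {name: idle round-trip time in ms}."""
    budget = 2 * ITU_ONE_WAY_BUDGET_MS   # compare round-trip totals
    results = {}
    for name, base_rtt in baseline_rtts_ms.items():
        # constant codec/jitter-buffer delay in each direction, plus the
        # baseline round trip, plus both directions' induced delay
        total = (2 * CODEC_AND_JITTER_BUFFER_MS + base_rtt
                 + induced_up_ms + induced_down_ms)
        results[name] = (total, "ok" if total <= budget else "poor")
    return results

print(rate_reference_points({"nearby": 35.0, "antipodes": 280.0},
                            induced_up_ms=40.0, induced_down_ms=25.0))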

- Jonathan Morton

[-- Attachment #2: Type: text/html, Size: 699 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06 20:25                                                                     ` Jonathan Morton
@ 2015-05-06 20:43                                                                       ` Toke Høiland-Jørgensen
  2015-05-07  7:33                                                                         ` Sebastian Moeller
  2015-05-07  4:29                                                                       ` Mikael Abrahamsson
  2015-05-07  6:19                                                                       ` Sebastian Moeller
  2 siblings, 1 reply; 127+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-05-06 20:43 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

Jonathan Morton <chromatix99@gmail.com> writes:

> Compare these totals to twice the ITU benchmark figures, rate
> accordingly, and plot on a map.

A nice way of visualising this can be 'radius of reach within n
milliseconds'. Or, 'number of people reachable within n ms'. This paper
uses that (or something very similar) to visualise the benefits of
speed-of-light internet:
http://web.engr.illinois.edu/~singla2/papers/hotnets14.pdf

That same paper uses 30 ms as an 'instant response' number, btw, citing
this: http://plato.stanford.edu/entries/consciousness-temporal/empirical-findings.html

-Toke

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06 20:25                                                                     ` Jonathan Morton
  2015-05-06 20:43                                                                       ` Toke Høiland-Jørgensen
@ 2015-05-07  4:29                                                                       ` Mikael Abrahamsson
  2015-05-07  7:08                                                                         ` jb
  2015-05-07  6:19                                                                       ` Sebastian Moeller
  2 siblings, 1 reply; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07  4:29 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

On Wed, 6 May 2015, Jonathan Morton wrote:

> So, as a proposed methodology, how does this sound:
>
> Determine a reasonable ballpark figure for typical codec and jitter-buffer
> delay (one way). Fix this as a constant value for the benchmark.

Commercial grade VoIP systems running in a controlled environment 
typically (in my experience) come with a 40ms PDV (Packet Delay Variation, 
let's not call it jitter, the timing people get upset if you call it 
jitter) buffer. These systems typically do not work well over the Internet, 
as we here all know; 40ms is quite a low PDV allowance on a FIFO-based 
Internet access. Applications actually designed to work on the Internet 
have PDV buffers that adapt according to what PDV is seen, and so they can 
both increase and decrease in size over the time of a call.

I'd say a reasonable ballpark PDV figure for VoIP and video conferencing 
is in the 50-100ms range or so, where lower of course is 
better. It's basically impossible to have really low PDV on a 1 megabit/s 
link because a full size 1500 byte packet will take about 12ms to 
transmit, but it's perfectly feasible to keep it under 10-20ms when the 
link speed increases. If we say that 1 megabit/s (typical ADSL up speed) is 
the lower bound of speed where one can expect VoIP to work together with 
other Internet traffic, then 50-100ms should be technically attainable if 
the vendor/operator actually tries to reduce bufferbloat/PDV.
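
As a quick serialization-delay sanity check behind those numbers (plain
arithmetic, nothing tool-specific):

def serialization_delay_ms(packet_bytes, link_mbit_per_s):
    # time one packet occupies the link; a floor on delay variation
    return packet_bytes * 8 / (link_mbit_per_s * 1e6) * 1000.0

print(serialization_delay_ms(1500, 1))    # ~12 ms on a 1 megabit/s uplink
print(serialization_delay_ms(1500, 10))   # ~1.2 ms at 10 megabit/s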

> Measure the maximum induced delays in each direction.

Depending on the length of the test, it might make sense to aim for the 95th 
or 99th percentile, i.e. throw away the one or few worst values as these 
might be outliers. But generally I agree with your proposed terminology.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06 20:25                                                                     ` Jonathan Morton
  2015-05-06 20:43                                                                       ` Toke Høiland-Jørgensen
  2015-05-07  4:29                                                                       ` Mikael Abrahamsson
@ 2015-05-07  6:19                                                                       ` Sebastian Moeller
  2 siblings, 0 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-05-07  6:19 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

Hi Jonathan,


On May 6, 2015, at 22:25 , Jonathan Morton <chromatix99@gmail.com> wrote:

> So, as a proposed methodology, how does this sound:
> 
> Determine a reasonable ballpark figure for typical codec and jitter-buffer delay (one way). Fix this as a constant value for the benchmark.

	But we can do better: assuming adaptive de-jitter buffers (and they had better be adaptive), we can take the induced latency per direction as a first approximation of the required de-jitter buffer size.

> 
> Measure the baseline network delays (round trip) to various reference points worldwide.
> 
> Measure the maximum induced delays in each direction.
> 
> For each reference point, sum two sets of constant delays, the baseline network delay, and both directions' induced delays.

	I think we should not count the de-jitter buffer and the actual PDV twice; as far as I understand it, the principle of de-jittering is to introduce a buffer deep enough to smooth out the real, variable packet latency, so at best we should count max(induced latency per direction, de-jitter buffer depth per direction). The induced latency (or a suitably high percentile, if we aim for good enough instead of perfect) is thus the best estimator we have for the jitter-induced delay. But this is not my line of work, so I could be out to lunch here...
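
	As a tiny sketch of that "count it only once" idea (numbers made up):

def per_direction_budget_ms(induced_latency_ms, dejitter_buffer_ms):
    # the de-jitter buffer and the delay variation it absorbs overlap,
    # so budget only the larger of the two per direction
    return max(induced_latency_ms, dejitter_buffer_ms)

print(per_direction_budget_ms(60, 40))   # 60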


> 
> Compare these totals to twice the ITU benchmark figures, rate accordingly, and plot on a map.

	I like the map idea (and I think I have seen something like this recently, I think visualizing propagation speed in fiber). Now, any map just based on actual distance on the earth’s surface is going to give a lower bound, but that should still be a decent estimate (unless something nefarious like http://research.dyn.com/2013/11/mitm-internet-hijacking/ is going on, in which case all bets are off ;) )

Best Regards
	Sebastian

> 
> - Jonathan Morton


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  4:29                                                                       ` Mikael Abrahamsson
@ 2015-05-07  7:08                                                                         ` jb
  2015-05-07  7:18                                                                           ` Jonathan Morton
  2015-05-07  7:19                                                                           ` Mikael Abrahamsson
  0 siblings, 2 replies; 127+ messages in thread
From: jb @ 2015-05-07  7:08 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: Jonathan Morton, bloat

[-- Attachment #1: Type: text/plain, Size: 2744 bytes --]

I am working on a multi-location jitter test (sorry, PDV!) and it is showing
a lot of promise.
For the purposes of reporting jitter, what kind of measurement time horizon
is acceptable, and what is the +/- output actually based on, statistically?

For example - is one minute or more of jitter measurements, with the +/- being
the 2nd std deviation, reasonable, or is there some generally accepted
definition?

ping reports an "mdev" which is
SQRT(SUM(RTT*RTT) / N – (SUM(RTT)/N)^2)
but I've seen jitter defined as the maximum and minimum RTT around the average,
however that seems very influenced by one outlier measurement.
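
For reference, a minimal sketch computing that mdev formula next to a simple
nearest-rank percentile over the same (made-up) RTT samples:

import math

def mdev(rtts):
    # ping's mdev: sqrt(mean of squares minus square of the mean)
    n = len(rtts)
    return math.sqrt(sum(r * r for r in rtts) / n - (sum(rtts) / n) ** 2)

def nearest_rank_percentile(values, p):
    s = sorted(values)
    return s[min(len(s) - 1, int(round(p / 100.0 * (len(s) - 1))))]

rtts = [48, 50, 47, 49, 52, 180, 51, 48]          # ms, with one outlier
print("mdev:", round(mdev(rtts), 1))
print("p95 above minimum:", nearest_rank_percentile(rtts, 95) - min(rtts))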

thanks

On Thu, May 7, 2015 at 2:29 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Wed, 6 May 2015, Jonathan Morton wrote:
>
>  So, as a proposed methodology, how does this sound:
>>
>> Determine a reasonable ballpark figure for typical codec and jitter-buffer
>> delay (one way). Fix this as a constant value for the benchmark.
>>
>
> Commercial grade VoIP systems running in a controlled environment
> typically (in my experience) come with 40ms PDV (Packet Delay Variation,
> let's not call it jitter, the timing people get upset if you call it
> jitter) buffer. These systems typically do not work well over the Internet
> as we here all know, 40ms is quite low PDV on a FIFO based Internet access.
> Applications actually designed to work on the Internet have PDV buffers
> that adapt according to what PDV is seen, and so they can both increase and
> decrease in size over the time of a call.
>
> I'd say ballpark reasonable figure for VoIP and video conferencing of
> reasonable PDV is in the 50-100ms range or so, where lower of course is
> better. It's basically impossible to have really low PDV on a 1 megabit/s
> link because a full size 1500 byte packet will take close to 10ms to
> transmit, but it's perfectly feasable to keep it under 10-20ms when the
> link speed increases. If we say that 1 megabit/s (typical ADSL up speed)is
> the lower bound of speed where one can expect VoIP to work together with
> other Internet traffic, then 50-100ms should be technically attainable if
> the vendor/operator actually tries to reduce bufferbloat/PDV.
>
>  Measure the maximum induced delays in each direction.
>>
>
> Depending on the length of the test, it might make sense to aim for 95th
> or 99th percentile, ie throw away the one or few worst values as these
> might be outliers. But generally I agree with your proposed terminology.
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 3666 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  7:08                                                                         ` jb
@ 2015-05-07  7:18                                                                           ` Jonathan Morton
  2015-05-07  7:24                                                                             ` Mikael Abrahamsson
  2015-05-07  7:37                                                                             ` [Bloat] DSLReports Speed Test has latency measurement built-in Sebastian Moeller
  2015-05-07  7:19                                                                           ` Mikael Abrahamsson
  1 sibling, 2 replies; 127+ messages in thread
From: Jonathan Morton @ 2015-05-07  7:18 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 745 bytes --]

It may depend on the application's tolerance to packet loss. A packet
delayed further than the jitter buffer's tolerance counts as lost, so *IF*
jitter is randomly distributed, jitter can be traded off against loss. For
those purposes, standard deviation may be a valid metric.

However the more common characteristic is that delay is sometimes low (link
idle) and sometimes high (buffer full) and rarely in between. In other
words, delay samples are not statistically independent; loss due to jitter
is bursty, and real-time applications like VoIP can't cope with that. For
that reason, and due to your low temporal sampling rate, you should take
the peak delay observed under load and compare it to the average during
idle.
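
A minimal sketch of that comparison, with made-up samples:

def peak_induced_delay_ms(idle_samples_ms, loaded_samples_ms):
    # peak delay observed under load versus the average delay while idle
    idle_avg = sum(idle_samples_ms) / len(idle_samples_ms)
    return max(loaded_samples_ms) - idle_avg

print(peak_induced_delay_ms([48, 50, 47, 49], [55, 90, 240, 180, 60]))  # 191.5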

- Jonathan Morton

[-- Attachment #2: Type: text/html, Size: 816 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  7:08                                                                         ` jb
  2015-05-07  7:18                                                                           ` Jonathan Morton
@ 2015-05-07  7:19                                                                           ` Mikael Abrahamsson
  1 sibling, 0 replies; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07  7:19 UTC (permalink / raw)
  To: jb; +Cc: Jonathan Morton, bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1182 bytes --]

On Thu, 7 May 2015, jb wrote:

> I am working on a multi-location jitter test (sorry PDV!) and it is 
> showing a lot of promise. For the purposes of reporting jitter, what 
> kind of time measurement horizon is acceptable and what is the +/- 
> output actually based on, statistically ?
>
> For example - is one minute or more of jitter measurements, with the +/- 
> being the 2rd std deviation, reasonable or is there some generally 
> accepted definition ?
>
> ping reports an "mdev" which is
> SQRT(SUM(RTT*RTT) / N – (SUM(RTT)/N)^2)
> but I've seen jitter defined as maximum and minimum RTT around the average
> however that seems very influenced by one outlier measurement.

There is no single PDV definition; all of the ones you listed are 
perfectly valid.

If you send one packet every 20ms (simulating a G.711 VoIP call with fairly 
common characteristics) at the same time as you send other traffic, and 
then you present max, 99th percentile, 95th percentile and average PDV, I 
think all of those values are valuable. For a novice user, I would probably 
choose the 99th and/or 95th percentile PDV value from baseline.
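
For concreteness, a small Python sketch of both kinds of numbers mentioned
above: ping's "mdev" formula as quoted, and percentile PDV measured from
the baseline (minimum) RTT. The sample list is made up.

import math

def mdev(rtts):
    # Same formula ping reports: sqrt(E[RTT^2] - (E[RTT])^2)
    n = len(rtts)
    mean = sum(rtts) / n
    mean_sq = sum(r * r for r in rtts) / n
    return math.sqrt(mean_sq - mean * mean)

def pdv_percentile(rtts, pct):
    # Deviation of each sample from the minimum RTT, at a given percentile
    base = min(rtts)
    devs = sorted(r - base for r in rtts)
    return devs[int(pct / 100.0 * (len(devs) - 1))]

samples = [20.1, 20.3, 21.0, 20.2, 48.7, 20.4, 20.2, 35.9, 20.3, 20.5]
print("mdev:", round(mdev(samples), 2))
print("95th percentile PDV:", round(pdv_percentile(samples, 95), 2))
print("99th percentile PDV:", round(pdv_percentile(samples, 99), 2))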

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  7:18                                                                           ` Jonathan Morton
@ 2015-05-07  7:24                                                                             ` Mikael Abrahamsson
  2015-05-07  7:40                                                                               ` Sebastian Moeller
  2015-05-07  7:37                                                                             ` [Bloat] DSLReports Speed Test has latency measurement built-in Sebastian Moeller
  1 sibling, 1 reply; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07  7:24 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

On Thu, 7 May 2015, Jonathan Morton wrote:

> However the more common characteristic is that delay is sometimes low 
> (link idle) and sometimes high (buffer full) and rarely in between. In 
> other words, delay samples are not statistically independent; loss due 
> to jitter is bursty, and real-time applications like VoIP can't cope 
> with that. For that reason, and due to your low temporal sampling rate, 
> you should take the peak delay observed under load and compare it to the 
> average during idle.

Well, some applications will stop playing if the playout buffer is empty, 
and if the packet arrives late, just start playing again and increase the 
PDV buffer to whatever gap was observed; and if the PDV buffer has 
sustained fill, start playing it faster or skipping packets to play down 
the PDV buffer fill again.

So you'll observe silence or cutouts, but you'll still hear all of the 
sound; after this event, though, your mouth-ear-mouth-ear delay has 
increased.

As far as I can tell, Skype, for instance, has a lot of different ways to 
cope with changing characteristics of the path, which work a lot better 
than a 10-year-old classic PSTN-style G.711-over-IP system with static 
40ms PDV buffers, which behaves exactly as you describe.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-06 20:43                                                                       ` Toke Høiland-Jørgensen
@ 2015-05-07  7:33                                                                         ` Sebastian Moeller
  0 siblings, 0 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-05-07  7:33 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, bloat

Hi Toke,


On May 6, 2015, at 22:43 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> Jonathan Morton <chromatix99@gmail.com> writes:
> 
>> Compare these totals to twice the ITU benchmark figures, rate
>> accordingly, and plot on a map.
> 
> A nice way of visualising this can be 'radius of reach within n
> milliseconds'. Or, 'number of people reachable within n ms'. This paper
> uses that (or something very similar) to visualise the benefits of
> speed-of-light internet:
> http://web.engr.illinois.edu/~singla2/papers/hotnets14.pdf
> 
> That same paper uses 30 ms as an 'instant response' number, btw, citing
> this: http://plato.stanford.edu/entries/consciousness-temporal/empirical-findings.html

	This number does not mean what the authors of that paper think it does (assuming that my interpretation is correct)… they should at least have read their reference 7 in full. Yes, 30ms will count as instantaneous, but it is far from the upper threshold.
To illustrate, the reference basically shows that if two successive events are spaced more than 30 ms apart, they will (most likely) be interpreted as two distinct events instead of one event with a temporal extent. To relate this to networks: if one were to send successive frames of video without buffering, these 30ms would be the time permissible for transmission and presentation of successive frames without people perceiving glitches or a slide show (you would think, but motion perception would still be of odd movement). 
	BUT if we think about the related phenomenon of flicker-fusion frequency, it becomes clear that this might well depend on the actual stimuli and the surround luminosity. I think no one is proposing to under-buffer so severely that gaps >= 30ms occur, so this number seems not too relevant in my eyes. 
	I think the more relevant question is what delay between an action and the response people will tolerate and find acceptable. I guess I should do a little literature research.

Best Regards
	Sebastian

> 
> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  7:18                                                                           ` Jonathan Morton
  2015-05-07  7:24                                                                             ` Mikael Abrahamsson
@ 2015-05-07  7:37                                                                             ` Sebastian Moeller
  1 sibling, 0 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-05-07  7:37 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

Hi Jonathan,


On May 7, 2015, at 09:18 , Jonathan Morton <chromatix99@gmail.com> wrote:

> It may depend on the application's tolerance to packet loss. A packet delayed further than the jitter buffer's tolerance counts as lost, so *IF* jitter is randomly distributed, jitter can be traded off against loss. For those purposes, standard deviation may be a valid metric.

	All valid, but I think that the induced latency does not follow a normal distribution: it has a lower bound, the minimum RTT caused by the “speed of light” (I simplify), but no real upper bound (I think we have examples of several seconds), so standard deviation or confidence intervals might not be applicable (at least not formally).
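
A quick numerical illustration of that point (made-up RTT samples with a
long upper tail):

import math

rtts = [20.0] * 90 + [25, 30, 60, 120, 250, 500, 900, 1500, 2500, 4000]
n = len(rtts)
mean = sum(rtts) / n
sd = math.sqrt(sum((r - mean) ** 2 for r in rtts) / n)
print("min:", min(rtts))                     # the physical floor
print("mean - 2*sd:", round(mean - 2 * sd))  # below the floor, i.e. meaningless
print("mean + 2*sd:", round(mean + 2 * sd))  # still misses the worst samples
print("99th percentile:", sorted(rtts)[int(0.99 * (n - 1))])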


Best Regards
	Sebastian

> 
> However the more common characteristic is that delay is sometimes low (link idle) and sometimes high (buffer full) and rarely in between. In other words, delay samples are not statistically independent; loss due to jitter is bursty, and real-time applications like VoIP can't cope with that. For that reason, and due to your low temporal sampling rate, you should take the peak delay observed under load and compare it to the average during idle.
> 
> - Jonathan Morton
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  7:24                                                                             ` Mikael Abrahamsson
@ 2015-05-07  7:40                                                                               ` Sebastian Moeller
  2015-05-07  9:16                                                                                 ` Mikael Abrahamsson
  0 siblings, 1 reply; 127+ messages in thread
From: Sebastian Moeller @ 2015-05-07  7:40 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: Jonathan Morton, bloat

Hi Mikhael,

On May 7, 2015, at 09:24 , Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Thu, 7 May 2015, Jonathan Morton wrote:
> 
>> However the more common characteristic is that delay is sometimes low (link idle) and sometimes high (buffer full) and rarely in between. In other words, delay samples are not statistically independent; loss due to jitter is bursty, and real-time applications like VoIP can't cope with that. For that reason, and due to your low temporal sampling rate, you should take the peak delay observed under load and compare it to the average during idle.
> 
> Well, some applications will stop playing if the playout-buffer is empty, and if the packet arrives late, just start playing again and then increase the PDV buffer to whatever gap was observed, and if the PDV buffer has sustained fill, start playing it faster or skipping packets to play down the PDV buffer fill again.
> 
> So you'll observe silence or cutouts, but you'll still hear all sound but after this event, your mouth-ear-mouth-ear delay has now increased.
> 
> As far as I can tell, for instance Skype has a lot of different ways to cope with changing characteristics of the path, which work a lot better than a 10 year old classic PSTN-style G.711-over-IP style system with static 40ms PDV buffers, which behave exactly as you describe.

	Is this 40ms sort of set in stone? If so, we have a new indicator for bad bufferbloat: if the induced latency is > 40 ms, the link is unsuitable for decent VoIP (using old equipment). Is the newer VoIP stuff that telcos roll out currently any smarter?

Best Regards
	Sebastian


> 
> -- 
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  7:40                                                                               ` Sebastian Moeller
@ 2015-05-07  9:16                                                                                 ` Mikael Abrahamsson
  2015-05-07 10:44                                                                                   ` jb
  0 siblings, 1 reply; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07  9:16 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: bloat

On Thu, 7 May 2015, Sebastian Moeller wrote:

> 	Is this 40ms sort of set in stone? If so we have a new indicator 
> for bad buffer-bloat if inured latency > 40 ms link is unsuitable for 
> decent voip (using old equipment). Is the newer voip stuff that telcos 
> roll out currently any smarter?

The 40ms is fairly typical of what I encountered 10 years ago. To deploy 
those systems there was a requirement to have QoS (basically low-latency 
queuing on Cisco) for DSCP EF traffic; otherwise things didn't work on the 
0.5-2 megabit/s connections that were common back then.

I'd say that for anything people are trying to deploy now for use on the 
Internet without QoS, 40ms just won't work and has never really worked. 
You need adaptive PDV buffers, and they need to be able to handle hundreds 
of ms of PDV.

If you look at this old document from Cisco (10 years old):

http://www.ciscopress.com/articles/article.asp?p=357102

"Voice (Bearer Traffic)

The following list summarizes the key QoS requirements and recommendations 
for voice (bearer traffic):

Voice traffic should be marked to DSCP EF per the QoS Baseline and RFC 
3246.

Loss should be no more than 1 percent.

One-way latency (mouth to ear) should be no more than 150 ms.

Average one-way jitter should be targeted at less than 30 ms.

A range of 21 to 320 kbps of guaranteed priority bandwidth is required per 
call (depending on the sampling rate, the VoIP codec, and Layer 2 media 
overhead).

Voice quality directly is affected by all three QoS quality factors: loss, 
latency, and jitter."

This requirement kind of reflects the requirements of the VoIP systems of 
the day, with their 40ms PDV buffers. There is also a section a page down 
or so about "Jitter buffers", with a recommendation to use adaptive jitter 
buffers, which I didn't encounter back then but which I really hope are a 
lot more common today.
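
As a trivial sketch, a check of measured values against the guideline
numbers quoted above (thresholds straight from the Cisco text; the inputs
are whatever your measurement tool produces):

def meets_cisco_voice_guidelines(loss_pct, one_way_latency_ms,
                                 avg_one_way_jitter_ms):
    # Thresholds from the recommendations quoted above.
    return (loss_pct <= 1.0
            and one_way_latency_ms <= 150.0
            and avg_one_way_jitter_ms < 30.0)

# An idle link vs. the same link with a bloated queue under load:
print(meets_cisco_voice_guidelines(0.2, 45, 8))    # True
print(meets_cisco_voice_guidelines(0.2, 210, 70))  # False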

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07  9:16                                                                                 ` Mikael Abrahamsson
@ 2015-05-07 10:44                                                                                   ` jb
  2015-05-07 11:36                                                                                     ` Sebastian Moeller
                                                                                                       ` (2 more replies)
  0 siblings, 3 replies; 127+ messages in thread
From: jb @ 2015-05-07 10:44 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 3318 bytes --]

There is a web-socket-based jitter tester now. It is at a very early stage
but works OK.

http://www.dslreports.com/speedtest?radar=1

So the latency displayed is the mean latency from a rolling 60-sample
buffer; minimum latency is also displayed.
The +/- PDV value is the mean difference between sequential pings in
that same rolling buffer. It is actually quite similar to the std. dev.
(not shown).
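
In other words, roughly the following (a Python sketch of what is computed;
the real thing presumably lives in the page's JavaScript):

from collections import deque

class RollingLatencyStats:
    # Mean and minimum over a rolling 60-sample window, plus +/- PDV as
    # the mean absolute difference between sequential pings in that window.
    def __init__(self, window=60):
        self.samples = deque(maxlen=window)

    def add(self, rtt_ms):
        self.samples.append(rtt_ms)

    def stats(self):
        s = list(self.samples)
        mean = sum(s) / len(s)
        pdv = (sum(abs(b - a) for a, b in zip(s, s[1:])) / (len(s) - 1)
               if len(s) > 1 else 0.0)
        return {"mean_ms": mean, "min_ms": min(s), "pdv_ms": pdv}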

Anyway, because it is talking to 21 servers or whatever, it is not doing
high-frequency pinging; I think it's about 2 Hz per server (which is about
50 packets per second and not much bandwidth).

My thought is that one might click on a server and focus in on that;
then it could go to a higher frequency. Since it is still TCP, I've got
lingering doubts that simulating a 20ms stream with TCP bursts is the same
as UDP; in the case of packet loss, it definitely would not be.

There is no way to "load" your connection from this tool; you could open
another page and run a speed test, of course.

I'm still working on it, but since you guys are talking RTT and jitter I
thought I'd throw it into the topic.

On Thu, May 7, 2015 at 7:16 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Thu, 7 May 2015, Sebastian Moeller wrote:
>
>          Is this 40ms sort of set in stone? If so we have a new indicator
>> for bad buffer-bloat if inured latency > 40 ms link is unsuitable for
>> decent voip (using old equipment). Is the newer voip stuff that telcos roll
>> out currently any smarter?
>>
>
> The 40ms is fairly typical for what I encountered 10 years ago. To deploy
> them there was a requirement to have QoS (basically low-latency queuing on
> Cisco) for DSCP EF traffic, otherwise things didn't work on the 0.5-2
> megabit/s connections that were common back then.
>
> I'd say anything people are trying to deploy now for use on the Internet
> without QoS, 40ms just won't work and has never really worked. You need
> adaptive PDV-buffers and they need to be able to handle hundreds of ms of
> PDV.
>
> If you look at this old document from Cisco (10 years old):
>
> http://www.ciscopress.com/articles/article.asp?p=357102
>
> "Voice (Bearer Traffic)
>
> The following list summarizes the key QoS requirements and recommendations
> for voice (bearer traffic):
>
> Voice traffic should be marked to DSCP EF per the QoS Baseline and RFC
> 3246.
>
> Loss should be no more than 1 percent.
>
> One-way latency (mouth to ear) should be no more than 150 ms.
>
> Average one-way jitter should be targeted at less than 30 ms.
>
> A range of 21 to 320 kbps of guaranteed priority bandwidth is required per
> call (depending on the sampling rate, the VoIP codec, and Layer 2 media
> overhead).
>
> Voice quality directly is affected by all three QoS quality factors: loss,
> latency, and jitter."
>
> This requirement kind of reflects the requirements of the VoIP systems of
> the day with 40ms PDV buffer. There is also a section a page down or so
> about "Jitter buffers" where there is a recommendation to have adaptive
> jitter buffers, which I didn't encounter back then but I really hope is a
> lot more common today.
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 4534 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 10:44                                                                                   ` jb
@ 2015-05-07 11:36                                                                                     ` Sebastian Moeller
  2015-05-07 11:44                                                                                     ` Mikael Abrahamsson
  2015-05-08 13:20                                                                                     ` [Bloat] DSLReports Jitter/PDV test Rich Brown
  2 siblings, 0 replies; 127+ messages in thread
From: Sebastian Moeller @ 2015-05-07 11:36 UTC (permalink / raw)
  To: jb; +Cc: bloat

Hi jb,


On May 7, 2015, at 12:44 , jb <justin@dslr.net> wrote:

> There is a web socket based jitter tester now. It is very early stage but works ok.
> 
> http://www.dslreports.com/speedtest?radar=1

	Looks great.

> 
> So the latency displayed is the mean latency from a rolling 60 sample buffer,
> Minimum latency is also displayed.
> and the +/- PDV value is the mean difference between sequential pings in 
> that same rolling buffer. It is quite similar to the std.dev actually (not shown).

	So it takes RTT(N+1) - RTT(N)? But if, due to bufferbloat, the latency goes up for several hundred ms or several seconds, would this not register as low PDV? Would it not be better to take the difference with the minimum? And maybe even remember the minimum for longer than the 60-sample window? I guess no network path is guaranteed to be stable over time, but if re-routing is rare, maybe even keep the same minimum for as long as the tool is running? You could still report some aggregate, like the mean deviation of the sample buffer, just not taking the difference between consecutive samples (which sort of feels like giving the change in PDV rather than PDV itself, but as always I am a layman in these matters)...
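	Something along these lines, i.e. measuring deviation from the floor rather than from the previous sample (a sketch, not a patch against the actual test):

def pdv_from_floor(rtts_ms, floor_ms=None):
    # Deviation of each sample from the (long-term) minimum, so a
    # sustained bufferbloat episode shows up as large PDV even though
    # consecutive samples barely differ from one another.
    floor = min(rtts_ms) if floor_ms is None else floor_ms
    devs = [r - floor for r in rtts_ms]
    return sum(devs) / len(devs)   # mean deviation above the floor

# Sustained ~200ms of extra queueing delay: consecutive differences are
# tiny, but the deviation from the floor is obvious.
bloated = [20, 21, 220, 221, 220, 222, 221, 220]
print(pdv_from_floor(bloated))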

> 
> Anyway because it is talking to 21 servers or whatever it is not doing high
> frequency pinging, I think its about 2hz per server (which is about 50 packets
> per second and not much bandwidth). 
> 
> My thought is one might click on a server and focus in on that,

	That sounds pretty useful, especially given the already relatively broad global coverage ;)

> then it could go to a higher frequency. Since it is still TCP, I've got lingering 
> doubts that simulating 20ms stream with tcp bursts is the same as UDP,
> definitely in the case of packet loss, it would not be.'
> 
> There is no way to "load" your connection from this tool, you could open another
> page and run a speed test of course.

	Could this be used to select a server for the bandwidth test?

Best Regards
	Sebastian

> 
> I'm still working on it, but since you guys are talking RTT and Jitter thought I'd
> throw it into the topic.
> 
> On Thu, May 7, 2015 at 7:16 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, Sebastian Moeller wrote:
> 
>         Is this 40ms sort of set in stone? If so we have a new indicator for bad buffer-bloat if inured latency > 40 ms link is unsuitable for decent voip (using old equipment). Is the newer voip stuff that telcos roll out currently any smarter?
> 
> The 40ms is fairly typical for what I encountered 10 years ago. To deploy them there was a requirement to have QoS (basically low-latency queuing on Cisco) for DSCP EF traffic, otherwise things didn't work on the 0.5-2 megabit/s connections that were common back then.
> 
> I'd say anything people are trying to deploy now for use on the Internet without QoS, 40ms just won't work and has never really worked. You need adaptive PDV-buffers and they need to be able to handle hundreds of ms of PDV.
> 
> If you look at this old document from Cisco (10 years old):
> 
> http://www.ciscopress.com/articles/article.asp?p=357102
> 
> "Voice (Bearer Traffic)
> 
> The following list summarizes the key QoS requirements and recommendations for voice (bearer traffic):
> 
> Voice traffic should be marked to DSCP EF per the QoS Baseline and RFC 3246.
> 
> Loss should be no more than 1 percent.
> 
> One-way latency (mouth to ear) should be no more than 150 ms.
> 
> Average one-way jitter should be targeted at less than 30 ms.
> 
> A range of 21 to 320 kbps of guaranteed priority bandwidth is required per call (depending on the sampling rate, the VoIP codec, and Layer 2 media overhead).
> 
> Voice quality directly is affected by all three QoS quality factors: loss, latency, and jitter."
> 
> This requirement kind of reflects the requirements of the VoIP systems of the day with 40ms PDV buffer. There is also a section a page down or so about "Jitter buffers" where there is a recommendation to have adaptive jitter buffers, which I didn't encounter back then but I really hope is a lot more common today.
> 
> 
> -- 
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 10:44                                                                                   ` jb
  2015-05-07 11:36                                                                                     ` Sebastian Moeller
@ 2015-05-07 11:44                                                                                     ` Mikael Abrahamsson
  2015-05-07 13:10                                                                                       ` Jim Gettys
  2015-05-07 13:14                                                                                       ` jb
  2015-05-08 13:20                                                                                     ` [Bloat] DSLReports Jitter/PDV test Rich Brown
  2 siblings, 2 replies; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 11:44 UTC (permalink / raw)
  To: jb; +Cc: bloat

On Thu, 7 May 2015, jb wrote:

> There is a web socket based jitter tester now. It is very early stage but
> works ok.
>
> http://www.dslreports.com/speedtest?radar=1
>
> So the latency displayed is the mean latency from a rolling 60 sample 
> buffer, Minimum latency is also displayed. and the +/- PDV value is the 
> mean difference between sequential pings in that same rolling buffer. It 
> is quite similar to the std.dev actually (not shown).

So I think there are two schools here: either you take the average and 
display +/- from that, or - and I think I prefer this - you take the 
lowest of the last 100 samples (or something) and then display PDV from 
that "floor" value, i.e. PDV can't ever be negative, it can only be 
positive.

Apart from that, the above multi-place RTT test is really really nice, 
thanks for doing this!

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 11:44                                                                                     ` Mikael Abrahamsson
@ 2015-05-07 13:10                                                                                       ` Jim Gettys
  2015-05-07 13:18                                                                                         ` Mikael Abrahamsson
  2015-05-07 13:14                                                                                       ` jb
  1 sibling, 1 reply; 127+ messages in thread
From: Jim Gettys @ 2015-05-07 13:10 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1836 bytes --]

What I don't know is how rapidly VoIP applications will adjust their
latency + jitter window (the operating point that they choose for their
operation). They can't adjust it instantly: if they do, the transitions
from one operating point to another will cause problems, so you certainly
won't be making that adjustment quickly.

So the time period over which one computes jitter statistics should
probably be related to that behavior.

Ideally, we need to get someone involved in WebRTC to help with this, to
present statistics that may be useful to end users to predict the behavior
of their service.

I'll see if I can get someone working on that to join the discussion.
                     - Jim


On Thu, May 7, 2015 at 7:44 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Thu, 7 May 2015, jb wrote:
>
>  There is a web socket based jitter tester now. It is very early stage but
>> works ok.
>>
>> http://www.dslreports.com/speedtest?radar=1
>>
>> So the latency displayed is the mean latency from a rolling 60 sample
>> buffer, Minimum latency is also displayed. and the +/- PDV value is the
>> mean difference between sequential pings in that same rolling buffer. It is
>> quite similar to the std.dev actually (not shown).
>>
>
> So I think there are two schools here, either you take average and display
> + / - from that, but I think I prefer to take the lowest of the last 100
> samples (or something), and then display PDV from that "floor" value, ie
> PDV can't ever be negative, it can only be positive.
>
> Apart from that, the above multi-place RTT test is really really nice,
> thanks for doing this!
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 3183 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 11:44                                                                                     ` Mikael Abrahamsson
  2015-05-07 13:10                                                                                       ` Jim Gettys
@ 2015-05-07 13:14                                                                                       ` jb
  2015-05-07 13:26                                                                                         ` Neil Davies
  2015-05-07 14:45                                                                                         ` Simon Barber
  1 sibling, 2 replies; 127+ messages in thread
From: jb @ 2015-05-07 13:14 UTC (permalink / raw)
  To: Mikael Abrahamsson, bloat

[-- Attachment #1: Type: text/plain, Size: 1598 bytes --]

I thought that would be more sane too. I see it mentioned online that PDV
is a gaussian distribution (around the mean), but it looks more like half
a bell curve, with most numbers near the lowest latency seen, and getting
progressively worse with less frequency.
At least for DSL connections on good ISPs that scenario seems more frequent.
You "usually" get the best latency and "sometimes" get spikes or fuzz on
top of it.

By the way, after I posted I discovered Firefox has an issue with this test,
so I had to block it with a message; my apologies if anyone wasted time
trying it with FF. Hopefully I can figure out why.


On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Thu, 7 May 2015, jb wrote:
>
>  There is a web socket based jitter tester now. It is very early stage but
>> works ok.
>>
>> http://www.dslreports.com/speedtest?radar=1
>>
>> So the latency displayed is the mean latency from a rolling 60 sample
>> buffer, Minimum latency is also displayed. and the +/- PDV value is the
>> mean difference between sequential pings in that same rolling buffer. It is
>> quite similar to the std.dev actually (not shown).
>>
>
> So I think there are two schools here, either you take average and display
> + / - from that, but I think I prefer to take the lowest of the last 100
> samples (or something), and then display PDV from that "floor" value, ie
> PDV can't ever be negative, it can only be positive.
>
> Apart from that, the above multi-place RTT test is really really nice,
> thanks for doing this!
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
>

[-- Attachment #2: Type: text/html, Size: 2329 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 13:10                                                                                       ` Jim Gettys
@ 2015-05-07 13:18                                                                                         ` Mikael Abrahamsson
  0 siblings, 0 replies; 127+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 13:18 UTC (permalink / raw)
  To: Jim Gettys; +Cc: bloat

On Thu, 7 May 2015, Jim Gettys wrote:

> Ideally, we need to get someone involved in WebRTC to help with this, to 
> present statistics that may be useful to end users to predict the 
> behavior of their service.

If nothing else, I would really like to be able to expose the realtime 
application and its network experience to the user.

For the kind of classic PSTNoverIP system I mentioned before, it was 
usually possible to collect statistics such as:

Packet loss (packet was lost completely)
Packet re-ordering (packets arrived out of order)
Packet PDV buffer miss (packet arrived too late to be played on time)

I guess it's also possible to get PDV buffer underrun or overrun (depending 
on how one sees it): if I get a bunch of PDV buffer misses and then halt 
play-out to wait for the PDV buffer to fill up, and then I get 200ms worth 
of packets at once but don't have 200ms worth of buffer, then I throw 
away sound due to that...

So it all depends on the whole machinery and how it acts; you need 
different statistics. How to present this in a useful manner to the user 
is a very interesting problem, but it would be nice if most VoIP 
applications at least had a "status window" where these values could be 
seen in a graph, or something similar to "task manager" in Windows.
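
For what it's worth, a sketch of how those per-packet counters could be
derived from (sequence number, arrival time) pairs; the function and field
names here are made up:

def classify_packets(arrival_log, sent_count, deadline_ms_for):
    # arrival_log: (seq, arrival_ms) pairs in the order packets arrived.
    # sent_count: how many packets the sender transmitted (seq 0..N-1).
    # deadline_ms_for(seq): the play-out deadline for that sequence number.
    stats = {"lost": 0, "reordered": 0, "late_for_playout": 0, "on_time": 0}
    seen = set()
    highest = -1
    for seq, arrival_ms in arrival_log:
        seen.add(seq)
        if seq < highest:
            stats["reordered"] += 1
        highest = max(highest, seq)
        if arrival_ms > deadline_ms_for(seq):
            stats["late_for_playout"] += 1
        else:
            stats["on_time"] += 1
    stats["lost"] = sent_count - len(seen)
    return stats

# Example: 20ms packet spacing with 60ms of play-out buffer.
log = [(0, 5), (1, 25), (3, 70), (2, 150), (4, 95)]
print(classify_packets(log, sent_count=6,
                       deadline_ms_for=lambda s: s * 20 + 60))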

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 13:14                                                                                       ` jb
@ 2015-05-07 13:26                                                                                         ` Neil Davies
  2015-05-07 14:45                                                                                         ` Simon Barber
  1 sibling, 0 replies; 127+ messages in thread
From: Neil Davies @ 2015-05-07 13:26 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 2962 bytes --]


On 7 May 2015, at 14:14, jb <justin@dslr.net> wrote:

> I thought would be more sane too. I see mentioned online that PDV is a 
> gaussian distribution (around mean) but it looks more like half a bell curve, with most numbers near the the lowest latency seen, and getting progressively worse with
> less frequency.

That's someone describing the typical mathematical formulation (motivated by noise models in signal propagation), not the reality experienced over DSL links.

> At least for DSL connections on good ISPs that scenario seems more frequent.
> You "usually" get the best latency and "sometimes" get spikes or fuzz on top of it.

"Good ISPs" (let's, for the moment define good this way) are ones in which the variability induced by transit accross them is small and bounded - BT Wholesale (access network) has - in our experience - delivers packets (after you've removed the effects of distance and packet size) from the customer to the retail ISP with <5ms delay variation (~0%loss) and from the retail ISP to the customer <15ms delay variation <0.1% loss. The delay appears to be uniformly distributed.

The major cause of delay/loss (in such a scenario) is the instantaneous overdriving of the last-mile capacity - that takes the typical pattern of rapid growth followed by slow decay that would be expected for a queue fill/empty cycle at that point in the network (in that case the BRAS).

An example (not quite what is described above, but one that illustrates the issues) can be found here: http://www.slideshare.net/mgeddes/advanced-network-performance-measurement

Neil

> 
> by the way after I posted I discovered Firefox has an issue with this test so I had
> to block it with a message, my apologies if anyone wasted time trying it with FF.
> Hopefully i can figure out why.
> 
> 
> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, jb wrote:
> 
> There is a web socket based jitter tester now. It is very early stage but
> works ok.
> 
> http://www.dslreports.com/speedtest?radar=1
> 
> So the latency displayed is the mean latency from a rolling 60 sample buffer, Minimum latency is also displayed. and the +/- PDV value is the mean difference between sequential pings in that same rolling buffer. It is quite similar to the std.dev actually (not shown).
> 
> So I think there are two schools here, either you take average and display + / - from that, but I think I prefer to take the lowest of the last 100 samples (or something), and then display PDV from that "floor" value, ie PDV can't ever be negative, it can only be positive.
> 
> Apart from that, the above multi-place RTT test is really really nice, thanks for doing this!
> 
> 
> -- 
> Mikael Abrahamsson    email: swmike@swm.pp.se
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


[-- Attachment #2: Type: text/html, Size: 4350 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 13:14                                                                                       ` jb
  2015-05-07 13:26                                                                                         ` Neil Davies
@ 2015-05-07 14:45                                                                                         ` Simon Barber
  2015-05-07 22:27                                                                                           ` Dave Taht
  1 sibling, 1 reply; 127+ messages in thread
From: Simon Barber @ 2015-05-07 14:45 UTC (permalink / raw)
  To: jb, Mikael Abrahamsson, bloat

[-- Attachment #1: Type: text/plain, Size: 3916 bytes --]

The key figure for VoIP is maximum latency, or perhaps somewhere around 
99th percentile. Voice packets cannot be played out if they are late, so 
how late they are is the only thing that matters. If many packets are early 
but more than a very small number are late, then the jitter buffer has to 
adjust to handle the late packets. Adjusting the jitter buffer disrupts the 
conversation, so ideally adjustments are infrequent. When maximum latency 
suddenly increases it becomes necessary to increase the buffer fairly 
quickly to avoid a dropout in the conversation. Buffer reductions can be 
hidden by waiting for gaps in conversation. People get used to the acoustic 
round trip latency and learn how quickly to expect a reply from the other 
person (unless latency is really too high), but adjustments interfere with 
this learned expectation, making it hard to interpret why the other person 
has paused. Thus adjustments to the buffering should be as infrequent as 
possible.

Codel measures and tracks minimum latency in its inner 'interval' loop. For 
VoIP the maximum is what counts. You can call it minimum + jitter, but the 
maximum is the important thing (not the absolute maximum, since a very 
small number of late packets are tolerable, but perhaps the 99th percentile).

During a conversation it will take some time at the start to learn the 
characteristics of the link, but ideally the jitter buffer algorithm will 
quickly get to a place where few adjustments are made. The more 
conservative the buffer (higher delay above minimum) the less likely a 
future adjustment will be needed, hence a tendency towards larger buffers 
(and more delay).

Priority queueing is perfect for VoIP, since it can keep the jitter at a 
single hop down to the transmission time for a single maximum size packet. 
Fair Queueing will often achieve the same thing, since VoIP streams are 
often the lowest bandwidth ongoing stream on the link.
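
For a sense of scale, that single maximum-size-packet transmission time is
easy to compute (a back-of-envelope sketch; the link rates are just
examples):

def serialization_delay_ms(packet_bytes, link_mbps):
    # Time to put one packet on the wire at a given link rate.
    return packet_bytes * 8 / (link_mbps * 1e6) * 1e3

# Worst-case jitter added at one hop by a single 1500-byte packet ahead
# of the VoIP packet, under strict priority or fair queueing:
for mbps in (1, 10, 100):
    print(mbps, "Mbit/s:", round(serialization_delay_ms(1500, mbps), 2), "ms")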

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:

> I thought would be more sane too. I see mentioned online that PDV is a
> gaussian distribution (around mean) but it looks more like half a bell
> curve, with most numbers near the the lowest latency seen, and getting
> progressively worse with
> less frequency.
> At least for DSL connections on good ISPs that scenario seems more frequent.
> You "usually" get the best latency and "sometimes" get spikes or fuzz on
> top of it.
>
> by the way after I posted I discovered Firefox has an issue with this test
> so I had
> to block it with a message, my apologies if anyone wasted time trying it
> with FF.
> Hopefully i can figure out why.
>
>
> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>
> > On Thu, 7 May 2015, jb wrote:
> >
> >  There is a web socket based jitter tester now. It is very early stage but
> >> works ok.
> >>
> >> http://www.dslreports.com/speedtest?radar=1
> >>
> >> So the latency displayed is the mean latency from a rolling 60 sample
> >> buffer, Minimum latency is also displayed. and the +/- PDV value is the
> >> mean difference between sequential pings in that same rolling buffer. It is
> >> quite similar to the std.dev actually (not shown).
> >>
> >
> > So I think there are two schools here, either you take average and display
> > + / - from that, but I think I prefer to take the lowest of the last 100
> > samples (or something), and then display PDV from that "floor" value, ie
> > PDV can't ever be negative, it can only be positive.
> >
> > Apart from that, the above multi-place RTT test is really really nice,
> > thanks for doing this!
> >
> >
> > --
> > Mikael Abrahamsson    email: swmike@swm.pp.se
> >
>
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 5330 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 14:45                                                                                         ` Simon Barber
@ 2015-05-07 22:27                                                                                           ` Dave Taht
  2015-05-07 22:45                                                                                             ` Dave Taht
  2015-05-07 23:09                                                                                             ` Dave Taht
  0 siblings, 2 replies; 127+ messages in thread
From: Dave Taht @ 2015-05-07 22:27 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net> wrote:
> The key figure for VoIP is maximum latency, or perhaps somewhere around 99th
> percentile. Voice packets cannot be played out if they are late, so how late
> they are is the only thing that matters. If many packets are early but more
> than a very small number are late, then the jitter buffer has to adjust to
> handle the late packets. Adjusting the jitter buffer disrupts the
> conversation, so ideally adjustments are infrequent. When maximum latency
> suddenly increases it becomes necessary to increase the buffer fairly
> quickly to avoid a dropout in the conversation. Buffer reductions can be
> hidden by waiting for gaps in conversation. People get used to the acoustic
> round trip latency and learn how quickly to expect a reply from the other
> person (unless latency is really too high), but adjustments interfere with
> this learned expectation, so make it hard to interpret why the other person
> has paused. Thus adjustments to the buffering should be as infrequent as
> possible.
>
> Codel measures and tracks minimum latency in its inner 'interval' loop. For
> VoIP the maximum is what counts. You can call it minimum + jitter, but the
> maximum is the important thing (not the absolute maximum, since a very small
> number of late packets are tolerable, but perhaps the 99th percentile).
>
> During a conversation it will take some time at the start to learn the
> characteristics of the link, but ideally the jitter buffer algorithm will
> quickly get to a place where few adjustments are made. The more conservative
> the buffer (higher delay above minimum) the less likely a future adjustment
> will be needed, hence a tendency towards larger buffers (and more delay).
>
> Priority queueing is perfect for VoIP, since it can keep the jitter at a
> single hop down to the transmission time for a single maximum size packet.
> Fair Queueing will often achieve the same thing, since VoIP streams are
> often the lowest bandwidth ongoing stream on the link.

Unfortunately this is more nuanced than that. Not for the first time
do I wish that email contained math, and/or that we had put together a
paper for this containing the relevant math. I do have a spreadsheet
lying around here somewhere...

In the case of a drop tail queue, jitter is a function of the total
amount of data outstanding on the link by all the flows. A single
big fat flow experiencing a drop will drop its buffer occupancy
(and thus its effect on other flows) by a lot on the next RTT. However
a lot of fat flows will drop by less if drops are few. Total delay
is the sum of all packets outstanding on the link.

In the case of stochastic packet-fair queuing jitter is a function
of the total number of bytes in each packet outstanding on the sum
of the total number of flows. The total delay is the sum of the
bytes delivered per packet per flow.

In the case of DRR, jitter is a function of the total number of bytes
allowed by the quantum per flow outstanding on the link. The total
delay experienced by the flow is a function of the amounts of
bytes delivered with the number of flows.

In the case of fq_codel, jitter is a function of the total number
of bytes allowed by the quantum per flow outstanding on the link,
with the sparse optimization pushing flows with no queue
in the available window to the front. Furthermore
codel acts to shorten the lengths of the queues overall.

fq_codel's delay, when the arriving new-flow packet can be serviced
in less time than the total of the flows' quantums, is a function
of the total number of flows that are not also building queues. When
the total service time for all flows exceeds the interval the voip
packet is delivered in, AND the quantum under which the algorithm
is delivering, fq_codel degrades to DRR behavior. (In other words,
given enough queuing flows or enough new flows, you can steadily
accrue delay on a voip flow under fq_codel.) Predicting jitter is
really hard to do here, but still pretty minimal compared to the
alternatives above.
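
A back-of-envelope sketch of that degraded (DRR-like) case, under the
simplifying assumption that every other backlogged flow gets one full
quantum ahead of the voip packet:

def drr_round_delay_ms(backlogged_flows, quantum_bytes, link_mbps):
    # Rough upper bound on the extra delay a sparse (e.g. voip) packet
    # can see once fq_codel has degraded to DRR behavior: one quantum
    # per backlogged flow, serialized at the link rate. Ignores the
    # packet's own transmission time and codel's queue-shortening.
    bits = backlogged_flows * quantum_bytes * 8
    return bits / (link_mbps * 1e6) * 1e3

# e.g. 30 backlogged flows, 1514-byte quantum, 10 Mbit/s uplink:
print(round(drr_round_delay_ms(30, 1514, 10), 1), "ms")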

In the above 3 cases, hash collisions permute the result. Cake and
fq_pie have a lot fewer collisions.

I am generally sanguine about this along the edge. From the internet,
packets cannot be easily classified, yet most edge networks have more
bandwidth in that direction and are thus able to fit WAY more flows in
under 10ms; and outbound, from the home or small business, some
classification can be effectively used in an X-tier shaper (or cake) to
ensure better priority (still with fair) queuing for this special
class of application - not that this is an issue under most home
workloads. We think. We really need to do more benchmarking of web and
dash traffic loads.

> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>
>> I thought would be more sane too. I see mentioned online that PDV is a
>> gaussian distribution (around mean) but it looks more like half a bell
>> curve, with most numbers near the the lowest latency seen, and getting
>> progressively worse with
>> less frequency.
>> At least for DSL connections on good ISPs that scenario seems more
>> frequent.
>> You "usually" get the best latency and "sometimes" get spikes or fuzz on
>> top of it.
>>
>> by the way after I posted I discovered Firefox has an issue with this test
>> so I had
>> to block it with a message, my apologies if anyone wasted time trying it
>> with FF.
>> Hopefully i can figure out why.
>>
>>
>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>> wrote:
>>>
>>> On Thu, 7 May 2015, jb wrote:
>>>
>>>> There is a web socket based jitter tester now. It is very early stage
>>>> but
>>>> works ok.
>>>>
>>>> http://www.dslreports.com/speedtest?radar=1
>>>>
>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is the mean
>>>> difference between sequential pings in that same rolling buffer. It is quite
>>>> similar to the std.dev actually (not shown).
>>>
>>>
>>> So I think there are two schools here, either you take average and
>>> display + / - from that, but I think I prefer to take the lowest of the last
>>> 100 samples (or something), and then display PDV from that "floor" value, ie
>>> PDV can't ever be negative, it can only be positive.
>>>
>>> Apart from that, the above multi-place RTT test is really really nice,
>>> thanks for doing this!
>>>
>>>
>>> --
>>> Mikael Abrahamsson    email: swmike@swm.pp.se
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 22:27                                                                                           ` Dave Taht
@ 2015-05-07 22:45                                                                                             ` Dave Taht
  2015-05-07 23:09                                                                                             ` Dave Taht
  1 sibling, 0 replies; 127+ messages in thread
From: Dave Taht @ 2015-05-07 22:45 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net> wrote:
>> The key figure for VoIP is maximum latency, or perhaps somewhere around 99th
>> percentile. Voice packets cannot be played out if they are late, so how late
>> they are is the only thing that matters. If many packets are early but more
>> than a very small number are late, then the jitter buffer has to adjust to
>> handle the late packets. Adjusting the jitter buffer disrupts the
>> conversation, so ideally adjustments are infrequent. When maximum latency
>> suddenly increases it becomes necessary to increase the buffer fairly
>> quickly to avoid a dropout in the conversation. Buffer reductions can be
>> hidden by waiting for gaps in conversation. People get used to the acoustic
>> round trip latency and learn how quickly to expect a reply from the other
>> person (unless latency is really too high), but adjustments interfere with
>> this learned expectation, so make it hard to interpret why the other person
>> has paused. Thus adjustments to the buffering should be as infrequent as
>> possible.
>>
>> Codel measures and tracks minimum latency in its inner 'interval' loop. For
>> VoIP the maximum is what counts. You can call it minimum + jitter, but the
>> maximum is the important thing (not the absolute maximum, since a very small
>> number of late packets are tolerable, but perhaps the 99th percentile).
>>
>> During a conversation it will take some time at the start to learn the
>> characteristics of the link, but ideally the jitter buffer algorithm will
>> quickly get to a place where few adjustments are made. The more conservative
>> the buffer (higher delay above minimum) the less likely a future adjustment
>> will be needed, hence a tendency towards larger buffers (and more delay).
>>
>> Priority queueing is perfect for VoIP, since it can keep the jitter at a
>> single hop down to the transmission time for a single maximum size packet.
>> Fair Queueing will often achieve the same thing, since VoIP streams are
>> often the lowest bandwidth ongoing stream on the link.
>
> Unfortunately this is more nuanced than this. Not for the first time
> do I wish that email contained math, and/or we had put together a paper
> for this containing the relevant math. I do have a spreadsheet lying
> around here somewhere...
>
> In the case of a drop tail queue, jitter is a function of the total
> amount of data outstanding on the link by all the flows. A single
> big fat flow experiencing a drop will drop it's buffer occupancy
> (and thus effect on other flows) by a lot on the next RTT. However
> a lot of fat flows will drop by less if drops are few. Total delay
> is the sum of all packets outstanding on the link.
>
> In the case of stochastic packet-fair queuing jitter is a function
> of the total number of bytes in each packet outstanding on the sum
> of the total number of flows. The total delay is the sum of the
> bytes delivered per packet per flow.
>
> In the case of DRR, jitter is a function of the total number of bytes
> allowed by the quantum per flow outstanding on the link. The total
> delay experienced by the flow is a function of the amounts of
> bytes delivered with the number of flows.
>
> In the case of fq_codel, jitter is a function of of the total number
> of bytes allowed by the quantum per flow outstanding on the link,
> with the sparse optimization pushing flows with no queue
> queue in the available window to the front. Furthermore
> codel acts to shorten the lengths of the queues overall.
>
> fq_codel's delay: when the arriving in new flow packet can be serviced
> in less time than the total number of flows' quantums, is a function
> of the total number of flows that are not also building queues. When
> the total service time for all flows exceeds the interval the voip
> packet is delivered in, and AND the quantum under which the algorithm
> is delivering, fq_codel degrades to DRR behavior. (in other words,
> given enough queuing flows or enough new flows, you can steadily
> accrue delay on a voip flow under fq_codel). Predicting jitter is
> really hard to do here, but still pretty minimal compared to the
> alternatives above.
>
> in the above 3 cases, hash collisions permute the result. Cake and
> fq_pie have a lot less collisions.
>
> I am generally sanguine about this along the edge - from the internet
> packets cannot be easily classified, yet most edge networks have more
> bandwidth from that direction, thus able to fit WAY more flows in
> under 10ms, and outbound, from the home or small business, some
> classification can be effectively used in a X tier shaper (or cake) to
> ensure better priority (still with fair) queuing for this special
> class of application - not that under most home workloads that this is
> an issue. We think. We really need to do more benchmarking of web and
> dash traffic loads.

I note also that I fought for (and lost) an argument to make it more
possible for webrtc applications to use one port for video and another
for voice. This would have provided a useful e2e clock to measure
video congestion against on a minimal interval, in a FQ'd environment
in particular, and perhaps have led to lower-latency videoconferencing,
more rapid ramp-ups of frame rate or quality, etc.

The argument for minimizing port use, the enormous difficulty in
establishing two clear paths for those ports all the time, and the
difficulty of lip sync with two separate flows won the day. I decided
to wait until we had the fq_codel-derived stuff more worked out to play
with the browsers themselves.

I would have also liked ECN adopted for webrtc's primary frame bursts
as well, but the early proposal for that (in "Nada") was dropped, last
I looked. Again, this could be revisited.

>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>>
>>> I thought would be more sane too. I see mentioned online that PDV is a
>>> gaussian distribution (around mean) but it looks more like half a bell
>>> curve, with most numbers near the the lowest latency seen, and getting
>>> progressively worse with
>>> less frequency.
>>> At least for DSL connections on good ISPs that scenario seems more
>>> frequent.
>>> You "usually" get the best latency and "sometimes" get spikes or fuzz on
>>> top of it.
>>>
>>> by the way after I posted I discovered Firefox has an issue with this test
>>> so I had
>>> to block it with a message, my apologies if anyone wasted time trying it
>>> with FF.
>>> Hopefully i can figure out why.
>>>
>>>
>>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>>> wrote:
>>>>
>>>> On Thu, 7 May 2015, jb wrote:
>>>>
>>>>> There is a web socket based jitter tester now. It is very early stage
>>>>> but
>>>>> works ok.
>>>>>
>>>>> http://www.dslreports.com/speedtest?radar=1
>>>>>
>>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is the mean
>>>>> difference between sequential pings in that same rolling buffer. It is quite
>>>>> similar to the std.dev actually (not shown).
>>>>
>>>>
>>>> So I think there are two schools here, either you take average and
>>>> display + / - from that, but I think I prefer to take the lowest of the last
>>>> 100 samples (or something), and then display PDV from that "floor" value, ie
>>>> PDV can't ever be negative, it can only be positive.
>>>>
>>>> Apart from that, the above multi-place RTT test is really really nice,
>>>> thanks for doing this!
>>>>
>>>>
>>>> --
>>>> Mikael Abrahamsson    email: swmike@swm.pp.se
>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 22:27                                                                                           ` Dave Taht
  2015-05-07 22:45                                                                                             ` Dave Taht
@ 2015-05-07 23:09                                                                                             ` Dave Taht
  2015-05-08  2:05                                                                                               ` jb
  2015-05-08  3:54                                                                                               ` Eric Dumazet
  1 sibling, 2 replies; 127+ messages in thread
From: Dave Taht @ 2015-05-07 23:09 UTC (permalink / raw)
  To: Simon Barber; +Cc: bloat

On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net> wrote:
>> The key figure for VoIP is maximum latency, or perhaps somewhere around 99th
>> percentile. Voice packets cannot be played out if they are late, so how late
>> they are is the only thing that matters. If many packets are early but more
>> than a very small number are late, then the jitter buffer has to adjust to
>> handle the late packets. Adjusting the jitter buffer disrupts the
>> conversation, so ideally adjustments are infrequent. When maximum latency
>> suddenly increases it becomes necessary to increase the buffer fairly
>> quickly to avoid a dropout in the conversation. Buffer reductions can be
>> hidden by waiting for gaps in conversation. People get used to the acoustic
>> round trip latency and learn how quickly to expect a reply from the other
>> person (unless latency is really too high), but adjustments interfere with
>> this learned expectation, so make it hard to interpret why the other person
>> has paused. Thus adjustments to the buffering should be as infrequent as
>> possible.
>>
>> Codel measures and tracks minimum latency in its inner 'interval' loop. For
>> VoIP the maximum is what counts. You can call it minimum + jitter, but the
>> maximum is the important thing (not the absolute maximum, since a very small
>> number of late packets are tolerable, but perhaps the 99th percentile).
>>
>> During a conversation it will take some time at the start to learn the
>> characteristics of the link, but ideally the jitter buffer algorithm will
>> quickly get to a place where few adjustments are made. The more conservative
>> the buffer (higher delay above minimum) the less likely a future adjustment
>> will be needed, hence a tendency towards larger buffers (and more delay).
>>
>> Priority queueing is perfect for VoIP, since it can keep the jitter at a
>> single hop down to the transmission time for a single maximum size packet.
>> Fair Queueing will often achieve the same thing, since VoIP streams are
>> often the lowest bandwidth ongoing stream on the link.
>
> Unfortunately it is more nuanced than that. Not for the first time
> do I wish that email contained math, and/or we had put together a paper
> for this containing the relevant math. I do have a spreadsheet lying
> around here somewhere...
>
> In the case of a drop tail queue, jitter is a function of the total
> amount of data outstanding on the link by all the flows. A single
> big fat flow experiencing a drop will drop its buffer occupancy
> (and thus its effect on other flows) by a lot on the next RTT. However
> a lot of fat flows will drop by less if drops are few. Total delay
> is the sum of all packets outstanding on the link.
>
> In the case of stochastic packet-fair queuing jitter is a function
> of the total number of bytes in each packet outstanding on the sum
> of the total number of flows. The total delay is the sum of the
> bytes delivered per packet per flow.
>
> In the case of DRR, jitter is a function of the total number of bytes
> allowed by the quantum per flow outstanding on the link. The total
> delay experienced by the flow is a function of the amounts of
> bytes delivered with the number of flows.
>
> In the case of fq_codel, jitter is a function of the total number
> of bytes allowed by the quantum per flow outstanding on the link,
> with the sparse optimization pushing flows with no queue in the
> available window to the front. Furthermore codel acts to shorten
> the lengths of the queues overall.
>
> fq_codel's delay, when the arriving new-flow packet can be serviced
> in less time than the total of the flows' quantums, is a function
> of the total number of flows that are not also building queues. When
> the total service time for all flows exceeds the interval the voip
> packet is delivered in, AND the quantum under which the algorithm
> is delivering, fq_codel degrades to DRR behavior. (in other words,
> given enough queuing flows or enough new flows, you can steadily
> accrue delay on a voip flow under fq_codel). Predicting jitter is
> really hard to do here, but still pretty minimal compared to the
> alternatives above.

And to complexify it further: if the total flows' service time exceeds
the interval on which the voip flow is being delivered, the voip flow
can deliver an fq_codel quantum's worth of packets back to back.
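
To put rough numbers on that (purely illustrative figures - an assumed
20 Mbit/s link, 100 backlogged flows and a 1514-byte quantum, not a
measurement of anything):

# Rough sketch: when does a DRR-style round exceed a VoIP packet interval?
# All numbers are illustrative assumptions, not measurements.

link_rate_bps = 20e6     # assumed bottleneck rate: 20 Mbit/s
quantum_bytes = 1514     # assumed per-flow quantum (one full-size packet)
bulk_flows    = 100      # assumed number of backlogged flows
voip_interval = 0.020    # 20 ms between VoIP packets (typical codec)

# Worst-case time for one full round over all backlogged flows.
round_time = bulk_flows * quantum_bytes * 8 / link_rate_bps

# If a round takes longer than the VoIP interval, several VoIP packets
# arrive during one round and can be sent back to back when the flow
# is finally served.
packets_back_to_back = max(1, int(round_time / voip_interval))

print(f"round time      : {round_time * 1000:.1f} ms")
print(f"voip interval   : {voip_interval * 1000:.0f} ms")
print(f"packets bunched : ~{packets_back_to_back}")

With those assumed numbers one round takes ~60 ms, so roughly three
20 ms voip packets would bunch up per round.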

Boy I wish I could explain all this better, and/or observe the results
on real jitter buffers in real apps.

>
> in the above 3 cases, hash collisions permute the result. Cake and
> fq_pie have a lot less collisions.

Which is not necessarily a panacea either. Perfect flow isolation
(cake) across hundreds of flows might in some cases be worse than
suffering hash collisions (fq_codel) for some workloads. sch_fq and
fq_pie have *perfect* flow isolation, and I worry about the effects of
tons and tons of short flows (think ddos attacks) - I am comforted by
collisions! I also tend to think there is an ideal ratio of flows
allowed without queue management versus available bandwidth that we
don't know yet - and that for larger numbers of flows we should be
inheriting more global environmental state (state of the link and
all queues) than we currently do when initializing both cake and
fq_codel queues.
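
As a rough feel for how often collisions actually bite at that scale,
here is a small sketch (my own illustration; it assumes a uniform hash
and fq_codel's default of 1024 buckets):

# Expected sharing when N flows hash uniformly into B buckets.
# B = 1024 is fq_codel's default flows count; N = 450 matches the test
# mentioned below. Purely illustrative.

buckets = 1024
flows = 450

# Probability a given flow has a bucket to itself under uniform hashing.
alone = (1 - 1 / buckets) ** (flows - 1)

singles = flows * alone        # expected flows with a private queue
colliding = flows - singles    # expected flows sharing a queue

print(f"~{singles:.0f} of {flows} flows get their own queue")
print(f"~{colliding:.0f} flows share a queue with at least one other flow")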

Recently I did some tests of 450+ flows (details on the cake mailing
list) against sch_fq which got hopelessly buried (10000 packets in
queue). cake and fq_pie did a lot better.

> I am generally sanguine about this along the edge - from the internet
> packets cannot be easily classified, yet most edge networks have more
> bandwidth from that direction, thus able to fit WAY more flows in
> under 10ms, and outbound, from the home or small business, some
> classification can be effectively used in an X-tier shaper (or cake) to
> ensure better priority (still with fair) queuing for this special
> class of application - not that under most home workloads this is
> an issue. We think. We really need to do more benchmarking of web and
> dash traffic loads.
>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>>
>>> I thought would be more sane too. I see mentioned online that PDV is a
>>> gaussian distribution (around mean) but it looks more like half a bell
>>> curve, with most numbers near the the lowest latency seen, and getting
>>> progressively worse with
>>> less frequency.
>>> At least for DSL connections on good ISPs that scenario seems more
>>> frequent.
>>> You "usually" get the best latency and "sometimes" get spikes or fuzz on
>>> top of it.
>>>
>>> by the way after I posted I discovered Firefox has an issue with this test
>>> so I had
>>> to block it with a message, my apologies if anyone wasted time trying it
>>> with FF.
>>> Hopefully i can figure out why.
>>>
>>>
>>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>>> wrote:
>>>>
>>>> On Thu, 7 May 2015, jb wrote:
>>>>
>>>>> There is a web socket based jitter tester now. It is very early stage
>>>>> but
>>>>> works ok.
>>>>>
>>>>> http://www.dslreports.com/speedtest?radar=1
>>>>>
>>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is the mean
>>>>> difference between sequential pings in that same rolling buffer. It is quite
>>>>> similar to the std.dev actually (not shown).
>>>>
>>>>
>>>> So I think there are two schools here, either you take average and
>>>> display + / - from that, but I think I prefer to take the lowest of the last
>>>> 100 samples (or something), and then display PDV from that "floor" value, ie
>>>> PDV can't ever be negative, it can only be positive.
>>>>
>>>> Apart from that, the above multi-place RTT test is really really nice,
>>>> thanks for doing this!
>>>>
>>>>
>>>> --
>>>> Mikael Abrahamsson    email: swmike@swm.pp.se
>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 23:09                                                                                             ` Dave Taht
@ 2015-05-08  2:05                                                                                               ` jb
  2015-05-08  4:16                                                                                                 ` David Lang
  2015-05-08  3:54                                                                                               ` Eric Dumazet
  1 sibling, 1 reply; 127+ messages in thread
From: jb @ 2015-05-08  2:05 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 10597 bytes --]

I've made some changes, and now this test displays the "PDV" column as
simply the recent average increase over the best latency seen, since the
best latency seen is usually pretty stable. (It should also work in
Firefox now.)

In addition, every 30 seconds a grade is printed next to a timestamp.
I know how we all like grades :) The grade is based on the average of all
the PDVs, and ranges from A+ (5 milliseconds or less) down to F for fail.
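
For anyone curious, a minimal sketch of how such a figure and grade can
be computed follows. This is just my reading of the description above,
not the actual dslreports code; the rolling-buffer size and the grade
cut-offs are assumptions.

# Sketch: PDV measured against the best latency seen, plus a letter grade.
# Buffer size and grade thresholds are assumed for illustration only.
from collections import deque

WINDOW = 60                      # assumed rolling sample count
samples = deque(maxlen=WINDOW)   # recent RTTs in milliseconds

def add_sample(rtt_ms):
    samples.append(rtt_ms)

def pdv_ms():
    """Average increase of recent samples over the best (lowest) RTT."""
    floor = min(samples)
    return sum(s - floor for s in samples) / len(samples)

def grade(avg_pdv):
    # Assumed cut-offs: A+ for 5 ms or less, coarser steps down to F.
    for cutoff, letter in [(5, "A+"), (10, "A"), (20, "B"),
                           (40, "C"), (80, "D")]:
        if avg_pdv <= cutoff:
            return letter
    return "F"

for rtt in [41, 42, 40, 44, 41, 60, 43, 41]:   # made-up RTT samples
    add_sample(rtt)
print(f"PDV ~ {pdv_ms():.1f} ms, grade {grade(pdv_ms())}")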

I'm not 100% happy with this PDV figure. A stellar connection - and no
internet congestion - will show a low, stable number and an A+ grade. A
connection with jitter will show a PDV that is half the average jitter
amplitude. So far so good.

But a connection with almost no jitter, whose latency is simply visibly
higher than the minimum, will show a failing grade. For a jitter / packet
delay variation type of test, I'm not sure that is right: one could say
it is a very good connection, yet because it sits 30ms above just one
revealed optimal ping it might get a "D". Not sure how common that state
of affairs would be, though.

Also, since it is a global test, a component of the grade is internet
backbone congestion, and not necessarily an ISP or equipment issue.


On Fri, May 8, 2015 at 9:09 AM, Dave Taht <dave.taht@gmail.com> wrote:

> On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
> > On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net>
> wrote:
> >> The key figure for VoIP is maximum latency, or perhaps somewhere around
> 99th
> >> percentile. Voice packets cannot be played out if they are late, so how
> late
> >> they are is the only thing that matters. If many packets are early but
> more
> >> than a very small number are late, then the jitter buffer has to adjust
> to
> >> handle the late packets. Adjusting the jitter buffer disrupts the
> >> conversation, so ideally adjustments are infrequent. When maximum
> latency
> >> suddenly increases it becomes necessary to increase the buffer fairly
> >> quickly to avoid a dropout in the conversation. Buffer reductions can be
> >> hidden by waiting for gaps in conversation. People get used to the
> acoustic
> >> round trip latency and learn how quickly to expect a reply from the
> other
> >> person (unless latency is really too high), but adjustments interfere
> with
> >> this learned expectation, so make it hard to interpret why the other
> person
> >> has paused. Thus adjustments to the buffering should be as infrequent as
> >> possible.
> >>
> >> Codel measures and tracks minimum latency in its inner 'interval' loop.
> For
> >> VoIP the maximum is what counts. You can call it minimum + jitter, but
> the
> >> maximum is the important thing (not the absolute maximum, since a very
> small
> >> number of late packets are tolerable, but perhaps the 99th percentile).
> >>
> >> During a conversation it will take some time at the start to learn the
> >> characteristics of the link, but ideally the jitter buffer algorithm
> will
> >> quickly get to a place where few adjustments are made. The more
> conservative
> >> the buffer (higher delay above minimum) the less likely a future
> adjustment
> >> will be needed, hence a tendency towards larger buffers (and more
> delay).
> >>
> >> Priority queueing is perfect for VoIP, since it can keep the jitter at a
> >> single hop down to the transmission time for a single maximum size
> packet.
> >> Fair Queueing will often achieve the same thing, since VoIP streams are
> >> often the lowest bandwidth ongoing stream on the link.
> >
> > Unfortunately this is more nuanced than this. Not for the first time
> > do I wish that email contained math, and/or we had put together a paper
> > for this containing the relevant math. I do have a spreadsheet lying
> > around here somewhere...
> >
> > In the case of a drop tail queue, jitter is a function of the total
> > amount of data outstanding on the link by all the flows. A single
> > big fat flow experiencing a drop will drop it's buffer occupancy
> > (and thus effect on other flows) by a lot on the next RTT. However
> > a lot of fat flows will drop by less if drops are few. Total delay
> > is the sum of all packets outstanding on the link.
> >
> > In the case of stochastic packet-fair queuing jitter is a function
> > of the total number of bytes in each packet outstanding on the sum
> > of the total number of flows. The total delay is the sum of the
> > bytes delivered per packet per flow.
> >
> > In the case of DRR, jitter is a function of the total number of bytes
> > allowed by the quantum per flow outstanding on the link. The total
> > delay experienced by the flow is a function of the amounts of
> > bytes delivered with the number of flows.
> >
> > In the case of fq_codel, jitter is a function of of the total number
> > of bytes allowed by the quantum per flow outstanding on the link,
> > with the sparse optimization pushing flows with no queue
> > queue in the available window to the front. Furthermore
> > codel acts to shorten the lengths of the queues overall.
> >
> > fq_codel's delay: when the arriving in new flow packet can be serviced
> > in less time than the total number of flows' quantums, is a function
> > of the total number of flows that are not also building queues. When
> > the total service time for all flows exceeds the interval the voip
> > packet is delivered in, and AND the quantum under which the algorithm
> > is delivering, fq_codel degrades to DRR behavior. (in other words,
> > given enough queuing flows or enough new flows, you can steadily
> > accrue delay on a voip flow under fq_codel). Predicting jitter is
> > really hard to do here, but still pretty minimal compared to the
> > alternatives above.
>
> And to complexify it further if the total flows' service time exceeds
> the interval on which the voip flow is being delivered, the voip flow
> can deliver a fq_codel quantum's worth of packets back to back.
>
> Boy I wish I could explain all this better, and/or observe the results
> on real jitter buffers in real apps.
>
> >
> > in the above 3 cases, hash collisions permute the result. Cake and
> > fq_pie have a lot less collisions.
>
> Which is not necessarily a panacea either. perfect flow isolation
> (cake) to hundreds of flows might be in some cases worse that
> suffering hash collisions (fq_codel) for some workloads. sch_fq and
> fq_pie have *perfect* flow isolation and I worry about the effects of
> tons and tons of short flows (think ddos attacks) - I am comforted by
> colliisions! and tend to think there is an ideal ratio of flows
> allowed without queue management verses available bandwidth that we
> don't know yet - as well as think for larger numbers of flows we
> should be inheriting more global environmental (state of the link and
> all queues) than we currently do in initializing both cake and
> fq_codel queues.
>
> Recently I did some tests of 450+ flows (details on the cake mailing
> list) against sch_fq which got hopelessly buried (10000 packets in
> queue). cake and fq_pie did a lot better.
>
> > I am generally sanguine about this along the edge - from the internet
> > packets cannot be easily classified, yet most edge networks have more
> > bandwidth from that direction, thus able to fit WAY more flows in
> > under 10ms, and outbound, from the home or small business, some
> > classification can be effectively used in a X tier shaper (or cake) to
> > ensure better priority (still with fair) queuing for this special
> > class of application - not that under most home workloads that this is
> > an issue. We think. We really need to do more benchmarking of web and
> > dash traffic loads.
> >
> >> Simon
> >>
> >> Sent with AquaMail for Android
> >> http://www.aqua-mail.com
> >>
> >> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
> >>>
> >>> I thought would be more sane too. I see mentioned online that PDV is a
> >>> gaussian distribution (around mean) but it looks more like half a bell
> >>> curve, with most numbers near the the lowest latency seen, and getting
> >>> progressively worse with
> >>> less frequency.
> >>> At least for DSL connections on good ISPs that scenario seems more
> >>> frequent.
> >>> You "usually" get the best latency and "sometimes" get spikes or fuzz
> on
> >>> top of it.
> >>>
> >>> by the way after I posted I discovered Firefox has an issue with this
> test
> >>> so I had
> >>> to block it with a message, my apologies if anyone wasted time trying
> it
> >>> with FF.
> >>> Hopefully i can figure out why.
> >>>
> >>>
> >>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
> >>> wrote:
> >>>>
> >>>> On Thu, 7 May 2015, jb wrote:
> >>>>
> >>>>> There is a web socket based jitter tester now. It is very early stage
> >>>>> but
> >>>>> works ok.
> >>>>>
> >>>>> http://www.dslreports.com/speedtest?radar=1
> >>>>>
> >>>>> So the latency displayed is the mean latency from a rolling 60 sample
> >>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is
> the mean
> >>>>> difference between sequential pings in that same rolling buffer. It
> is quite
> >>>>> similar to the std.dev actually (not shown).
> >>>>
> >>>>
> >>>> So I think there are two schools here, either you take average and
> >>>> display + / - from that, but I think I prefer to take the lowest of
> the last
> >>>> 100 samples (or something), and then display PDV from that "floor"
> value, ie
> >>>> PDV can't ever be negative, it can only be positive.
> >>>>
> >>>> Apart from that, the above multi-place RTT test is really really nice,
> >>>> thanks for doing this!
> >>>>
> >>>>
> >>>> --
> >>>> Mikael Abrahamsson    email: swmike@swm.pp.se
> >>>
> >>>
> >>> _______________________________________________
> >>> Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >>>
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >
> >
> >
> > --
> > Dave Täht
> > Open Networking needs **Open Source Hardware**
> >
> > https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>

[-- Attachment #2: Type: text/html, Size: 13425 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-07 23:09                                                                                             ` Dave Taht
  2015-05-08  2:05                                                                                               ` jb
@ 2015-05-08  3:54                                                                                               ` Eric Dumazet
  2015-05-08  4:20                                                                                                 ` Dave Taht
  1 sibling, 1 reply; 127+ messages in thread
From: Eric Dumazet @ 2015-05-08  3:54 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

On Thu, 2015-05-07 at 16:09 -0700, Dave Taht wrote:

> Recently I did some tests of 450+ flows (details on the cake mailing
> list) against sch_fq which got hopelessly buried (10000 packets in
> queue). cake and fq_pie did a lot better.

Seriously, comparing sch_fq against fq_pie or fq_codel or cake is quite
strange.

sch_fq has no CoDel part; it doesn't drop packets unless you hit some
limit.

The original intent of fq was for hosts, to implement TCP pacing at low
cost.

Maybe you need a hybrid, and it is very possible to do that.

I recently made one change in sch_fq where non-local flows can be hashed
in a stochastic way. You could eventually add CoDel capability to such
flows.

http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=06eb395fa9856b5a87cf7d80baee2a0ed3cdb9d7
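
As a purely conceptual sketch of that hybrid idea (an illustration of
stochastic hashing with a per-bucket hook where an AQM such as CoDel
could run - not the sch_fq patch above, and the bucket count is an
assumption):

# Hash non-local flows stochastically into a fixed set of buckets; a
# CoDel-style sojourn-time check could then be applied per bucket.
import zlib

NUM_BUCKETS = 1024   # assumed bucket count

def bucket_for(flow_key: str) -> int:
    # Stochastic hashing: unrelated flows may share a bucket (a collision).
    return zlib.crc32(flow_key.encode()) % NUM_BUCKETS

queues = {}   # bucket index -> list of queued packets

def enqueue(flow_key, pkt):
    q = queues.setdefault(bucket_for(flow_key), [])
    q.append(pkt)
    # A real hybrid would run CoDel's drop decision on q here, based on
    # how long packets have been sitting in this bucket.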




^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-08  2:05                                                                                               ` jb
@ 2015-05-08  4:16                                                                                                 ` David Lang
  0 siblings, 0 replies; 127+ messages in thread
From: David Lang @ 2015-05-08  4:16 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: TEXT/Plain, Size: 10787 bytes --]

On Fri, 8 May 2015, jb wrote:

> I've made some changes and now this test displays the "PDV" column as
> simply the recent average increase on the best latency seen, as usually the
> best latency seen is pretty stable. (It also should work in firefox too
> now).
>
> In addition, every 30 seconds, a grade is printed next to a timestamp.
> I know how we all like grades :) the grade is based on the average of all
> the PDVs, and ranges from A+ (5 milliseconds or less) down to F for fail.
>
> I'm not 100% happy with this PDV figure, a stellar connection - and no 
> internet congestion - will show a low number that is stable and an A+ grade. A 
> connection with jitter will show a PDV that is half the average jitter 
> amplitude. So far so good.
>
> But a connection with almost no jitter, but that has visibly higher than 
> minimal latency, will show a failing grade. And if this is a jitter / packet 
> delay variation type test, I'm not sure about this situation. One could say it 
> is a very good connection but because it is 30ms higher than just one revealed 
> optimal ping, yet it might get a "D". Not sure how common this state of things 
> could be though.

This is why the grade should be based more on the ability to induce jitter
(the additional latency under load) than on the absolute number.

100ms worth of buffer-induced latency on a 20ms connection should score far
worse than 20ms worth of induced latency on a 100ms connection.
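
One way to express that, as a quick sketch (my own illustration - the
ratio thresholds are invented and are not anything dslreports does):

# Grade on latency induced under load, relative to the idle baseline,
# rather than on the absolute number. Thresholds are invented.

def bloat_grade(idle_ms, loaded_ms):
    induced = max(0.0, loaded_ms - idle_ms)   # latency added under load
    ratio = induced / idle_ms                 # penalise relative to baseline
    if ratio <= 0.25:
        return "A"
    if ratio <= 1.0:
        return "B"
    if ratio <= 3.0:
        return "C"
    return "F"

# 100 ms induced on a 20 ms path vs 20 ms induced on a 100 ms path:
print(bloat_grade(20, 120))    # -> F
print(bloat_grade(100, 120))   # -> A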

David Lang

> Also since it is a global test a component of the grade is also internet
> backbone congestion, and not necessarily an ISP or equipment issue.
>
>
> On Fri, May 8, 2015 at 9:09 AM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>> On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net>
>> wrote:
>>>> The key figure for VoIP is maximum latency, or perhaps somewhere around
>> 99th
>>>> percentile. Voice packets cannot be played out if they are late, so how
>> late
>>>> they are is the only thing that matters. If many packets are early but
>> more
>>>> than a very small number are late, then the jitter buffer has to adjust
>> to
>>>> handle the late packets. Adjusting the jitter buffer disrupts the
>>>> conversation, so ideally adjustments are infrequent. When maximum
>> latency
>>>> suddenly increases it becomes necessary to increase the buffer fairly
>>>> quickly to avoid a dropout in the conversation. Buffer reductions can be
>>>> hidden by waiting for gaps in conversation. People get used to the
>> acoustic
>>>> round trip latency and learn how quickly to expect a reply from the
>> other
>>>> person (unless latency is really too high), but adjustments interfere
>> with
>>>> this learned expectation, so make it hard to interpret why the other
>> person
>>>> has paused. Thus adjustments to the buffering should be as infrequent as
>>>> possible.
>>>>
>>>> Codel measures and tracks minimum latency in its inner 'interval' loop.
>> For
>>>> VoIP the maximum is what counts. You can call it minimum + jitter, but
>> the
>>>> maximum is the important thing (not the absolute maximum, since a very
>> small
>>>> number of late packets are tolerable, but perhaps the 99th percentile).
>>>>
>>>> During a conversation it will take some time at the start to learn the
>>>> characteristics of the link, but ideally the jitter buffer algorithm
>> will
>>>> quickly get to a place where few adjustments are made. The more
>> conservative
>>>> the buffer (higher delay above minimum) the less likely a future
>> adjustment
>>>> will be needed, hence a tendency towards larger buffers (and more
>> delay).
>>>>
>>>> Priority queueing is perfect for VoIP, since it can keep the jitter at a
>>>> single hop down to the transmission time for a single maximum size
>> packet.
>>>> Fair Queueing will often achieve the same thing, since VoIP streams are
>>>> often the lowest bandwidth ongoing stream on the link.
>>>
>>> Unfortunately this is more nuanced than this. Not for the first time
>>> do I wish that email contained math, and/or we had put together a paper
>>> for this containing the relevant math. I do have a spreadsheet lying
>>> around here somewhere...
>>>
>>> In the case of a drop tail queue, jitter is a function of the total
>>> amount of data outstanding on the link by all the flows. A single
>>> big fat flow experiencing a drop will drop it's buffer occupancy
>>> (and thus effect on other flows) by a lot on the next RTT. However
>>> a lot of fat flows will drop by less if drops are few. Total delay
>>> is the sum of all packets outstanding on the link.
>>>
>>> In the case of stochastic packet-fair queuing jitter is a function
>>> of the total number of bytes in each packet outstanding on the sum
>>> of the total number of flows. The total delay is the sum of the
>>> bytes delivered per packet per flow.
>>>
>>> In the case of DRR, jitter is a function of the total number of bytes
>>> allowed by the quantum per flow outstanding on the link. The total
>>> delay experienced by the flow is a function of the amounts of
>>> bytes delivered with the number of flows.
>>>
>>> In the case of fq_codel, jitter is a function of of the total number
>>> of bytes allowed by the quantum per flow outstanding on the link,
>>> with the sparse optimization pushing flows with no queue
>>> queue in the available window to the front. Furthermore
>>> codel acts to shorten the lengths of the queues overall.
>>>
>>> fq_codel's delay: when the arriving in new flow packet can be serviced
>>> in less time than the total number of flows' quantums, is a function
>>> of the total number of flows that are not also building queues. When
>>> the total service time for all flows exceeds the interval the voip
>>> packet is delivered in, and AND the quantum under which the algorithm
>>> is delivering, fq_codel degrades to DRR behavior. (in other words,
>>> given enough queuing flows or enough new flows, you can steadily
>>> accrue delay on a voip flow under fq_codel). Predicting jitter is
>>> really hard to do here, but still pretty minimal compared to the
>>> alternatives above.
>>
>> And to complexify it further if the total flows' service time exceeds
>> the interval on which the voip flow is being delivered, the voip flow
>> can deliver a fq_codel quantum's worth of packets back to back.
>>
>> Boy I wish I could explain all this better, and/or observe the results
>> on real jitter buffers in real apps.
>>
>>>
>>> in the above 3 cases, hash collisions permute the result. Cake and
>>> fq_pie have a lot less collisions.
>>
>> Which is not necessarily a panacea either. perfect flow isolation
>> (cake) to hundreds of flows might be in some cases worse that
>> suffering hash collisions (fq_codel) for some workloads. sch_fq and
>> fq_pie have *perfect* flow isolation and I worry about the effects of
>> tons and tons of short flows (think ddos attacks) - I am comforted by
>> colliisions! and tend to think there is an ideal ratio of flows
>> allowed without queue management verses available bandwidth that we
>> don't know yet - as well as think for larger numbers of flows we
>> should be inheriting more global environmental (state of the link and
>> all queues) than we currently do in initializing both cake and
>> fq_codel queues.
>>
>> Recently I did some tests of 450+ flows (details on the cake mailing
>> list) against sch_fq which got hopelessly buried (10000 packets in
>> queue). cake and fq_pie did a lot better.
>>
>>> I am generally sanguine about this along the edge - from the internet
>>> packets cannot be easily classified, yet most edge networks have more
>>> bandwidth from that direction, thus able to fit WAY more flows in
>>> under 10ms, and outbound, from the home or small business, some
>>> classification can be effectively used in a X tier shaper (or cake) to
>>> ensure better priority (still with fair) queuing for this special
>>> class of application - not that under most home workloads that this is
>>> an issue. We think. We really need to do more benchmarking of web and
>>> dash traffic loads.
>>>
>>>> Simon
>>>>
>>>> Sent with AquaMail for Android
>>>> http://www.aqua-mail.com
>>>>
>>>> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>>>>
>>>>> I thought would be more sane too. I see mentioned online that PDV is a
>>>>> gaussian distribution (around mean) but it looks more like half a bell
>>>>> curve, with most numbers near the the lowest latency seen, and getting
>>>>> progressively worse with
>>>>> less frequency.
>>>>> At least for DSL connections on good ISPs that scenario seems more
>>>>> frequent.
>>>>> You "usually" get the best latency and "sometimes" get spikes or fuzz
>> on
>>>>> top of it.
>>>>>
>>>>> by the way after I posted I discovered Firefox has an issue with this
>> test
>>>>> so I had
>>>>> to block it with a message, my apologies if anyone wasted time trying
>> it
>>>>> with FF.
>>>>> Hopefully i can figure out why.
>>>>>
>>>>>
>>>>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>>>>> wrote:
>>>>>>
>>>>>> On Thu, 7 May 2015, jb wrote:
>>>>>>
>>>>>>> There is a web socket based jitter tester now. It is very early stage
>>>>>>> but
>>>>>>> works ok.
>>>>>>>
>>>>>>> http://www.dslreports.com/speedtest?radar=1
>>>>>>>
>>>>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is
>> the mean
>>>>>>> difference between sequential pings in that same rolling buffer. It
>> is quite
>>>>>>> similar to the std.dev actually (not shown).
>>>>>>
>>>>>>
>>>>>> So I think there are two schools here, either you take average and
>>>>>> display + / - from that, but I think I prefer to take the lowest of
>> the last
>>>>>> 100 samples (or something), and then display PDV from that "floor"
>> value, ie
>>>>>> PDV can't ever be negative, it can only be positive.
>>>>>>
>>>>>> Apart from that, the above multi-place RTT test is really really nice,
>>>>>> thanks for doing this!
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Mikael Abrahamsson    email: swmike@swm.pp.se
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>

[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
  2015-05-08  3:54                                                                                               ` Eric Dumazet
@ 2015-05-08  4:20                                                                                                 ` Dave Taht
  0 siblings, 0 replies; 127+ messages in thread
From: Dave Taht @ 2015-05-08  4:20 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: bloat

On Thu, May 7, 2015 at 8:54 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Thu, 2015-05-07 at 16:09 -0700, Dave Taht wrote:
>
>> Recently I did some tests of 450+ flows (details on the cake mailing
>> list) against sch_fq which got hopelessly buried (10000 packets in
>> queue). cake and fq_pie did a lot better.
>
> Seriously, comparing sch_fq against fq_pie or fq_codel or cake is quite
> strange.

I have been beating back the folks who think fq is all you need for almost
as long as the pure AQM'ers. Sometimes it makes me do crazy stuff like
that test... or use pfifo_fast... just to have the data.

In the 450-flow test, I was actually trying to exercise cake's ultimate
deal-with-the-8-way-set-associative collision code path, and ran the
other qdiscs for giggles. I did not expect sch_fq to hit a backlog of
10k packets, frankly.

> sch_fq has no CoDel part, it doesn't drop packets, unless you hit some
> limit.

> First intent for fq was for hosts, to implement TCP pacing at low cost.

That was a test on a 1GigE server running those qdiscs, not a router.

I am sure the pacing bit works well when the host does not saturate
its own card, but when it does, oh, my!

>
> Maybe you need an hybrid, and this is very possible to do that.

fq_pie did well, as did cake.

> I did recently one change in sch_fq. where non local flows can be hashed
> in a stochastic way. You could eventually add CoDel capability to such
> flows.
>
> http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=06eb395fa9856b5a87cf7d80baee2a0ed3cdb9d7

I am aware of that patch.

My own take on things is that TSQ needs to be more aware of the total
number of flows in this case.


>
>



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Jitter/PDV test
  2015-05-07 10:44                                                                                   ` jb
  2015-05-07 11:36                                                                                     ` Sebastian Moeller
  2015-05-07 11:44                                                                                     ` Mikael Abrahamsson
@ 2015-05-08 13:20                                                                                     ` Rich Brown
  2015-05-08 14:22                                                                                       ` jb
  2 siblings, 1 reply; 127+ messages in thread
From: Rich Brown @ 2015-05-08 13:20 UTC (permalink / raw)
  To: jb; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 711 bytes --]


On May 7, 2015, at 6:44 AM, jb <justin@dslr.net> wrote:

> There is a web socket based jitter tester now. It is very early stage but works ok.
> 
> http://www.dslreports.com/speedtest?radar=1

I was surprised to see how *good* a test the websocket could be. It appears to add only a couple msec over the ICMP Echo timings.

Here are a few samples from the web page. The first four columns show the values from the PDV test; the final column is the min/avg/max/stddev from ping.

Location   Server IP        Latency (ms)   PDV     ping min/avg/max/stddev
NY, USA    162.248.95.144   41             +3ms    38.677/40.405/43.192/1.269 ms
CO, USA    72.5.102.138     80             +3ms    79.305/81.531/85.514/1.540 ms
LA, USA    162.248.93.162   108            +5ms    105.225/106.540/108.358/0.877 ms

Nice work, Justin!

Rich

[-- Attachment #2: Type: text/html, Size: 3006 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [Bloat] DSLReports Jitter/PDV test
  2015-05-08 13:20                                                                                     ` [Bloat] DSLReports Jitter/PDV test Rich Brown
@ 2015-05-08 14:22                                                                                       ` jb
  0 siblings, 0 replies; 127+ messages in thread
From: jb @ 2015-05-08 14:22 UTC (permalink / raw)
  To: Rich Brown; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1151 bytes --]

I was surprised as well; I wasn't that impressed with web sockets for a
while, then realised it was the server side holding them back.

They are also interesting because you can play with asymmetry. For
example: ping with 1 byte up but 1k down, or vice versa. You can't do
that with ICMP.

They also don't seem to be too demanding on CPU.
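
For illustration only, a websocket RTT probe with an asymmetric payload
might look like the sketch below. It uses the Python "websockets"
package; the echo URL and the "reply with N bytes" server convention are
assumptions, not how the dslreports test actually works.

# Send a tiny frame up, ask the (hypothetical) server to reply with a
# larger one, and derive min latency and a PDV-style figure.
import asyncio
import time
import websockets   # pip install websockets

async def probe(url, reply_bytes=1024, count=10):
    rtts = []
    async with websockets.connect(url) as ws:
        for _ in range(count):
            t0 = time.monotonic()
            await ws.send(str(reply_bytes))   # one small frame up...
            await ws.recv()                   # ...reply_bytes back down
            rtts.append((time.monotonic() - t0) * 1000)
    floor = min(rtts)
    pdv = sum(r - floor for r in rtts) / len(rtts)
    print(f"min {floor:.1f} ms  mean {sum(rtts)/len(rtts):.1f} ms  PDV +{pdv:.1f} ms")

# asyncio.run(probe("wss://echo.example.invalid/jitter"))   # hypothetical URL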

On Fri, May 8, 2015 at 11:20 PM, Rich Brown <richb.hanover@gmail.com> wrote:

>
> On May 7, 2015, at 6:44 AM, jb <justin@dslr.net> wrote:
>
> There is a web socket based jitter tester now. It is very early stage but
> works ok.
>
> http://www.dslreports.com/speedtest?radar=1
>
>
> I was surprised to see how *good* a test the websocket could be. It
> appears to add only a couple msec over the ICMP Echo timings.
>
> Here are a few samples from the web page. The first four columns show the
> values from the PDV test; the final column is the min/avg/max/stddev from
> ping.
>
> NY, USA 162.248.95.144 41 +3ms 38.677/40.405/43.192/1.269 ms
>
> CO, USA 72.5.102.138 80 +3ms 79.305/81.531/85.514/1.540 ms
>
> LA, USA 162.248.93.162 108 +5ms 105.225/106.540/108.358/0.877 ms
>
> Nice work, Justin!
>
> Rich
>

[-- Attachment #2: Type: text/html, Size: 3221 bytes --]

^ permalink raw reply	[flat|nested] 127+ messages in thread

end of thread, other threads:[~2015-05-08 14:22 UTC | newest]

Thread overview: 127+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-19  5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
2015-04-19  7:36 ` David Lang
2015-04-19  7:48   ` David Lang
2015-04-19  9:33   ` jb
2015-04-19 10:45     ` David Lang
2015-04-19  8:28 ` Alex Burr
2015-04-19 10:20 ` Sebastian Moeller
2015-04-19 10:46   ` Jonathan Morton
2015-04-19 16:30     ` Sebastian Moeller
2015-04-19 17:41       ` Jonathan Morton
2015-04-19 19:40         ` Sebastian Moeller
2015-04-19 20:53           ` Jonathan Morton
2015-04-21  2:56             ` Simon Barber
2015-04-21  4:15               ` jb
2015-04-21  4:47                 ` David Lang
2015-04-21  7:35                   ` jb
2015-04-21  9:14                     ` Steinar H. Gunderson
2015-04-21 14:20                     ` David Lang
2015-04-21 14:25                       ` David Lang
2015-04-21 14:28                         ` David Lang
2015-04-21 22:13                           ` jb
2015-04-21 22:39                             ` Aaron Wood
2015-04-21 23:17                             ` jb
2015-04-22  2:14                               ` Simon Barber
2015-04-22  2:56                                 ` jb
2015-04-22 14:32                       ` Simon Barber
2015-04-22 17:35                         ` David Lang
2015-04-23  1:37                           ` Simon Barber
2015-04-24 16:54                             ` David Lang
2015-04-24 17:00                               ` Rick Jones
2015-04-21  9:37                 ` Jonathan Morton
2015-04-21 10:35                   ` jb
2015-04-22  4:04                     ` Steinar H. Gunderson
2015-04-22  4:28                       ` Eric Dumazet
2015-04-22  8:51                         ` [Bloat] RE : " luca.muscariello
2015-04-22 12:02                           ` jb
2015-04-22 13:08                             ` Jonathan Morton
     [not found]                             ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
2015-04-22 14:15                               ` Simon Barber
2015-04-22 13:50                           ` [Bloat] " Eric Dumazet
2015-04-22 14:09                             ` Steinar H. Gunderson
2015-04-22 15:26                             ` [Bloat] RE : " luca.muscariello
2015-04-22 15:44                               ` [Bloat] " Eric Dumazet
2015-04-22 16:35                                 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 17:16                                   ` Eric Dumazet
2015-04-22 17:24                                     ` Steinar H. Gunderson
2015-04-22 17:28                                     ` MUSCARIELLO Luca IMT/OLN
2015-04-22 17:45                                       ` MUSCARIELLO Luca IMT/OLN
2015-04-23  5:27                                         ` MUSCARIELLO Luca IMT/OLN
2015-04-23  6:48                                           ` Eric Dumazet
     [not found]                                             ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
2015-04-23 10:08                                               ` jb
2015-04-24  8:18                                                 ` Sebastian Moeller
2015-04-24  8:29                                                   ` Toke Høiland-Jørgensen
2015-04-24  8:55                                                     ` Sebastian Moeller
2015-04-24  9:02                                                       ` Toke Høiland-Jørgensen
2015-04-24 13:32                                                         ` jb
2015-04-24 13:58                                                           ` Toke Høiland-Jørgensen
2015-04-24 16:51                                                           ` David Lang
2015-04-25  3:15                                                       ` Simon Barber
2015-04-25  4:04                                                         ` Dave Taht
2015-04-25  4:26                                                           ` Simon Barber
2015-04-25  6:03                                                             ` Sebastian Moeller
2015-04-27 16:39                                                               ` Dave Taht
2015-04-28  7:18                                                                 ` Sebastian Moeller
2015-04-28  8:01                                                                   ` David Lang
2015-04-28  8:19                                                                     ` Toke Høiland-Jørgensen
2015-04-28 15:42                                                                       ` David Lang
2015-04-28  8:38                                                                     ` Sebastian Moeller
2015-04-28 12:09                                                                       ` Rich Brown
2015-04-28 15:26                                                                         ` David Lang
2015-04-28 15:39                                                                       ` David Lang
2015-04-28 11:04                                                                     ` Mikael Abrahamsson
2015-04-28 11:49                                                                       ` Sebastian Moeller
2015-04-28 12:24                                                                         ` Mikael Abrahamsson
2015-04-28 13:44                                                                           ` Sebastian Moeller
2015-04-28 19:09                                                                             ` Rick Jones
2015-04-28 14:06                                                                       ` Dave Taht
2015-04-28 14:02                                                                   ` Dave Taht
2015-05-06  5:08                                                               ` Simon Barber
2015-05-06  8:50                                                                 ` Sebastian Moeller
2015-05-06 15:30                                                                   ` Jim Gettys
2015-05-06 18:03                                                                     ` Sebastian Moeller
2015-05-06 20:25                                                                     ` Jonathan Morton
2015-05-06 20:43                                                                       ` Toke Høiland-Jørgensen
2015-05-07  7:33                                                                         ` Sebastian Moeller
2015-05-07  4:29                                                                       ` Mikael Abrahamsson
2015-05-07  7:08                                                                         ` jb
2015-05-07  7:18                                                                           ` Jonathan Morton
2015-05-07  7:24                                                                             ` Mikael Abrahamsson
2015-05-07  7:40                                                                               ` Sebastian Moeller
2015-05-07  9:16                                                                                 ` Mikael Abrahamsson
2015-05-07 10:44                                                                                   ` jb
2015-05-07 11:36                                                                                     ` Sebastian Moeller
2015-05-07 11:44                                                                                     ` Mikael Abrahamsson
2015-05-07 13:10                                                                                       ` Jim Gettys
2015-05-07 13:18                                                                                         ` Mikael Abrahamsson
2015-05-07 13:14                                                                                       ` jb
2015-05-07 13:26                                                                                         ` Neil Davies
2015-05-07 14:45                                                                                         ` Simon Barber
2015-05-07 22:27                                                                                           ` Dave Taht
2015-05-07 22:45                                                                                             ` Dave Taht
2015-05-07 23:09                                                                                             ` Dave Taht
2015-05-08  2:05                                                                                               ` jb
2015-05-08  4:16                                                                                                 ` David Lang
2015-05-08  3:54                                                                                               ` Eric Dumazet
2015-05-08  4:20                                                                                                 ` Dave Taht
2015-05-08 13:20                                                                                     ` [Bloat] DSLReports Jitter/PDV test Rich Brown
2015-05-08 14:22                                                                                       ` jb
2015-05-07  7:37                                                                             ` [Bloat] DSLReports Speed Test has latency measurement built-in Sebastian Moeller
2015-05-07  7:19                                                                           ` Mikael Abrahamsson
2015-05-07  6:19                                                                       ` Sebastian Moeller
2015-04-25  3:23                                                       ` Simon Barber
2015-04-24 15:20                                                     ` Bill Ver Steeg (versteb)
2015-04-25  2:24                                                   ` Simon Barber
2015-04-23 10:17                                             ` renaud sallantin
2015-04-23 14:10                                               ` Eric Dumazet
2015-04-23 14:38                                                 ` renaud sallantin
2015-04-23 15:52                                                   ` Jonathan Morton
2015-04-23 16:00                                                     ` Simon Barber
2015-04-23 13:17                                             ` MUSCARIELLO Luca IMT/OLN
2015-04-22 18:22                                       ` Eric Dumazet
2015-04-22 18:39                                         ` [Bloat] Pacing --- was " MUSCARIELLO Luca IMT/OLN
2015-04-22 19:05                                           ` Jonathan Morton
2015-04-22 15:59                               ` [Bloat] RE : " Steinar H. Gunderson
2015-04-22 16:16                                 ` Eric Dumazet
2015-04-22 16:19                                 ` Dave Taht
2015-04-22 17:15                                   ` Rick Jones
2015-04-19 12:14 ` [Bloat] " Toke Høiland-Jørgensen
