* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
@ 2015-04-19 5:26 jb
2015-04-19 7:36 ` David Lang
` (3 more replies)
0 siblings, 4 replies; 183+ messages in thread
From: jb @ 2015-04-19 5:26 UTC (permalink / raw)
To: Dave Taht, bloat
[-- Attachment #1.1: Type: text/plain, Size: 3070 bytes --]
The graph below the upload and download is what is new.
(Unfortunately you do have to be logged into the site to see this.)
It shows the latency during the upload and download, color-coded (see
attached image).
In your case, during the upload it spiked to ~200ms from ~50ms, but it was
not so bad; there were no serious latency issues during the upload.
I don't want to force anyone to sign up; I just wanted to make sure not to
confuse anonymous users with more information than they knew what to do
with. When I'm clear on how to present the information, I'll make it available
by default to anyone, member or otherwise.
Also, regarding your download, it stalled out completely for 5 seconds,
hence the low conclusion as to your actual speed. It picked up to full
speed again at the end. It basically went
40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
which explains why the latency measurements in blue are not all high.
A TCP stall? You may want to re-run, or re-run with Chrome or Safari, to see
if it is reproducible. Normally users on your ISP have flat downloads with
no stalls.
thanks
-Justin
> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> What I see here is the same old latency, upload, download series, not
>> latency and bandwidth at the same time.
>>
>> http://www.dslreports.com/speedtest/319616
>>
>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>> wrote:
>> > Folks,
>> >
>> > I am delighted to pass along the news that Justin has added latency
>> measurements into the Speed Test at DSLReports.com.
>> >
>> > Go to: https://www.dslreports.com/speedtest and click the button for
>> your Internet link. This controls the number of simultaneous connections
>> that get established between your browser and the speedtest server. After
>> you run the test, click the green "Results + Share" button to see detailed
>> info. For the moment, you need to be logged in to see the latency results.
>> There's a "register" link on each page.
>> >
>> > The speed test measures latency using websocket pings: Justin says that
>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just
>> ran a test and got 48 msec latency from DSLReports, while ping
>> gstatic.com gave 38-40 msec, so they're pretty fast.
>> >
>> > You can leave feedback on this page -
>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>> >
>> > Enjoy!
>> >
>> > Rich
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
[-- Attachment #1.2: Type: text/html, Size: 4778 bytes --]
[-- Attachment #2: Screen Shot 2015-04-19 at 3.08.56 pm.png --]
[-- Type: image/png, Size: 14545 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
@ 2015-04-19 7:36 ` David Lang
2015-04-19 7:48 ` David Lang
2015-04-19 9:33 ` jb
2015-04-19 8:28 ` Alex Burr
` (2 subsequent siblings)
3 siblings, 2 replies; 183+ messages in thread
From: David Lang @ 2015-04-19 7:36 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/Plain, Size: 3973 bytes --]
As a start, the ping time during the test that shows up in the results page is
good, but you should show that on the main screen, not just in the results+share
tab.
It looked like the main test was showing when the upload was stalled (white
under the line instead of color), but this didn't show up in the report tab.
I also think that the retransmit stats are probably worth watching and doing
something with. You are trying to drive the line to full capacity, so some
drops/retransmits are expected. How many are expected vs. how many are showing up?
http://www.dslreports.com/speedtest/320230 (and now you see my pathetic link)
David Lang
On Sun, 19 Apr 2015, jb wrote:
> Date: Sun, 19 Apr 2015 15:26:51 +1000
> From: jb <justin@dslr.net>
> To: Dave Taht <dave.taht@gmail.com>, bloat@lists.bufferbloat.net
> Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
>
> The graph below the upload and download is what is new.
> (unfortunately you do have to be logged into the site to see this)
> it shows the latency during the upload and download, color coded. (see
> attached image).
>
> In your case during the upload it spiked to ~200ms from ~50ms but it was
> not so bad. During upload, there were no issues with latency.
>
> I don't want to force anyone to sign up, just was making sure not to
> confuse anonymous users with more information than they knew what to do
> with. When I'm clear how to present the information, I'll make it available
> by default, to anyone member or otherwise.
>
> Also, regarding your download, it stalled out completely for 5 seconds..
> Hence the low conclusion as to your actual speed. It picked up to full
> speed again at the end. It basically went
> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
> which explains why the Latency measurements in blue are not all high.
> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see
> if it is reproducible. Normally users on your ISP have flat downloads with
> no stalls.
>
> thanks
> -Justin
>
>
>
>> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>
>>> What I see here is the same old latency, upload, download series, not
>>> latency and bandwidth at the same time.
>>>
>>> http://www.dslreports.com/speedtest/319616
>>>
>>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>>> wrote:
>>>> Folks,
>>>>
>>>> I am delighted to pass along the news that Justin has added latency
>>> measurements into the Speed Test at DSLReports.com.
>>>>
>>>> Go to: https://www.dslreports.com/speedtest and click the button for
>>> your Internet link. This controls the number of simultaneous connections
>>> that get established between your browser and the speedtest server. After
>>> you run the test, click the green "Results + Share" button to see detailed
>>> info. For the moment, you need to be logged in to see the latency results.
>>> There's a "register" link on each page.
>>>>
>>>> The speed test measures latency using websocket pings: Justin says that
>>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just
>>> ran a test and got 48 msec latency from DSLReports, while ping
>>> gstatic.com gave 38-40 msec, so they're pretty fast.
>>>>
>>>> You can leave feedback on this page -
>>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>>>
>>>> Enjoy!
>>>>
>>>> Rich
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>>
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 7:36 ` David Lang
@ 2015-04-19 7:48 ` David Lang
2015-04-19 9:33 ` jb
1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2015-04-19 7:48 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 4322 bytes --]
On Sun, 19 Apr 2015, David Lang wrote:
> As a start, the ping time during the test that shows up in the results page
> is good, but you should show that on the main screen, not just in the
> results+share tab.
Thinking about it, how about a graph showing the ratio of latency under test to
the initial idle latency along with the bandwidth number?
David Lang
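As a rough illustration of that ratio (with invented numbers, not anything
from the DSLReports code), a sketch of the computation being suggested:

    # Hypothetical samples; a real test would collect these live.
    idle_samples_ms = [48, 50, 47, 52, 49]          # pings before the transfer
    loaded_samples_ms = [180, 210, 195, 220, 205]   # pings during up/download

    idle_ms = sum(idle_samples_ms) / len(idle_samples_ms)
    loaded_ms = sum(loaded_samples_ms) / len(loaded_samples_ms)

    ratio = loaded_ms / idle_ms
    print(f"idle {idle_ms:.0f} ms, loaded {loaded_ms:.0f} ms, "
          f"latency inflated {ratio:.1f}x under load")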
> it looked like the main test was showing when the upload was stalled, (white
> under the line instead of color), but this didn't show up in the report tab.
>
> I also think that the retransmit stats are probably worth watching and doing
> something with. You are trying to drive the line to full capacity, so some
> drops/retransmts are expected. How many re expected vs how many are showing
> up?
>
> http://www.dslreports.com/speedtest/320230 (and now you see my pathetic link)
>
> David Lang
>
>
> On Sun, 19 Apr 2015, jb wrote:
>
>> Date: Sun, 19 Apr 2015 15:26:51 +1000
>> From: jb <justin@dslr.net>
>> To: Dave Taht <dave.taht@gmail.com>, bloat@lists.bufferbloat.net
>> Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
>>
>> The graph below the upload and download is what is new.
>> (unfortunately you do have to be logged into the site to see this)
>> it shows the latency during the upload and download, color coded. (see
>> attached image).
>>
>> In your case during the upload it spiked to ~200ms from ~50ms but it was
>> not so bad. During upload, there were no issues with latency.
>>
>> I don't want to force anyone to sign up, just was making sure not to
>> confuse anonymous users with more information than they knew what to do
>> with. When I'm clear how to present the information, I'll make it available
>> by default, to anyone member or otherwise.
>>
>> Also, regarding your download, it stalled out completely for 5 seconds..
>> Hence the low conclusion as to your actual speed. It picked up to full
>> speed again at the end. It basically went
>> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
>> which explains why the Latency measurements in blue are not all high.
>> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see
>> if it is reproducible. Normally users on your ISP have flat downloads with
>> no stalls.
>>
>> thanks
>> -Justin
>>
>>
>>
>>> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>>> What I see here is the same old latency, upload, download series, not
>>>> latency and bandwidth at the same time.
>>>>
>>>> http://www.dslreports.com/speedtest/319616
>>>>
>>>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>>>> wrote:
>>>>> Folks,
>>>>>
>>>>> I am delighted to pass along the news that Justin has added latency
>>>> measurements into the Speed Test at DSLReports.com.
>>>>>
>>>>> Go to: https://www.dslreports.com/speedtest and click the button for
>>>> your Internet link. This controls the number of simultaneous connections
>>>> that get established between your browser and the speedtest server. After
>>>> you run the test, click the green "Results + Share" button to see
>>>> detailed
>>>> info. For the moment, you need to be logged in to see the latency
>>>> results.
>>>> There's a "register" link on each page.
>>>>>
>>>>> The speed test measures latency using websocket pings: Justin says that
>>>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I
>>>> just
>>>> ran a test and got 48 msec latency from DSLReports, while ping
>>>> gstatic.com gave 38-40 msec, so they're pretty fast.
>>>>>
>>>>> You can leave feedback on this page -
>>>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>>>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>>>>
>>>>> Enjoy!
>>>>>
>>>>> Rich
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>>
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>>
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>
>>>
>
[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
2015-04-19 7:36 ` David Lang
@ 2015-04-19 8:28 ` Alex Burr
2015-04-19 10:20 ` Sebastian Moeller
2015-04-19 12:14 ` [Bloat] " Toke Høiland-Jørgensen
3 siblings, 0 replies; 183+ messages in thread
From: Alex Burr @ 2015-04-19 8:28 UTC (permalink / raw)
To: jb, bloat
[-- Attachment #1: Type: text/plain, Size: 1597 bytes --]
Justin,
This looks really useful. On the subject of presenting the information in a clear way, we have always struggled with how to present a 'larger is worse' number. A while back I posted a graphic which attempts to overcome this for latency: https://imgrush.com/0oPGJ8VHluFy.png
Feel free to use any aspect of that. The example there compares fictional ISPs but it could easily be used to compare typical latency with latency under load.
(I used the word 'delay' as it is more familiar than latency. The number is illustrated by a picture of a physical queue; hopefully everyone can identify it instantly, and knows that a longer one is worse. The eye is supposed to be drawn to the figure at the back of the queue to emphasise this.)
Best,
Alex
From: jb <justin@dslr.net>
To: Dave Taht <dave.taht@gmail.com>; bloat@lists.bufferbloat.net
Sent: Sunday, April 19, 2015 6:26 AM
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
The graph below the upload and download is what is new. (unfortunately you do have to be logged into the site to see this) it shows the latency during the upload and download, color coded. (see attached image).
In your case during the upload it spiked to ~200ms from ~50ms but it was not so bad. During upload, there were no issues with latency.
I don't want to force anyone to sign up, just was making sure not to confuse anonymous users with more information than they knew what to do with. When I'm clear how to present the information, I'll make it available by default, to anyone member or otherwise.
[-- Attachment #2: Type: text/html, Size: 4283 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 7:36 ` David Lang
2015-04-19 7:48 ` David Lang
@ 2015-04-19 9:33 ` jb
2015-04-19 10:45 ` David Lang
1 sibling, 1 reply; 183+ messages in thread
From: jb @ 2015-04-19 9:33 UTC (permalink / raw)
To: David Lang, bloat
[-- Attachment #1: Type: text/plain, Size: 6403 bytes --]
Hey, there is nothing wrong with Megapath! They were my second ISP after
Northpoint went bust.
If I had a time machine and went back to 2001, I'd be very happy with
Megapath. :)
Anyway, Rich has some good ideas for how to display the latency during the
test progress, and yes, that is the plan.
The results page, to avoid confusing random people, just shows the smoothed
line and not the instant download speeds. I'm not confident that the instant
speeds relate 1:1 with what is going on at your interface, as unfortunately
modern browsers don't give nearly enough feedback on how uploads are going
compared to how they instrument downloads - which is almost on a
packet-by-packet level. You would hope they would pass back events regularly,
but unless the line is pretty fast, they don't. They go quiet and catch up,
and meanwhile the actual upload might be steady.
The retransmit stats, congestion window and other things come from the Linux
TCP structures on the server side. Retransmits are not necessarily lost
packets; it can just be TCP getting confused by a highly variable RTT and
re-sending too soon. But on a very good connection (Google Fiber) that
column is 0%. On some bad connections it can be 20%+. Often it is between 1%
and 3%.
Yes, the speed test is trying to drive the line to capacity, but it is not
force-feeding; it is just TCP after all, and all streams should find their
relative place in the available bandwidth, and lost packets should be rare
when the underlying network is good quality. At least, that is what I've
seen. I'm far from understanding the congestion algorithms etc. However, I
did find it somewhat surprising that whether one stream or 20, it all sort
of works out with roughly the same efficiency.
With more data I think the retransmits and other indicators can show real
problems, even though the test tends to (hopefully) find your last-mile sync
speed and drive it to capacity.
-justin
On Sun, Apr 19, 2015 at 5:36 PM, David Lang <david@lang.hm> wrote:
> As a start, the ping time during the test that shows up in the results
> page is good, but you should show that on the main screen, not just in the
> results+share tab.
>
> it looked like the main test was showing when the upload was stalled,
> (white under the line instead of color), but this didn't show up in the
> report tab.
>
> I also think that the retransmit stats are probably worth watching and
> doing something with. You are trying to drive the line to full capacity, so
> some drops/retransmts are expected. How many re expected vs how many are
> showing up?
>
> http://www.dslreports.com/speedtest/320230 (and now you see my pathetic
> link)
>
> David Lang
>
>
> On Sun, 19 Apr 2015, jb wrote:
>
> Date: Sun, 19 Apr 2015 15:26:51 +1000
>> From: jb <justin@dslr.net>
>> To: Dave Taht <dave.taht@gmail.com>, bloat@lists.bufferbloat.net
>> Subject: Re: [Bloat] DSLReports Speed Test has latency measurement
>> built-in
>>
>>
>> The graph below the upload and download is what is new.
>> (unfortunately you do have to be logged into the site to see this)
>> it shows the latency during the upload and download, color coded. (see
>> attached image).
>>
>> In your case during the upload it spiked to ~200ms from ~50ms but it was
>> not so bad. During upload, there were no issues with latency.
>>
>> I don't want to force anyone to sign up, just was making sure not to
>> confuse anonymous users with more information than they knew what to do
>> with. When I'm clear how to present the information, I'll make it
>> available
>> by default, to anyone member or otherwise.
>>
>> Also, regarding your download, it stalled out completely for 5 seconds..
>> Hence the low conclusion as to your actual speed. It picked up to full
>> speed again at the end. It basically went
>> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
>> which explains why the Latency measurements in blue are not all high.
>> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see
>> if it is reproducible. Normally users on your ISP have flat downloads with
>> no stalls.
>>
>> thanks
>> -Justin
>>
>>
>>
>> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>> What I see here is the same old latency, upload, download series, not
>>>> latency and bandwidth at the same time.
>>>>
>>>> http://www.dslreports.com/speedtest/319616
>>>>
>>>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>>>> wrote:
>>>>
>>>>> Folks,
>>>>>
>>>>> I am delighted to pass along the news that Justin has added latency
>>>>>
>>>> measurements into the Speed Test at DSLReports.com.
>>>>
>>>>>
>>>>> Go to: https://www.dslreports.com/speedtest and click the button for
>>>>>
>>>> your Internet link. This controls the number of simultaneous connections
>>>> that get established between your browser and the speedtest server.
>>>> After
>>>> you run the test, click the green "Results + Share" button to see
>>>> detailed
>>>> info. For the moment, you need to be logged in to see the latency
>>>> results.
>>>> There's a "register" link on each page.
>>>>
>>>>>
>>>>> The speed test measures latency using websocket pings: Justin says that
>>>>>
>>>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I
>>>> just
>>>> ran a test and got 48 msec latency from DSLReports, while ping
>>>> gstatic.com gave 38-40 msec, so they're pretty fast.
>>>>
>>>>>
>>>>> You can leave feedback on this page -
>>>>>
>>>>
>>>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>>>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>>>
>>>>>
>>>>> Enjoy!
>>>>>
>>>>> Rich
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>>
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>
>>>
[-- Attachment #2: Type: text/html, Size: 8952 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
2015-04-19 7:36 ` David Lang
2015-04-19 8:28 ` Alex Burr
@ 2015-04-19 10:20 ` Sebastian Moeller
2015-04-19 10:46 ` Jonathan Morton
2015-04-19 12:14 ` [Bloat] " Toke Høiland-Jørgensen
3 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-19 10:20 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 5674 bytes --]
Hi Justin,
On Apr 19, 2015, at 07:26 , jb <justin@dslr.net> wrote:
> The graph below the upload and download is what is new.
> (unfortunately you do have to be logged into the site to see this)
> it shows the latency during the upload and download, color coded. (see attached image).
This looks really good! The whole new test is great, and reporting the latency numbers is the cherry on top.
If there were a fairy around granting me three wishes restricted to your speedtest’s latency portion (I know, sort of the short end as far as wish-fairies go), I would ask for:
1) Show the mean baseline latency as a line crossing all the bars; after all, that is the best case and what we need to compare against to get the “under load” part of latency under load.
2) To be able to assess the variability in the baseline, I would ask for 95% confidence intervals around the baseline line. (Sure, latencies are not normally distributed, and hence neither the arithmetic mean nor the confidence interval is the right thing to calculate from a statistics point of view, but at least they are relatively easy to understand, should be known to the users, and should still capture the gist of what is happening.) The beauty of confidence intervals is that they allow one to eyeball the significance of the latency deviations under the two load conditions: if the bar does not fall into the 95% confidence interval, testing this value against the baseline distribution will turn out significant with p <= 0.05 in a t-test.
3) I would ask to never use a log scale, as this makes extreme outliers look better than they are, so a linear scale starting at 0 would be my wish here. People starting out from a high-latency link will not be able or willing to tolerate more latency increase under load than people on low-latency links; rather, they can only tolerate less latency increase if they still want a decent VoIP or gaming experience, so reporting the latency under load as a ratio of the unloaded latency would be counterproductive. Reporting the latency under load as frequency (inverse of delay time) would be nice in that higher numbers denote a "better” link, but has the issue that it is going to be hard to quickly add different latency sources/components...
4) I know I only had three wishes, but measuring the latency while simultaneously saturating up- and download would be nice, to test the worst-case latency increase under load...
I wonder: is the latency test running against a different host than the bandwidth tests? If so, are they using the same connection/port? (I just wonder whether fq_codel will hash the latency probe packets into different bins than the bandwidth packets.)
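A minimal sketch of wishes 1) and 2), assuming the raw per-sample latencies are available (the numbers below are invented, and the normal 1.96 quantile is used instead of an exact t quantile):

    import statistics

    baseline_ms = [48, 50, 47, 52, 49, 51, 48, 50]   # idle latency samples
    loaded_ms = [55, 120, 180, 210, 64, 195]         # samples taken under load

    mean = statistics.mean(baseline_ms)
    sd = statistics.stdev(baseline_ms)
    half_width = 1.96 * sd / len(baseline_ms) ** 0.5  # 95% CI of the mean
    lo, hi = mean - half_width, mean + half_width

    print(f"baseline mean {mean:.1f} ms, 95% CI [{lo:.1f}, {hi:.1f}] ms")
    for sample in loaded_ms:
        verdict = "outside" if not (lo <= sample <= hi) else "inside"
        print(f"  under load: {sample:5.1f} ms ({verdict} the baseline CI)")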
Best Regards
Sebastian
>
> In your case during the upload it spiked to ~200ms from ~50ms but it was not so bad. During upload, there were no issues with latency.
>
> I don't want to force anyone to sign up, just was making sure not to confuse anonymous users with more information than they knew what to do with. When I'm clear how to present the information, I'll make it available by default, to anyone member or otherwise.
>
> Also, regarding your download, it stalled out completely for 5 seconds.. Hence the low conclusion as to your actual speed. It picked up to full speed again at the end. It basically went
> 40 .. 40 .. 40 .. 40 .. 8 .. 8 .. 8 .. 40 .. 40 .. 40
> which explains why the Latency measurements in blue are not all high.
> A TCP stall? you may want to re-run or re-run with Chrome or Safari to see if it is reproducible. Normally users on your ISP have flat downloads with no stalls.
>
> thanks
> -Justin
>
>
> On Sun, Apr 19, 2015 at 2:01 PM, Dave Taht <dave.taht@gmail.com> wrote:
> What I see here is the same old latency, upload, download series, not
> latency and bandwidth at the same time.
>
> http://www.dslreports.com/speedtest/319616
>
> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com> wrote:
> > Folks,
> >
> > I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
> >
> > Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
> >
> > The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
> >
> > You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
> >
> > Enjoy!
> >
> > Rich
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
[-- Attachment #2.1: Type: text/html, Size: 6784 bytes --]
[-- Attachment #2.2: Screen Shot 2015-04-19 at 3.08.56 pm.png --]
[-- Type: image/png, Size: 14545 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 9:33 ` jb
@ 2015-04-19 10:45 ` David Lang
0 siblings, 0 replies; 183+ messages in thread
From: David Lang @ 2015-04-19 10:45 UTC (permalink / raw)
To: jb; +Cc: bloat
On Sun, 19 Apr 2015, jb wrote:
> Hey there is nothing wrong with Megapath ! they were my second ISP after
> Northpoint went bust.
> If I had a time machine, and went back to 2001, I'd be very happy with
> Megapath.. :)
Nothing's wrong with Megapath, just with the phone lines in the area limiting
the best speed I can get.
In any case, this looks like a good test.
David Lang
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 10:20 ` Sebastian Moeller
@ 2015-04-19 10:46 ` Jonathan Morton
2015-04-19 16:30 ` Sebastian Moeller
0 siblings, 1 reply; 183+ messages in thread
From: Jonathan Morton @ 2015-04-19 10:46 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
> On 19 Apr, 2015, at 13:20, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Reporting the latency under load as frequency (inverse of delay time) would be nice in that higher numbers denote a "better” link, but has the issue that it is going to be hard to quickly add different latency sources/components...
Personally I’d say that this disadvantage matters more to us scientists and engineers than to end-users. Frequency readouts are probably more accessible to the latter.
- Jonathan Morton
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
` (2 preceding siblings ...)
2015-04-19 10:20 ` Sebastian Moeller
@ 2015-04-19 12:14 ` Toke Høiland-Jørgensen
3 siblings, 0 replies; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-19 12:14 UTC (permalink / raw)
To: jb; +Cc: bloat
jb <justin@dslr.net> writes:
> The graph below the upload and download is what is new. (unfortunately
> you do have to be logged into the site to see this) it shows the
> latency during the upload and download, color coded. (see attached
> image).
So where is that graph? I only see the regular up- and down graphs.
http://www.dslreports.com/speedtest/320936
It shows up for this result, though...
http://www.dslreports.com/speedtest/319616
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 10:46 ` Jonathan Morton
@ 2015-04-19 16:30 ` Sebastian Moeller
2015-04-19 17:41 ` Jonathan Morton
0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-19 16:30 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Hi Jonathan,
On Apr 19, 2015, at 12:46 , Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 19 Apr, 2015, at 13:20, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Reporting the latency under load as frequency (inverse of delay time) would be nice in that higher numbers denote a "better” link, but has the issue that it is going to be hard to quickly add different latency sources/components...
>
> Personally I’d say that this disadvantage matters more to us scientists and engineers than to end-users. Frequency readouts are probably more accessible to the latter.
The frequency domain more accessible to laypersons? I have my doubts ;) I like your responsiveness frequency report, as I tend to call it myself, but more and more I think calling the whole thing latency cost or latency tax will make everybody understand that it should be minimized, plus it allows for easier calculations… ;)
Best Regards
Sebastian
>
> - Jonathan Morton
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 16:30 ` Sebastian Moeller
@ 2015-04-19 17:41 ` Jonathan Morton
2015-04-19 19:40 ` Sebastian Moeller
0 siblings, 1 reply; 183+ messages in thread
From: Jonathan Morton @ 2015-04-19 17:41 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
> On 19 Apr, 2015, at 19:30, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> Frequency readouts are probably more accessible to the latter.
>
> The frequency domain more accessible to laypersons? I have my doubts ;)
Gamers, at least, are familiar with “frames per second” and how that corresponds to their monitor’s refresh rate. The desirable range of latencies, when converted to Hz, happens to be roughly the same as the range of desirable framerates.
- Jonathan Morton
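For concreteness, the conversion is just the reciprocal of the delay; a tiny illustration (the sample values are arbitrary):

    # Latency expressed as a frequency: cycles ("frames") per second.
    for delay_ms in (7, 16.7, 33, 50, 200):
        print(f"{delay_ms:6.1f} ms  ->  {1000.0 / delay_ms:6.1f} Hz")
    # ~7-33 ms maps to roughly 30-144 Hz, i.e. the familiar range of
    # monitor refresh rates and game frame rates.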
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 17:41 ` Jonathan Morton
@ 2015-04-19 19:40 ` Sebastian Moeller
2015-04-19 20:53 ` Jonathan Morton
0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-19 19:40 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Hi Jonathan,
On Apr 19, 2015, at 19:41 , Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 19 Apr, 2015, at 19:30, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>>> Frequency readouts are probably more accessible to the latter.
>>
>> The frequency domain more accessible to laypersons? I have my doubts ;)
>
> Gamers, at least, are familiar with “frames per second” and how that corresponds to their monitor’s refresh rate.
I am sure they can easily transform back into the time domain to get the frame period ;). I am partly kidding; I think your idea is great in that it is a truly positive value which could lend itself to being used in ISP/router manufacturer advertising, and hence might work in the real world; on the other hand I like to keep data as “raw” as possible (not that ^(-1) is a transformation worthy of being called data massage).
> The desirable range of latencies, when converted to Hz, happens to be roughly the same as the range of desirable frame rates.
Just to play devil's advocate, the interesting part is time, or saving time, so seconds or milliseconds are also intuitively understandable and can be easily added ;)
Best Regards
Sebastian
>
> - Jonathan Morton
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 19:40 ` Sebastian Moeller
@ 2015-04-19 20:53 ` Jonathan Morton
2015-04-21 2:56 ` Simon Barber
0 siblings, 1 reply; 183+ messages in thread
From: Jonathan Morton @ 2015-04-19 20:53 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
>>>> Frequency readouts are probably more accessible to the latter.
>>>
>>> The frequency domain more accessible to laypersons? I have my doubts ;)
>>
>> Gamers, at least, are familiar with “frames per second” and how that corresponds to their monitor’s refresh rate.
>
> I am sure they can easily transform back into time domain to get the frame period ;) . I am partly kidding, I think your idea is great in that it is a truly positive value which could lend itself to being used in ISP/router manufacturer advertising, and hence might work in the real work; on the other hand I like to keep data as “raw” as possible (not that ^(-1) is a transformation worthy of being called data massage).
>
>> The desirable range of latencies, when converted to Hz, happens to be roughly the same as the range of desirable frame rates.
>
> Just to play devils advocate, the interesting part is time or saving time so seconds or milliseconds are also intuitively understandable and can be easily added ;)
Such readouts are certainly interesting to people like us. I have no objection to them being reported alongside a frequency readout. But I think most people are not interested in “time savings” measured in milliseconds; they’re much more aware of the minute- and hour-level time savings associated with greater bandwidth.
- Jonathan Morton
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 20:53 ` Jonathan Morton
@ 2015-04-21 2:56 ` Simon Barber
2015-04-21 4:15 ` jb
0 siblings, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-04-21 2:56 UTC (permalink / raw)
To: Jonathan Morton, Sebastian Moeller; +Cc: bloat
One thing users understand is slow web access. Perhaps translating the
latency measurement into 'a typical web page will take X seconds longer to
load', or even stating the impact as 'this latency causes a typical web
page to load slower, as if your connection was only YY% of the measured speed.'
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
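One rough way to put numbers on that wording is to assume a typical page load spends some count of sequential round trips on DNS, handshakes and dependent requests; the extra load time is then roughly the added delay per round trip times that count. A sketch with invented parameters:

    def extra_page_load_s(idle_rtt_ms, loaded_rtt_ms, round_trips=20):
        # Crude model: round_trips sequential RTTs per page load.
        return (loaded_rtt_ms - idle_rtt_ms) * round_trips / 1000.0

    # Invented example: 50 ms idle, 250 ms while the uplink is saturated.
    print(f"a typical page takes ~{extra_page_load_s(50, 250):.1f} s longer to load")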
On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com> wrote:
> >>>> Frequency readouts are probably more accessible to the latter.
> >>>
> >>> The frequency domain more accessible to laypersons? I have my doubts ;)
> >>
> >> Gamers, at least, are familiar with “frames per second” and how that
> corresponds to their monitor’s refresh rate.
> >
> > I am sure they can easily transform back into time domain to get the
> frame period ;) . I am partly kidding, I think your idea is great in that
> it is a truly positive value which could lend itself to being used in
> ISP/router manufacturer advertising, and hence might work in the real work;
> on the other hand I like to keep data as “raw” as possible (not that ^(-1)
> is a transformation worthy of being called data massage).
> >
> >> The desirable range of latencies, when converted to Hz, happens to be
> roughly the same as the range of desirable frame rates.
> >
> > Just to play devils advocate, the interesting part is time or saving
> time so seconds or milliseconds are also intuitively understandable and can
> be easily added ;)
>
> Such readouts are certainly interesting to people like us. I have no
> objection to them being reported alongside a frequency readout. But I
> think most people are not interested in “time savings” measured in
> milliseconds; they’re much more aware of the minute- and hour-level time
> savings associated with greater bandwidth.
>
> - Jonathan Morton
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 2:56 ` Simon Barber
@ 2015-04-21 4:15 ` jb
2015-04-21 4:47 ` David Lang
2015-04-21 9:37 ` Jonathan Morton
0 siblings, 2 replies; 183+ messages in thread
From: jb @ 2015-04-21 4:15 UTC (permalink / raw)
To: bloat
[-- Attachment #1.1: Type: text/plain, Size: 4960 bytes --]
I've discovered something; perhaps you guys can explain it better or shed
some light.
It isn't specifically to do with bufferbloat, but it is to do with TCP
tuning.
Attached are two pictures of my upload to the New York speed test server
with 1 stream.
It doesn't make any difference if it is 1 stream or 8 streams, the picture
and behaviour remains the same.
I am 200ms from New York, so it qualifies as a fairly long (but not very
fat) pipe.
The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
the server)
The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
the server)
It actually doesn't matter what that last huge number is; once it goes much
above 65k, e.g. 128k or 256k or beyond, things get bumpy and ugly on the
upload speed.
Now as I understand this setting, it is the tcp receive window that Linux
advertises, and the last number sets the maximum size it can get to (for
one TCP stream).
For users with very fast upload speeds, they do not see an ugly bumpy
upload graph, it is smooth and sustained.
But for the majority of users (like me) with uploads less than 5 to 10mbit,
we frequently see the ugly graph.
The second tcp_rmem setting is how I have been running the speed test
servers.
Up to now I thought this was just the distance of the speed test from the
interface: perhaps the browser was buffering a lot and didn't feed back
progress. But now I realise the bumpy one is actually being influenced by
the server receive window.
I guess my question is this: Why does ALLOWING a large receive window
appear to encourage problems with upload smoothness??
This implies that setting the receive window should be done on a connection
by connection basis: small for slow connections, large, for high speed,
long distance connections.
In addition, if I cap it to 65k, for reasons of smoothness,
that means the bandwidth delay product will keep maximum speed per upload
stream quite low. So a symmetric or gigabit connection is going to need a
ton of parallel streams to see full speed.
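For reference, that bandwidth-delay limit is easy to work out: one stream can carry at most one receive window per round trip. A quick sketch using the numbers above (200 ms RTT; the gigabit figure is just for illustration):

    def max_stream_mbit(rwnd_bytes, rtt_s):
        # Upper bound for one TCP stream: one receive window per RTT.
        return rwnd_bytes * 8 / rtt_s / 1e6

    rtt = 0.200  # ~200 ms to New York
    print(f"64 KB window: {max_stream_mbit(65535, rtt):5.1f} Mbit/s per stream")
    print(f" 1 MB window: {max_stream_mbit(2**20, rtt):5.1f} Mbit/s per stream")
    # Streams needed to fill a gigabit link with the 64 KB cap:
    print(f"~{1000 / max_stream_mbit(65535, rtt):.0f} parallel streams for 1 Gbit/s")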
Most puzzling is why anything special would be required on the Client -->
Server side of the equation, while nothing much appears wrong with the
Server --> Client side, whether speeds are very low (GPRS) or very high
(gigabit).
Note also that I am not yet sure if smoothness == better throughput. I have
noticed upload speeds for some people often being under their claimed sync
rate by 10 or 20%, but I've no logs showing that the bumpy graph indicates
inefficiency. Maybe.
help!
On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net> wrote:
> One thing users understand is slow web access. Perhaps translating the
> latency measurement into 'a typical web page will take X seconds longer to
> load', or even stating the impact as 'this latency causes a typical web
> page to load slower, as if your connection was only YY% of the measured
> speed.'
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
>
> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
> wrote:
>
> >>>> Frequency readouts are probably more accessible to the latter.
>> >>>
>> >>> The frequency domain more accessible to laypersons? I have my
>> doubts ;)
>> >>
>> >> Gamers, at least, are familiar with “frames per second” and how that
>> corresponds to their monitor’s refresh rate.
>> >
>> > I am sure they can easily transform back into time domain to get
>> the frame period ;) . I am partly kidding, I think your idea is great in
>> that it is a truly positive value which could lend itself to being used in
>> ISP/router manufacturer advertising, and hence might work in the real work;
>> on the other hand I like to keep data as “raw” as possible (not that ^(-1)
>> is a transformation worthy of being called data massage).
>> >
>> >> The desirable range of latencies, when converted to Hz, happens to be
>> roughly the same as the range of desirable frame rates.
>> >
>> > Just to play devils advocate, the interesting part is time or
>> saving time so seconds or milliseconds are also intuitively understandable
>> and can be easily added ;)
>>
>> Such readouts are certainly interesting to people like us. I have no
>> objection to them being reported alongside a frequency readout. But I
>> think most people are not interested in “time savings” measured in
>> milliseconds; they’re much more aware of the minute- and hour-level time
>> savings associated with greater bandwidth.
>>
>> - Jonathan Morton
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #1.2: Type: text/html, Size: 6432 bytes --]
[-- Attachment #2: Screen Shot 2015-04-21 at 2.00.46 pm.png --]
[-- Type: image/png, Size: 10663 bytes --]
[-- Attachment #3: Screen Shot 2015-04-21 at 1.59.25 pm.png --]
[-- Type: image/png, Size: 9279 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 4:15 ` jb
@ 2015-04-21 4:47 ` David Lang
2015-04-21 7:35 ` jb
2015-04-21 9:37 ` Jonathan Morton
1 sibling, 1 reply; 183+ messages in thread
From: David Lang @ 2015-04-21 4:47 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/Plain, Size: 6628 bytes --]
On Tue, 21 Apr 2015, jb wrote:
> I've discovered something perhaps you guys can explain it better or shed
> some light.
> It isn't specifically to do with buffer bloat but it is to do with TCP
> tuning.
>
> Attached is two pictures of my upload to New York speed test server with 1
> stream.
> It doesn't make any difference if it is 1 stream or 8 streams, the picture
> and behaviour remains the same.
> I am 200ms from new york so it qualifies as a fairly long (but not very
> fat) pipe.
>
> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
> the server)
> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
> the server)
>
> It actually doesn't matter what that last huge number is, once it goes much
> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
> upload speed.
>
> Now as I understand this setting, it is the tcp receive window that Linux
> advertises, and the last number sets the maximum size it can get to (for
> one TCP stream).
>
> For users with very fast upload speeds, they do not see an ugly bumpy
> upload graph, it is smooth and sustained.
> But for the majority of users (like me) with uploads less than 5 to 10mbit,
> we frequently see the ugly graph.
>
> The second tcp_rmem setting is how I have been running the speed test
> servers.
>
> Up to now I thought this was just the distance of the speedtest from the
> interface: perhaps the browser was buffering a lot, and didn't feed back
> progress but now I realise the bumpy one is actually being influenced by
> the server receive window.
>
> I guess my question is this: Why does ALLOWING a large receive window
> appear to encourage problems with upload smoothness??
>
> This implies that setting the receive window should be done on a connection
> by connection basis: small for slow connections, large, for high speed,
> long distance connections.
This is classic bufferbloat.
The receiver advertises a large receive window, so the sender doesn't pause
until there is that much data outstanding, or until a packet timeout gives
it a signal to slow down.
And because you have a gig-E link locally, your machine generates traffic
very rapidly until all that data is 'in flight', but it's really sitting in
the buffer of a router trying to get through.
Then when a packet times out, the sender slows down a smidge and retransmits
it. But the old packet is still sitting in a queue, eating bandwidth. The
packets behind it are also going to time out and be retransmitted before your
first retransmitted packet gets through, so you have a large slug of data
that's being retransmitted, and the first of the replacement data can't get
through until the last of the old (timed-out) data is transmitted.
Then when data starts flowing again, the sender again tries to fill up the
window with data in flight.
> In addition, if I cap it to 65k, for reasons of smoothness,
> that means the bandwidth delay product will keep maximum speed per upload
> stream quite low. So a symmetric or gigabit connection is going to need a
> ton of parallel streams to see full speed.
>
> Most puzzling is why would anything special be required on the Client -->
> Server side of the equation
> but nothing much appears wrong with the Server --> Client side, whether
> speeds are very low (GPRS) or very high (gigabit).
but what window sizes are these clients advertising?
> Note that also I am not yet sure if smoothness == better throughput. I have
> noticed upload speeds for some people often being under their claimed sync
> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
> inefficiency. Maybe.
If you were to do a packet capture on the server side, you would see that you
have a bunch of packets that are arriving multiple times, but the first time
"doesn't count" because the replacement is already on the way.
So your overall throughput is lower for two reasons:
1. it's bursty, and there are times when the connection actually is idle (after
you have a lot of timed-out packets, the sender needs to ramp up its speed
again)
2. you are sending some packets multiple times, consuming more total bandwidth
for the same 'goodput' (effective throughput)
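To put rough numbers on the queueing part of this, the standing delay is approximately the data kept in flight beyond the path's bandwidth-delay product, divided by the bottleneck rate. A sketch with figures consistent with this thread (a ~5 Mbit/s uplink and ~200 ms path RTT; it assumes the bottleneck buffer is large enough to hold the excess):

    def queue_delay_ms(window_bytes, rate_mbit, base_rtt_s):
        # Extra delay if a full window is kept in flight over the bottleneck.
        bdp_bytes = rate_mbit * 1e6 / 8 * base_rtt_s   # what the path itself holds
        excess = max(window_bytes - bdp_bytes, 0)      # the rest sits in a buffer
        return excess * 8 / (rate_mbit * 1e6) * 1000

    for window in (65535, 256 * 1024, 4 * 1024 * 1024):
        print(f"{window // 1024:5d} KB window -> ~{queue_delay_ms(window, 5, 0.2):6.0f} ms of queueing")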
David Lang
> help!
>
>
> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net> wrote:
>
>> One thing users understand is slow web access. Perhaps translating the
>> latency measurement into 'a typical web page will take X seconds longer to
>> load', or even stating the impact as 'this latency causes a typical web
>> page to load slower, as if your connection was only YY% of the measured
>> speed.'
>>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>>
>>
>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>> wrote:
>>
>> >>>> Frequency readouts are probably more accessible to the latter.
>>>>>>
>>>>>> The frequency domain more accessible to laypersons? I have my
>>> doubts ;)
>>>>>
>>>>> Gamers, at least, are familiar with “frames per second” and how that
>>> corresponds to their monitor’s refresh rate.
>>>>
>>>> I am sure they can easily transform back into time domain to get
>>> the frame period ;) . I am partly kidding, I think your idea is great in
>>> that it is a truly positive value which could lend itself to being used in
>>> ISP/router manufacturer advertising, and hence might work in the real work;
>>> on the other hand I like to keep data as “raw” as possible (not that ^(-1)
>>> is a transformation worthy of being called data massage).
>>>>
>>>>> The desirable range of latencies, when converted to Hz, happens to be
>>> roughly the same as the range of desirable frame rates.
>>>>
>>>> Just to play devils advocate, the interesting part is time or
>>> saving time so seconds or milliseconds are also intuitively understandable
>>> and can be easily added ;)
>>>
>>> Such readouts are certainly interesting to people like us. I have no
>>> objection to them being reported alongside a frequency readout. But I
>>> think most people are not interested in “time savings” measured in
>>> milliseconds; they’re much more aware of the minute- and hour-level time
>>> savings associated with greater bandwidth.
>>>
>>> - Jonathan Morton
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
[-- Attachment #2: Type: IMAGE/PNG, Size: 10663 bytes --]
[-- Attachment #3: Type: IMAGE/PNG, Size: 9279 bytes --]
[-- Attachment #4: Type: TEXT/PLAIN, Size: 140 bytes --]
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 4:47 ` David Lang
@ 2015-04-21 7:35 ` jb
2015-04-21 9:14 ` Steinar H. Gunderson
2015-04-21 14:20 ` David Lang
0 siblings, 2 replies; 183+ messages in thread
From: jb @ 2015-04-21 7:35 UTC (permalink / raw)
To: David Lang, bloat
[-- Attachment #1: Type: text/plain, Size: 8849 bytes --]
> the receiver advertizes a large receive window, so the sender doesn't pause
> until there is that much data outstanding, or they get a timeout of a packet
> as a signal to slow down.
> and because you have a gig-E link locally, your machine generates traffic
> very rapidly, until all that data is 'in flight'. but it's really sitting
> in the buffer of a router trying to get through.
Hmm, then I have a quandary, because I can easily solve the nasty bumpy
upload graphs by keeping the advertised receive window on the server capped
low; however, then, paradoxically, there is no more sign of bufferbloat in
the result, at least for the upload phase.
(The graph under the upload/download graphs for my results now shows almost
no latency increase during the upload phase.)
Or, I can crank it back open again, serving people with fiber connections
without having to run heaps of streams in parallel -- and then have people
complain that the upload result is inefficient, or bumpy, vs what they
expect.
And I can't offer an option, because the server receive window (I think)
cannot be set on a case by case basis. You set it for all TCP and forget it.
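For what it's worth, a per-connection cap is possible in principle on Linux: SO_RCVBUF applied to the listening socket before accept() is inherited by the accepted sockets and pins their receive buffer (the kernel doubles the requested value and stops auto-tuning that socket), which clamps the window those connections advertise. A minimal sketch, assuming a plain socket server rather than whatever the speed test actually runs:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Cap the receive buffer before listen(); accepted sockets inherit it.
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
    srv.bind(("0.0.0.0", 8080))
    srv.listen(128)

    conn, addr = srv.accept()
    print("effective rcvbuf:", conn.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

The catch, of course, is that the server would still have to decide which cap to use before it knows anything about the client's speed.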
I suspect you guys are going to say the server should be left with a large
max receive window, and let people complain to find out what their issue
is.
BTW my setup is wired to a Billion 7800N, which is a DSL modem and router. I
believe it is a Linux-based device (judging from the system log).
cheers,
-Justin
On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
> On Tue, 21 Apr 2015, jb wrote:
>
> I've discovered something perhaps you guys can explain it better or shed
>> some light.
>> It isn't specifically to do with buffer bloat but it is to do with TCP
>> tuning.
>>
>> Attached is two pictures of my upload to New York speed test server with 1
>> stream.
>> It doesn't make any difference if it is 1 stream or 8 streams, the picture
>> and behaviour remains the same.
>> I am 200ms from new york so it qualifies as a fairly long (but not very
>> fat) pipe.
>>
>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
>> the server)
>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
>> the server)
>>
>> It actually doesn't matter what that last huge number is, once it goes
>> much
>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
>> upload speed.
>>
>> Now as I understand this setting, it is the tcp receive window that Linux
>> advertises, and the last number sets the maximum size it can get to (for
>> one TCP stream).
>>
>> For users with very fast upload speeds, they do not see an ugly bumpy
>> upload graph, it is smooth and sustained.
>> But for the majority of users (like me) with uploads less than 5 to
>> 10mbit,
>> we frequently see the ugly graph.
>>
>> The second tcp_rmem setting is how I have been running the speed test
>> servers.
>>
>> Up to now I thought this was just the distance of the speedtest from the
>> interface: perhaps the browser was buffering a lot, and didn't feed back
>> progress but now I realise the bumpy one is actually being influenced by
>> the server receive window.
>>
>> I guess my question is this: Why does ALLOWING a large receive window
>> appear to encourage problems with upload smoothness??
>>
>> This implies that setting the receive window should be done on a
>> connection
>> by connection basis: small for slow connections, large, for high speed,
>> long distance connections.
>>
>
> This is classic bufferbloat
>
> the receiver advertizes a large receive window, so the sender doesn't
> pause until there is that much data outstanding, or they get a timeout of a
> packet as a signal to slow down.
>
> and because you have a gig-E link locally, your machine generates traffic
> very rapidly, until all that data is 'in flight'. but it's really sitting
> in the buffer of a router trying to get through.
>
> then when a packet times out, the sender slows down a smidge and
> retransmits it. But the old packet is still sitting in a queue, eating
> bandwidth. the packets behind it are also going to timeout and be
> retransmitted before your first retransmitted packet gets through, so you
> have a large slug of data that's being retransmitted, and the first of the
> replacement data can't get through until the last of the old (timed out)
> data is transmitted.
>
> then when data starts flowing again, the sender again tries to fill up the
> window with data in flight.
>
> In addition, if I cap it to 65k, for reasons of smoothness,
>> that means the bandwidth delay product will keep maximum speed per upload
>> stream quite low. So a symmetric or gigabit connection is going to need a
>> ton of parallel streams to see full speed.
>>
>> Most puzzling is why would anything special be required on the Client -->
>> Server side of the equation
>> but nothing much appears wrong with the Server --> Client side, whether
>> speeds are very low (GPRS) or very high (gigabit).
>>
>
> but what window sizes are these clients advertising?
>
>
> Note that also I am not yet sure if smoothness == better throughput. I
>> have
>> noticed upload speeds for some people often being under their claimed sync
>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
>> inefficiency. Maybe.
>>
>
> If you were to do a packet capture on the server side, you would see that
> you have a bunch of packets that are arriving multiple times, but the first
> time "does't count" because the replacement is already on the way.
>
> so your overall throughput is lower for two reasons
>
> 1. it's bursty, and there are times when the connection actually is idle
> (after you have a lot of timed out packets, the sender needs to ramp up
> its speed again)
>
> 2. you are sending some packets multiple times, consuming more total
> bandwidth for the same 'goodput' (effective throughput)
>
> David Lang
>
>
>> help!
>>
>>
>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
>> wrote:
>>
>> One thing users understand is slow web access. Perhaps translating the
>>> latency measurement into 'a typical web page will take X seconds longer
>>> to
>>> load', or even stating the impact as 'this latency causes a typical web
>>> page to load slower, as if your connection was only YY% of the measured
>>> speed.'
>>>
>>> Simon
>>>
>>> Sent with AquaMail for Android
>>> http://www.aqua-mail.com
>>>
>>>
>>>
>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>>> wrote:
>>>
>>>>>>>> Frequency readouts are probably more accessible to the latter.
>>>>>>>
>>>>>>> The frequency domain more accessible to laypersons? I have my
>>>>>>> doubts ;)
>>>>>>
>>>>>> Gamers, at least, are familiar with “frames per second” and how that
>>>>>> corresponds to their monitor’s refresh rate.
>>>>>
>>>>> I am sure they can easily transform back into time domain to get
>>>>> the frame period ;) . I am partly kidding, I think your idea is great in
>>>>> that it is a truly positive value which could lend itself to being used
>>>>> in ISP/router manufacturer advertising, and hence might work in the real
>>>>> world; on the other hand I like to keep data as “raw” as possible (not
>>>>> that ^(-1) is a transformation worthy of being called data massage).
>>>>>
>>>>>> The desirable range of latencies, when converted to Hz, happens to be
>>>>>> roughly the same as the range of desirable frame rates.
>>>>>
>>>>> Just to play devil's advocate, the interesting part is time or
>>>>> saving time, so seconds or milliseconds are also intuitively
>>>>> understandable and can be easily added ;)
>>>>
>>>> Such readouts are certainly interesting to people like us. I have no
>>>> objection to them being reported alongside a frequency readout. But I
>>>> think most people are not interested in “time savings” measured in
>>>> milliseconds; they’re much more aware of the minute- and hour-level time
>>>> savings associated with greater bandwidth.
>>>>
>>>> - Jonathan Morton
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
[-- Attachment #2: Type: text/html, Size: 12386 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 7:35 ` jb
@ 2015-04-21 9:14 ` Steinar H. Gunderson
2015-04-21 14:20 ` David Lang
1 sibling, 0 replies; 183+ messages in thread
From: Steinar H. Gunderson @ 2015-04-21 9:14 UTC (permalink / raw)
To: bloat
On Tue, Apr 21, 2015 at 05:35:32PM +1000, jb wrote:
> And I can't offer an option, because the server receive window (I think)
> cannot be set on a case by case basis. You set it for all TCP and forget it.
You can set both send and receive buffers using a setsockopt() call
(SO_SNDBUF, SO_RCVBUF). I would advise against it, though; hardly anyone
does it (except the ones that did so to _increase_ the buffer 10-15 years
ago, which now is thoroughly superseded by auto-tuning and thus a
pessimization), and if the point of the test is to identify real-world
performance, you shouldn't do workarounds.
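(For reference, the call looks something like the sketch below; the function
name and buffer values are purely illustrative, and as noted this pins the
buffer and turns off the kernel's autotuning for that socket. On Linux the
kernel also doubles the value you pass in, to account for bookkeeping.)

/* Minimal sketch of pinning the socket buffers by hand, in C.
 * Set them before connect()/listen() so the window scaling negotiated
 * at SYN time reflects the smaller buffer. */
#include <stdio.h>
#include <sys/socket.h>

int make_clamped_socket(void)
{
    int fd  = socket(AF_INET, SOCK_STREAM, 0);
    int rcv = 64 * 1024;  /* illustrative: a bit above the ~50 KB BDP of a 2 Mbit/s, 200 ms path */
    int snd = 64 * 1024;

    if (fd < 0) { perror("socket"); return -1; }
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv, sizeof(rcv)) < 0)
        perror("setsockopt(SO_RCVBUF)");
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &snd, sizeof(snd)) < 0)
        perror("setsockopt(SO_SNDBUF)");
    return fd;  /* caller binds/connects as usual */
}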
/* Steinar */
--
Homepage: http://www.sesse.net/
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 4:15 ` jb
2015-04-21 4:47 ` David Lang
@ 2015-04-21 9:37 ` Jonathan Morton
2015-04-21 10:35 ` jb
1 sibling, 1 reply; 183+ messages in thread
From: Jonathan Morton @ 2015-04-21 9:37 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2969 bytes --]
I would explain it a bit differently to David. There are a lot of
interrelated components and concepts in TCP, and it's sometimes hard to see
which ones are relevant in a given situation.
The key insight though is that there are two windows which are maintained
by the sender and receiver respectively, and data can only be sent if it
fits into BOTH windows. The receive window is effectively set by that
sysctl, and the congestion window (maintained by the sender) is the one
that changes dynamically.
The correct size of both windows is the bandwidth delay product of the path
between the two hosts. However, this size varies, so you can't set a single
size which works in all or even most situations. The general approach that
has the best chance of working is to set the receive window large and rely
on the congestion window to adapt.
Incidentally, 200ms at say 2Mbps gives a BDP of about 50KB (2 Mbit/s x 0.2 s =
400 kbit, i.e. roughly 50 kilobytes).
The problem with that is that in most networks today, there is insufficient
information for the congestion window to find its ideal size. It will grow
until it receives an unambiguous congestion signal, typically a lost packet
or ECN flag. But that will most likely occur on queue overflow at the
bottleneck, and due to the resulting induced delay, the sender will have
been overdosing that queue for a while before it gets the signal to back
off - so probably a whole bunch of packets got lost in the meantime. Then,
after transmitting the lost packets, the sender has to wait for the
receiver to catch up with the smaller congestion window before it can
resume.
Meanwhile, the receiver can't deliver any of the data it's receiving
because the lost packets belong in front of it. If you've ever noticed a
transfer that seems to stall and then suddenly catch up, that's due to a
lost packet and retransmission. The effect is known as "head of line
blocking", and can be used to detect packet loss at the application layer.
Ironically, most hardware designers will tell you that buffers are meant to
smooth data delivery. It's true, but only when it doesn't overflow - and
TCP will always overflow a dumb queue if allowed to.
Reducing the receive window, to a value below the native BDP of the path
plus the bottleneck queue length, can be used as a crude way to prevent the
bottleneck queue from overflowing. Then, the congestion window will grow to
the receive window size and stay there, and TCP will enter a steady state
where every ack results in the next packet(s) being sent. (Most receivers
won't send an ack for every received packet, as long as none are missing.)
However, running multiple flows in parallel using a receive window tuned
for one flow will double the data in flight, and the queue may once again
overflow. If you look only at aggregate throughput, you might not notice
this because parallel TCPs tend to fill in each others' gaps. But the
individual flow throughputs will show the same "head of line blocking"
effect.
- Jonathan Morton
[-- Attachment #2: Type: text/html, Size: 3203 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 9:37 ` Jonathan Morton
@ 2015-04-21 10:35 ` jb
2015-04-22 4:04 ` Steinar H. Gunderson
0 siblings, 1 reply; 183+ messages in thread
From: jb @ 2015-04-21 10:35 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 4378 bytes --]
As I understood it, SO_SNDBUF and SO_RCVBUF are socket buffers for the
application layer; they do not change the TCP window size, either send or
receive. Which is perhaps why they aren't used much. They don't do much good
in iperf, that's for sure! I might be wrong, but I agree with the premise -
auto-tuning should work.
Regarding my own equipment, I've seen a 2012 topic about the Billion 7800N
I have, complaining it has buffer bloat. The replies to the topic suggested
using QoS to get round the problem of uploads blowing the latency sky-high.
Unfortunately it is a very popular and well regarded DSL modem at
least in Australia AND cannot be flashed with dd-wrt or anything. So I
think for me personally (and for people who use our speed test and complain
about very choppy results on upload), this is the explanation I'll be
giving: experiment with your gear at home, it'll be the problem.
Currently the servers are running at a low maximum receive window. I'll be
switching them back in a day, after I let this one guy witness the
improvement it make for his connection. He has been at me for days saying
the test has an issue because the upload on his bonded 5mbit+5mbit channel
is so choppy.
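(For anyone wanting to reproduce the clamp: it is just the Linux
net.ipv4.tcp_rmem sysctl, whose three fields are the min, default and max
receive buffer. A minimal sketch of applying the "smooth" values from earlier
in this thread, which needs root on the server:)

/* Sketch: write the clamped tcp_rmem values that produced the smooth
 * upload graphs discussed earlier in the thread. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/net/ipv4/tcp_rmem", "w");
    if (!f) { perror("tcp_rmem"); return 1; }
    fprintf(f, "4096 32768 65535\n");  /* min, default, max (the low-max setting) */
    fclose(f);
    return 0;
}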
thanks
On Tue, Apr 21, 2015 at 7:37 PM, Jonathan Morton <chromatix99@gmail.com>
wrote:
> I would explain it a bit differently to David. There are a lot of
> interrelated components and concepts in TCP, and its sometimes hard to see
> which ones are relevant in a given situation.
>
> The key insight though is that there are two windows which are maintained
> by the sender and receiver respectively, and data can only be sent if it
> fits into BOTH windows. The receive window is effectively set by that
> sysctl, and the congestion window (maintained by the sender) is the one
> that changes dynamically.
>
> The correct size of both windows is the bandwidth delay product of the
> path between the two hosts. However, this size varies, so you can't set a
> single size which works in all our even most situations. The general
> approach that has the best chance of working is to set the receive window
> large and rely on the congestion window to adapt.
>
> Incidentally, 200ms at say 2Mbps gives a BDP of about 50KB.
>
> The problem with that is that in most networks today, there is
> insufficient information for the congestion window to find its ideal size.
> It will grow until it receives an unambiguous congestion signal, typically
> a lost packet or ECN flag. But that will most likely occur on queue
> overflow at the bottleneck, and due to the resulting induced delay, the
> sender will have been overdosing that queue for a while before it gets the
> signal to back off - so probably a whole bunch of packets got lost in the
> meantime. Then, after transmitting the lost packets, the sender has to wait
> for the receiver to catch up with the smaller congestion window before it
> can resume.
>
> Meanwhile, the receiver can't deliver any of the data it's receiving
> because the lost packets belong in front of it. If you've ever noticed a
> transfer that seems to stall and then suddenly catch up, that's due to a
> lost packet and retransmission. The effect is known as "head of line
> blocking", and can be used to detect packet loss at the application layer.
>
> Ironically, most hardware designers will tell you that buffers are meant
> to smooth data delivery. It's true, but only when it doesn't overflow - and
> TCP will always overflow a dumb queue if allowed to.
>
> Reducing the receive window, to a value below the native BDP of the path
> plus the bottleneck queue length, can be used as a crude way to prevent the
> bottleneck queue from overflowing. Then, the congestion window will grow to
> the receive window size and stay there, and TCP will enter a steady state
> where every ack results in the next packet(s) being sent. (Most receivers
> won't send an ack for every received packet, as long as none are missing.)
>
> However, running multiple flows in parallel using a receive window tuned
> for one flow will double the data in flight, and the queue may once again
> overflow. If you look only at aggregate throughput, you might not notice
> this because parallel TCPs tend to fill in each others' gaps. But the
> individual flow throughputs will show the same "head of line blocking"
> effect.
>
> - Jonathan Morton
>
[-- Attachment #2: Type: text/html, Size: 5087 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 7:35 ` jb
2015-04-21 9:14 ` Steinar H. Gunderson
@ 2015-04-21 14:20 ` David Lang
2015-04-21 14:25 ` David Lang
2015-04-22 14:32 ` Simon Barber
1 sibling, 2 replies; 183+ messages in thread
From: David Lang @ 2015-04-21 14:20 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 9767 bytes --]
On Tue, 21 Apr 2015, jb wrote:
>> the receiver advertizes a large receive window, so the sender doesn't
>> pause until there is that much data outstanding, or they get a timeout of
>> a packet as a signal to slow down.
>>
>> and because you have a gig-E link locally, your machine generates traffic
>> very rapidly, until all that data is 'in flight'. but it's really sitting
>> in the buffer of a router trying to get through.
>
> Hmm, then I have a quandary because I can easily solve the nasty bumpy
> upload graphs by keeping the advertised receive window on the server capped
> low, however then, paradoxically, there is no more sign of buffer bloat in
> the result, at least for the upload phase.
>
> (The graph under the upload/download graphs for my results shows almost no
> latency increase during the upload phase, now).
>
> Or, I can crank it back open again, serving people with fiber connections
> without having to run heaps of streams in parallel -- and then have people
> complain that the upload result is inefficient, or bumpy, vs what they
> expect.
well, many people expect it to be bumpy (I've heard ISPs explain to customers
that when a link is full it is bumpy, that's just the way things work)
> And I can't offer an option, because the server receive window (I think)
> cannot be set on a case by case basis. You set it for all TCP and forget it.
I think you are right
> I suspect you guys are going to say the server should be left with a large
> max receive window.. and let people complain to find out what their issue
> is.
what is your customer base? how important is it to provide faster service to the
fiber users? Are they transferring ISO images so the difference is significant
to them? or are they downloading web pages where it's the difference between a
half second and a quarter second? remember that you are seeing this on the
upload side.
in the long run, fixing the problem at the client side is the best thing to do,
but in the meantime, you sometimes have to work around broken customer stuff.
> BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
> believe it is a linux based (judging from the system log) device.
if it's linux based, it would be interesting to learn what sort of settings it
has. It may be one of the rarer devices that has something in place already to
do active queue management.
David Lang
> cheers,
> -Justin
>
> On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
>
>> On Tue, 21 Apr 2015, jb wrote:
>>
>> I've discovered something perhaps you guys can explain it better or shed
>>> some light.
>>> It isn't specifically to do with buffer bloat but it is to do with TCP
>>> tuning.
>>>
>>> Attached is two pictures of my upload to New York speed test server with 1
>>> stream.
>>> It doesn't make any difference if it is 1 stream or 8 streams, the picture
>>> and behaviour remains the same.
>>> I am 200ms from new york so it qualifies as a fairly long (but not very
>>> fat) pipe.
>>>
>>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
>>> the server)
>>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
>>> the server)
>>>
>>> It actually doesn't matter what that last huge number is, once it goes
>>> much
>>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
>>> upload speed.
>>>
>>> Now as I understand this setting, it is the tcp receive window that Linux
>>> advertises, and the last number sets the maximum size it can get to (for
>>> one TCP stream).
>>>
>>> For users with very fast upload speeds, they do not see an ugly bumpy
>>> upload graph, it is smooth and sustained.
>>> But for the majority of users (like me) with uploads less than 5 to
>>> 10mbit,
>>> we frequently see the ugly graph.
>>>
>>> The second tcp_rmem setting is how I have been running the speed test
>>> servers.
>>>
>>> Up to now I thought this was just the distance of the speedtest from the
>>> interface: perhaps the browser was buffering a lot, and didn't feed back
>>> progress but now I realise the bumpy one is actually being influenced by
>>> the server receive window.
>>>
>>> I guess my question is this: Why does ALLOWING a large receive window
>>> appear to encourage problems with upload smoothness??
>>>
>>> This implies that setting the receive window should be done on a
>>> connection
>>> by connection basis: small for slow connections, large, for high speed,
>>> long distance connections.
>>>
>>
>> This is classic bufferbloat
>>
>> the receiver advertizes a large receive window, so the sender doesn't
>> pause until there is that much data outstanding, or they get a timeout of a
>> packet as a signal to slow down.
>>
>> and because you have a gig-E link locally, your machine generates traffic
>> very rapidly, until all that data is 'in flight'. but it's really sitting
>> in the buffer of a router trying to get through.
>>
>> then when a packet times out, the sender slows down a smidge and
>> retransmits it. But the old packet is still sitting in a queue, eating
>> bandwidth. the packets behind it are also going to timeout and be
>> retransmitted before your first retransmitted packet gets through, so you
>> have a large slug of data that's being retransmitted, and the first of the
>> replacement data can't get through until the last of the old (timed out)
>> data is transmitted.
>>
>> then when data starts flowing again, the sender again tries to fill up the
>> window with data in flight.
>>
>> In addition, if I cap it to 65k, for reasons of smoothness,
>>> that means the bandwidth delay product will keep maximum speed per upload
>>> stream quite low. So a symmetric or gigabit connection is going to need a
>>> ton of parallel streams to see full speed.
>>>
>>> Most puzzling is why would anything special be required on the Client -->
>>> Server side of the equation
>>> but nothing much appears wrong with the Server --> Client side, whether
>>> speeds are very low (GPRS) or very high (gigabit).
>>>
>>
>> but what window sizes are these clients advertising?
>>
>>
>> Note that also I am not yet sure if smoothness == better throughput. I
>>> have
>>> noticed upload speeds for some people often being under their claimed sync
>>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
>>> inefficiency. Maybe.
>>>
>>
>> If you were to do a packet capture on the server side, you would see that
>> you have a bunch of packets that are arriving multiple times, but the first
>> time "does't count" because the replacement is already on the way.
>>
>> so your overall throughput is lower for two reasons
>>
>> 1. it's bursty, and there are times when the connection actually is idle
>> (after you have a lot of timed out packets, the sender needs to ramp up
>> it's speed again)
>>
>> 2. you are sending some packets multiple times, consuming more total
>> bandwidth for the same 'goodput' (effective throughput)
>>
>> David Lang
>>
>>
>> help!
>>>
>>>
>>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
>>> wrote:
>>>
>>> One thing users understand is slow web access. Perhaps translating the
>>>> latency measurement into 'a typical web page will take X seconds longer
>>>> to
>>>> load', or even stating the impact as 'this latency causes a typical web
>>>> page to load slower, as if your connection was only YY% of the measured
>>>> speed.'
>>>>
>>>> Simon
>>>>
>>>> Sent with AquaMail for Android
>>>> http://www.aqua-mail.com
>>>>
>>>>
>>>>
>>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>>>> wrote:
>>>>
>>>>>>>>> Frequency readouts are probably more accessible to the latter.
>>>>>>>>
>>>>>>>> The frequency domain more accessible to laypersons? I have my
>>>>>>>> doubts ;)
>>>>>>>
>>>>>>> Gamers, at least, are familiar with “frames per second” and how that
>>>>>>> corresponds to their monitor’s refresh rate.
>>>>>>
>>>>>> I am sure they can easily transform back into time domain to get
>>>>>> the frame period ;) . I am partly kidding, I think your idea is great
>>>>>> in that it is a truly positive value which could lend itself to being
>>>>>> used in ISP/router manufacturer advertising, and hence might work in
>>>>>> the real world; on the other hand I like to keep data as “raw” as
>>>>>> possible (not that ^(-1) is a transformation worthy of being called
>>>>>> data massage).
>>>>>>
>>>>>>> The desirable range of latencies, when converted to Hz, happens to be
>>>>>>> roughly the same as the range of desirable frame rates.
>>>>>>
>>>>>> Just to play devil's advocate, the interesting part is time or
>>>>>> saving time, so seconds or milliseconds are also intuitively
>>>>>> understandable and can be easily added ;)
>>>>>
>>>>> Such readouts are certainly interesting to people like us. I have no
>>>>> objection to them being reported alongside a frequency readout. But I
>>>>> think most people are not interested in “time savings” measured in
>>>>> milliseconds; they’re much more aware of the minute- and hour-level time
>>>>> savings associated with greater bandwidth.
>>>>>
>>>>> - Jonathan Morton
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 14:20 ` David Lang
@ 2015-04-21 14:25 ` David Lang
2015-04-21 14:28 ` David Lang
2015-04-22 14:32 ` Simon Barber
1 sibling, 1 reply; 183+ messages in thread
From: David Lang @ 2015-04-21 14:25 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1056 bytes --]
On Tue, 21 Apr 2015, David Lang wrote:
>> I suspect you guys are going to say the server should be left with a large
>> max receive window.. and let people complain to find out what their issue
>> is.
>
> what is your customer base? how important is it to provide faster service to
> teh fiber users? Are they transferring ISO images so the difference is
> significant to them? or are they downloading web pages where it's the
> difference between a half second and a quarter second? remember that you are
> seeing this on the upload side.
>
> in the long run, fixing the problem at the client side is the best thing to
> do, but in the meantime, you sometimes have to work around broken customer
> stuff.
for the speedtest servers, it should be set large: the purpose is to test the
quality of the customer's stuff, so you don't want to do anything on your end
that papers over the problem, only to have the customer think things are good
and then experience problems when connecting to another server that doesn't
implement work-arounds.
David Lang
[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 14:25 ` David Lang
@ 2015-04-21 14:28 ` David Lang
2015-04-21 22:13 ` jb
0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2015-04-21 14:28 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1420 bytes --]
On Tue, 21 Apr 2015, David Lang wrote:
> On Tue, 21 Apr 2015, David Lang wrote:
>
>>> I suspect you guys are going to say the server should be left with a large
>>> max receive window.. and let people complain to find out what their issue
>>> is.
>>
>> what is your customer base? how important is it to provide faster service
>> to teh fiber users? Are they transferring ISO images so the difference is
>> significant to them? or are they downloading web pages where it's the
>> difference between a half second and a quarter second? remember that you
>> are seeing this on the upload side.
>>
>> in the long run, fixing the problem at the client side is the best thing to
>> do, but in the meantime, you sometimes have to work around broken customer
>> stuff.
>
> for the speedtest servers, it should be set large, the purpose is to test the
> quality of the customer stuff, so you don't want to do anything on your end
> that papers over the problem, only to have the customer think things are good
> and experience problems when connecting to another server that doesn't
> implement work-arounds.
Just after hitting send it occurred to me that it may be the right thing to have
the server that's being hit by the test play with these settings. If the user
works well at lower settings, but has problems at higher settings, the point
where they start having problems may be useful to know.
David Lang
[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 14:28 ` David Lang
@ 2015-04-21 22:13 ` jb
2015-04-21 22:39 ` Aaron Wood
2015-04-21 23:17 ` jb
0 siblings, 2 replies; 183+ messages in thread
From: jb @ 2015-04-21 22:13 UTC (permalink / raw)
To: David Lang; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2620 bytes --]
Today I've switched it back to large receive window max.
The customer base is everything from GPRS to gigabit. But I know from
experience that if a test doesn't flatten someone's gigabit connection they
will immediately assume "oh congested servers, insufficient capacity" and
the early adopters of fiber to the home and faster cable products are the
most visible in tech forums and so on.
It would be interesting to set one or a few servers with a small receive
window, take them from the pool, and allow an option to select those,
otherwise they would not participate in any default run. Then as you point
out, the test can suggest trying those as an option for results with
chaotic upload speeds and probable bloat. The person would notice the
beauty of the more intimate connection between their kernel and a server,
and work harder to eliminate the problematic equipment. Or. They'd stop
telling me the test was bugged.
thanks
On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
> On Tue, 21 Apr 2015, David Lang wrote:
>
> On Tue, 21 Apr 2015, David Lang wrote:
>>
>> I suspect you guys are going to say the server should be left with a
>>>> large
>>>> max receive window.. and let people complain to find out what their
>>>> issue
>>>> is.
>>>>
>>>
>>> what is your customer base? how important is it to provide faster
>>> service to teh fiber users? Are they transferring ISO images so the
>>> difference is significant to them? or are they downloading web pages where
>>> it's the difference between a half second and a quarter second? remember
>>> that you are seeing this on the upload side.
>>>
>>> in the long run, fixing the problem at the client side is the best thing
>>> to do, but in the meantime, you sometimes have to work around broken
>>> customer stuff.
>>>
>>
>> for the speedtest servers, it should be set large, the purpose is to test
>> the quality of the customer stuff, so you don't want to do anything on your
>> end that papers over the problem, only to have the customer think things
>> are good and experience problems when connecting to another server that
>> doesn't implement work-arounds.
>>
>
> Just after hitting send it occured to me that it may be the right thing to
> have the server that's being hit by the test play with these settings. If
> the user works well at lower settings, but has problems at higher settings,
> the point where they start having problems may be useful to know.
>
> David Lang
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
[-- Attachment #2: Type: text/html, Size: 3600 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 22:13 ` jb
@ 2015-04-21 22:39 ` Aaron Wood
2015-04-21 23:17 ` jb
1 sibling, 0 replies; 183+ messages in thread
From: Aaron Wood @ 2015-04-21 22:39 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 1892 bytes --]
On Tue, Apr 21, 2015 at 3:13 PM, jb <justin@dslr.net> wrote:
> Today I've switched it back to large receive window max.
>
> The customer base is everything from GPRS to gigabit. But I know from
> experience that if a test doesn't flatten someones gigabit connection they
> will immediately assume "oh congested servers, insufficient capacity" and
> the early adopters of fiber to the home and faster cable products are the
> most visible in tech forums and so on.
>
> It would be interesting to set one or a few servers with a small receive
> window, take them from the pool, and allow an option to select those,
> otherwise they would not participate in any default run. Then as you point
> out, the test can suggest trying those as an option for results with
> chaotic upload speeds and probable bloat. The person would notice the
> beauty of the more intimate connection between their kernel and a server,
> and work harder to eliminate the problematic equipment. Or. They'd stop
> telling me the test was bugged.
>
Well, the sawtooth pattern that's the classic sign of bufferbloat should be
readily detectable, especially if the pings during the test climb in a
similar fashion. And from the two sets of numbers, it should be possible
to put a guess on how overbuffered the uplink is. Then when the test
completes, an analysis that flags to the user that they have a bufferbloat
issue might continue to shed light on this.
Attached are results from my location, which show a couple hundred ms of
bloat. While my results didn't show congestion collapse, they do clearly
have a bunch of bloat. That amount of bloat should be easy to spot in an
analysis of the results, along with a recommendation that the user may want
to look into fixing it if they use their link at the limit with VoIP or
gaming.
I just wish we had a really good un-bloated retail option to recommend.
-Aaron
[-- Attachment #2: Type: text/html, Size: 2368 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 22:13 ` jb
2015-04-21 22:39 ` Aaron Wood
@ 2015-04-21 23:17 ` jb
2015-04-22 2:14 ` Simon Barber
1 sibling, 1 reply; 183+ messages in thread
From: jb @ 2015-04-21 23:17 UTC (permalink / raw)
Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 4117 bytes --]
Regarding the low TCP RWIN max setting, and smoothness.
One remark up-thread still bothers me. It was pointed out (and it makes
sense to me) that if you set a low TCP max rwin it is per stream, but if
you do multiple streams you are still going to rush the soho buffer.
However my observation with a low server rwin max was that the smooth
upload graph was the same whether I did 1 upload stream or 6 upload
streams, or apparently any number.
I would have thought that with 6 streams, the PC is going to try to flood
6x as much data as 1 stream, and this would put you back to square one.
However this was not what happened. It was puzzling that no matter what,
one setting server side got rid of the chop.
Anyone got any plausible explanations for this ?
if not, I'll run some more tests with 1, 6 and 12, to a low rwin server,
and post the graphs to the list. I might also have to start to graph the
interface traffic on a sub-second level, rather than the browser traffic,
to make sure the browser isn't lying about the stalls and chop.
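(If I go down that path, something as crude as sampling the NIC's cumulative
byte counter every 100ms should do; a rough sketch, where the interface name
eth0 is just an assumption to be adjusted:)

/* Rough sketch: sample /sys tx_bytes at ~100 ms intervals to watch for
 * upload stalls independently of what the browser reports. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/statistics/tx_bytes"; /* adjust NIC name */
    unsigned long long prev = 0, cur = 0;

    for (int i = 0; i < 300; i++) {              /* ~30 seconds */
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }
        if (fscanf(f, "%llu", &cur) != 1) { fclose(f); return 1; }
        fclose(f);
        if (i > 0)
            printf("%d\t%llu bytes in the last 100 ms\n", i, cur - prev);
        prev = cur;
        usleep(100 * 1000);                      /* 100 ms */
    }
    return 0;
}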
This 7800N has settings for priority of traffic, and for utilisation (as a
percentage). The utilisation % didn't help, but priority helped. Making web
low priority and SSH high priority smoothed things out a lot without changing
the speed. Perhaps "low" priority means it isn't so eager to fill its
buffers...
thanks
On Wed, Apr 22, 2015 at 8:13 AM, jb <justin@dslr.net> wrote:
> Today I've switched it back to large receive window max.
>
> The customer base is everything from GPRS to gigabit. But I know from
> experience that if a test doesn't flatten someones gigabit connection they
> will immediately assume "oh congested servers, insufficient capacity" and
> the early adopters of fiber to the home and faster cable products are the
> most visible in tech forums and so on.
>
> It would be interesting to set one or a few servers with a small receive
> window, take them from the pool, and allow an option to select those,
> otherwise they would not participate in any default run. Then as you point
> out, the test can suggest trying those as an option for results with
> chaotic upload speeds and probable bloat. The person would notice the
> beauty of the more intimate connection between their kernel and a server,
> and work harder to eliminate the problematic equipment. Or. They'd stop
> telling me the test was bugged.
>
> thanks
>
>
> On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
>
>> On Tue, 21 Apr 2015, David Lang wrote:
>>
>> On Tue, 21 Apr 2015, David Lang wrote:
>>>
>>> I suspect you guys are going to say the server should be left with a
>>>>> large
>>>>> max receive window.. and let people complain to find out what their
>>>>> issue
>>>>> is.
>>>>>
>>>>
>>>> what is your customer base? how important is it to provide faster
>>>> service to teh fiber users? Are they transferring ISO images so the
>>>> difference is significant to them? or are they downloading web pages where
>>>> it's the difference between a half second and a quarter second? remember
>>>> that you are seeing this on the upload side.
>>>>
>>>> in the long run, fixing the problem at the client side is the best
>>>> thing to do, but in the meantime, you sometimes have to work around broken
>>>> customer stuff.
>>>>
>>>
>>> for the speedtest servers, it should be set large, the purpose is to
>>> test the quality of the customer stuff, so you don't want to do anything on
>>> your end that papers over the problem, only to have the customer think
>>> things are good and experience problems when connecting to another server
>>> that doesn't implement work-arounds.
>>>
>>
>> Just after hitting send it occured to me that it may be the right thing
>> to have the server that's being hit by the test play with these settings.
>> If the user works well at lower settings, but has problems at higher
>> settings, the point where they start having problems may be useful to know.
>>
>> David Lang
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>
[-- Attachment #2: Type: text/html, Size: 5549 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 23:17 ` jb
@ 2015-04-22 2:14 ` Simon Barber
2015-04-22 2:56 ` jb
0 siblings, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-04-22 2:14 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 4811 bytes --]
If you set the window only a little bit larger than the actual BDP of the
link, then there will only be a little bit of data to fill the buffer, so given
large buffers it will take many connections to overflow the buffer.
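(Back-of-envelope version of that point, where the buffer size is a pure
assumption: each flow can only hold roughly rwin minus BDP in the bottleneck
queue, so overflow takes about buffer / (rwin - BDP) flows.)

/* Sketch of the arithmetic above; all numbers illustrative. */
#include <stdio.h>

int main(void)
{
    double bdp    = 50e3;   /* ~2 Mbit/s * 200 ms, from earlier in the thread */
    double rwin   = 64e3;   /* clamped per-flow receive window */
    double buffer = 256e3;  /* assumed bottleneck (SOHO modem) buffer */

    printf("flows needed to overflow: ~%.0f\n", buffer / (rwin - bdp));
    return 0;
}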
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
On April 21, 2015 4:18:10 PM jb <justin@dslr.net> wrote:
> Regarding the low TCP RWIN max setting, and smoothness.
>
> One remark up-thread still bothers me. It was pointed out (and it makes
> sense to me) that if you set a low TCP max rwin it is per stream, but if
> you do multiple streams you are still going to rush the soho buffer.
>
> However my observation with a low server rwin max was that the smooth
> upload graph was the same whether I did 1 upload stream or 6 upload
> streams, or apparently any number.
> I would have thought that with 6 streams, the PC is going to try to flood
> 6x as much data as 1 stream, and this would put you back to square one.
> However this was not what happened. It was puzzling that no matter what,
> one setting server side got rid of the chop.
> Anyone got any plausible explanations for this ?
>
> if not, I'll run some more tests with 1, 6 and 12, to a low rwin server,
> and post the graphs to the list. I might also have to start to graph the
> interface traffic on a sub-second level, rather than the browser traffic,
> to make sure the browser isn't lying about the stalls and chop.
>
> This 7800N has setting for priority of traffic, and utilisation (as a
> percentage). Utilisation % didn't help, but priority helped. Making web low
> priority and SSH high priority smoothed things out a lot without changing
> the speed. Perhaps "low" priority means it isn't so eager to fill its
> buffers..
>
> thanks
>
>
> On Wed, Apr 22, 2015 at 8:13 AM, jb <justin@dslr.net> wrote:
>
> > Today I've switched it back to large receive window max.
> >
> > The customer base is everything from GPRS to gigabit. But I know from
> > experience that if a test doesn't flatten someones gigabit connection they
> > will immediately assume "oh congested servers, insufficient capacity" and
> > the early adopters of fiber to the home and faster cable products are the
> > most visible in tech forums and so on.
> >
> > It would be interesting to set one or a few servers with a small receive
> > window, take them from the pool, and allow an option to select those,
> > otherwise they would not participate in any default run. Then as you point
> > out, the test can suggest trying those as an option for results with
> > chaotic upload speeds and probable bloat. The person would notice the
> > beauty of the more intimate connection between their kernel and a server,
> > and work harder to eliminate the problematic equipment. Or. They'd stop
> > telling me the test was bugged.
> >
> > thanks
> >
> >
> > On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
> >
> >> On Tue, 21 Apr 2015, David Lang wrote:
> >>
> >> On Tue, 21 Apr 2015, David Lang wrote:
> >>>
> >>> I suspect you guys are going to say the server should be left with a
> >>>>> large
> >>>>> max receive window.. and let people complain to find out what their
> >>>>> issue
> >>>>> is.
> >>>>>
> >>>>
> >>>> what is your customer base? how important is it to provide faster
> >>>> service to teh fiber users? Are they transferring ISO images so the
> >>>> difference is significant to them? or are they downloading web pages where
> >>>> it's the difference between a half second and a quarter second? remember
> >>>> that you are seeing this on the upload side.
> >>>>
> >>>> in the long run, fixing the problem at the client side is the best
> >>>> thing to do, but in the meantime, you sometimes have to work around broken
> >>>> customer stuff.
> >>>>
> >>>
> >>> for the speedtest servers, it should be set large, the purpose is to
> >>> test the quality of the customer stuff, so you don't want to do anything on
> >>> your end that papers over the problem, only to have the customer think
> >>> things are good and experience problems when connecting to another server
> >>> that doesn't implement work-arounds.
> >>>
> >>
> >> Just after hitting send it occured to me that it may be the right thing
> >> to have the server that's being hit by the test play with these settings.
> >> If the user works well at lower settings, but has problems at higher
> >> settings, the point where they start having problems may be useful to know.
> >>
> >> David Lang
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >>
> >
>
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 6695 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 2:14 ` Simon Barber
@ 2015-04-22 2:56 ` jb
0 siblings, 0 replies; 183+ messages in thread
From: jb @ 2015-04-22 2:56 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 4949 bytes --]
That makes sense. Ok.
On Wed, Apr 22, 2015 at 12:14 PM, Simon Barber <simon@superduper.net> wrote:
> If you set the window only a little bit larger than the actual BDP of
> the link then there will only be a little bit of data to fill buffer, so
> given large buffers it will take many connections to overflow the buffer.
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
> On April 21, 2015 4:18:10 PM jb <justin@dslr.net> wrote:
>
>> Regarding the low TCP RWIN max setting, and smoothness.
>>
>> One remark up-thread still bothers me. It was pointed out (and it makes
>> sense to me) that if you set a low TCP max rwin it is per stream, but if
>> you do multiple streams you are still going to rush the soho buffer.
>>
>> However my observation with a low server rwin max was that the smooth
>> upload graph was the same whether I did 1 upload stream or 6 upload
>> streams, or apparently any number.
>> I would have thought that with 6 streams, the PC is going to try to flood
>> 6x as much data as 1 stream, and this would put you back to square one.
>> However this was not what happened. It was puzzling that no matter what,
>> one setting server side got rid of the chop.
>> Anyone got any plausible explanations for this ?
>>
>> if not, I'll run some more tests with 1, 6 and 12, to a low rwin server,
>> and post the graphs to the list. I might also have to start to graph the
>> interface traffic on a sub-second level, rather than the browser traffic,
>> to make sure the browser isn't lying about the stalls and chop.
>>
>> This 7800N has setting for priority of traffic, and utilisation (as a
>> percentage). Utilisation % didn't help, but priority helped. Making web low
>> priority and SSH high priority smoothed things out a lot without changing
>> the speed. Perhaps "low" priority means it isn't so eager to fill its
>> buffers..
>>
>> thanks
>>
>>
>> On Wed, Apr 22, 2015 at 8:13 AM, jb <justin@dslr.net> wrote:
>>
>>> Today I've switched it back to large receive window max.
>>>
>>> The customer base is everything from GPRS to gigabit. But I know from
>>> experience that if a test doesn't flatten someones gigabit connection they
>>> will immediately assume "oh congested servers, insufficient capacity" and
>>> the early adopters of fiber to the home and faster cable products are the
>>> most visible in tech forums and so on.
>>>
>>> It would be interesting to set one or a few servers with a small receive
>>> window, take them from the pool, and allow an option to select those,
>>> otherwise they would not participate in any default run. Then as you point
>>> out, the test can suggest trying those as an option for results with
>>> chaotic upload speeds and probable bloat. The person would notice the
>>> beauty of the more intimate connection between their kernel and a server,
>>> and work harder to eliminate the problematic equipment. Or. They'd stop
>>> telling me the test was bugged.
>>>
>>> thanks
>>>
>>>
>>> On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
>>>
>>>> On Tue, 21 Apr 2015, David Lang wrote:
>>>>
>>>> On Tue, 21 Apr 2015, David Lang wrote:
>>>>>
>>>>> I suspect you guys are going to say the server should be left with a
>>>>>>> large
>>>>>>> max receive window.. and let people complain to find out what their
>>>>>>> issue
>>>>>>> is.
>>>>>>>
>>>>>>
>>>>>> what is your customer base? how important is it to provide faster
>>>>>> service to teh fiber users? Are they transferring ISO images so the
>>>>>> difference is significant to them? or are they downloading web pages where
>>>>>> it's the difference between a half second and a quarter second? remember
>>>>>> that you are seeing this on the upload side.
>>>>>>
>>>>>> in the long run, fixing the problem at the client side is the best
>>>>>> thing to do, but in the meantime, you sometimes have to work around broken
>>>>>> customer stuff.
>>>>>>
>>>>>
>>>>> for the speedtest servers, it should be set large, the purpose is to
>>>>> test the quality of the customer stuff, so you don't want to do anything on
>>>>> your end that papers over the problem, only to have the customer think
>>>>> things are good and experience problems when connecting to another server
>>>>> that doesn't implement work-arounds.
>>>>>
>>>>
>>>> Just after hitting send it occured to me that it may be the right thing
>>>> to have the server that's being hit by the test play with these settings.
>>>> If the user works well at lower settings, but has problems at higher
>>>> settings, the point where they start having problems may be useful to know.
>>>>
>>>> David Lang
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
[-- Attachment #2: Type: text/html, Size: 7278 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 10:35 ` jb
@ 2015-04-22 4:04 ` Steinar H. Gunderson
2015-04-22 4:28 ` Eric Dumazet
0 siblings, 1 reply; 183+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22 4:04 UTC (permalink / raw)
To: jb; +Cc: bloat
On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> for the application layer, they do not change the TCP window size either
> send or receive.
I haven't gone into the code and checked, but from practical experience I
think you're wrong. I've certainly seen positive effects (and verified with
tcpdump) from reducing SO_SNDBUF on a server that should have no problems at
all sending data really fast to the kernel.
Then again, this kind of manual tuning trickery became obsolete for me the
moment sch_fq became available.
/* Steinar */
--
Homepage: http://www.sesse.net/
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 4:04 ` Steinar H. Gunderson
@ 2015-04-22 4:28 ` Eric Dumazet
2015-04-22 8:51 ` [Bloat] RE : " luca.muscariello
0 siblings, 1 reply; 183+ messages in thread
From: Eric Dumazet @ 2015-04-22 4:28 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: bloat
On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> > for the application layer, they do not change the TCP window size either
> > send or receive.
>
> I haven't gone into the code and checked, but from practical experience I
> think you're wrong. I've certainly seen positive effects (and verified with
> tcpdump) from reducing SO_SNDBUF on a server that should have no problems at
> all sending data really fast to the kernel.
Well, using SO_SNDBUF disables TCP autotuning.
Doing so :
Pros:
autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
With small enough SO_SNDBUF, you limit this cwnd increase.
Cons:
Long rtt sessions might not have enough packets to utilize bandwidth.
>
> Then again, this kind of manual tuning trickery got obsolete for me the
> moment sch_fq became available.
Note that I suppose the SO_MAX_PACING_RATE setting is really helping you.
Without it, TCP cubic is still allowed to 'fill the pipes' until packet
losses.
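(For completeness, the knob in question is the SO_MAX_PACING_RATE socket
option; a minimal sketch, with the rate purely illustrative. Note the actual
pacing is done by sch_fq, so it needs fq installed on the egress interface.)

/* Sketch: cap a socket's pacing rate (bytes per second). */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47   /* Linux value; older libc headers lack it */
#endif

int cap_pacing(int fd, unsigned int bytes_per_sec)
{
    /* e.g. 625000 bytes/s is roughly a 5 Mbit/s uplink */
    if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                   &bytes_per_sec, sizeof(bytes_per_sec)) < 0) {
        perror("setsockopt(SO_MAX_PACING_RATE)");
        return -1;
    }
    return 0;
}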
^ permalink raw reply [flat|nested] 183+ messages in thread
* [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 4:28 ` Eric Dumazet
@ 2015-04-22 8:51 ` luca.muscariello
2015-04-22 12:02 ` jb
2015-04-22 13:50 ` [Bloat] " Eric Dumazet
0 siblings, 2 replies; 183+ messages in thread
From: luca.muscariello @ 2015-04-22 8:51 UTC (permalink / raw)
To: Eric Dumazet, Steinar H. Gunderson; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2833 bytes --]
cons: large BDP in general would be negatively affected.
A Gbps access vs a DSL access to the same server would require very different tuning.
sch_fq would probably make the whole thing less of a problem.
But running it in a VM does not sound like a good idea and would not reflect usual server settings, BTW.
-------- Original message --------
From: Eric Dumazet
Date: 2015/04/22 12:29 (GMT+08:00)
To: "Steinar H. Gunderson"
Cc: bloat
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket buffers
> > for the application layer, they do not change the TCP window size either
> > send or receive.
>
> I haven't gone into the code and checked, but from practical experience I
> think you're wrong. I've certainly seen positive effects (and verified with
> tcpdump) from reducing SO_SNDBUF on a server that should have no problems at
> all sending data really fast to the kernel.
Well, using SO_SNDBUF disables TCP autotuning.
Doing so :
Pros:
autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
With small enough SO_SNDBUF, you limit this cwnd increase.
Cons:
Long rtt sessions might not have enough packets to utilize bandwidth.
>
> Then again, this kind of manual tuning trickery got obsolete for me the
> moment sch_fq became available.
Note that I suppose the SO_MAX_PACING rate is really helping you.
Without it, TCP cubic is still allowed to 'fill the pipes' until packet
losses.
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
_________________________________________________________________________________________________________________________
This message and its attachments may contain confidential or privileged information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
Thank you.
[-- Attachment #2: Type: text/html, Size: 3848 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 8:51 ` [Bloat] RE : " luca.muscariello
@ 2015-04-22 12:02 ` jb
2015-04-22 13:08 ` Jonathan Morton
[not found] ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
2015-04-22 13:50 ` [Bloat] " Eric Dumazet
1 sibling, 2 replies; 183+ messages in thread
From: jb @ 2015-04-22 12:02 UTC (permalink / raw)
Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 4281 bytes --]
So I found a page that explains SO_RCVBUF is allegedly the most poorly
implemented on Linux, vs Windows or OSX, mainly because the value you start
with is the cap: you can go lower, but not higher, and data is needed to
shrink the window to a new setting, instead of slamming it shut by
setsockopt.
Nevertheless! it is good enough that if I set it on tcp connect I can at
least offer the option to run pretty much the same test to the same set of
servers, but with a selectable cap on sender rate.
By the way, is there a selectable congestion control algorithm available
that is sensitive to an RTT that increases dramatically? In other words,
one that does the best at avoiding buffer-size issues on the remote side of
the slowest link? I know heuristics always sound better in theory than in
practice, but surely if an algorithm picks up the idle RTT of a link, it can
then pump up the window until an RTT increase indicates it should back off,
instead of (encouraged by no loss) assuming the end-user must be
accelerating towards the moon...
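(By "selectable" I mean per connection: as far as I can tell, Linux lets a
server pick the algorithm per socket via TCP_CONGESTION, along these lines,
as long as the corresponding module is loaded on the box:)

/* Sketch of per-socket congestion control selection on Linux. The name
 * must appear in /proc/sys/net/ipv4/tcp_allowed_congestion_control. */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_CONGESTION
#define TCP_CONGESTION 13        /* Linux; some older headers lack it */
#endif

int set_cc(int fd, const char *name)   /* e.g. "vegas" or "westwood" */
{
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, name, strlen(name)) < 0) {
        perror("setsockopt(TCP_CONGESTION)");
        return -1;
    }
    return 0;
}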
thanks.
On Wed, Apr 22, 2015 at 6:51 PM, <luca.muscariello@orange.com> wrote:
> cons: large BDP in general would be negatively affected.
> A Gbps access vs a DSL access to the same server would require very
> different tuning.
>
> sch_fq would probably make the whole thing less of a problem.
> But running it in a VM does not sound a good idea and would not reflect
> usual servers setting BTW
>
>
>
>
>
>
>
>
>
> -------- Original message --------
> From: Eric Dumazet
> Date: 2015/04/22 12:29 (GMT+08:00)
> To: "Steinar H. Gunderson"
> Cc: bloat
> Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
>
> On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> > On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket
> buffers
> > > for the application layer, they do not change the TCP window size
> either
> > > send or receive.
> >
> > I haven't gone into the code and checked, but from practical experience I
> > think you're wrong. I've certainly seen positive effects (and verified
> with
> > tcpdump) from reducing SO_SNDBUF on a server that should have no
> problems at
> > all sending data really fast to the kernel.
>
> Well, using SO_SNDBUF disables TCP autotuning.
>
> Doing so :
>
> Pros:
>
> autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
> With small enough SO_SNDBUF, you limit this cwnd increase.
>
> Cons:
>
> Long rtt sessions might not have enough packets to utilize bandwidth.
>
>
> >
> > Then again, this kind of manual tuning trickery got obsolete for me the
> > moment sch_fq became available.
>
> Note that I suppose the SO_MAX_PACING rate is really helping you.
>
> Without it, TCP cubic is still allowed to 'fill the pipes' until packet
> losses.
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
> _________________________________________________________________________________________________________________________
>
> This message and its attachments may contain confidential or privileged information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
> Thank you.
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
[-- Attachment #2: Type: text/html, Size: 5461 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 12:02 ` jb
@ 2015-04-22 13:08 ` Jonathan Morton
[not found] ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
1 sibling, 0 replies; 183+ messages in thread
From: Jonathan Morton @ 2015-04-22 13:08 UTC (permalink / raw)
To: jb; +Cc: bloat
> On 22 Apr, 2015, at 15:02, jb <justin@dslr.net> wrote:
>
> ...data is needed to shrink the window to a new setting, instead of slamming it shut by setsockopt
I believe that is RFC-compliant behaviour; one is not supposed to renege on an advertised TCP receive window. So Linux holds the rwin pointer in place until the window has shrunk to the new setting.
> By the way, is there a selectable congestion control algorithm available that is sensitive to an RTT that increases dramatically?
Vegas and LEDBAT do this explicitly; Vegas is old, and LEDBAT isn’t yet upstream but can be built against an existing kernel. Some other TCPs incorporate RTT into their control law (eg. Westwood+, Illinois and Microsoft’s CompoundTCP), but won’t actually stop growing the congestion window based on that; Westwood+ uses RTT and bandwidth to determine what congestion window size to set *after* receiving a conventional congestion signal, while Illinois uses increasing RTT as a signal to *slow* the increase of cwnd, thus spending more time *near* the BDP.
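(For reference: on Linux these are selectable per socket, assuming the
corresponding module, e.g. tcp_vegas, is loaded and the name appears in
/proc/sys/net/ipv4/tcp_allowed_congestion_control; a minimal, hypothetical
sketch:)

    /* Sketch: ask the kernel to use a specific congestion control algorithm
     * for one socket, e.g. "vegas", "westwood" or "illinois". Returns -1 and
     * sets errno if the algorithm is unknown or not permitted. */
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/socket.h>

    int use_congestion_control(int fd, const char *algo)
    {
        return setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo));
    }

The system-wide default is the net.ipv4.tcp_congestion_control sysctl.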
Both Vegas and LEDBAT, however, compete very unfavourably with conventional senders (for Vegas, there’s a contemporary paper showing this against Reno) sharing the same link, which is why they aren’t widely deployed. LEDBAT is however used as part of uTP (ie. BitTorrent’s UDP protocol) specifically for its “background traffic” properties.
Westwood+ does compete fairly with conventional TCPs and works well with AQM, since it avoids the sawtooth of under-utilisation that Reno shows, but it has a tendency to underestimate the cwnd on exiting the slow-start phase. On extreme LFNs, this can result in an extremely long time to converge on the correct BDP.
Illinois is also potentially interesting, because it does make an effort to avoid filling buffers quite as quickly as most. By contrast, CUBIC sets its inflection point at the cwnd where the previous congestion signal was received.
CompoundTCP is described roughly as using a cwnd that is the sum of the results of running Reno and Vegas. So there is a region of operation where the Reno part is increasing its cwnd and Vegas is decreasing it at the same time, resulting in a roughly constant overall cwnd in the vicinity of the BDP. I don’t know offhand how well it works in practice.
The fact remains, though, that most servers use conventional TCPs, usually CUBIC (if Linux based) or Compound (if Microsoft).
One interesting theory is that it’s possible to detect whether FQ is in use on a link, by observing whether Vegas competes on equal terms with a conventional TCP or not.
- Jonathan Morton
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 8:51 ` [Bloat] RE : " luca.muscariello
2015-04-22 12:02 ` jb
@ 2015-04-22 13:50 ` Eric Dumazet
2015-04-22 14:09 ` Steinar H. Gunderson
2015-04-22 15:26 ` [Bloat] RE : " luca.muscariello
1 sibling, 2 replies; 183+ messages in thread
From: Eric Dumazet @ 2015-04-22 13:50 UTC (permalink / raw)
To: luca.muscariello; +Cc: bloat
On Wed, 2015-04-22 at 08:51 +0000, luca.muscariello@orange.com wrote:
> cons: large BDP in general would be negatively affected.
> A Gbps access vs a DSL access to the same server would require very
> different tuning.
>
Yep. This is what I mentioned with 'long rtt'. This was relative to BDP.
>
> sch_fq would probably make the whole thing less of a problem.
> But running it in a VM does not sound a good idea and would not
> reflect usual servers setting BTW
>
No idea why it should matter. Have you got some experimental data?
You know, 'usual servers' used to run pfifo_fast, they now run sch_fq.
(All Google fleet at least)
So this kind of argument does not sound like it is based on experiments.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 13:50 ` [Bloat] " Eric Dumazet
@ 2015-04-22 14:09 ` Steinar H. Gunderson
2015-04-22 15:26 ` [Bloat] RE : " luca.muscariello
1 sibling, 0 replies; 183+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22 14:09 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
On Wed, Apr 22, 2015 at 06:50:57AM -0700, Eric Dumazet wrote:
> You know, 'usual servers' used to run pfifo_fast, they now run sch_fq.
>
> (All Google fleet at least)
I think Google is a bit ahead of the curve here :-) Does any distribution
ship sch_fq by default yet?
/* Steinar */
--
Homepage: http://www.sesse.net/
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
[not found] ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
@ 2015-04-22 14:15 ` Simon Barber
0 siblings, 0 replies; 183+ messages in thread
From: Simon Barber @ 2015-04-22 14:15 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 4740 bytes --]
Yes - the classic one is TCP Vegas.
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
On April 22, 2015 5:03:26 AM jb <justin@dslr.net> wrote:
> So I find a page that explains SO_RCVBUF is allegedly the most poorly
> implemented on Linux, vs Windows or OSX, mainly because the one you start
> with is the cap, you can go lower, but not higher, and data is needed to
> shrink the window to a new setting, instead of slamming it shut by
> setsockopt
>
> Nevertheless! it is good enough that if I set it on tcp connect I can at
> least offer the option to run pretty much the same test to the same set of
> servers, but with a selectable cap on sender rate.
>
> By the way, is there a selectable congestion control algorithm available
> that is sensitive to an RTT that increases dramatically? in other words,
> one that does the best at avoiding buffer size issues on the remote side of
> the slowest link? I know heuristics always sound better in theory than
> practice but surely if an algorithm picks up the idle RTT of a link, it can
> then pump up the window until an RTT increase indicates it should back off,
> Instead of (encouraged by no loss) thinking the end-user must be
> accelerating towards the moon..
>
> thanks.
>
> On Wed, Apr 22, 2015 at 6:51 PM, <luca.muscariello@orange.com> wrote:
>
> > cons: large BDP in general would be negatively affected.
> > A Gbps access vs a DSL access to the same server would require very
> > different tuning.
> >
> > sch_fq would probably make the whole thing less of a problem.
> > But running it in a VM does not sound a good idea and would not reflect
> > usual servers setting BTW
> >
> > -------- Message d'origine --------
> > De : Eric Dumazet
> > Date :2015/04/22 12:29 (GMT+08:00)
> > À : "Steinar H. Gunderson"
> > Cc : bloat
> > Objet : Re: [Bloat] DSLReports Speed Test has latency measurement built-in
> >
> > On Wed, 2015-04-22 at 06:04 +0200, Steinar H. Gunderson wrote:
> > > On Tue, Apr 21, 2015 at 08:35:21PM +1000, jb wrote:
> > > > As I understand it (I thought) SO_SNDBUF and SO_RCVBUF are socket
> > buffers
> > > > for the application layer, they do not change the TCP window size
> > either
> > > > send or receive.
> > >
> > > I haven't gone into the code and checked, but from practical experience I
> > > think you're wrong. I've certainly seen positive effects (and verified
> > with
> > > tcpdump) from reducing SO_SNDBUF on a server that should have no
> > problems at
> > > all sending data really fast to the kernel.
> >
> > Well, using SO_SNDBUF disables TCP autotuning.
> >
> > Doing so :
> >
> > Pros:
> >
> > autotuning is known to enable TCP cubic to grow cwnd to bloat levels.
> > With small enough SO_SNDBUF, you limit this cwnd increase.
> >
> > Cons:
> >
> > Long rtt sessions might not have enough packets to utilize bandwidth.
> >
> >
> > >
> > > Then again, this kind of manual tuning trickery got obsolete for me the
> > > moment sch_fq became available.
> >
> > Note that I suppose the SO_MAX_PACING rate is really helping you.
> >
> > Without it, TCP cubic is still allowed to 'fill the pipes' until packet
> > losses.
> >
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> >
> >
>
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 6353 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-21 14:20 ` David Lang
2015-04-21 14:25 ` David Lang
@ 2015-04-22 14:32 ` Simon Barber
2015-04-22 17:35 ` David Lang
1 sibling, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-04-22 14:32 UTC (permalink / raw)
To: David Lang, jb; +Cc: bloat
The bumps are due to packet loss causing head-of-line blocking. Until the
lost packet is retransmitted, the receiver can't release any subsequently
received packets to the application, because of the requirement for in-order
delivery. If you counted received bytes with a packet counter, rather than
looking at the application level, you would be able to show that data was
being received smoothly (even though out of order).
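(For reference, a minimal, hypothetical sketch of such a packet-level counter
using libpcap; the interface name and port are placeholders, and it assumes
plain IPv4 over Ethernet with no VLAN tag:)

    /* Count TCP payload bytes per second as they arrive on the wire,
     * independent of TCP's in-order delivery to the application.
     * Build (roughly): gcc -o rxcount rxcount.c -lpcap ; run as root. */
    #include <pcap.h>
    #include <stdio.h>
    #include <time.h>

    static long long bytes_this_second;
    static time_t current_second;

    static void on_packet(u_char *user, const struct pcap_pkthdr *h, const u_char *p)
    {
        (void)user;
        if (h->caplen < 14 + 20 + 20)            /* Ethernet + min IP + min TCP */
            return;
        unsigned ip_hl   = (p[14] & 0x0f) * 4;              /* IP header length  */
        unsigned ip_len  = ((unsigned)p[16] << 8) | p[17];  /* IP total length   */
        unsigned tcp_hl  = (p[14 + ip_hl + 12] >> 4) * 4;   /* TCP header length */
        unsigned payload = ip_len - ip_hl - tcp_hl;         /* TCP payload bytes */

        time_t now = time(NULL);
        if (now != current_second) {
            if (current_second)
                printf("%lld bytes/s\n", bytes_this_second);
            current_second = now;
            bytes_this_second = 0;
        }
        bytes_this_second += payload;
    }

    int main(void)
    {
        char err[PCAP_ERRBUF_SIZE];
        pcap_t *pc = pcap_open_live("eth0", 96, 0, 1000, err);   /* placeholder NIC  */
        struct bpf_program prog;
        pcap_compile(pc, &prog, "tcp and dst port 8080", 1,      /* placeholder port */
                     PCAP_NETMASK_UNKNOWN);
        pcap_setfilter(pc, &prog);
        pcap_loop(pc, -1, on_packet, NULL);                      /* run until killed */
        return 0;
    }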
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
> On Tue, 21 Apr 2015, jb wrote:
>
> >> the receiver advertizes a large receive window, so the sender doesn't
> > pause > until there is that much data outstanding, or they get a timeout of
> > a packet as > a signal to slow down.
> >
> >> and because you have a gig-E link locally, your machine generates traffic
> > \
> >> very rapidly, until all that data is 'in flight'. but it's really sitting
> > in the buffer of
> >> router trying to get through.
> >
> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
> > upload graphs by keeping the advertised receive window on the server capped
> > low, however then, paradoxically, there is no more sign of buffer bloat in
> > the result, at least for the upload phase.
> >
> > (The graph under the upload/download graphs for my results shows almost no
> > latency increase during the upload phase, now).
> >
> > Or, I can crank it back open again, serving people with fiber connections
> > without having to run heaps of streams in parallel -- and then have people
> > complain that the upload result is inefficient, or bumpy, vs what they
> > expect.
>
> well, many people expect it to be bumpy (I've heard ISPs explain to customers
> that when a link is full it is bumpy, that's just the way things work)
>
> > And I can't offer an option, because the server receive window (I think)
> > cannot be set on a case by case basis. You set it for all TCP and forget it.
>
> I think you are right
>
> > I suspect you guys are going to say the server should be left with a large
> > max receive window.. and let people complain to find out what their issue
> > is.
>
> what is your customer base? how important is it to provide faster service
> to teh
> fiber users? Are they transferring ISO images so the difference is significant
> to them? or are they downloading web pages where it's the difference between a
> half second and a quarter second? remember that you are seeing this on the
> upload side.
>
> in the long run, fixing the problem at the client side is the best thing to do,
> but in the meantime, you sometimes have to work around broken customer stuff.
>
> > BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
> > believe it is a linux based (judging from the system log) device.
>
> if it's linux based, it would be interesting to learn what sort of settings it
> has. It may be one of the rarer devices that has something in place already to
> do active queue management.
>
> David Lang
>
> > cheers,
> > -Justin
> >
> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
> >
> >> On Tue, 21 Apr 2015, jb wrote:
> >>
> >> I've discovered something perhaps you guys can explain it better or shed
> >>> some light.
> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
> >>> tuning.
> >>>
> >>> Attached is two pictures of my upload to New York speed test server with 1
> >>> stream.
> >>> It doesn't make any difference if it is 1 stream or 8 streams, the picture
> >>> and behaviour remains the same.
> >>> I am 200ms from new york so it qualifies as a fairly long (but not very
> >>> fat) pipe.
> >>>
> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535' (on
> >>> the server)
> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864' (on
> >>> the server)
> >>>
> >>> It actually doesn't matter what that last huge number is, once it goes
> >>> much
> >>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
> >>> upload speed.
> >>>
> >>> Now as I understand this setting, it is the tcp receive window that Linux
> >>> advertises, and the last number sets the maximum size it can get to (for
> >>> one TCP stream).
> >>>
> >>> For users with very fast upload speeds, they do not see an ugly bumpy
> >>> upload graph, it is smooth and sustained.
> >>> But for the majority of users (like me) with uploads less than 5 to
> >>> 10mbit,
> >>> we frequently see the ugly graph.
> >>>
> >>> The second tcp_rmem setting is how I have been running the speed test
> >>> servers.
> >>>
> >>> Up to now I thought this was just the distance of the speedtest from the
> >>> interface: perhaps the browser was buffering a lot, and didn't feed back
> >>> progress but now I realise the bumpy one is actually being influenced by
> >>> the server receive window.
> >>>
> >>> I guess my question is this: Why does ALLOWING a large receive window
> >>> appear to encourage problems with upload smoothness??
> >>>
> >>> This implies that setting the receive window should be done on a
> >>> connection
> >>> by connection basis: small for slow connections, large, for high speed,
> >>> long distance connections.
> >>>
> >>
> >> This is classic bufferbloat
> >>
> >> the receiver advertizes a large receive window, so the sender doesn't
> >> pause until there is that much data outstanding, or they get a timeout of a
> >> packet as a signal to slow down.
> >>
> >> and because you have a gig-E link locally, your machine generates traffic
> >> very rapidly, until all that data is 'in flight'. but it's really sitting
> >> in the buffer of a router trying to get through.
> >>
> >> then when a packet times out, the sender slows down a smidge and
> >> retransmits it. But the old packet is still sitting in a queue, eating
> >> bandwidth. the packets behind it are also going to timeout and be
> >> retransmitted before your first retransmitted packet gets through, so you
> >> have a large slug of data that's being retransmitted, and the first of the
> >> replacement data can't get through until the last of the old (timed out)
> >> data is transmitted.
> >>
> >> then when data starts flowing again, the sender again tries to fill up the
> >> window with data in flight.
> >>
> >> In addition, if I cap it to 65k, for reasons of smoothness,
> >>> that means the bandwidth delay product will keep maximum speed per upload
> >>> stream quite low. So a symmetric or gigabit connection is going to need a
> >>> ton of parallel streams to see full speed.
> >>>
> >>> Most puzzling is why would anything special be required on the Client -->
> >>> Server side of the equation
> >>> but nothing much appears wrong with the Server --> Client side, whether
> >>> speeds are very low (GPRS) or very high (gigabit).
> >>>
> >>
> >> but what window sizes are these clients advertising?
> >>
> >>
> >> Note that also I am not yet sure if smoothness == better throughput. I
> >>> have
> >>> noticed upload speeds for some people often being under their claimed sync
> >>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
> >>> inefficiency. Maybe.
> >>>
> >>
> >> If you were to do a packet capture on the server side, you would see that
> >> you have a bunch of packets that are arriving multiple times, but the first
> >> time "does't count" because the replacement is already on the way.
> >>
> >> so your overall throughput is lower for two reasons
> >>
> >> 1. it's bursty, and there are times when the connection actually is idle
> >> (after you have a lot of timed out packets, the sender needs to ramp up
> >> it's speed again)
> >>
> >> 2. you are sending some packets multiple times, consuming more total
> >> bandwidth for the same 'goodput' (effective throughput)
> >>
> >> David Lang
> >>
> >>
> >> help!
> >>>
> >>>
> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
> >>> wrote:
> >>>
> >>> One thing users understand is slow web access. Perhaps translating the
> >>>> latency measurement into 'a typical web page will take X seconds longer
> >>>> to
> >>>> load', or even stating the impact as 'this latency causes a typical web
> >>>> page to load slower, as if your connection was only YY% of the measured
> >>>> speed.'
> >>>>
> >>>> Simon
> >>>>
> >>>> Sent with AquaMail for Android
> >>>> http://www.aqua-mail.com
> >>>>
> >>>>
> >>>>
> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
> >>>> wrote:
> >>>>
> >>>>>>>> Frequency readouts are probably more accessible to the latter.
> >>>>
> >>>>>
> >>>>>>>> The frequency domain more accessible to laypersons? I have my
> >>>>>>>>
> >>>>>>> doubts ;)
> >>>>>
> >>>>>>
> >>>>>>> Gamers, at least, are familiar with “frames per second” and how that
> >>>>>>>
> >>>>>> corresponds to their monitor’s refresh rate.
> >>>>>
> >>>>>>
> >>>>>> I am sure they can easily transform back into time domain to get
> >>>>>>
> >>>>> the frame period ;) . I am partly kidding, I think your idea is great
> >>>>> in
> >>>>> that it is a truly positive value which could lend itself to being used
> >>>>> in
> >>>>> ISP/router manufacturer advertising, and hence might work in the real
> >>>>> work;
> >>>>> on the other hand I like to keep data as “raw” as possible (not that
> >>>>> ^(-1)
> >>>>> is a transformation worthy of being called data massage).
> >>>>>
> >>>>>>
> >>>>>> The desirable range of latencies, when converted to Hz, happens to be
> >>>>>>>
> >>>>>> roughly the same as the range of desirable frame rates.
> >>>>>
> >>>>>>
> >>>>>> Just to play devils advocate, the interesting part is time or
> >>>>>>
> >>>>> saving time so seconds or milliseconds are also intuitively
> >>>>> understandable
> >>>>> and can be easily added ;)
> >>>>>
> >>>>> Such readouts are certainly interesting to people like us. I have no
> >>>>> objection to them being reported alongside a frequency readout. But I
> >>>>> think most people are not interested in “time savings” measured in
> >>>>> milliseconds; they’re much more aware of the minute- and hour-level time
> >>>>> savings associated with greater bandwidth.
> >>>>>
> >>>>> - Jonathan Morton
> >>>>>
> >>>>> _______________________________________________
> >>>>> Bloat mailing list
> >>>>> Bloat@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>>
> >>>>>
> >>>>
> >>>> _______________________________________________
> >>>> Bloat mailing list
> >>>> Bloat@lists.bufferbloat.net
> >>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>
> >>>>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >>
> >
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 13:50 ` [Bloat] " Eric Dumazet
2015-04-22 14:09 ` Steinar H. Gunderson
@ 2015-04-22 15:26 ` luca.muscariello
2015-04-22 15:44 ` [Bloat] " Eric Dumazet
2015-04-22 15:59 ` [Bloat] RE : " Steinar H. Gunderson
1 sibling, 2 replies; 183+ messages in thread
From: luca.muscariello @ 2015-04-22 15:26 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2312 bytes --]
Do I need to read this as all Google servers == all servers :)
BTW if a paced flow from Google shares a bloated buffer with a non-paced flow from a non-Google server, doesn't this turn out to be a performance penalty for the paced flow?
fq_codel gives an incentive to do pacing, but if it's not deployed, what's the performance gain of using pacing?
-------- Message d'origine --------
De : Eric Dumazet
Date :2015/04/22 21:51 (GMT+08:00)
À : MUSCARIELLO Luca IMT/OLN
Cc : "Steinar H. Gunderson" , bloat
Objet : Re: [Bloat] DSLReports Speed Test has latency measurement built-in
On Wed, 2015-04-22 at 08:51 +0000, luca.muscariello@orange.com wrote:
> cons: large BDP in general would be negatively affected.
> A Gbps access vs a DSL access to the same server would require very
> different tuning.
>
Yep. This is what I mentioned with 'long rtt'. This was relative to BDP.
>
> sch_fq would probably make the whole thing less of a problem.
> But running it in a VM does not sound a good idea and would not
> reflect usual servers setting BTW
>
No idea why it should matter. Have you got some experimental data?
You know, 'usual servers' used to run pfifo_fast, they now run sch_fq.
(All Google fleet at least)
So this kind of argument does not sound like it is based on experiments.
[-- Attachment #2: Type: text/html, Size: 3063 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 15:26 ` [Bloat] RE : " luca.muscariello
@ 2015-04-22 15:44 ` Eric Dumazet
2015-04-22 16:35 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 15:59 ` [Bloat] RE : " Steinar H. Gunderson
1 sibling, 1 reply; 183+ messages in thread
From: Eric Dumazet @ 2015-04-22 15:44 UTC (permalink / raw)
To: luca.muscariello; +Cc: bloat
On Wed, 2015-04-22 at 15:26 +0000, luca.muscariello@orange.com wrote:
> Do I need to read this as all Google servers == all servers :)
>
>
Read again what I wrote. Don't play with my words.
> BTW if a paced flow from Google shares a bloated buffer with a non
> paced flow from a non Google server, doesn't this turn out to be a
> performance penalty for the paced flow?
>
>
What do you think? Do you think Google would still use sch_fq if this
was a potential problem?
> fq_codel gives incentives to do pacing but if it's not deployed what's
> the performance gain of using pacing?
1) fq_codel has nothing to do with pacing.
2) sch_fq doesn't depend on fq_codel or codel being used anywhere.
It seems you are quite confused, and unfortunately I won't take time to
explain anything.
Run experiments, then draw your own conclusions.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 15:26 ` [Bloat] RE : " luca.muscariello
2015-04-22 15:44 ` [Bloat] " Eric Dumazet
@ 2015-04-22 15:59 ` Steinar H. Gunderson
2015-04-22 16:16 ` Eric Dumazet
2015-04-22 16:19 ` Dave Taht
1 sibling, 2 replies; 183+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22 15:59 UTC (permalink / raw)
To: luca.muscariello; +Cc: bloat
On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
> BTW if a paced flow from Google shares a bloated buffer with a non paced
> flow from a non Google server, doesn't this turn out to be a performance
> penalty for the paced flow?
Nope. The paced flow puts less strain on the buffer (and hooray for that),
which is a win no matter if the buffer is contended or not.
> fq_codel gives incentives to do pacing but if it's not deployed what's the
> performance gain of using pacing?
fq_codel doesn't give any specific incentive to do pacing. In fact, if
absolutely all devices on your path would use fq_codel and have adequate
buffers, I believe pacing would be largely a no-op.
/* Steinar */
--
Homepage: http://www.sesse.net/
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 15:59 ` [Bloat] RE : " Steinar H. Gunderson
@ 2015-04-22 16:16 ` Eric Dumazet
2015-04-22 16:19 ` Dave Taht
1 sibling, 0 replies; 183+ messages in thread
From: Eric Dumazet @ 2015-04-22 16:16 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: bloat
On Wed, 2015-04-22 at 17:59 +0200, Steinar H. Gunderson wrote:
> On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
> > BTW if a paced flow from Google shares a bloated buffer with a non paced
> > flow from a non Google server, doesn't this turn out to be a performance
> > penalty for the paced flow?
>
> Nope. The paced flow puts less strain on the buffer (and hooray for that),
> which is a win no matter if the buffer is contended or not.
>
> > fq_codel gives incentives to do pacing but if it's not deployed what's the
> > performance gain of using pacing?
>
> fq_codel doesn't give any specific incentive to do pacing. In fact, if
> absolutely all devices on your path would use fq_codel and have adequate
> buffers, I believe pacing would be largely a no-op.
While this might be true for stationary flows (ACK driven, no pacing is
enforced in sch_fq), sch_fq/pacing is still nice after an idle period.
Say a flow delivers chunks of data. With pacing, you no longer have to
slow start after idle.
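(For reference, the stock behaviour being referred to is the congestion
window restart after an idle period, which the
net.ipv4.tcp_slow_start_after_idle sysctl controls.)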
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 15:59 ` [Bloat] RE : " Steinar H. Gunderson
2015-04-22 16:16 ` Eric Dumazet
@ 2015-04-22 16:19 ` Dave Taht
2015-04-22 17:15 ` Rick Jones
1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-22 16:19 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: bloat
On Wed, Apr 22, 2015 at 8:59 AM, Steinar H. Gunderson
<sgunderson@bigfoot.com> wrote:
> On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
>> BTW if a paced flow from Google shares a bloated buffer with a non paced
>> flow from a non Google server, doesn't this turn out to be a performance
>> penalty for the paced flow?
>
> Nope. The paced flow puts less strain on the buffer (and hooray for that),
> which is a win no matter if the buffer is contended or not.
I just posted some test results for 450 simultaneous flows on a new thread.
sch_fq has a fixed per-flow packet limit of 100 packets, which shows up here.
Cake did surprisingly well; I have no idea why. I suspect my kernel is broken,
actually. I am getting on a plane in a bit, and have done too much work this
"vacation" already.
Has anyone added pacing to netperf yet? (I can do so, but would need
guidance as to what getopt option to add)
>> fq_codel gives incentives to do pacing but if it's not deployed what's the
>> performance gain of using pacing?
>
> fq_codel doesn't give any specific incentive to do pacing.
Concur, except that in the case where there is no queue for that flow,
fq_codel gives a boost.
> In fact, if
> absolutely all devices on your path would use fq_codel and have adequate
> buffers, I believe pacing would be largely a no-op.
Concur.
> /* Steinar */
> --
> Homepage: http://www.sesse.net/
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 15:44 ` [Bloat] " Eric Dumazet
@ 2015-04-22 16:35 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 17:16 ` Eric Dumazet
0 siblings, 1 reply; 183+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 16:35 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
On 04/22/2015 05:44 PM, Eric Dumazet wrote:
> On Wed, 2015-04-22 at 15:26 +0000, luca.muscariello@orange.com wrote:
>> Do I need to read this as all Google servers == all servers :)
>>
>>
> Read again what I wrote. Don't play with my words.
Let the stupid guy ask questions.
In the worst case don't answer, but there's no reason to get nervous.
>
>> BTW if a paced flow from Google shares a bloated buffer with a non
>> paced flow from a non Google server, doesn't this turn out to be a
>> performance penalty for the paced flow?
>>
>>
> What do you think ? Do you think Google would still use sch_fq if this
> was a potential problem ?
Frankly, I do not understand this statement, as it seems you are telling me
that it must be the right choice because Google does it.
I believe that Google does it for technical reasons, and those reasons would
be part of your possible answers. You have the right not to share them with
the list, of course.
>
>> fq_codel gives incentives to do pacing but if it's not deployed what's
>> the performance gain of using pacing?
> 1) fq_codel has nothing to do with pacing.
FQ gives you flow isolation.
Extreme example: if you share the link with a flow that saturates the
buffer (for whatever reason), flow isolation gives you the freedom to do
whatever works best for your application.
Without flow isolation, how can you benefit from the good features of
pacing if the buffer gets overwhelmed by a competing flow?
This is what I meant by an incentive to do pacing in the presence of flow
isolation: you get rewarded if the sender behaves better, no matter what
others do.
>
> 2) sch_fq doesn't depend on fq_codel or codel being used anywhere.
that's clear to me.
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 16:19 ` Dave Taht
@ 2015-04-22 17:15 ` Rick Jones
0 siblings, 0 replies; 183+ messages in thread
From: Rick Jones @ 2015-04-22 17:15 UTC (permalink / raw)
To: bloat
On 04/22/2015 09:19 AM, Dave Taht wrote:
>
> Has anyone added pacing to netperf yet? (I can do so, but would need
> guidance as to what getopt option to add)
./configure --enable-intervals
recompile netperf and then you can use:
netperf ... -b <NumSendsInBurst> -w <InterBurstInterval>
If you want to be able to specify an interval shorter than the minimum
time for the itimer, you need to add --enable-spin to the ./configure;
netperf will burn CPU time like there's no tomorrow, but you'll get
finer control over the burst interval.
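(Side note, not a netperf feature: on Linux 3.13 and later there is also the
per-socket SO_MAX_PACING_RATE option mentioned earlier in the thread, which
sch_fq enforces on the sending host; a hypothetical sketch, with an
illustrative rate:)

    /* Sketch: cap one socket's send rate to bytes_per_sec; the value shown
     * (1250000 B/s ~= 10 Mbit/s) is only an example. Requires a kernel with
     * SO_MAX_PACING_RATE (3.13+) and sch_fq on the egress interface. */
    #include <sys/socket.h>

    #ifndef SO_MAX_PACING_RATE
    #define SO_MAX_PACING_RATE 47   /* value from asm-generic/socket.h */
    #endif

    int cap_pacing_rate(int fd, unsigned int bytes_per_sec /* e.g. 1250000 */)
    {
        return setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                          &bytes_per_sec, sizeof(bytes_per_sec));
    }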
happy benchmarking,
rick
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 16:35 ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 17:16 ` Eric Dumazet
2015-04-22 17:24 ` Steinar H. Gunderson
2015-04-22 17:28 ` MUSCARIELLO Luca IMT/OLN
0 siblings, 2 replies; 183+ messages in thread
From: Eric Dumazet @ 2015-04-22 17:16 UTC (permalink / raw)
To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat
On Wed, 2015-04-22 at 18:35 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> FQ gives you flow isolation.
So does fq_codel.
sch_fq adds *pacing*, which in itself has benefits, regardless of fair
queuing: smaller bursts, fewer self-inflicted drops.
If flows are competing, that is the role of the congestion control module,
not of packet schedulers / AQM.
Packet schedulers help to produce smaller bursts, and help CC modules to
see a better signal (packet drops or RTT variations).
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 17:16 ` Eric Dumazet
@ 2015-04-22 17:24 ` Steinar H. Gunderson
2015-04-22 17:28 ` MUSCARIELLO Luca IMT/OLN
1 sibling, 0 replies; 183+ messages in thread
From: Steinar H. Gunderson @ 2015-04-22 17:24 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
On Wed, Apr 22, 2015 at 10:16:19AM -0700, Eric Dumazet wrote:
> sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> queues : Smaller bursts, less self inflicted drops.
Somehow I think sch_fq should just have been named sch_pacing :-)
/* Steinar */
--
Homepage: http://www.sesse.net/
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 17:16 ` Eric Dumazet
2015-04-22 17:24 ` Steinar H. Gunderson
@ 2015-04-22 17:28 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 17:45 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 18:22 ` Eric Dumazet
1 sibling, 2 replies; 183+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 17:28 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
On 04/22/2015 07:16 PM, Eric Dumazet wrote:
> On Wed, 2015-04-22 at 18:35 +0200, MUSCARIELLO Luca IMT/OLN wrote:
>
>> FQ gives you flow isolation.
> So does fq_codel.
Yes, the FQ part of fq_codel; that's what I meant. Not the FQ part of
sch_fq.
>
> sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> queues : Smaller bursts, less self inflicted drops.
This I understand. But it can't protect against drops that are not self-inflicted.
>
> If flows are competing, this is the role of Congestion Control module,
> not packet schedulers / AQM.
Exactly. Take two identical CC modules competing on the same link, one with
pacing and the other without.
The latter will have a negative impact on the former under FIFO, but not
under FQ (fq_codel, to clarify).
And that's my incentive argument, which comes from the flow isolation
feature of fq_codel.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 14:32 ` Simon Barber
@ 2015-04-22 17:35 ` David Lang
2015-04-23 1:37 ` Simon Barber
0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2015-04-22 17:35 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 11865 bytes --]
Data that's received and not used doesn't really matter (a tree falls in the
woods type of thing).
The head of line blocking can cause a chunk of packets to be retransmitted, even
though the receiving machine got them the first time. So looking at the received
bytes gives you a false picture of what is going on.
David Lang
On Wed, 22 Apr 2015, Simon Barber wrote:
> The bumps are due to packet loss causing head of line blocking. Until the
> lost packet is retransmitted the receiver can't release any subsequent
> received packets to the application due to the requirement for in order
> delivery. If you counted received bytes with a packet counter rather than
> looking at application level you would be able to illustrate that data was
> being received smoothly (even though out of order).
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
> On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
>
>> On Tue, 21 Apr 2015, jb wrote:
>>
>> >> the receiver advertizes a large receive window, so the sender doesn't
>> > pause > until there is that much data outstanding, or they get a timeout
>> of
>> > a packet as > a signal to slow down.
>> >
>> >> and because you have a gig-E link locally, your machine generates
>> traffic
>> > \
>> >> very rapidly, until all that data is 'in flight'. but it's really
>> sitting
>> > in the buffer of
>> >> router trying to get through.
>> >
>> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
>> > upload graphs by keeping the advertised receive window on the server
>> capped
>> > low, however then, paradoxically, there is no more sign of buffer bloat
>> in
>> > the result, at least for the upload phase.
>> >
>> > (The graph under the upload/download graphs for my results shows almost
>> no
>> > latency increase during the upload phase, now).
>> >
>> > Or, I can crank it back open again, serving people with fiber connections
>> > without having to run heaps of streams in parallel -- and then have
>> people
>> > complain that the upload result is inefficient, or bumpy, vs what they
>> > expect.
>>
>> well, many people expect it to be bumpy (I've heard ISPs explain to
>> customers
>> that when a link is full it is bumpy, that's just the way things work)
>>
>> > And I can't offer an option, because the server receive window (I think)
>> > cannot be set on a case by case basis. You set it for all TCP and forget
>> it.
>>
>> I think you are right
>>
>> > I suspect you guys are going to say the server should be left with a
>> large
>> > max receive window.. and let people complain to find out what their issue
>> > is.
>>
>> what is your customer base? how important is it to provide faster service
>> to teh
>> fiber users? Are they transferring ISO images so the difference is
>> significant
>> to them? or are they downloading web pages where it's the difference
>> between a
>> half second and a quarter second? remember that you are seeing this on the
>> upload side.
>>
>> in the long run, fixing the problem at the client side is the best thing to
>> do,
>> but in the meantime, you sometimes have to work around broken customer
>> stuff.
>>
>> > BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
>> > believe it is a linux based (judging from the system log) device.
>>
>> if it's linux based, it would be interesting to learn what sort of settings
>> it
>> has. It may be one of the rarer devices that has something in place already
>> to
>> do active queue management.
>>
>> David Lang
>>
>> > cheers,
>> > -Justin
>> >
>> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
>> >
>> >> On Tue, 21 Apr 2015, jb wrote:
>> >>
>> >> I've discovered something perhaps you guys can explain it better or
>> shed
>> >>> some light.
>> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
>> >>> tuning.
>> >>>
>> >>> Attached is two pictures of my upload to New York speed test server
>> with 1
>> >>> stream.
>> >>> It doesn't make any difference if it is 1 stream or 8 streams, the
>> picture
>> >>> and behaviour remains the same.
>> >>> I am 200ms from new york so it qualifies as a fairly long (but not very
>> >>> fat) pipe.
>> >>>
>> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535'
>> (on
>> >>> the server)
>> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864'
>> (on
>> >>> the server)
>> >>>
>> >>> It actually doesn't matter what that last huge number is, once it goes
>> >>> much
>> >>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
>> >>> upload speed.
>> >>>
>> >>> Now as I understand this setting, it is the tcp receive window that
>> Linux
>> >>> advertises, and the last number sets the maximum size it can get to
>> (for
>> >>> one TCP stream).
>> >>>
>> >>> For users with very fast upload speeds, they do not see an ugly bumpy
>> >>> upload graph, it is smooth and sustained.
>> >>> But for the majority of users (like me) with uploads less than 5 to
>> >>> 10mbit,
>> >>> we frequently see the ugly graph.
>> >>>
>> >>> The second tcp_rmem setting is how I have been running the speed test
>> >>> servers.
>> >>>
>> >>> Up to now I thought this was just the distance of the speedtest from
>> the
>> >>> interface: perhaps the browser was buffering a lot, and didn't feed
>> back
>> >>> progress but now I realise the bumpy one is actually being influenced
>> by
>> >>> the server receive window.
>> >>>
>> >>> I guess my question is this: Why does ALLOWING a large receive window
>> >>> appear to encourage problems with upload smoothness??
>> >>>
>> >>> This implies that setting the receive window should be done on a
>> >>> connection
>> >>> by connection basis: small for slow connections, large, for high speed,
>> >>> long distance connections.
>> >>>
>> >>
>> >> This is classic bufferbloat
>> >>
>> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> pause until there is that much data outstanding, or they get a timeout
>> of a
>> >> packet as a signal to slow down.
>> >>
>> >> and because you have a gig-E link locally, your machine generates
>> traffic
>> >> very rapidly, until all that data is 'in flight'. but it's really
>> sitting
>> >> in the buffer of a router trying to get through.
>> >>
>> >> then when a packet times out, the sender slows down a smidge and
>> >> retransmits it. But the old packet is still sitting in a queue, eating
>> >> bandwidth. the packets behind it are also going to timeout and be
>> >> retransmitted before your first retransmitted packet gets through, so
>> you
>> >> have a large slug of data that's being retransmitted, and the first of
>> the
>> >> replacement data can't get through until the last of the old (timed out)
>> >> data is transmitted.
>> >>
>> >> then when data starts flowing again, the sender again tries to fill up
>> the
>> >> window with data in flight.
>> >>
>> >> In addition, if I cap it to 65k, for reasons of smoothness,
>> >>> that means the bandwidth delay product will keep maximum speed per
>> upload
>> >>> stream quite low. So a symmetric or gigabit connection is going to need
>> a
>> >>> ton of parallel streams to see full speed.
>> >>>
>> >>> Most puzzling is why would anything special be required on the Client
>> -->
>> >>> Server side of the equation
>> >>> but nothing much appears wrong with the Server --> Client side, whether
>> >>> speeds are very low (GPRS) or very high (gigabit).
>> >>>
>> >>
>> >> but what window sizes are these clients advertising?
>> >>
>> >>
>> >> Note that also I am not yet sure if smoothness == better throughput. I
>> >>> have
>> >>> noticed upload speeds for some people often being under their claimed
>> sync
>> >>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
>> >>> inefficiency. Maybe.
>> >>>
>> >>
>> >> If you were to do a packet capture on the server side, you would see
>> that
>> >> you have a bunch of packets that are arriving multiple times, but the
>> first
>> >> time "does't count" because the replacement is already on the way.
>> >>
>> >> so your overall throughput is lower for two reasons
>> >>
>> >> 1. it's bursty, and there are times when the connection actually is idle
>> >> (after you have a lot of timed out packets, the sender needs to ramp up
>> >> it's speed again)
>> >>
>> >> 2. you are sending some packets multiple times, consuming more total
>> >> bandwidth for the same 'goodput' (effective throughput)
>> >>
>> >> David Lang
>> >>
>> >>
>> >> help!
>> >>>
>> >>>
>> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
>> >>> wrote:
>> >>>
>> >>> One thing users understand is slow web access. Perhaps translating
>> the
>> >>>> latency measurement into 'a typical web page will take X seconds
>> longer
>> >>>> to
>> >>>> load', or even stating the impact as 'this latency causes a typical
>> web
>> >>>> page to load slower, as if your connection was only YY% of the
>> measured
>> >>>> speed.'
>> >>>>
>> >>>> Simon
>> >>>>
>> >>>> Sent with AquaMail for Android
>> >>>> http://www.aqua-mail.com
>> >>>>
>> >>>>
>> >>>>
>> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>> >>>> wrote:
>> >>>>
>> >>>>>>>> Frequency readouts are probably more accessible to the latter.
>> >>>>
>> >>>>>
>> >>>>>>>> The frequency domain more accessible to laypersons? I have my
>> >>>>>>>>
>> >>>>>>> doubts ;)
>> >>>>>
>> >>>>>>
>> >>>>>>> Gamers, at least, are familiar with “frames per second” and how
>> that
>> >>>>>>>
>> >>>>>> corresponds to their monitor’s refresh rate.
>> >>>>>
>> >>>>>>
>> >>>>>> I am sure they can easily transform back into time domain to
>> get
>> >>>>>>
>> >>>>> the frame period ;) . I am partly kidding, I think your idea is
>> great
>> >>>>> in
>> >>>>> that it is a truly positive value which could lend itself to being
>> used
>> >>>>> in
>> >>>>> ISP/router manufacturer advertising, and hence might work in the real
>> >>>>> work;
>> >>>>> on the other hand I like to keep data as “raw” as possible (not that
>> >>>>> ^(-1)
>> >>>>> is a transformation worthy of being called data massage).
>> >>>>>
>> >>>>>>
>> >>>>>> The desirable range of latencies, when converted to Hz, happens to
>> be
>> >>>>>>>
>> >>>>>> roughly the same as the range of desirable frame rates.
>> >>>>>
>> >>>>>>
>> >>>>>> Just to play devils advocate, the interesting part is time or
>> >>>>>>
>> >>>>> saving time so seconds or milliseconds are also intuitively
>> >>>>> understandable
>> >>>>> and can be easily added ;)
>> >>>>>
>> >>>>> Such readouts are certainly interesting to people like us. I have no
>> >>>>> objection to them being reported alongside a frequency readout. But
>> I
>> >>>>> think most people are not interested in “time savings” measured in
>> >>>>> milliseconds; they’re much more aware of the minute- and hour-level
>> time
>> >>>>> savings associated with greater bandwidth.
>> >>>>>
>> >>>>> - Jonathan Morton
>> >>>>>
>> >>>>> _______________________________________________
>> >>>>> Bloat mailing list
>> >>>>> Bloat@lists.bufferbloat.net
>> >>>>> https://lists.bufferbloat.net/listinfo/bloat
>> >>>>>
>> >>>>>
>> >>>>
>> >>>> _______________________________________________
>> >>>> Bloat mailing list
>> >>>> Bloat@lists.bufferbloat.net
>> >>>> https://lists.bufferbloat.net/listinfo/bloat
>> >>>>
>> >>>>
>> >> _______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >>
>> >>
>> >
>>
>>
>> ----------
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 17:28 ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 17:45 ` MUSCARIELLO Luca IMT/OLN
2015-04-23 5:27 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 18:22 ` Eric Dumazet
1 sibling, 1 reply; 183+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 17:45 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
I remember a paper by Stefan Savage from about 15 years ago where he
substantiates this in clearer terms.
If I find the paper I'll send the reference to the list.
On 04/22/2015 07:28 PM, MUSCARIELLO Luca IMT/OLN wrote:
> Exactly. Two same CC modules competing on the same link, one w pacing
> the other one w/o pacing.
> The latter will have negative impact on the former in FIFO. Not in FQ
> (fq_codel to clarify).
> And that's my incentive argument which comes from the flow isolation
> feature of FQ (_codel).
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 17:28 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 17:45 ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 18:22 ` Eric Dumazet
2015-04-22 18:39 ` [Bloat] Pacing --- was " MUSCARIELLO Luca IMT/OLN
1 sibling, 1 reply; 183+ messages in thread
From: Eric Dumazet @ 2015-04-22 18:22 UTC (permalink / raw)
To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat
On Wed, 2015-04-22 at 19:28 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> On 04/22/2015 07:16 PM, Eric Dumazet wrote:
> >
> > sch_fq adds *pacing*, which in itself has benefits, regardless of fair
> > queues : Smaller bursts, less self inflicted drops.
>
> This I understand. But it can't protect from non self inflicted drops.
It really does.
This is why we deployed sch_fq and let our competitors find this later.
>
> >
> > If flows are competing, this is the role of Congestion Control module,
> > not packet schedulers / AQM.
>
> Exactly. Two same CC modules competing on the same link, one w pacing
> the other one w/o pacing.
> The latter will have negative impact on the former in FIFO. Not in FQ
> (fq_codel to clarify).
Not on modern Linux kernels, thanks to TCP Small Queues.
> And that's my incentive argument which comes from the flow isolation
> feature of FQ (_codel).
fq_codel is not something you can deploy on backbone routers, for
known reasons.
sch_fq is something you can deploy on hosts, where the codel part is
irrelevant anyway (because of TCP Small Queues in modern Linux kernels).
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] Pacing --- was DSLReports Speed Test has latency measurement built-in
2015-04-22 18:22 ` Eric Dumazet
@ 2015-04-22 18:39 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 19:05 ` Jonathan Morton
0 siblings, 1 reply; 183+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-22 18:39 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
This is not clear to me in general.
I can understand that, for the first shot of IW >= 10, pacing is always a
winning strategy no matter what queuing system you have, because it reduces
the loss probability within that window. Still, fq_codel would reduce that
probability even more.
But for long flows that go far beyond that phase, it isn't clear to me why
the paced flow is not penalized by the non-paced flow under FIFO.
TCP will start filling the pipe in bursts at some point, and that would hurt.
Now, I forget how the sch_fq pacing rate is initialized so that it is
effective from the very first window.
On 04/22/2015 08:22 PM, Eric Dumazet wrote:
> It really does.
>
> This is why we deployed sch_fq and let our competitors find this later.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] Pacing --- was DSLReports Speed Test has latency measurement built-in
2015-04-22 18:39 ` [Bloat] Pacing --- was " MUSCARIELLO Luca IMT/OLN
@ 2015-04-22 19:05 ` Jonathan Morton
0 siblings, 0 replies; 183+ messages in thread
From: Jonathan Morton @ 2015-04-22 19:05 UTC (permalink / raw)
To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat
> On 22 Apr, 2015, at 21:39, MUSCARIELLO Luca IMT/OLN <luca.muscariello@orange.com> wrote:
>
> Now, I forgot how sch_fq pacing rate is initialized to be effective from the very first window.
IIRC, it’s basically a measurement of the RTT during the handshake, and you then pace to deliver the congestion window during that RTT (or, in practice, some large fraction of it). Subsequent changes in cwnd and RTT alter the pacing rate accordingly.
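(For reference, and as an assumption about the implementation of that era
rather than something stated in this thread: Linux's tcp_update_pacing_rate()
computes roughly pacing_rate = factor * cwnd * MSS / sRTT, with factor around
2 during slow start and around 1.2 afterwards, so the pacing rate tracks
cwnd/RTT with some headroom.)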
- Jonathan Morton
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 17:35 ` David Lang
@ 2015-04-23 1:37 ` Simon Barber
2015-04-24 16:54 ` David Lang
0 siblings, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-04-23 1:37 UTC (permalink / raw)
To: David Lang; +Cc: bloat
Does this happen even with SACK?
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
On April 22, 2015 10:36:11 AM David Lang <david@lang.hm> wrote:
> Data that's received and not used doesn't really matter (a tree falls in the
> woods type of thing).
>
> The head of line blocking can cause a chunk of packets to be retransmitted,
> even
> though the receiving machine got them the first time. So looking at the
> received
> bytes gives you a false picture of what is going on.
>
> David Lang
>
> On Wed, 22 Apr 2015, Simon Barber wrote:
>
> > The bumps are due to packet loss causing head of line blocking. Until the
> > lost packet is retransmitted the receiver can't release any subsequent
> > received packets to the application due to the requirement for in order
> > delivery. If you counted received bytes with a packet counter rather than
> > looking at application level you would be able to illustrate that data was
> > being received smoothly (even though out of order).
> >
> > Simon
> >
> > Sent with AquaMail for Android
> > http://www.aqua-mail.com
> >
> >
> > On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
> >
> >> On Tue, 21 Apr 2015, jb wrote:
> >>
> >> >> the receiver advertizes a large receive window, so the sender doesn't
> >> > pause > until there is that much data outstanding, or they get a timeout
> >> of
> >> > a packet as > a signal to slow down.
> >> >
> >> >> and because you have a gig-E link locally, your machine generates
> >> traffic
> >> > \
> >> >> very rapidly, until all that data is 'in flight'. but it's really
> >> sitting
> >> > in the buffer of
> >> >> router trying to get through.
> >> >
> >> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
> >> > upload graphs by keeping the advertised receive window on the server
> >> capped
> >> > low, however then, paradoxically, there is no more sign of buffer bloat
> >> in
> >> > the result, at least for the upload phase.
> >> >
> >> > (The graph under the upload/download graphs for my results shows almost
> >> no
> >> > latency increase during the upload phase, now).
> >> >
> >> > Or, I can crank it back open again, serving people with fiber connections
> >> > without having to run heaps of streams in parallel -- and then have
> >> people
> >> > complain that the upload result is inefficient, or bumpy, vs what they
> >> > expect.
> >>
> >> well, many people expect it to be bumpy (I've heard ISPs explain to
> >> customers
> >> that when a link is full it is bumpy, that's just the way things work)
> >>
> >> > And I can't offer an option, because the server receive window (I think)
> >> > cannot be set on a case by case basis. You set it for all TCP and forget
> >> it.
> >>
> >> I think you are right
> >>
> >> > I suspect you guys are going to say the server should be left with a
> >> large
> >> > max receive window.. and let people complain to find out what their issue
> >> > is.
> >>
> >> what is your customer base? how important is it to provide faster service
> >> to teh
> >> fiber users? Are they transferring ISO images so the difference is
> >> significant
> >> to them? or are they downloading web pages where it's the difference
> >> between a
> >> half second and a quarter second? remember that you are seeing this on the
> >> upload side.
> >>
> >> in the long run, fixing the problem at the client side is the best thing to
> >> do,
> >> but in the meantime, you sometimes have to work around broken customer
> >> stuff.
> >>
> >> > BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
> >> > believe it is a linux based (judging from the system log) device.
> >>
> >> if it's linux based, it would be interesting to learn what sort of settings
> >> it
> >> has. It may be one of the rarer devices that has something in place already
> >> to
> >> do active queue management.
> >>
> >> David Lang
> >>
> >> > cheers,
> >> > -Justin
> >> >
> >> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
> >> >
> >> >> On Tue, 21 Apr 2015, jb wrote:
> >> >>
> >> >> I've discovered something perhaps you guys can explain it better or
> >> shed
> >> >>> some light.
> >> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
> >> >>> tuning.
> >> >>>
> >> >>> Attached is two pictures of my upload to New York speed test server
> >> with 1
> >> >>> stream.
> >> >>> It doesn't make any difference if it is 1 stream or 8 streams, the
> >> picture
> >> >>> and behaviour remains the same.
> >> >>> I am 200ms from new york so it qualifies as a fairly long (but not very
> >> >>> fat) pipe.
> >> >>>
> >> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535'
> >> (on
> >> >>> the server)
> >> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864'
> >> (on
> >> >>> the server)
> >> >>>
> >> >>> It actually doesn't matter what that last huge number is, once it goes
> >> >>> much
> >> >>> about 65k, e.g. 128k or 256k or beyond things get bumpy and ugly on the
> >> >>> upload speed.
> >> >>>
> >> >>> Now as I understand this setting, it is the tcp receive window that
> >> Linux
> >> >>> advertises, and the last number sets the maximum size it can get to
> >> (for
> >> >>> one TCP stream).
> >> >>>
> >> >>> For users with very fast upload speeds, they do not see an ugly bumpy
> >> >>> upload graph, it is smooth and sustained.
> >> >>> But for the majority of users (like me) with uploads less than 5 to
> >> >>> 10mbit,
> >> >>> we frequently see the ugly graph.
> >> >>>
> >> >>> The second tcp_rmem setting is how I have been running the speed test
> >> >>> servers.
> >> >>>
> >> >>> Up to now I thought this was just the distance of the speedtest from
> >> the
> >> >>> interface: perhaps the browser was buffering a lot, and didn't feed
> >> back
> >> >>> progress but now I realise the bumpy one is actually being influenced
> >> by
> >> >>> the server receive window.
> >> >>>
> >> >>> I guess my question is this: Why does ALLOWING a large receive window
> >> >>> appear to encourage problems with upload smoothness??
> >> >>>
> >> >>> This implies that setting the receive window should be done on a
> >> >>> connection
> >> >>> by connection basis: small for slow connections, large, for high speed,
> >> >>> long distance connections.
> >> >>>
> >> >>
> >> >> This is classic bufferbloat
> >> >>
> >> >> the receiver advertizes a large receive window, so the sender doesn't
> >> >> pause until there is that much data outstanding, or they get a timeout
> >> of a
> >> >> packet as a signal to slow down.
> >> >>
> >> >> and because you have a gig-E link locally, your machine generates
> >> traffic
> >> >> very rapidly, until all that data is 'in flight'. but it's really
> >> sitting
> >> >> in the buffer of a router trying to get through.
> >> >>
> >> >> then when a packet times out, the sender slows down a smidge and
> >> >> retransmits it. But the old packet is still sitting in a queue, eating
> >> >> bandwidth. the packets behind it are also going to timeout and be
> >> >> retransmitted before your first retransmitted packet gets through, so
> >> you
> >> >> have a large slug of data that's being retransmitted, and the first of
> >> the
> >> >> replacement data can't get through until the last of the old (timed out)
> >> >> data is transmitted.
> >> >>
> >> >> then when data starts flowing again, the sender again tries to fill up
> >> the
> >> >> window with data in flight.
> >> >>
> >> >> In addition, if I cap it to 65k, for reasons of smoothness,
> >> >>> that means the bandwidth delay product will keep maximum speed per
> >> upload
> >> >>> stream quite low. So a symmetric or gigabit connection is going to need
> >> a
> >> >>> ton of parallel streams to see full speed.
> >> >>>
> >> >>> Most puzzling is why would anything special be required on the Client
> >> -->
> >> >>> Server side of the equation
> >> >>> but nothing much appears wrong with the Server --> Client side, whether
> >> >>> speeds are very low (GPRS) or very high (gigabit).
> >> >>>
> >> >>
> >> >> but what window sizes are these clients advertising?
> >> >>
> >> >>
> >> >> Note that also I am not yet sure if smoothness == better throughput. I
> >> >>> have
> >> >>> noticed upload speeds for some people often being under their claimed
> >> sync
> >> >>> rate by 10 or 20% but I've no logs that show the bumpy graph is showing
> >> >>> inefficiency. Maybe.
> >> >>>
> >> >>
> >> >> If you were to do a packet capture on the server side, you would see
> >> that
> >> >> you have a bunch of packets that are arriving multiple times, but the
> >> first
> >> >> time "doesn't count" because the replacement is already on the way.
> >> >>
> >> >> so your overall throughput is lower for two reasons
> >> >>
> >> >> 1. it's bursty, and there are times when the connection actually is idle
> >> >> (after you have a lot of timed out packets, the sender needs to ramp up
> >> >> it's speed again)
> >> >>
> >> >> 2. you are sending some packets multiple times, consuming more total
> >> >> bandwidth for the same 'goodput' (effective throughput)
> >> >>
> >> >> David Lang
> >> >>
> >> >>
> >> >> help!
> >> >>>
> >> >>>
> >> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
> >> >>> wrote:
> >> >>>
> >> >>> One thing users understand is slow web access. Perhaps translating
> >> the
> >> >>>> latency measurement into 'a typical web page will take X seconds
> >> longer
> >> >>>> to
> >> >>>> load', or even stating the impact as 'this latency causes a typical
> >> web
> >> >>>> page to load slower, as if your connection was only YY% of the
> >> measured
> >> >>>> speed.'
> >> >>>>
> >> >>>> Simon
> >> >>>>
> >> >>>> Sent with AquaMail for Android
> >> >>>> http://www.aqua-mail.com
> >> >>>>
> >> >>>>
> >> >>>>
> >> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
> >> >>>> wrote:
> >> >>>>
> >> >>>>>>>>> Frequency readouts are probably more accessible to the latter.
> >> >>>>>>>>
> >> >>>>>>>> The frequency domain more accessible to laypersons? I have my
> >> >>>>>>>> doubts ;)
> >> >>>>>>>
> >> >>>>>>> Gamers, at least, are familiar with “frames per second” and how that
> >> >>>>>>> corresponds to their monitor’s refresh rate.
> >> >>>>>>
> >> >>>>>> I am sure they can easily transform back into time domain to get the
> >> >>>>>> frame period ;). I am partly kidding, I think your idea is great in
> >> >>>>>> that it is a truly positive value which could lend itself to being
> >> >>>>>> used in ISP/router manufacturer advertising, and hence might work in
> >> >>>>>> the real world; on the other hand I like to keep data as “raw” as
> >> >>>>>> possible (not that ^(-1) is a transformation worthy of being called
> >> >>>>>> data massage).
> >> >>>>>>
> >> >>>>>>> The desirable range of latencies, when converted to Hz, happens to be
> >> >>>>>>> roughly the same as the range of desirable frame rates.
> >> >>>>>>
> >> >>>>>> Just to play devil's advocate, the interesting part is time or saving
> >> >>>>>> time, so seconds or milliseconds are also intuitively understandable
> >> >>>>>> and can be easily added ;)
> >> >>>>>
> >> >>>>> Such readouts are certainly interesting to people like us. I have no
> >> >>>>> objection to them being reported alongside a frequency readout. But
> >> I
> >> >>>>> think most people are not interested in “time savings” measured in
> >> >>>>> milliseconds; they’re much more aware of the minute- and hour-level
> >> time
> >> >>>>> savings associated with greater bandwidth.
> >> >>>>>
> >> >>>>> - Jonathan Morton
> >> >>>>>
> >> >>>>> _______________________________________________
> >> >>>>> Bloat mailing list
> >> >>>>> Bloat@lists.bufferbloat.net
> >> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >> >>>>>
> >> >>>>>
> >> >>>>
> >> >>>> _______________________________________________
> >> >>>> Bloat mailing list
> >> >>>> Bloat@lists.bufferbloat.net
> >> >>>> https://lists.bufferbloat.net/listinfo/bloat
> >> >>>>
> >> >>>>
> >> >> _______________________________________________
> >> >> Bloat mailing list
> >> >> Bloat@lists.bufferbloat.net
> >> >> https://lists.bufferbloat.net/listinfo/bloat
> >> >>
> >> >>
> >> >
> >>
> >>
> >> ----------
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >
> >
> >
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-22 17:45 ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-23 5:27 ` MUSCARIELLO Luca IMT/OLN
2015-04-23 6:48 ` Eric Dumazet
0 siblings, 1 reply; 183+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-23 5:27 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
One reference with the pdf publicly available. On the website there are
various papers on this topic. Others might be more relevant but I did not
check all of them.
Understanding the Performance of TCP Pacing,
Amit Aggarwal, Stefan Savage, and Tom Anderson,
IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
On 04/22/2015 07:45 PM, MUSCARIELLO Luca IMT/OLN wrote:
> I remember a paper by Stefan Savage of about 15 years ago where he
> substantiates this in clearer terms.
> If I find the paper I'll send the reference to the list.
>
>
> On 04/22/2015 07:28 PM, MUSCARIELLO Luca IMT/OLN wrote:
>> Exactly. Two same CC modules competing on the same link, one w pacing
>> the other one w/o pacing.
>> The latter will have negative impact on the former in FIFO. Not in FQ
>> (fq_codel to clarify).
>> And that's my incentive argument which comes from the flow isolation
>> feature of FQ (_codel).
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 5:27 ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-23 6:48 ` Eric Dumazet
[not found] ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
` (2 more replies)
0 siblings, 3 replies; 183+ messages in thread
From: Eric Dumazet @ 2015-04-23 6:48 UTC (permalink / raw)
To: MUSCARIELLO Luca IMT/OLN; +Cc: bloat
Wait, this is a 15 years old experiment using Reno and a single test
bed, using ns simulator.
Naive TCP pacing implementations were tried, and probably failed.
Pacing individual packets is quite bad; this is the first lesson one
learns when implementing TCP pacing, especially if you try to drive a
40Gbps NIC.
https://lwn.net/Articles/564978/
Also note we use usec based rtt samples, and nanosec high resolution
timers in fq. I suspect the ns simulator experiment had sync issues
because of using low resolution timers or simulation artifact, without
any jitter source.
Billions of flows are now 'paced', but keep in mind most packets are not
paced. We do not pace in slow start, and we do not pace when tcp is ACK
clocked.
Only when someone sets SO_MAX_PACING_RATE below the TCP rate can we
eventually have all packets being paced, using TSO 'clusters' for TCP.
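(For readers following along: a minimal sketch, in Python, of what using that knob looks like from an application, assuming a Linux host with the fq qdisc on the egress interface. Python's socket module does not always expose the constant, so the value 47 from the Linux UAPI headers is assumed here, and the host name is made up.)

    import socket

    # SO_MAX_PACING_RATE: Linux socket option; 47 is the value from the
    # kernel UAPI headers (assumed; not all Python builds expose it).
    SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

    def cap_pacing_rate(sock, bytes_per_second):
        # Ask sch_fq to pace this socket's packets at no more than this rate.
        sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, int(bytes_per_second))

    # Example: pace a test upload at roughly 1 MB/s (hypothetical server).
    s = socket.create_connection(("speedtest.example.net", 8080))
    cap_pacing_rate(s, 1000000)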
On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> one reference with pdf publicly available. On the website there are
> various papers
> on this topic. Others might me more relevant but I did not check all of
> them.
> Understanding the Performance of TCP Pacing,
> Amit Aggarwal, Stefan Savage, and Tom Anderson,
> IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
>
> http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
[not found] ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
@ 2015-04-23 10:08 ` jb
2015-04-24 8:18 ` Sebastian Moeller
0 siblings, 1 reply; 183+ messages in thread
From: jb @ 2015-04-23 10:08 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 2596 bytes --]
This is how I've changed the graph of latency under load per input from you
guys.
Taken away the log axis.
Put in two bands: yellow starts at double the idle latency and goes to 4x
the idle latency; red starts there and goes to the top. No red shows if no
bars reach into it, and no yellow band shows if no bars get into that zone.
Is it more descriptive?
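(A minimal sketch of that banding rule, for concreteness; the function and names here are just an illustration, not the site's actual code.)

    def latency_band(sample_ms, idle_ms):
        # Yellow covers 2x to 4x the idle latency, red anything above 4x;
        # samples below 2x idle stay unshaded.
        if sample_ms >= 4 * idle_ms:
            return "red"
        if sample_ms >= 2 * idle_ms:
            return "yellow"
        return "none"

    print(latency_band(180, 50))  # with 50 ms idle latency -> 'yellow'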
(sorry to the list moderator, gmail keeps sending under the wrong email and
I get a moderator message)
On Thu, Apr 23, 2015 at 8:05 PM, jb <justinbeech@gmail.com> wrote:
> This is how I've changed the graph of latency under load per input from
> you guys.
>
> Taken away log axis.
>
> Put in two bands. Yellow starts at double the idle latency, and goes to 4x
> the idle latency
> red starts there, and goes to the top. No red shows if no bars reach into
> it.
> And no yellow band shows if no bars get into that zone.
>
> Is it more descriptive?
>
>
> On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet@gmail.com>
> wrote:
>
>> Wait, this is a 15 years old experiment using Reno and a single test
>> bed, using ns simulator.
>>
>> Naive TCP pacing implementations were tried, and probably failed.
>>
>> Pacing individual packet is quite bad, this is the first lesson one
>> learns when implementing TCP pacing, especially if you try to drive a
>> 40Gbps NIC.
>>
>> https://lwn.net/Articles/564978/
>>
>> Also note we use usec based rtt samples, and nanosec high resolution
>> timers in fq. I suspect the ns simulator experiment had sync issues
>> because of using low resolution timers or simulation artifact, without
>> any jitter source.
>>
>> Billions of flows are now 'paced', but keep in mind most packets are not
>> paced. We do not pace in slow start, and we do not pace when tcp is ACK
>> clocked.
>>
>> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
>> eventually have all packets being paced, using TSO 'clusters' for TCP.
>>
>>
>>
>> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
>> > one reference with pdf publicly available. On the website there are
>> > various papers
>> > on this topic. Others might me more relevant but I did not check all of
>> > them.
>>
>> > Understanding the Performance of TCP Pacing,
>> > Amit Aggarwal, Stefan Savage, and Tom Anderson,
>> > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
>> >
>> > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
[-- Attachment #2: Type: text/html, Size: 3958 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 6:48 ` Eric Dumazet
[not found] ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
@ 2015-04-23 10:17 ` renaud sallantin
2015-04-23 14:10 ` Eric Dumazet
2015-04-23 13:17 ` MUSCARIELLO Luca IMT/OLN
2 siblings, 1 reply; 183+ messages in thread
From: renaud sallantin @ 2015-04-23 10:17 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2472 bytes --]
Hi,
2015-04-23 8:48 GMT+02:00 Eric Dumazet <eric.dumazet@gmail.com>:
> Wait, this is a 15 years old experiment using Reno and a single test
> bed, using ns simulator.
>
> Naive TCP pacing implementations were tried, and probably failed.
>
> Pacing individual packet is quite bad, this is the first lesson one
> learns when implementing TCP pacing, especially if you try to drive a
> 40Gbps NIC.
>
> https://lwn.net/Articles/564978/
>
> Also note we use usec based rtt samples, and nanosec high resolution
> timers in fq. I suspect the ns simulator experiment had sync issues
> because of using low resolution timers or simulation artifact, without
> any jitter source.
>
> Billions of flows are now 'paced', but keep in mind most packets are not
> paced. We do not pace in slow start, and we do not pace when tcp is ACK
> clocked.
>
We did extensive work on pacing in slow start, notably during a large IW
transmission.
Benefits are really outstanding! Our last implementation is just a slight
modification of FQ/pacing:
- Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.; Beylot, A.-L.,
  "Initial spreading: A fast Start-Up TCP mechanism," Local Computer Networks
  (LCN), 2013 IEEE 38th Conference on, pp. 492-499, 21-24 Oct. 2013
- Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.; Beylot, A.-L.,
  "A TCP model for short-lived flows to validate initial spreading," Local
  Computer Networks (LCN), 2014 IEEE 39th Conference on, pp. 177-184, 8-11 Sept. 2014
- draft-sallantin-tcpm-initial-spreading, safe increase of the TCP's IW
Did you consider using it or something similar?
>
> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> eventually have all packets being paced, using TSO 'clusters' for TCP.
>
>
>
> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > one reference with pdf publicly available. On the website there are
> > various papers
> > on this topic. Others might me more relevant but I did not check all of
> > them.
>
> > Understanding the Performance of TCP Pacing,
> > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
> >
> > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 3762 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 6:48 ` Eric Dumazet
[not found] ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
2015-04-23 10:17 ` renaud sallantin
@ 2015-04-23 13:17 ` MUSCARIELLO Luca IMT/OLN
2 siblings, 0 replies; 183+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-23 13:17 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
On 04/23/2015 08:48 AM, Eric Dumazet wrote:
> Wait, this is a 15 years old experiment using Reno and a single test
> bed, using ns simulator.
From that paper to nowadays, several other studies have been made and have
confirmed those first results. I did not check all the literature though.
>
> Naive TCP pacing implementations were tried, and probably failed.
Except for the scaling levels that sch_fq is pushing to nowadays,
the concept was well analyzed in the past.
>
> Pacing individual packet is quite bad, this is the first lesson one
> learns when implementing TCP pacing, especially if you try to drive a
> 40Gbps NIC.
This is, I think, the main difference between 2000 and 2015, and the main
source of misunderstanding.
>
> https://lwn.net/Articles/564978/
is there any other documentation other than this article?
>
> Also note we use usec based rtt samples, and nanosec high resolution
> timers in fq. I suspect the ns simulator experiment had sync issues
> because of using low resolution timers or simulation artifact, without
> any jitter source.
I suspect that the main difference was that all packets were paced.
The rates of the first experiments were very low compared to now,
so timer resolution was not supposed to be a problem.
> Billions of flows are now 'paced', but keep in mind most packets are not
> paced. We do not pace in slow start, and we do not pace when tcp is ACK
> clocked.
All right, I think this clarifies a lot for me. I did not find this
information anywhere though. I guess I need to go through the internals
to find all the active features and possible working configurations.
In short, by excluding the slow-start and ACK-clocked phases, the mechanism
avoids the cwnd-sized line-rate burst of packets, which has a high
probability of experiencing a big loss event somewhere along the path, and
maybe in the local NIC itself, not necessarily at the user's access
bottleneck. This is something that happens these days because of
hardware-assisted framing and very high speed NICs like the ones you
mention. But 15 years ago none of those things existed and TCP did not push
such huge bursts. In some cases I suspect no buffer today could accommodate
such bursts and the loss would be almost certain.
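(To put rough numbers on such a burst, a back-of-the-envelope sketch; the cwnd, MSS and link rates below are illustrative assumptions, not measurements from this thread.)

    def burst_and_backlog(cwnd_pkts, mss_bytes, nic_bps, bottleneck_bps):
        # A cwnd-sized burst sent back-to-back at NIC line rate, and the
        # backlog it leaves at a slower bottleneck while the burst arrives.
        burst_bytes = cwnd_pkts * mss_bytes
        wire_time_s = burst_bytes * 8 / nic_bps
        backlog_bytes = burst_bytes * (1 - bottleneck_bps / nic_bps)
        return burst_bytes, wire_time_s, backlog_bytes

    # e.g. 100 segments of 1500 bytes from a 10 Gb/s NIC towards a 100 Mb/s link
    print(burst_and_backlog(100, 1500, 10e9, 100e6))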
Then I wonder why hardware-assisted framing implementations did not take
that into account. I personally don't have the equipment to test such
cases but I do see the phenomenon.
Still, I believe that Savage's approach would have the merit of producing
very small queues in the network (and all the benefits that come from
that), but it would be fragile, as reported, and would require fq(_codel)
in the network, at least in the access, to create incentives to do that
pacing.
>
> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> eventually have all packets being paced, using TSO 'clusters' for TCP.
>
>
>
> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
>> one reference with pdf publicly available. On the website there are
>> various papers
>> on this topic. Others might me more relevant but I did not check all of
>> them.
>> Understanding the Performance of TCP Pacing,
>> Amit Aggarwal, Stefan Savage, and Tom Anderson,
>> IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
>>
>> http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
>
> .
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 10:17 ` renaud sallantin
@ 2015-04-23 14:10 ` Eric Dumazet
2015-04-23 14:38 ` renaud sallantin
0 siblings, 1 reply; 183+ messages in thread
From: Eric Dumazet @ 2015-04-23 14:10 UTC (permalink / raw)
To: renaud sallantin; +Cc: bloat
On Thu, 2015-04-23 at 12:17 +0200, renaud sallantin wrote:
> Hi,
> ...
>
> We did an extensive work on the Pacing in slow start and notably
> during a large IW transmission.
>
> Benefits are really outstanding! Our last implementation is just a
> slight modification of FQ/pacing
> * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
> Beylot, A.-L., "Initial spreading: A fast Start-Up TCP
> mechanism," Local Computer Networks (LCN), 2013 IEEE 38th
> Conference on , vol., no., pp.492,499, 21-24 Oct. 2013
> * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
> Beylot, A.-L., "A TCP model for short-lived flows to validate
> initial spreading," Local Computer Networks (LCN), 2014 IEEE
> 39th Conference on , vol., no., pp.177,184, 8-11 Sept. 2014
> draft-sallantin-tcpm-initial-spreading, safe increase of the TCP's IW
> Did you consider using it or something similar?
Absolutely. We play a lot with these parameters, but the real work is on
the CC front now that we have correct host queues and packet scheduler control.
Drops are no longer directly correlated with congestion on modern
networks; CUBIC has to be replaced.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 14:10 ` Eric Dumazet
@ 2015-04-23 14:38 ` renaud sallantin
2015-04-23 15:52 ` Jonathan Morton
0 siblings, 1 reply; 183+ messages in thread
From: renaud sallantin @ 2015-04-23 14:38 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 1820 bytes --]
On 23 Apr 2015 at 16:10, "Eric Dumazet" <eric.dumazet@gmail.com> wrote:
>
> On Thu, 2015-04-23 at 12:17 +0200, renaud sallantin wrote:
> > Hi,
>
> > ...
> >
> > We did an extensive work on the Pacing in slow start and notably
> > during a large IW transmission.
> >
> > Benefits are really outstanding! Our last implementation is just a
> > slight modification of FQ/pacing
> > * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
> > Beylot, A.-L., "Initial spreading: A fast Start-Up TCP
> > mechanism," Local Computer Networks (LCN), 2013 IEEE 38th
> > Conference on , vol., no., pp.492,499, 21-24 Oct. 2013
> > * Sallantin, R.; Baudoin, C.; Chaput, E.; Arnal, F.; Dubois, E.;
> > Beylot, A.-L., "A TCP model for short-lived flows to validate
> > initial spreading," Local Computer Networks (LCN), 2014 IEEE
> > 39th Conference on , vol., no., pp.177,184, 8-11 Sept. 2014
> > draft-sallantin-tcpm-initial-spreading, safe increase of the
TCP's IW
> > Did you consider using it or something similar?
>
>
> Absolutely. We play a lot with these parameters, but the real work is on
> CC front now we have correct host queues and packet scheduler control.
>
Do you really consider that slow start is efficient?
I may be missing something, but RFC6928 was pushed by Google because there
was a real need to update it. Results are very good, except when one
bottleneck link is shared by several connections. We demonstrated that an
appropriate spreading of the IW solves this and improves RFC6928
performance.
> Drops are no longer directly correlated to congestion on modern
> networks, cubic has to be replaced.
>
>
>
By curiosity, what is now responsible for the drops if not the congestion?
[-- Attachment #2: Type: text/html, Size: 2288 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 14:38 ` renaud sallantin
@ 2015-04-23 15:52 ` Jonathan Morton
2015-04-23 16:00 ` Simon Barber
0 siblings, 1 reply; 183+ messages in thread
From: Jonathan Morton @ 2015-04-23 15:52 UTC (permalink / raw)
To: renaud sallantin; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 603 bytes --]
> By curiosity, what is now responsible for the drops if not the congestion?
I think the point was not that observed drops are not caused by congestion,
but that congestion doesn't reliably cause drops. Correlation is not
causation.
There are also cases when drops are in fact caused by something other than
congestion, including faulty ADSL phone lines. Some local loop providers
have been known to explicitly consider several percent of packet loss due
to line conditions as "not a fault", to the consternation of the actual ISP
who was trying to provide a decent service over it.
- Jonathan Morton
[-- Attachment #2: Type: text/html, Size: 696 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 15:52 ` Jonathan Morton
@ 2015-04-23 16:00 ` Simon Barber
0 siblings, 0 replies; 183+ messages in thread
From: Simon Barber @ 2015-04-23 16:00 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 1094 bytes --]
Same thing applies for WiFi - oftentimes WiFi with poor signal levels
will cause drops, without congestion. This is something I'm working to
fix from the WiFi / L2 side. What are the solutions in L3? Some kind of
hybrid delay & drop based CC?
Simon
On 4/23/2015 8:52 AM, Jonathan Morton wrote:
>
> > By curiosity, what is now responsible for the drops if not the
> congestion?
>
> I think the point was not that observed drops are not caused by
> congestion, but that congestion doesn't reliably cause drops.
> Correlation is not causation.
>
> There are also cases when drops are in fact caused by something other
> than congestion, including faulty ADSL phone lines. Some local loop
> providers have been known to explicitly consider several percent of
> packet loss due to line conditions as "not a fault", to the
> consternation of the actual ISP who was trying to provide a decent
> device over it.
>
> - Jonathan Morton
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
[-- Attachment #2: Type: text/html, Size: 1896 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 10:08 ` jb
@ 2015-04-24 8:18 ` Sebastian Moeller
2015-04-24 8:29 ` Toke Høiland-Jørgensen
2015-04-25 2:24 ` Simon Barber
0 siblings, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-24 8:18 UTC (permalink / raw)
To: jb; +Cc: bloat
Hi jb,
this looks great!
On Apr 23, 2015, at 12:08 , jb <justin@dslr.net> wrote:
> This is how I've changed the graph of latency under load per input from you guys.
>
> Taken away log axis.
>
> Put in two bands. Yellow starts at double the idle latency, and goes to 4x the idle latency
> red starts there, and goes to the top. No red shows if no bars reach into it.
> And no yellow band shows if no bars get into that zone.
>
> Is it more descriptive?
Mmmh, so the delay we see consists of the delay caused by the distance to the server plus the delay of the access technology, meaning the unloaded latency can range from a few milliseconds to several hundreds of milliseconds (for the poor sods behind a satellite link…). Any further latency developing under load should be independent of distance and access technology, as those are already factored into the base latency. In both of these extreme cases, multiples of the base latency do not seem to be relevant measures of bloat, so I would like to argue that the yellow and red zones should be based on fixed increments and not on a ratio of the base latency. This is relevant because people on a slow/high-access-latency link have a much smaller tolerance for additional latency than people on a fast link if certain latency guarantees need to be met, and thresholds as a function of base latency do not reflect this.
Now ideally the colors should not be based on the base latency at all but should sit at fixed total values: say 200 to 300 ms in yellow for voip (according to ITU-T G.114, a one-way delay <= 150 ms is recommended for voip), 400 to 600 ms in orange (400 ms is the upper bound for good voip and 600 ms for decent voip; according to ITU-T G.114, users are very satisfied up to 200 ms one-way delay and satisfied up to roughly 300 ms), and anything above 600 ms in deep red?
I know this is not perfect and the numbers will probably require severe “bike-shedding” (and I am not sure that ITU-T G.114 really is a good source for the thresholds), but to get a discussion started here are the numbers again:
0 to 100 ms no color
101 to 200 ms green
201 to 400 ms yellow
401 to 600 ms orange
601 to 1000 ms red
1001 to infinity purple (or better marina red?)
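(Purely to make the proposal concrete, a tiny sketch that transcribes the bands above; the function name and colour strings are just for illustration.)

    def colour_for_total_latency(latency_ms):
        # Map total (base + induced) latency onto the proposed fixed bands.
        for upper, colour in [(100, "none"), (200, "green"), (400, "yellow"),
                              (600, "orange"), (1000, "red")]:
            if latency_ms <= upper:
                return colour
        return "purple"  # above 1000 ms

    print(colour_for_total_latency(350))  # -> 'yellow'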
Best Regards
Sebastian
>
> (sorry to the list moderator, gmail keeps sending under the wrong email and I get a moderator message)
>
> On Thu, Apr 23, 2015 at 8:05 PM, jb <justinbeech@gmail.com> wrote:
> This is how I've changed the graph of latency under load per input from you guys.
>
> Taken away log axis.
>
> Put in two bands. Yellow starts at double the idle latency, and goes to 4x the idle latency
> red starts there, and goes to the top. No red shows if no bars reach into it.
> And no yellow band shows if no bars get into that zone.
>
> Is it more descriptive?
>
>
> On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> Wait, this is a 15 years old experiment using Reno and a single test
> bed, using ns simulator.
>
> Naive TCP pacing implementations were tried, and probably failed.
>
> Pacing individual packet is quite bad, this is the first lesson one
> learns when implementing TCP pacing, especially if you try to drive a
> 40Gbps NIC.
>
> https://lwn.net/Articles/564978/
>
> Also note we use usec based rtt samples, and nanosec high resolution
> timers in fq. I suspect the ns simulator experiment had sync issues
> because of using low resolution timers or simulation artifact, without
> any jitter source.
>
> Billions of flows are now 'paced', but keep in mind most packets are not
> paced. We do not pace in slow start, and we do not pace when tcp is ACK
> clocked.
>
> Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> eventually have all packets being paced, using TSO 'clusters' for TCP.
>
>
>
> On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > one reference with pdf publicly available. On the website there are
> > various papers
> > on this topic. Others might me more relevant but I did not check all of
> > them.
>
> > Understanding the Performance of TCP Pacing,
> > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
> >
> > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 8:18 ` Sebastian Moeller
@ 2015-04-24 8:29 ` Toke Høiland-Jørgensen
2015-04-24 8:55 ` Sebastian Moeller
2015-04-24 15:20 ` Bill Ver Steeg (versteb)
2015-04-25 2:24 ` Simon Barber
1 sibling, 2 replies; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-24 8:29 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
Sebastian Moeller <moeller0@gmx.de> writes:
> I know this is not perfect and the numbers will probably require
> severe "bike-shedding”
Since you're literally asking for it... ;)
In this case we're talking about *added* latency. So the ambition should
be zero, or so close to it as to be indiscernible. Furthermore, we know
that proper application of a good queue management algorithm can keep it
pretty close to this. Certainly under 20-30 ms of added latency. So from
this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
The other increments I have less opinions about, but 100 ms does seem to
be a nice round number, so do yellow from 30-100 ms, then start with the
reds somewhere above that, and range up into the deep red / purple /
black with skulls and fiery death as we go nearer and above one second?
I very much think that raising peoples expectations and being quite
ambitious about what to expect is an important part of this. Of course
the base latency is going to vary, but the added latency shouldn't. And
since we have the technology to make sure it doesn't, calling out bad
results when we see them is reasonable!
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 8:29 ` Toke Høiland-Jørgensen
@ 2015-04-24 8:55 ` Sebastian Moeller
2015-04-24 9:02 ` Toke Høiland-Jørgensen
` (2 more replies)
2015-04-24 15:20 ` Bill Ver Steeg (versteb)
1 sibling, 3 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-24 8:55 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
Hi Toke,
On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Sebastian Moeller <moeller0@gmx.de> writes:
>
>> I know this is not perfect and the numbers will probably require
>> severe "bike-shedding”
>
> Since you're literally asking for it... ;)
>
>
> In this case we're talking about *added* latency. So the ambition should
> be zero, or so close to it as to be indiscernible. Furthermore, we know
> that proper application of a good queue management algorithm can keep it
> pretty close to this. Certainly under 20-30 ms of added latency. So from
> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
	Oh, I can get behind that easily, I just thought basing the limits on externally relevant total latency thresholds would directly tell the user which applications might run well on his link. Sure, this means that people on a satellite link will most likely miss the acceptable voip threshold on their base latency alone, but guess what, telephony via satellite leaves something to be desired. That said, if the alternative is no telephony I would take 1 second of one-way delay any day ;).
	What I liked about fixed thresholds is that the test would give a good indication of what kinds of uses are going to work well on the link under load, given that during load both base and induced latency come into play. I agree that 300ms as a first threshold is rather unambitious though (and I am certain that remote X11 will require a massively lower RTT, unless one likes to think of remote desktop as an oil tanker simulator ;) )
>
> The other increments I have less opinions about, but 100 ms does seem to
> be a nice round number, so do yellow from 30-100 ms, then start with the
> reds somewhere above that, and range up into the deep red / purple /
> black with skulls and fiery death as we go nearer and above one second?
>
>
> I very much think that raising peoples expectations and being quite
> ambitious about what to expect is an important part of this. Of course
> the base latency is going to vary, but the added latency shouldn't. And
> sine we have the technology to make sure it doesn't, calling out bad
> results when we see them is reasonable!
Okay so this would turn into:
base latency to base latency + 30 ms: green
base latency + 31 ms to base latency + 100 ms: yellow
base latency + 101 ms to base latency + 200 ms: orange?
base latency + 201 ms to base latency + 500 ms: red
base latency + 501 ms to base latency + 1000 ms: fire
base latency + 1001 ms to infinity: fire & brimstone
correct?
>
> -Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 8:55 ` Sebastian Moeller
@ 2015-04-24 9:02 ` Toke Høiland-Jørgensen
2015-04-24 13:32 ` jb
2015-04-25 3:15 ` Simon Barber
2015-04-25 3:23 ` Simon Barber
2 siblings, 1 reply; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-24 9:02 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
Sebastian Moeller <moeller0@gmx.de> writes:
> Oh, I can get behind that easily, I just thought basing the
> limits on externally relevant total latency thresholds would directly
> tell the user which applications might run well on his link. Sure this
> means that people on a satellite link most likely will miss out the
> acceptable voip threshold by their base-latency alone, but guess what
> telephony via satellite leaves something to be desired. That said if
> the alternative is no telephony I would take 1 second one-way delay
> any day ;).
Well I agree that this is relevant information in relation to the total
link latency. But keeping the issues separate has value, I think,
because you can potentially fix your bufferbloat, but increasing the
speed of light to get better base latency on your satellite link is
probably out of scope for now (or at least for a couple of hundred more
years: http://theinfosphere.org/Speed_of_light).
> What I liked about fixed thresholds is that the test would give
> a good indication what kind of uses are going to work well on the link
> under load, given that during load both base and induced latency come
> into play. I agree that 300ms as first threshold is rather unambiguous
> though (and I am certain that remote X11 will require a massively
> lower RTT unless one likes to think of remote desktop as an oil tanker
> simulator ;) )
Oh, I'm all for fixed thresholds! As I said, the goal should be (close
to) zero added latency...
> Okay so this would turn into:
>
> base latency to base latency + 30 ms: green
> base latency + 31 ms to base latency + 100 ms: yellow
> base latency + 101 ms to base latency + 200 ms: orange?
> base latency + 201 ms to base latency + 500 ms: red
> base latency + 501 ms to base latency + 1000 ms: fire
> base latency + 1001 ms to infinity: fire & brimstone
>
> correct?
Yup, something like that :)
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 9:02 ` Toke Høiland-Jørgensen
@ 2015-04-24 13:32 ` jb
2015-04-24 13:58 ` Toke Høiland-Jørgensen
2015-04-24 16:51 ` David Lang
0 siblings, 2 replies; 183+ messages in thread
From: jb @ 2015-04-24 13:32 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 3523 bytes --]
Don't you want to accuse the size of the buffer, rather than the latency?
For example, say someone has some hardware and their line is fairly slow.
it might be RED on the graph because the buffer is quite big relative to the
bandwidth delay product of the line. A test is telling them they have
bloated buffers.
Then they upgrade their product speed to a much faster product, and suddenly
that buffer is fairly small, the incremental latency is low, and no longer
shows
RED on a test.
What changed? The hardware didn't change, just the speed. So the test is
saying that for your particular speed, the buffers are too big; but for a
higher speed, they may be quite ok.
If you add 100ms to a 1 gigabit product the buffer has to be what, ~10MB?
But adding 100ms to my feeble line is quite easy: the Billion router can
have a buffer of just 100kB and that is already too much. Yet that same
Billion in front of a gigabit modem is only going to add at most 1ms to
latency and nobody would complain.
Ok, I think I talked myself around in a complete circle: a buffer is only
bad IF it increases latency under load, not because of its size. It might
explain why these fiber connection tests don't show much latency change,
because their buffers are really inconsequential at those higher speeds?
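(The arithmetic behind that circle, as a sketch; the 1 Mbit/s figure is just an illustrative slow uplink, not a measured one.)

    def buffer_delay_ms(buffer_bytes, link_bps):
        # Worst-case queueing delay a full FIFO of this size adds on a link.
        return buffer_bytes * 8 / link_bps * 1000

    print(buffer_delay_ms(100000, 1e6))  # ~800 ms: 100 kB in front of 1 Mbit/s
    print(buffer_delay_ms(100000, 1e9))  # ~0.8 ms: same buffer at gigabit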
On Fri, Apr 24, 2015 at 7:02 PM, Toke Høiland-Jørgensen <toke@toke.dk>
wrote:
> Sebastian Moeller <moeller0@gmx.de> writes:
>
> > Oh, I can get behind that easily, I just thought basing the
> > limits on externally relevant total latency thresholds would directly
> > tell the user which applications might run well on his link. Sure this
> > means that people on a satellite link most likely will miss out the
> > acceptable voip threshold by their base-latency alone, but guess what
> > telephony via satellite leaves something to be desired. That said if
> > the alternative is no telephony I would take 1 second one-way delay
> > any day ;).
>
> Well I agree that this is relevant information in relation to the total
> link latency. But keeping the issues separate has value, I think,
> because you can potentially fix your bufferbloat, but increasing the
> speed of light to get better base latency on your satellite link is
> probably out of scope for now (or at least for a couple of hundred more
> years: http://theinfosphere.org/Speed_of_light).
>
> > What I liked about fixed thresholds is that the test would give
> > a good indication what kind of uses are going to work well on the link
> > under load, given that during load both base and induced latency come
> > into play. I agree that 300ms as first threshold is rather unambiguous
> > though (and I am certain that remote X11 will require a massively
> > lower RTT unless one likes to think of remote desktop as an oil tanker
> > simulator ;) )
>
> Oh, I'm all for fixed thresholds! As I said, the goal should be (close
> to) zero added latency...
>
> > Okay so this would turn into:
> >
> > base latency to base latency + 30 ms: green
> > base latency + 31 ms to base latency + 100 ms: yellow
> > base latency + 101 ms to base latency + 200 ms: orange?
> > base latency + 201 ms to base latency + 500 ms: red
> > base latency + 501 ms to base latency + 1000 ms: fire
> > base latency + 1001 ms to infinity:
> fire & brimstone
> >
> > correct?
>
> Yup, something like that :)
>
> -Toke
>
[-- Attachment #2: Type: text/html, Size: 4573 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 13:32 ` jb
@ 2015-04-24 13:58 ` Toke Høiland-Jørgensen
2015-04-24 16:51 ` David Lang
1 sibling, 0 replies; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-24 13:58 UTC (permalink / raw)
To: jb; +Cc: bloat
jb <justin@dslr.net> writes:
> Ok I think I talked myself around in a complete circle: a buffer is
> only bad IF it increases latency under load. Not because of its size.
Exactly! :)
Some buffering is actually needed to absorb transient bursts. This is
also the reason why smart queue management is needed instead of just
adjusting the size of the buffer (setting aside that you don't always
know which speed to size it for).
> It might explain why these fiber connection tests don't show much
> latency change, because their buffers are really inconsequential at
> those higher speeds?
Well, bufferbloat certainly tends to be *worse* at lower speeds. But it
can occur at gigabit speeds as well. For instance, running two ports
into one on a gigabit switch can add quite a bit of latency.
For some devices, *driving* a fast link can be challenging, though. So
for fibre connections you may not actually bottleneck on the bloated
link, but on the CPU, some other link that's not as bloated as the
access link, etc...
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 8:29 ` Toke Høiland-Jørgensen
2015-04-24 8:55 ` Sebastian Moeller
@ 2015-04-24 15:20 ` Bill Ver Steeg (versteb)
1 sibling, 0 replies; 183+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-04-24 15:20 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, Sebastian Moeller; +Cc: bloat
For a very low speed link, I suggest that 100ms is not the right target. At 1 Mbps (which is a downstream number that I occasionally see in an ISP), 100ms is only nine 1400 byte packets.
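(A quick check of that figure; the 1400-byte packet size is the assumption used above.)

    def packets_in_target(target_ms, link_bps, pkt_bytes=1400):
        # How many packets of pkt_bytes fit into a queueing target at link_bps.
        return target_ms / 1000 * link_bps / 8 / pkt_bytes

    print(packets_in_target(100, 1e6))  # ~8.9 packets at 1 Mbps, i.e. "nine"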
Non-paced IW10 would suggest that you need to have at least a 10-deep target. Concurrent flows probably drive the target a bit higher. An FQ_AQM solution would have a different target than an AQM solution.
Does anybody have data that quantifies the best target delay for FQ_Codel and Codel/PIE?
Bvs
-----Original Message-----
From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Toke Høiland-Jørgensen
Sent: Friday, April 24, 2015 4:29 AM
To: Sebastian Moeller
Cc: bloat
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
Sebastian Moeller <moeller0@gmx.de> writes:
> I know this is not perfect and the numbers will probably require
> severe "bike-shedding”
Since you're literally asking for it... ;)
In this case we're talking about *added* latency. So the ambition should be zero, or so close to it as to be indiscernible. Furthermore, we know that proper application of a good queue management algorithm can keep it pretty close to this. Certainly under 20-30 ms of added latency. So from this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
The other increments I have less opinions about, but 100 ms does seem to be a nice round number, so do yellow from 30-100 ms, then start with the reds somewhere above that, and range up into the deep red / purple / black with skulls and fiery death as we go nearer and above one second?
I very much think that raising peoples expectations and being quite ambitious about what to expect is an important part of this. Of course the base latency is going to vary, but the added latency shouldn't. And sine we have the technology to make sure it doesn't, calling out bad results when we see them is reasonable!
-Toke
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 13:32 ` jb
2015-04-24 13:58 ` Toke Høiland-Jørgensen
@ 2015-04-24 16:51 ` David Lang
1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2015-04-24 16:51 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/Plain, Size: 3936 bytes --]
On Fri, 24 Apr 2015, jb wrote:
> Don't you want to accuse the size of the buffer, rather than the latency?
The size of the buffer really doesn't matter. The latency is what hurts.
In theory, you could have a massive buffer for some low-priority non-TCP bulk
protocol (non-TCP so that it can do its own retries of lost blocks rather than
the TCP method) with no problem and no impact on the user experience.
The problem is how the buffer is managed, not that it exists.
David Lang
> For example, say someone has some hardware and their line is fairly slow.
> it might be RED on the graph because the buffer is quite big relative to the
> bandwidth delay product of the line. A test is telling them they have
> bloated buffers.
>
> Then they upgrade their product speed to a much faster product, and suddenly
> that buffer is fairly small, the incremental latency is low, and no longer
> shows
> RED on a test.
>
> What changed? the hardware didn't change. Just the speed changed. So the
> test is saying that for your particular speed, the buffers are too big. But
> for a
> higher speed, they may be quite ok.
>
> If you add 100ms to a 1gigabit product the buffer has to be what, ~10mb?
> but adding 100ms to my feeble line is quite easy, the billion router can
> have
> a buffer of just 100kb and it is too high. But that same billion in front
> of a
> gigabit modem is only going to add at most 1ms to latency and nobody
> would complain.
>
> Ok I think I talked myself around in a complete circle: a buffer is only
> bad IF
> it increases latency under load. Not because of its size. It might explain
> why
> these fiber connection tests don't show much latency change, because
> their buffers are really inconsequential at those higher speeds?
>
>
> On Fri, Apr 24, 2015 at 7:02 PM, Toke Høiland-Jørgensen <toke@toke.dk>
> wrote:
>
>> Sebastian Moeller <moeller0@gmx.de> writes:
>>
>>> Oh, I can get behind that easily, I just thought basing the
>>> limits on externally relevant total latency thresholds would directly
>>> tell the user which applications might run well on his link. Sure this
>>> means that people on a satellite link most likely will miss out the
>>> acceptable voip threshold by their base-latency alone, but guess what
>>> telephony via satellite leaves something to be desired. That said if
>>> the alternative is no telephony I would take 1 second one-way delay
>>> any day ;).
>>
>> Well I agree that this is relevant information in relation to the total
>> link latency. But keeping the issues separate has value, I think,
>> because you can potentially fix your bufferbloat, but increasing the
>> speed of light to get better base latency on your satellite link is
>> probably out of scope for now (or at least for a couple of hundred more
>> years: http://theinfosphere.org/Speed_of_light).
>>
>>> What I liked about fixed thresholds is that the test would give
>>> a good indication what kind of uses are going to work well on the link
>>> under load, given that during load both base and induced latency come
>>> into play. I agree that 300ms as first threshold is rather unambiguous
>>> though (and I am certain that remote X11 will require a massively
>>> lower RTT unless one likes to think of remote desktop as an oil tanker
>>> simulator ;) )
>>
>> Oh, I'm all for fixed thresholds! As I said, the goal should be (close
>> to) zero added latency...
>>
>>> Okay so this would turn into:
>>>
>>> base latency to base latency + 30 ms: green
>>> base latency + 31 ms to base latency + 100 ms: yellow
>>> base latency + 101 ms to base latency + 200 ms: orange?
>>> base latency + 201 ms to base latency + 500 ms: red
>>> base latency + 501 ms to base latency + 1000 ms: fire
>>> base latency + 1001 ms to infinity:
>> fire & brimstone
>>>
>>> correct?
>>
>> Yup, something like that :)
>>
>> -Toke
>>
>
[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 1:37 ` Simon Barber
@ 2015-04-24 16:54 ` David Lang
2015-04-24 17:00 ` Rick Jones
0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2015-04-24 16:54 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 13587 bytes --]
Good question. I don't know.
However, it seems to me that if the receiver starts accepting and acking data
out of order, all sorts of other issues come up (what does this do to sequence
number randomization and the ability for an attacker to spew random data that
will show up somewhere in the window, for example?)
David Lang
On Wed, 22 Apr 2015, Simon Barber wrote:
> Does this happen even with Sack?
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
> On April 22, 2015 10:36:11 AM David Lang <david@lang.hm> wrote:
>
>> Data that's received and not used doesn't really matter (a tree falls in
>> the
>> woods type of thing).
>>
>> The head of line blocking can cause a chunk of packets to be retransmitted,
>> even
>> though the receiving machine got them the first time. So looking at the
>> received
>> bytes gives you a false picture of what is going on.
>>
>> David Lang
>>
>> On Wed, 22 Apr 2015, Simon Barber wrote:
>>
>> > The bumps are due to packet loss causing head of line blocking. Until the
>> > lost packet is retransmitted the receiver can't release any subsequent
>> > received packets to the application due to the requirement for in order
>> > delivery. If you counted received bytes with a packet counter rather than
>> > looking at application level you would be able to illustrate that data
>> was
>> > being received smoothly (even though out of order).
>> >
>> > Simon
>> >
>> > Sent with AquaMail for Android
>> > http://www.aqua-mail.com
>> >
>> >
>> > On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
>> >
>> >> On Tue, 21 Apr 2015, jb wrote:
>> >>
>> >> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> > pause > until there is that much data outstanding, or they get a
>> timeout
>> >> of
>> >> > a packet as > a signal to slow down.
>> >> >
>> >> >> and because you have a gig-E link locally, your machine generates
>> >> traffic
>> >> > \
>> >> >> very rapidly, until all that data is 'in flight'. but it's really
>> >> sitting
>> >> > in the buffer of
>> >> >> router trying to get through.
>> >> >
>> >> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
>> >> > upload graphs by keeping the advertised receive window on the server
>> >> capped
>> >> > low, however then, paradoxically, there is no more sign of buffer
>> bloat
>> >> in
>> >> > the result, at least for the upload phase.
>> >> >
>> >> > (The graph under the upload/download graphs for my results shows
>> almost
>> >> no
>> >> > latency increase during the upload phase, now).
>> >> >
>> >> > Or, I can crank it back open again, serving people with fiber
>> connections
>> >> > without having to run heaps of streams in parallel -- and then have
>> >> people
>> >> > complain that the upload result is inefficient, or bumpy, vs what they
>> >> > expect.
>> >>
>> >> well, many people expect it to be bumpy (I've heard ISPs explain to
>> >> customers
>> >> that when a link is full it is bumpy, that's just the way things work)
>> >>
>> >> > And I can't offer an option, because the server receive window (I
>> think)
>> >> > cannot be set on a case by case basis. You set it for all TCP and
>> forget
>> >> it.
>> >>
>> >> I think you are right
>> >>
>> >> > I suspect you guys are going to say the server should be left with a
>> >> large
>> >> > max receive window.. and let people complain to find out what their
>> issue
>> >> > is.
>> >>
>> >> what is your customer base? how important is it to provide faster
>> service
>> >> to teh
>> >> fiber users? Are they transferring ISO images so the difference is
>> >> significant
>> >> to them? or are they downloading web pages where it's the difference
>> >> between a
>> >> half second and a quarter second? remember that you are seeing this on
>> the
>> >> upload side.
>> >>
>> >> in the long run, fixing the problem at the client side is the best thing
>> >> to do, but in the meantime, you sometimes have to work around broken
>> >> customer stuff.
>> >>
>> >> > BTW my setup is wire to billion 7800N, which is a DSL modem and router. I
>> >> > believe it is a linux based (judging from the system log) device.
>> >>
>> >> if it's linux based, it would be interesting to learn what sort of settings
>> >> it has. It may be one of the rarer devices that has something in place
>> >> already to do active queue management.
>> >>
>> >> David Lang
>> >>
>> >> > cheers,
>> >> > -Justin
>> >> >
>> >> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
>> >> >
>> >> >> On Tue, 21 Apr 2015, jb wrote:
>> >> >>
>> >> >>> I've discovered something perhaps you guys can explain it better or
>> >> >>> shed some light.
>> >> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
>> >> >>> tuning.
>> >> >>>
>> >> >>> Attached are two pictures of my upload to the New York speed test
>> >> >>> server with 1 stream.
>> >> >>> It doesn't make any difference if it is 1 stream or 8 streams, the
>> >> >>> picture and behaviour remains the same.
>> >> >>> I am 200ms from new york so it qualifies as a fairly long (but not
>> >> >>> very fat) pipe.
>> >> >>>
>> >> >>> The nice smooth one is with linux tcp_rmem set to '4096 32768 65535'
>> >> >>> (on the server)
>> >> >>> The ugly bumpy one is with linux tcp_rmem set to '4096 65535 67108864'
>> >> >>> (on the server)
>> >> >>>
>> >> >>> It actually doesn't matter what that last huge number is, once it goes
>> >> >>> much above 65k, e.g. 128k or 256k or beyond things get bumpy and ugly
>> >> >>> on the upload speed.
>> >> >>>
>> >> >>> Now as I understand this setting, it is the tcp receive window that
>> >> >>> Linux advertises, and the last number sets the maximum size it can get
>> >> >>> to (for one TCP stream).
>> >> >>>
>> >> >>> For users with very fast upload speeds, they do not see an ugly bumpy
>> >> >>> upload graph, it is smooth and sustained.
>> >> >>> But for the majority of users (like me) with uploads less than 5 to
>> >> >>> 10mbit, we frequently see the ugly graph.
>> >> >>>
>> >> >>> The second tcp_rmem setting is how I have been running the speed test
>> >> >>> servers.
>> >> >>>
>> >> >>> Up to now I thought this was just the distance of the speedtest from
>> >> >>> the interface: perhaps the browser was buffering a lot, and didn't feed
>> >> >>> back progress but now I realise the bumpy one is actually being
>> >> >>> influenced by the server receive window.
>> >> >>>
>> >> >>> I guess my question is this: Why does ALLOWING a large receive window
>> >> >>> appear to encourage problems with upload smoothness??
>> >> >>>
>> >> >>> This implies that setting the receive window should be done on a
>> >> >>> connection by connection basis: small for slow connections, large, for
>> >> >>> high speed, long distance connections.
>> >> >>>
>> >> >>
>> >> >> This is classic bufferbloat
>> >> >>
>> >> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> >> pause until there is that much data outstanding, or they get a timeout
>> >> >> of a packet as a signal to slow down.
>> >> >>
>> >> >> and because you have a gig-E link locally, your machine generates
>> >> >> traffic very rapidly, until all that data is 'in flight'. but it's
>> >> >> really sitting in the buffer of a router trying to get through.
>> >> >>
>> >> >> then when a packet times out, the sender slows down a smidge and
>> >> >> retransmits it. But the old packet is still sitting in a queue, eating
>> >> >> bandwidth. the packets behind it are also going to timeout and be
>> >> >> retransmitted before your first retransmitted packet gets through, so
>> >> >> you have a large slug of data that's being retransmitted, and the first
>> >> >> of the replacement data can't get through until the last of the old
>> >> >> (timed out) data is transmitted.
>> >> >>
>> >> >> then when data starts flowing again, the sender again tries to fill up
>> >> >> the window with data in flight.
>> >> >>
>> >> >>> In addition, if I cap it to 65k, for reasons of smoothness,
>> >> >>> that means the bandwidth delay product will keep maximum speed per
>> >> >>> upload stream quite low. So a symmetric or gigabit connection is going
>> >> >>> to need a ton of parallel streams to see full speed.
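A minimal back-of-the-envelope sketch of the window/BDP arithmetic behind that
concern (illustrative Python; the 65535-byte cap and the roughly 200 ms New York
RTT are taken from this thread, everything else is made up):

    # Rough upper bound on per-stream TCP throughput: receive window / RTT.
    def max_rate_mbit(rwnd_bytes: float, rtt_s: float) -> float:
        return rwnd_bytes * 8 / rtt_s / 1e6

    print(max_rate_mbit(65535, 0.200))      # ~2.6 Mbit/s with a 64 KiB cap
    print(max_rate_mbit(67108864, 0.200))   # ~2684 Mbit/s with the 64 MiB max

At 200 ms a 64 KiB window tops out around 2.6 Mbit/s per stream, so a gigabit
user would indeed need hundreds of parallel streams, which is the trade-off
being weighed here.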
>> >> >>>
>> >> >>> Most puzzling is why would anything special be required on the
>> >> >>> Client --> Server side of the equation
>> >> >>> but nothing much appears wrong with the Server --> Client side,
>> >> >>> whether speeds are very low (GPRS) or very high (gigabit).
>> >> >>>
>> >> >>
>> >> >> but what window sizes are these clients advertising?
>> >> >>
>> >> >>
>> >> >>> Note that also I am not yet sure if smoothness == better throughput. I
>> >> >>> have noticed upload speeds for some people often being under their
>> >> >>> claimed sync rate by 10 or 20% but I've no logs that show the bumpy
>> >> >>> graph is showing inefficiency. Maybe.
>> >> >>>
>> >> >>
>> >> >> If you were to do a packet capture on the server side, you would see
>> >> >> that you have a bunch of packets that are arriving multiple times, but
>> >> >> the first time "doesn't count" because the replacement is already on
>> >> >> the way.
>> >> >>
>> >> >> so your overall throughput is lower for two reasons
>> >> >>
>> >> >> 1. it's bursty, and there are times when the connection actually is
>> >> >> idle (after you have a lot of timed out packets, the sender needs to
>> >> >> ramp up its speed again)
>> >> >>
>> >> >> 2. you are sending some packets multiple times, consuming more total
>> >> >> bandwidth for the same 'goodput' (effective throughput)
>> >> >>
>> >> >> David Lang
>> >> >>
>> >> >>
>> >> >> help!
>> >> >>>
>> >> >>>
>> >> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber
>> <simon@superduper.net>
>> >> >>> wrote:
>> >> >>>
>> >> >>> One thing users understand is slow web access. Perhaps translating
>> >> the
>> >> >>>> latency measurement into 'a typical web page will take X seconds
>> >> longer
>> >> >>>> to
>> >> >>>> load', or even stating the impact as 'this latency causes a typical
>> >> web
>> >> >>>> page to load slower, as if your connection was only YY% of the
>> >> measured
>> >> >>>> speed.'
>> >> >>>>
>> >> >>>> Simon
>> >> >>>>
>> >> >>>> Sent with AquaMail for Android
>> >> >>>> http://www.aqua-mail.com
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton
>> <chromatix99@gmail.com>
>> >> >>>> wrote:
>> >> >>>>
>> >> >>>>>>>> Frequency readouts are probably more accessible to the latter.
>> >> >>>>
>> >> >>>>>
>> >> >>>>>>>> The frequency domain more accessible to laypersons? I have
>> my
>> >> >>>>>>>>
>> >> >>>>>>> doubts ;)
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>>> Gamers, at least, are familiar with “frames per second” and how
>> >> that
>> >> >>>>>>>
>> >> >>>>>> corresponds to their monitor’s refresh rate.
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>> I am sure they can easily transform back into time domain
>> to
>> >> get
>> >> >>>>>>
>> >> >>>>> the frame period ;) . I am partly kidding, I think your idea is
>> >> great
>> >> >>>>> in
>> >> >>>>> that it is a truly positive value which could lend itself to being
>> >> used
>> >> >>>>> in
>> >> >>>>> ISP/router manufacturer advertising, and hence might work in the
>> real
>> >> >>>>> work;
>> >> >>>>> on the other hand I like to keep data as “raw” as possible (not
>> that
>> >> >>>>> ^(-1)
>> >> >>>>> is a transformation worthy of being called data massage).
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>> The desirable range of latencies, when converted to Hz, happens
>> to
>> >> be
>> >> >>>>>>>
>> >> >>>>>> roughly the same as the range of desirable frame rates.
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>> Just to play devils advocate, the interesting part is time
>> or
>> >> >>>>>>
>> >> >>>>> saving time so seconds or milliseconds are also intuitively
>> >> >>>>> understandable
>> >> >>>>> and can be easily added ;)
>> >> >>>>>
>> >> >>>>> Such readouts are certainly interesting to people like us. I have
>> no
>> >> >>>>> objection to them being reported alongside a frequency readout.
>> But
>> >> I
>> >> >>>>> think most people are not interested in “time savings” measured in
>> >> >>>>> milliseconds; they’re much more aware of the minute- and
>> hour-level
>> >> time
>> >> >>>>> savings associated with greater bandwidth.
>> >> >>>>>
>> >> >>>>> - Jonathan Morton
>> >> >>>>>
>> >> >>>>> _______________________________________________
>> >> >>>>> Bloat mailing list
>> >> >>>>> Bloat@lists.bufferbloat.net
>> >> >>>>> https://lists.bufferbloat.net/listinfo/bloat
>> >> >>>>>
>> >> >>>>>
>> >> >>>>
>> >> >>>> _______________________________________________
>> >> >>>> Bloat mailing list
>> >> >>>> Bloat@lists.bufferbloat.net
>> >> >>>> https://lists.bufferbloat.net/listinfo/bloat
>> >> >>>>
>> >> >>>>
>> >> >> _______________________________________________
>> >> >> Bloat mailing list
>> >> >> Bloat@lists.bufferbloat.net
>> >> >> https://lists.bufferbloat.net/listinfo/bloat
>> >> >>
>> >> >>
>> >> >
>> >>
>> >>
>> >> ----------
>> >> _______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >>
>> >
>> >
>> >
>
>
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 16:54 ` David Lang
@ 2015-04-24 17:00 ` Rick Jones
0 siblings, 0 replies; 183+ messages in thread
From: Rick Jones @ 2015-04-24 17:00 UTC (permalink / raw)
To: bloat
Selective ACKnowledgement in TCP does not change the in-order semantics
of TCP as seen by applications using it. Data is always presented to
the receiving application in order. What SACK does is make it more
likely that holes in the sequence of data will be filled-in sooner via
retransmissions, and help avoid retransmitting data already received but
past the first "hole" in the data sequence.
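For anyone wanting to confirm this on a Linux host, SACK negotiation is governed
by a sysctl; a minimal sketch of checking it, assuming the standard /proc layout:

    # "1" means the host will offer/accept TCP SACK; delivery to the
    # application remains strictly in order either way.
    with open("/proc/sys/net/ipv4/tcp_sack") as f:
        print("tcp_sack =", f.read().strip())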
rick jones
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 8:18 ` Sebastian Moeller
2015-04-24 8:29 ` Toke Høiland-Jørgensen
@ 2015-04-25 2:24 ` Simon Barber
1 sibling, 0 replies; 183+ messages in thread
From: Simon Barber @ 2015-04-25 2:24 UTC (permalink / raw)
To: Sebastian Moeller, jb; +Cc: bloat
Perhaps where the green is should depend on the customer's access type. For
instance someone on fiber should have a much better ping than someone on
3G. But I agree this should be a fixed scale, not dependent on idle ping
time. Although VoIP might be good up to 100ms, gamers would want lower values.
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
On April 24, 2015 1:19:08 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi jb,
>
> this looks great!
>
> On Apr 23, 2015, at 12:08 , jb <justin@dslr.net> wrote:
>
> > This is how I've changed the graph of latency under load per input from
> you guys.
> >
> > Taken away log axis.
> >
> > Put in two bands. Yellow starts at double the idle latency, and goes to
> 4x the idle latency
> > red starts there, and goes to the top. No red shows if no bars reach into it.
> > And no yellow band shows if no bars get into that zone.
> >
> > Is it more descriptive?
>
> Mmmh, so the delay we see consists of the delay caused by the distance
> to the server and the delay of the access technology, meaning the un-loaded
> latency can range from a few milliseconds to several 100s of milliseconds
> (for the poor sods behind a satellite link…). Any further latency
> developing under load should be independent of distance and access
> technology as those are already factored into the base latency. In both the
> extreme cases multiples of the base-latency do not seem to be relevant
> measures of bloat, so I would like to argue that the yellow and the red
> zones should be based on fixed increments and not as a ratio of the
> base-latency. This is relevant as people on a slow/high-access-latency link
> have a much smaller tolerance for additional latency than people on a fast
> link if certain latency guarantees need to be met, and thresholds as a
> function of base-latency do not reflect this.
> Now ideally the colors should not be based on the base-latency at all but
> should be at fixed total values, like 200 to 300 ms for voip (according to
> ITU-T G.114 for voip one-way delay <= 150 ms is recommended) in yellow, and
> say 400 to 600 ms in orange, 400ms is upper bound for good voip and 600ms
> for decent voip (according to ITU-T G.114, users are very satisfied up to
> 200 ms one way delay and satisfied up to roughly 300ms) so anything above
> 600 in deep red?
> I know this is not perfect and the numbers will probably require severe
> "bike-shedding” (and I am not sure that ITU-T G.114 really iOS good source
> for the thresholds), but to get a discussion started here are the numbers
> again:
> 0 to 100 ms no color
> 101 to 200 ms green
> 201 to 400 ms yellow
> 401 to 600 ms orange
> 601 to 1000 ms red
> 1001 to infinity purple (or better marina red?)
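To picture how such fixed bands would behave in the test report, here is a
minimal sketch that maps a measured latency under load to the colours proposed
above (the function name and threshold table are illustrative only, not an
agreed scheme):

    # Fixed (absolute) latency bands, per the proposal above.
    BANDS = [(100, "no color"), (200, "green"), (400, "yellow"),
             (600, "orange"), (1000, "red")]

    def band(latency_ms: float) -> str:
        for upper, name in BANDS:
            if latency_ms <= upper:
                return name
        return "purple"      # 1001 ms and beyond

    print(band(180))   # green
    print(band(750))   # red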
>
> Best Regards
> Sebastian
>
>
> >
> > (sorry to the list moderator, gmail keeps sending under the wrong email
> and I get a moderator message)
> >
> > On Thu, Apr 23, 2015 at 8:05 PM, jb <justinbeech@gmail.com> wrote:
> > This is how I've changed the graph of latency under load per input from
> you guys.
> >
> > Taken away log axis.
> >
> > Put in two bands. Yellow starts at double the idle latency, and goes to
> 4x the idle latency
> > red starts there, and goes to the top. No red shows if no bars reach into it.
> > And no yellow band shows if no bars get into that zone.
> >
> > Is it more descriptive?
> >
> >
> > On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> > Wait, this is a 15 years old experiment using Reno and a single test
> > bed, using ns simulator.
> >
> > Naive TCP pacing implementations were tried, and probably failed.
> >
> > Pacing individual packet is quite bad, this is the first lesson one
> > learns when implementing TCP pacing, especially if you try to drive a
> > 40Gbps NIC.
> >
> > https://lwn.net/Articles/564978/
> >
> > Also note we use usec based rtt samples, and nanosec high resolution
> > timers in fq. I suspect the ns simulator experiment had sync issues
> > because of using low resolution timers or simulation artifact, without
> > any jitter source.
> >
> > Billions of flows are now 'paced', but keep in mind most packets are not
> > paced. We do not pace in slow start, and we do not pace when tcp is ACK
> > clocked.
> >
> > Only when someones sets SO_MAX_PACING_RATE below the TCP rate, we can
> > eventually have all packets being paced, using TSO 'clusters' for TCP.
> >
> >
> >
> > On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > > one reference with pdf publicly available. On the website there are
> > > various papers
> > > on this topic. Others might me more relevant but I did not check all of
> > > them.
> >
> > > Understanding the Performance of TCP Pacing,
> > > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > > IEEE INFOCOM 2000 Tel-Aviv, Israel, March 2000, pages 1157-1165.
> > >
> > > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 8:55 ` Sebastian Moeller
2015-04-24 9:02 ` Toke Høiland-Jørgensen
@ 2015-04-25 3:15 ` Simon Barber
2015-04-25 4:04 ` Dave Taht
2015-04-25 3:23 ` Simon Barber
2 siblings, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-04-25 3:15 UTC (permalink / raw)
To: bloat, justin
I think it might be useful to have a 'latency guide' for users. It would
say things like
100ms - VoIP applications work well
250ms - VoIP applications - conversation is not as natural as it could
be, although users may not notice this.
500ms - VoIP applications begin to have awkward pauses in conversation.
1000ms - VoIP applications have significant annoying pauses in conversation.
2000ms - VoIP unusable for most interactive conversations.
0-50ms - web pages load snappily
250ms - web pages can often take an extra second to appear, even on the
highest bandwidth links
1000ms - web pages load significantly slower than they should, taking
several extra seconds to appear, even on the highest bandwidth links
2000ms+ - web browsing is heavily slowed, with many seconds or even 10s
of seconds of delays for pages to load, even on the highest bandwidth links.
Gaming.... some kind of guide here....
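One way to turn the web-page rows of such a guide into a single number, as was
suggested earlier in the thread, is to multiply the added latency by the number
of dependent round trips a page needs; a rough sketch, where the round-trip
count is a guess chosen only to match the ballpark figures above:

    # Very rough extra page-load time caused by added latency, assuming a
    # handful of sequential round trips (DNS, handshakes, dependent requests).
    def extra_load_time_s(added_latency_ms: float,
                          sequential_rtts: int = 4) -> float:
        return added_latency_ms * sequential_rtts / 1000.0

    print(extra_load_time_s(250))    # ~1 s extra
    print(extra_load_time_s(1000))   # ~4 s extra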
Simon
On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> Hi Toke,
>
> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
>> Sebastian Moeller <moeller0@gmx.de> writes:
>>
>>> I know this is not perfect and the numbers will probably require
>>> severe "bike-shedding”
>> Since you're literally asking for it... ;)
>>
>>
>> In this case we're talking about *added* latency. So the ambition should
>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>> that proper application of a good queue management algorithm can keep it
>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
> Oh, I can get behind that easily, I just thought basing the limits on externally relevant total latency thresholds would directly tell the user which applications might run well on his link. Sure this means that people on a satellite link most likely will miss out the acceptable voip threshold by their base-latency alone, but guess what telephony via satellite leaves something to be desired. That said if the alternative is no telephony I would take 1 second one-way delay any day ;).
> What I liked about fixed thresholds is that the test would give a good indication what kind of uses are going to work well on the link under load, given that during load both base and induced latency come into play. I agree that 300ms as first threshold is rather unambiguous though (and I am certain that remote X11 will require a massively lower RTT unless one likes to think of remote desktop as an oil tanker simulator ;) )
>
>> The other increments I have less opinions about, but 100 ms does seem to
>> be a nice round number, so do yellow from 30-100 ms, then start with the
>> reds somewhere above that, and range up into the deep red / purple /
>> black with skulls and fiery death as we go nearer and above one second?
>>
>>
>> I very much think that raising peoples expectations and being quite
>> ambitious about what to expect is an important part of this. Of course
>> the base latency is going to vary, but the added latency shouldn't. And
>> since we have the technology to make sure it doesn't, calling out bad
>> results when we see them is reasonable!
> Okay so this would turn into:
>
> base latency to base latency + 30 ms: green
> base latency + 31 ms to base latency + 100 ms: yellow
> base latency + 101 ms to base latency + 200 ms: orange?
> base latency + 201 ms to base latency + 500 ms: red
> base latency + 501 ms to base latency + 1000 ms: fire
> base latency + 1001 ms to infinity: fire & brimstone
>
> correct?
>
>
>> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 8:55 ` Sebastian Moeller
2015-04-24 9:02 ` Toke Høiland-Jørgensen
2015-04-25 3:15 ` Simon Barber
@ 2015-04-25 3:23 ` Simon Barber
2 siblings, 0 replies; 183+ messages in thread
From: Simon Barber @ 2015-04-25 3:23 UTC (permalink / raw)
To: Sebastian Moeller, Toke Høiland-Jørgensen; +Cc: bloat
On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> Okay so this would turn into:
> base latency to base latency + 30 ms: green
> base latency + 31 ms to base latency + 100 ms: yellow
> base latency + 101 ms to base latency + 200 ms: orange?
> base latency + 201 ms to base latency + 500 ms: red
> base latency + 501 ms to base latency + 1000 ms: fire
> base latency + 1001 ms to infinity: fire & brimstone
> correct?
I don't think the reference should be a measured 'base latency' - but it
should be a fixed value that is different for different access types.
E.G. Satellite access should show green up to about 650 or 700ms, but
fiber should show green up to 50ms max. Perhaps add in speed of light to
account for distance from the user to the test server.
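A minimal sketch of what that could look like; the fiber and satellite figures
are the ones mentioned above, while the other access types and the distance
term are purely illustrative:

    # Per-access-type "green" budget plus a speed-of-light allowance.
    # Light in fibre covers roughly 200 km per millisecond, so a round
    # trip adds about 1 ms per 100 km of path.
    ACCESS_BASE_MS = {"fiber": 50, "cable": 75, "dsl": 100,
                      "3g": 250, "satellite": 700}

    def green_threshold_ms(access: str, path_km: float) -> float:
        return ACCESS_BASE_MS[access] + 2 * path_km / 200.0

    print(green_threshold_ms("fiber", 2000))   # 50 + 20 = 70 ms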
Simon
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-25 3:15 ` Simon Barber
@ 2015-04-25 4:04 ` Dave Taht
2015-04-25 4:26 ` Simon Barber
0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-25 4:04 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
simon all your numbers are too large by at least a factor of 2. I
think also you are thinking about total latency, rather than induced
latency and jitter.
Please see my earlier email laying out the bands. And gettys' manifesto.
If you are thinking in terms of voip, less than 30ms *jitter* is what
you want, and a latency increase of 30ms is a proxy for also holding
jitter that low.
On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
> I think it might be useful to have a 'latency guide' for users. It would say
> things like
>
> 100ms - VoIP applications work well
> 250ms - VoIP applications - conversation is not as natural as it could be,
> although users may not notice this.
> 500ms - VoIP applications begin to have awkward pauses in conversation.
> 1000ms - VoIP applications have significant annoying pauses in conversation.
> 2000ms - VoIP unusable for most interactive conversations.
>
> 0-50ms - web pages load snappily
> 250ms - web pages can often take an extra second to appear, even on the
> highest bandwidth links
> 1000ms - web pages load significantly slower than they should, taking
> several extra seconds to appear, even on the highest bandwidth links
> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
> seconds of delays for pages to load, even on the highest bandwidth links.
>
> Gaming.... some kind of guide here....
>
> Simon
>
>
>
>
> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>
>> Hi Toke,
>>
>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>
>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>
>>>> I know this is not perfect and the numbers will probably require
>>>> severe "bike-shedding”
>>>
>>> Since you're literally asking for it... ;)
>>>
>>>
>>> In this case we're talking about *added* latency. So the ambition should
>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>> that proper application of a good queue management algorithm can keep it
>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>
>> Oh, I can get behind that easily, I just thought basing the limits
>> on externally relevant total latency thresholds would directly tell the user
>> which applications might run well on his link. Sure this means that people
>> on a satellite link most likely will miss out the acceptable voip threshold
>> by their base-latency alone, but guess what telephony via satellite leaves
>> something to be desired. That said if the alternative is no telephony I
>> would take 1 second one-way delay any day ;).
>> What I liked about fixed thresholds is that the test would give a
>> good indication what kind of uses are going to work well on the link under
>> load, given that during load both base and induced latency come into play. I
>> agree that 300ms as first threshold is rather unambiguous though (and I am
>> certain that remote X11 will require a massively lower RTT unless one likes
>> to think of remote desktop as an oil tanker simulator ;) )
>>
>>> The other increments I have less opinions about, but 100 ms does seem to
>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>> reds somewhere above that, and range up into the deep red / purple /
>>> black with skulls and fiery death as we go nearer and above one second?
>>>
>>>
>>> I very much think that raising peoples expectations and being quite
>>> ambitious about what to expect is an important part of this. Of course
>>> the base latency is going to vary, but the added latency shouldn't. And
>>> sine we have the technology to make sure it doesn't, calling out bad
>>> results when we see them is reasonable!
>>
>> Okay so this would turn into:
>>
>> base latency to base latency + 30 ms: green
>> base latency + 31 ms to base latency + 100 ms: yellow
>> base latency + 101 ms to base latency + 200 ms: orange?
>> base latency + 201 ms to base latency + 500 ms: red
>> base latency + 501 ms to base latency + 1000 ms: fire
>> base latency + 1001 ms to infinity:
>> fire & brimstone
>>
>> correct?
>>
>>
>>> -Toke
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-25 4:04 ` Dave Taht
@ 2015-04-25 4:26 ` Simon Barber
2015-04-25 6:03 ` Sebastian Moeller
0 siblings, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-04-25 4:26 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
Certainly the VoIP numbers are for peak total latency, and while Justin is
measuring total latency, because he is only taking a few samples the true
peak values will be a little higher.
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
> simon all your numbers are too large by at least a factor of 2. I
> think also you are thinking about total latency, rather than induced
> latency and jitter.
>
> Please see my earlier email laying out the bands. And gettys' manifesto.
>
> If you are thinking in terms of voip, less than 30ms *jitter* is what
> you want, and a latency increase of 30ms is a proxy for also holding
> jitter that low.
>
>
> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
> > I think it might be useful to have a 'latency guide' for users. It would say
> > things like
> >
> > 100ms - VoIP applications work well
> > 250ms - VoIP applications - conversation is not as natural as it could be,
> > although users may not notice this.
> > 500ms - VoIP applications begin to have awkward pauses in conversation.
> > 1000ms - VoIP applications have significant annoying pauses in conversation.
> > 2000ms - VoIP unusable for most interactive conversations.
> >
> > 0-50ms - web pages load snappily
> > 250ms - web pages can often take an extra second to appear, even on the
> > highest bandwidth links
> > 1000ms - web pages load significantly slower than they should, taking
> > several extra seconds to appear, even on the highest bandwidth links
> > 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
> > seconds of delays for pages to load, even on the highest bandwidth links.
> >
> > Gaming.... some kind of guide here....
> >
> > Simon
> >
> >
> >
> >
> > On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> >>
> >> Hi Toke,
> >>
> >> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> >>
> >>> Sebastian Moeller <moeller0@gmx.de> writes:
> >>>
> >>>> I know this is not perfect and the numbers will probably require
> >>>> severe "bike-shedding”
> >>>
> >>> Since you're literally asking for it... ;)
> >>>
> >>>
> >>> In this case we're talking about *added* latency. So the ambition should
> >>> be zero, or so close to it as to be indiscernible. Furthermore, we know
> >>> that proper application of a good queue management algorithm can keep it
> >>> pretty close to this. Certainly under 20-30 ms of added latency. So from
> >>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
> >>
> >> Oh, I can get behind that easily, I just thought basing the limits
> >> on externally relevant total latency thresholds would directly tell the user
> >> which applications might run well on his link. Sure this means that people
> >> on a satellite link most likely will miss out the acceptable voip threshold
> >> by their base-latency alone, but guess what telephony via satellite leaves
> >> something to be desired. That said if the alternative is no telephony I
> >> would take 1 second one-way delay any day ;).
> >> What I liked about fixed thresholds is that the test would give a
> >> good indication what kind of uses are going to work well on the link under
> >> load, given that during load both base and induced latency come into play. I
> >> agree that 300ms as first threshold is rather unambiguous though (and I am
> >> certain that remote X11 will require a massively lower RTT unless one likes
> >> to think of remote desktop as an oil tanker simulator ;) )
> >>
> >>> The other increments I have less opinions about, but 100 ms does seem to
> >>> be a nice round number, so do yellow from 30-100 ms, then start with the
> >>> reds somewhere above that, and range up into the deep red / purple /
> >>> black with skulls and fiery death as we go nearer and above one second?
> >>>
> >>>
> >>> I very much think that raising peoples expectations and being quite
> >>> ambitious about what to expect is an important part of this. Of course
> >>> the base latency is going to vary, but the added latency shouldn't. And
> >>> sine we have the technology to make sure it doesn't, calling out bad
> >>> results when we see them is reasonable!
> >>
> >> Okay so this would turn into:
> >>
> >> base latency to base latency + 30 ms: green
> >> base latency + 31 ms to base latency + 100 ms: yellow
> >> base latency + 101 ms to base latency + 200 ms: orange?
> >> base latency + 201 ms to base latency + 500 ms: red
> >> base latency + 501 ms to base latency + 1000 ms: fire
> >> base latency + 1001 ms to infinity:
> >> fire & brimstone
> >>
> >> correct?
> >>
> >>
> >>> -Toke
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-25 4:26 ` Simon Barber
@ 2015-04-25 6:03 ` Sebastian Moeller
2015-04-27 16:39 ` Dave Taht
2015-05-06 5:08 ` Simon Barber
0 siblings, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-25 6:03 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
Hi Simon, hi List
On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
If your voip numbers are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way latency (see ITU-T G.114, Fig. 1). I am not "married" to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don't know where they are coming from ;) )
Best Regards
Sebastian
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>
>> simon all your numbers are too large by at least a factor of 2. I
>> think also you are thinking about total latency, rather than induced
>> latency and jitter.
>>
>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>
>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>> you want, and a latency increase of 30ms is a proxy for also holding
>> jitter that low.
>>
>>
>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>> > I think it might be useful to have a 'latency guide' for users. It would say
>> > things like
>> >
>> > 100ms - VoIP applications work well
>> > 250ms - VoIP applications - conversation is not as natural as it could be,
>> > although users may not notice this.
The only way to detect whether a conversation is natural is if users notice, I would say...
>> > 500ms - VoIP applications begin to have awkward pauses in conversation.
>> > 1000ms - VoIP applications have significant annoying pauses in conversation.
>> > 2000ms - VoIP unusable for most interactive conversations.
>> >
>> > 0-50ms - web pages load snappily
>> > 250ms - web pages can often take an extra second to appear, even on the
>> > highest bandwidth links
>> > 1000ms - web pages load significantly slower than they should, taking
>> > several extra seconds to appear, even on the highest bandwidth links
>> > 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>> > seconds of delays for pages to load, even on the highest bandwidth links.
>> >
>> > Gaming.... some kind of guide here....
>> >
>> > Simon
>> >
>> >
>> >
>> >
>> > On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>> >>
>> >> Hi Toke,
>> >>
>> >> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>> >>
>> >>> Sebastian Moeller <moeller0@gmx.de> writes:
>> >>>
>> >>>> I know this is not perfect and the numbers will probably require
>> >>>> severe "bike-shedding”
>> >>>
>> >>> Since you're literally asking for it... ;)
>> >>>
>> >>>
>> >>> In this case we're talking about *added* latency. So the ambition should
>> >>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>> >>> that proper application of a good queue management algorithm can keep it
>> >>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>> >>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>> >>
>> >> Oh, I can get behind that easily, I just thought basing the limits
>> >> on externally relevant total latency thresholds would directly tell the user
>> >> which applications might run well on his link. Sure this means that people
>> >> on a satellite link most likely will miss out the acceptable voip threshold
>> >> by their base-latency alone, but guess what telephony via satellite leaves
>> >> something to be desired. That said if the alternative is no telephony I
>> >> would take 1 second one-way delay any day ;).
>> >> What I liked about fixed thresholds is that the test would give a
>> >> good indication what kind of uses are going to work well on the link under
>> >> load, given that during load both base and induced latency come into play. I
>> >> agree that 300ms as first threshold is rather unambiguous though (and I am
>> >> certain that remote X11 will require a massively lower RTT unless one likes
>> >> to think of remote desktop as an oil tanker simulator ;) )
>> >>
>> >>> The other increments I have less opinions about, but 100 ms does seem to
>> >>> be a nice round number, so do yellow from 30-100 ms, then start with the
>> >>> reds somewhere above that, and range up into the deep red / purple /
>> >>> black with skulls and fiery death as we go nearer and above one second?
>> >>>
>> >>>
>> >>> I very much think that raising peoples expectations and being quite
>> >>> ambitious about what to expect is an important part of this. Of course
>> >>> the base latency is going to vary, but the added latency shouldn't. And
>> >>> sine we have the technology to make sure it doesn't, calling out bad
>> >>> results when we see them is reasonable!
>> >>
>> >> Okay so this would turn into:
>> >>
>> >> base latency to base latency + 30 ms: green
>> >> base latency + 31 ms to base latency + 100 ms: yellow
>> >> base latency + 101 ms to base latency + 200 ms: orange?
>> >> base latency + 201 ms to base latency + 500 ms: red
>> >> base latency + 501 ms to base latency + 1000 ms: fire
>> >> base latency + 1001 ms to infinity:
>> >> fire & brimstone
>> >>
>> >> correct?
>> >>
>> >>
>> >>> -Toke
>> >>
>> >> _______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >
>> >
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-25 6:03 ` Sebastian Moeller
@ 2015-04-27 16:39 ` Dave Taht
2015-04-28 7:18 ` Sebastian Moeller
2015-05-06 5:08 ` Simon Barber
1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-27 16:39 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
On Fri, Apr 24, 2015 at 11:03 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Simon, hi List
>
> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>
>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>
> If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
At one level I am utterly prepared to set new (and lower) standards
for latency, and not necessarily pay attention to compromise-driven
standards processes established in the 70s and 80s, but to the actual
user experience numbers that jim cited in the fq+aqm manifesto on his
blog.
I consider induced latencies of 30ms as a "green" band because that is
the outer limit of the range modern aqm technologies can achieve (fq
can get closer to 0). There was a lot of debate about 20ms being the
right figure for induced latency and/or jitter, a year or two back,
and we settled on 30ms for both, so that number is already a
compromise figure.
It is highly likely that folk here are not aware of the extra-ordinary
amount of debate that went into deciding the ultimate ATM cell size
back in the day. The eu wanted 32 bytes, the US 48, both because that
was basically a good size for the local continental distance and echo
cancellation stuff, at the time.
In the case of voip, jitter is actually more important than latency.
Modern codecs and coding techniques can tolerate 30ms of jitter, just
barely, without sound artifacts. >60ms, boom, crackle, hiss.
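To make the induced-latency-versus-jitter distinction concrete, here is a
minimal sketch of computing both from a series of RTT samples taken under load
(the sample values are invented):

    # Induced latency = sample minus the idle baseline; a simple jitter
    # figure is the mean absolute difference between consecutive samples.
    samples_ms = [52, 61, 49, 95, 58, 120, 55]   # RTTs measured under load
    idle_ms = 48                                  # idle baseline RTT

    induced = [s - idle_ms for s in samples_ms]
    jitter = sum(abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])) \
             / (len(samples_ms) - 1)

    print("worst induced latency:", max(induced), "ms")   # 72 ms
    print("mean jitter:", round(jitter, 1), "ms")         # 38.5 ms

On a link that keeps induced latency under 30 ms, both of these numbers stay
inside the green band described above.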
> Best Regards
> Sebastian
>
>
>>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>>
>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>>> simon all your numbers are too large by at least a factor of 2. I
>>> think also you are thinking about total latency, rather than induced
>>> latency and jitter.
>>>
>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>
>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>> you want, and a latency increase of 30ms is a proxy for also holding
>>> jitter that low.
>>>
>>>
>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>> > I think it might be useful to have a 'latency guide' for users. It would say
>>> > things like
>>> >
>>> > 100ms - VoIP applications work well
>>> > 250ms - VoIP applications - conversation is not as natural as it could be,
>>> > although users may not notice this.
>
> The only way to detect whether a conversation is natural is if users notice, I would say...
>
>>> > 500ms - VoIP applications begin to have awkward pauses in conversation.
>>> > 1000ms - VoIP applications have significant annoying pauses in conversation.
>>> > 2000ms - VoIP unusable for most interactive conversations.
>>> >
>>> > 0-50ms - web pages load snappily
>>> > 250ms - web pages can often take an extra second to appear, even on the
>>> > highest bandwidth links
>>> > 1000ms - web pages load significantly slower than they should, taking
>>> > several extra seconds to appear, even on the highest bandwidth links
>>> > 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>> > seconds of delays for pages to load, even on the highest bandwidth links.
>>> >
>>> > Gaming.... some kind of guide here....
>>> >
>>> > Simon
>>> >
>>> >
>>> >
>>> >
>>> > On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>> >>
>>> >> Hi Toke,
>>> >>
>>> >> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>> >>
>>> >>> Sebastian Moeller <moeller0@gmx.de> writes:
>>> >>>
>>> >>>> I know this is not perfect and the numbers will probably require
>>> >>>> severe "bike-shedding”
>>> >>>
>>> >>> Since you're literally asking for it... ;)
>>> >>>
>>> >>>
>>> >>> In this case we're talking about *added* latency. So the ambition should
>>> >>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>> >>> that proper application of a good queue management algorithm can keep it
>>> >>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>> >>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>> >>
>>> >> Oh, I can get behind that easily, I just thought basing the limits
>>> >> on externally relevant total latency thresholds would directly tell the user
>>> >> which applications might run well on his link. Sure this means that people
>>> >> on a satellite link most likely will miss out the acceptable voip threshold
>>> >> by their base-latency alone, but guess what telephony via satellite leaves
>>> >> something to be desired. That said if the alternative is no telephony I
>>> >> would take 1 second one-way delay any day ;).
>>> >> What I liked about fixed thresholds is that the test would give a
>>> >> good indication what kind of uses are going to work well on the link under
>>> >> load, given that during load both base and induced latency come into play. I
>>> >> agree that 300ms as first threshold is rather unambiguous though (and I am
>>> >> certain that remote X11 will require a massively lower RTT unless one likes
>>> >> to think of remote desktop as an oil tanker simulator ;) )
>>> >>
>>> >>> The other increments I have less opinions about, but 100 ms does seem to
>>> >>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>> >>> reds somewhere above that, and range up into the deep red / purple /
>>> >>> black with skulls and fiery death as we go nearer and above one second?
>>> >>>
>>> >>>
>>> >>> I very much think that raising peoples expectations and being quite
>>> >>> ambitious about what to expect is an important part of this. Of course
>>> >>> the base latency is going to vary, but the added latency shouldn't. And
>>> >>> sine we have the technology to make sure it doesn't, calling out bad
>>> >>> results when we see them is reasonable!
>>> >>
>>> >> Okay so this would turn into:
>>> >>
>>> >> base latency to base latency + 30 ms: green
>>> >> base latency + 31 ms to base latency + 100 ms: yellow
>>> >> base latency + 101 ms to base latency + 200 ms: orange?
>>> >> base latency + 201 ms to base latency + 500 ms: red
>>> >> base latency + 501 ms to base latency + 1000 ms: fire
>>> >> base latency + 1001 ms to infinity:
>>> >> fire & brimstone
>>> >>
>>> >> correct?
>>> >>
>>> >>
>>> >>> -Toke
>>> >>
>>> >> _______________________________________________
>>> >> Bloat mailing list
>>> >> Bloat@lists.bufferbloat.net
>>> >> https://lists.bufferbloat.net/listinfo/bloat
>>> >
>>> >
>>> > _______________________________________________
>>> > Bloat mailing list
>>> > Bloat@lists.bufferbloat.net
>>> > https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-27 16:39 ` Dave Taht
@ 2015-04-28 7:18 ` Sebastian Moeller
2015-04-28 8:01 ` David Lang
2015-04-28 14:02 ` Dave Taht
0 siblings, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-28 7:18 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
Hi Dave,
On Apr 27, 2015, at 18:39 , Dave Taht <dave.taht@gmail.com> wrote:
> On Fri, Apr 24, 2015 at 11:03 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>> Hi Simon, hi List
>>
>> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>>
>>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>>
>> If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
>
> At one level I am utterly prepared to set new (And lower) standards
> for latency, and not necessarily pay attention to compromise driven
> standards processes established in the 70s and 80s, but to the actual
> user experience numbers that jim cited in the fq+aqm manefesto on his
> blog.
I am not sure I got the right one, could you please post a link to the document you are referring to? My personal issue with new standards is that it is going to be harder to convince others that these are real and not simply selected to push our agenda, hence using other people's numbers, preferably numbers backed up by research ;) I also note that in the ITU numbers I dragged into the discussion the measurement pretends to be mouth-to-ear (one way) delay, so for intermediate buffering the thresholds need to be lower to allow for the sampling interval (I think typically 10ms for the usual codecs G.711 and G.722), further sender processing and receiver processing, so I guess for the ITU thresholds we should subtract say 30ms for processing and then double it to go from one-way delay to RTT. Now I am amazed how large the resulting RTTs actually are, so I assume I need to scrutinize the psychophysics experiments that hopefully underlie those numbers...
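Spelling that arithmetic out with the G.114 figures quoted earlier in the
thread (a rough sketch; the 30 ms processing allowance is the estimate above,
not an ITU number):

    # Convert a mouth-to-ear (one-way) limit into a rough network RTT budget:
    # subtract the processing/codec allowance, then double for both directions.
    def rtt_budget_ms(mouth_to_ear_ms: float,
                      processing_ms: float = 30) -> float:
        return 2 * (mouth_to_ear_ms - processing_ms)

    print(rtt_budget_ms(150))   # 240 ms for the 150 ms "recommended" limit
    print(rtt_budget_ms(200))   # 340 ms for "users very satisfied"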
>
> I consider induced latencies of 30ms as a "green" band because that is
> the outer limit of the range modern aqm technologies can achieve (fq
> can get closer to 0). There was a lot of debate about 20ms being the
> right figure for induced latency and/or jitter, a year or two back,
> and we settled on 30ms for both, so that number is already a
> compromise figure.
Ah, I think someone brought this up already, do we need to make allowances for slow links? If a full packet traversal is already 16ms can we really expect 30ms? And should we even care, I mean, a slow link is a slow link and will have some drawbacks maybe we should just expose those instead of rationalizing them away? On the other hand I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm as long as the link behaves well nothing needs changing)
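For reference, the 16 ms figure is roughly the serialisation delay of one
full-size packet on a slow uplink; a quick illustration (the 768 kbit/s rate is
just an example of a slow ADSL uplink, not a number from the thread):

    # Serialisation delay of a single packet: size / link rate.
    def serialisation_ms(packet_bytes: int, link_kbit_s: float) -> float:
        return packet_bytes * 8 / link_kbit_s   # bits / (kbit/s) = ms

    print(serialisation_ms(1500, 768))   # ~15.6 ms per full-size packet

So on links that slow, a single queued packet already eats half of a 30 ms
budget, which is exactly the allowance question being raised.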
>
> It is highly likely that folk here are not aware of the extra-ordinary
> amount of debate that went into deciding the ultimate ATM cell size
> back in the day. The eu wanted 32 bytes, the US 48, both because that
> was basically a good size for the local continental distance and echo
> cancellation stuff, at the time.
>
> In the case of voip, jitter is actually more important than latency.
> Modern codecs and coding techniques can tolerate 30ms of jitter, just
> barely, without sound artifacts. >60ms, boom, crackle, hiss.
Ah, and here is where I understand why my simplistic model from above fails; induced latency will contribute significantly to jitter and hence is a good proxy for link suitability for real-time applications. So I agree that using the induced latency as the measure to base the color bands on sounds like a good approach.
>
>
>> Best Regards
>> Sebastian
>>
>>
>>>
>>> Simon
>>>
>>> Sent with AquaMail for Android
>>> http://www.aqua-mail.com
>>>
>>>
>>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>>> simon all your numbers are too large by at least a factor of 2. I
>>>> think also you are thinking about total latency, rather than induced
>>>> latency and jitter.
>>>>
>>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>>
>>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>>> you want, and a latency increase of 30ms is a proxy for also holding
>>>> jitter that low.
>>>>
>>>>
>>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>>> things like
>>>>>
>>>>> 100ms - VoIP applications work well
>>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>>> although users may not notice this.
>>
>> The only way to detect whether a conversation is natural is if users notice, I would say...
>>
>>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>>
>>>>> 0-50ms - web pages load snappily
>>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>>> highest bandwidth links
>>>>> 1000ms - web pages load significantly slower than they should, taking
>>>>> several extra seconds to appear, even on the highest bandwidth links
>>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>>
>>>>> Gaming.... some kind of guide here....
>>>>>
>>>>> Simon
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>>>
>>>>>> Hi Toke,
>>>>>>
>>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>>
>>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>>
>>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>>> severe "bike-shedding”
>>>>>>>
>>>>>>> Since you're literally asking for it... ;)
>>>>>>>
>>>>>>>
>>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>>>
>>>>>> Oh, I can get behind that easily, I just thought basing the limits
>>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>>> which applications might run well on his link. Sure this means that people
>>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>>> something to be desired. That said if the alternative is no telephony I
>>>>>> would take 1 second one-way delay any day ;).
>>>>>> What I liked about fixed thresholds is that the test would give a
>>>>>> good indication what kind of uses are going to work well on the link under
>>>>>> load, given that during load both base and induced latency come into play. I
>>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>>
>>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>>
>>>>>>>
>>>>>>> I very much think that raising peoples expectations and being quite
>>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>>> results when we see them is reasonable!
>>>>>>
>>>>>> Okay so this would turn into:
>>>>>>
>>>>>> base latency to base latency + 30 ms: green
>>>>>> base latency + 31 ms to base latency + 100 ms: yellow
>>>>>> base latency + 101 ms to base latency + 200 ms: orange?
>>>>>> base latency + 201 ms to base latency + 500 ms: red
>>>>>> base latency + 501 ms to base latency + 1000 ms: fire
>>>>>> base latency + 1001 ms to infinity:
>>>>>> fire & brimstone
>>>>>>
>>>>>> correct?
>>>>>>
>>>>>>
>>>>>>> -Toke
>>>>>>
>>>>>> _______________________________________________
>>>>>> Bloat mailing list
>>>>>> Bloat@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>>
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>>
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 7:18 ` Sebastian Moeller
@ 2015-04-28 8:01 ` David Lang
2015-04-28 8:19 ` Toke Høiland-Jørgensen
` (2 more replies)
2015-04-28 14:02 ` Dave Taht
1 sibling, 3 replies; 183+ messages in thread
From: David Lang @ 2015-04-28 8:01 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
On Tue, 28 Apr 2015, Sebastian Moeller wrote:
>>
>> I consider induced latencies of 30ms as a "green" band because that is
>> the outer limit of the range modern aqm technologies can achieve (fq
>> can get closer to 0). There was a lot of debate about 20ms being the
>> right figure for induced latency and/or jitter, a year or two back,
>> and we settled on 30ms for both, so that number is already a
>> compromise figure.
>
> Ah, I think someone brought this up already, do we need to make
> allowances for slow links? If a full packet traversal is already 16ms can we
> really expect 30ms? And should we even care, I mean, a slow link is a slow
> link and will have some drawbacks maybe we should just expose those instead of
> rationalizing them away? On the other hand I tend to think that in the end it
> is all about the cumulative performance of the link for most users, i.e. if
> the link allows glitch-free voip while heavy up- and downloads go on, normal
> users should not care one iota what the induced latency actually is (aqm or no
> aqm as long as the link behaves well nothing needs changing)
>
>>
>> It is highly likely that folk here are not aware of the extra-ordinary
>> amount of debate that went into deciding the ultimate ATM cell size
>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>> was basically a good size for the local continental distance and echo
>> cancellation stuff, at the time.
>>
>> In the case of voip, jitter is actually more important than latency.
>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>
> Ah, and here is were I understand why my simplistic model from above
> fails; induced latency will contribute significantly to jitter and hence is a
> good proxy for link-suitability for real-time applications. So I agree using
> the induced latency as measure to base the color bands from sounds like a good
> approach.
>
Voice is actually remarkably tolerant of pure latency. While 60ms of jitter
makes a connection almost unusable, a few hundred ms of consistent latency
isn't a problem. IIRC (from my college days when ATM was the new, hot
technology) you have to get up to around a second before pure, consistent
latency starts to break things.
Gaming and high-frequency trading care about the minimum latency a LOT, but most
other things are far more sensitive to jitter than pure latency. [1]
The trouble with bufferbloat-induced latency is that it is highly variable based
on exactly how much other data is in the queue, so under the wrong conditions,
all latency caused by buffering shows up as jitter.
David Lang
[1] pure latency will degrade the experience for many things, but usually in a
fairly graceful manner.
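A quick sketch of that last point, with made-up numbers for the link rate,
buffer size and backlog (illustrative assumptions, not measurements): the
queueing delay a packet sees swings with the backlog, so nearly the whole
buffer-induced delay shows up as delay variation, i.e. jitter.
    # Sketch: how a swinging queue turns buffer-induced delay into jitter.
    LINK_RATE_BPS = 1_000_000           # 1 Mbit/s uplink (assumed)
    def queueing_delay_ms(queue_bytes):
        # time a newly arriving packet waits behind queue_bytes of backlog
        return queue_bytes * 8 / LINK_RATE_BPS * 1000
    # Pretend the backlog oscillates between empty and a full 64 KB buffer,
    # as it does when a bulk TCP flow fills the buffer and then backs off.
    backlog = [0, 8_000, 32_000, 64_000, 48_000, 16_000, 0, 64_000]
    delays = [queueing_delay_ms(b) for b in backlog]
    print("induced delay per sample (ms):", [round(d, 1) for d in delays])
    print("peak induced delay (ms):", round(max(delays), 1))
    print("delay swing seen as jitter (ms):", round(max(delays) - min(delays), 1))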
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 8:01 ` David Lang
@ 2015-04-28 8:19 ` Toke Høiland-Jørgensen
2015-04-28 15:42 ` David Lang
2015-04-28 8:38 ` Sebastian Moeller
2015-04-28 11:04 ` Mikael Abrahamsson
2 siblings, 1 reply; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-28 8:19 UTC (permalink / raw)
To: David Lang; +Cc: bloat
David Lang <david@lang.hm> writes:
> Voice is actually remarkably tolerant of pure latency. While 60ms of
> jitter makes a connection almost unusalbe, a few hundred ms of
> consistant latency isn't a problem. IIRC (from my college days when
> ATM was the new, hot technology) you have to get up to around a second
> of latency before pure-consistant latency starts to break things.
Well, isn't that more a case of "the human brain will compensate for the
latency"? Sure, you *can* talk to someone with half a second of delay,
but it's bloody *annoying*. :P
That, for me, is the main reason to go with lower figures. I don't want
to just be able to physically talk with someone without the codec
breaking, I want to be able to *enjoy* the experience and not be totally
exhausted by latency fatigue afterwards.
One of the things that really struck a chord with me was hearing the
people from the LoLa project
(http://www.conservatorio.trieste.it/artistica/ricerca/progetto-lola-low-latency/lola-case-study.pdf)
talk about how using their big fancy concert video conferencing system
to just talk to each other, it was like having a real face-to-face
conversation with none of the annoyances of regular video chat.
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 8:01 ` David Lang
2015-04-28 8:19 ` Toke Høiland-Jørgensen
@ 2015-04-28 8:38 ` Sebastian Moeller
2015-04-28 12:09 ` Rich Brown
2015-04-28 15:39 ` David Lang
2015-04-28 11:04 ` Mikael Abrahamsson
2 siblings, 2 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-28 8:38 UTC (permalink / raw)
To: David Lang; +Cc: bloat
Hi David,
On Apr 28, 2015, at 10:01 , David Lang <david@lang.hm> wrote:
> On Tue, 28 Apr 2015, Sebastian Moeller wrote:
>
>>>
>>> I consider induced latencies of 30ms as a "green" band because that is
>>> the outer limit of the range modern aqm technologies can achieve (fq
>>> can get closer to 0). There was a lot of debate about 20ms being the
>>> right figure for induced latency and/or jitter, a year or two back,
>>> and we settled on 30ms for both, so that number is already a
>>> compromise figure.
>>
>> Ah, I think someone brought this up already, do we need to make allowances for slow links? If a full packet traversal is already 16ms can we really expect 30ms? And should we even care, I mean, a slow link is a slow link and will have some drawbacks maybe we should just expose those instead of rationalizing them away? On the other hand I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm as long as the link behaves well nothing needs changing)
>>
>>>
>>> It is highly likely that folk here are not aware of the extra-ordinary
>>> amount of debate that went into deciding the ultimate ATM cell size
>>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>>> was basically a good size for the local continental distance and echo
>>> cancellation stuff, at the time.
>>>
>>> In the case of voip, jitter is actually more important than latency.
>>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>>
>> Ah, and here is were I understand why my simplistic model from above fails; induced latency will contribute significantly to jitter and hence is a good proxy for link-suitability for real-time applications. So I agree using the induced latency as measure to base the color bands from sounds like a good approach.
>>
>
> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter makes a connection almost unusalbe, a few hundred ms of consistant latency isn't a problem. IIRC (from my college days when ATM was the new, hot technology) you have to get up to around a second of latency before pure-consistant latency starts to break things.
Well, what I want to see is a study, preferably psychophysics not modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds of delay if need be, but the goal should be fluent and seamless conversation, not interleaved monologues. Thanks for giving a bound for jitter; do you have any reference for perceptual jitter thresholds or some such?
>
> Gaming and high frequency trading care about the minimum latency a LOT. but most other things are far more sentitive to jitter than pure latency. [1]
Sure, but it is easy to “lose” latency and impossible to reclaim it, so we should aim for the lowest latency ;). Now, as long as jitter has a bound, one can trade jitter for latency by simply buffering more at the receiver, thereby ironing out (a part of) the jitter while introducing additional latency. One reason why I still think that absolute latency thresholds have some value is that they would allow one to assess how much of a “budget” one has to flatten out jitter, but I digress. I also think now that conflating absolute latency and bufferbloat will not really help (unless everybody understands induced latency by heart ;))…
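A small sketch of that trade-off, with invented one-way delay samples; the
playout-buffer depth is just a crude percentile pick, not any particular
implementation:
    # Sketch: trading jitter for latency with a receiver-side playout buffer.
    one_way_delay_ms = [42, 45, 60, 41, 95, 44, 130, 43, 47, 41]  # assumed samples
    base = min(one_way_delay_ms)                     # path/base latency
    ordered = sorted(one_way_delay_ms)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]    # crude 95th-percentile pick
    playout_margin = p95 - base                      # de-jitter buffer depth needed
    print("base one-way delay (ms):", base)
    print("de-jitter buffer depth (ms):", playout_margin)
    print("delay the listener actually hears (ms):", base + playout_margin)
The buffer absorbs (almost all of) the jitter, but every packet now leaves
the buffer that much later, which is exactly the latency/jitter trade.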
>
> The trouble with bufferbloat induced latency is that it is highly variable based on exactly how much other data is in the queue, so under the wrong conditions, all latency caused by buffering shows up as jitter.
That is how I understood Dave’s mail, thanks for confirming that.
Best Regards
Sebastian
>
> David Lang
>
> [1] pure latency will degrade the experience for many things, but usually in a fairly graceful manner.
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 8:01 ` David Lang
2015-04-28 8:19 ` Toke Høiland-Jørgensen
2015-04-28 8:38 ` Sebastian Moeller
@ 2015-04-28 11:04 ` Mikael Abrahamsson
2015-04-28 11:49 ` Sebastian Moeller
2015-04-28 14:06 ` Dave Taht
2 siblings, 2 replies; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-04-28 11:04 UTC (permalink / raw)
To: David Lang; +Cc: bloat
On Tue, 28 Apr 2015, David Lang wrote:
> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter
> makes a connection almost unusalbe, a few hundred ms of consistant latency
> isn't a problem. IIRC (from my college days when ATM was the new, hot
> technology) you have to get up to around a second of latency before
> pure-consistant latency starts to break things.
I would say most people start to have trouble talking to each other
when the RTT exceeds around 500-600ms.
I mostly agree with
http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/5125-delay-details.html
but RTT of over 500ms is not fun. You basically can't have a heated
argument/discussion when the RTT is higher than this :P
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 11:04 ` Mikael Abrahamsson
@ 2015-04-28 11:49 ` Sebastian Moeller
2015-04-28 12:24 ` Mikael Abrahamsson
2015-04-28 14:06 ` Dave Taht
1 sibling, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-28 11:49 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
Hi Mikael,
On Apr 28, 2015, at 13:04 , Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Tue, 28 Apr 2015, David Lang wrote:
>
>> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter makes a connection almost unusalbe, a few hundred ms of consistant latency isn't a problem. IIRC (from my college days when ATM was the new, hot technology) you have to get up to around a second of latency before pure-consistant latency starts to break things.
>
> I would say most people start to get trouble when talking to each other when the RTT exceeds around 500-600ms.
>
> I mostly agree with http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/5125-delay-details.html but RTT of over 500ms is not fun. You basically can't have a heated argument/discussion when the RTT is higher than this :P
From "Table 4.1 Delay Specifications” of that link we basically have a recapitulation of the ITU-T G.114 source, one-way mouth to ear latency thresholds for acceptable voip performance. The rest of the link discusses additional sources of latency and should allow to come up with a reasonable estimate how much of the latency budget can be spend on the transit. So in my mind an decent thresholds would be (150ms mouth-to-ear-delay - sender-processing - receiver-processing) * 2. Then again I think the discussion turned to relating buffer-bloat inured latency as jitter source, so the thresholds should be framed in a jitter-budget, not pure latency ;).
Best Regards
Sebastian
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 8:38 ` Sebastian Moeller
@ 2015-04-28 12:09 ` Rich Brown
2015-04-28 15:26 ` David Lang
2015-04-28 15:39 ` David Lang
1 sibling, 1 reply; 183+ messages in thread
From: Rich Brown @ 2015-04-28 12:09 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
On Apr 28, 2015, at 4:38 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Well, what I want to see is a study, preferably psychophysics not modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds de;ays if need be, but the goal should be fluent and seamless conversation not interleaved monologues. Thanks for giving a bound for jitter, do you have any reference for perceptional jitter thresholds or some such?
An anecdote (we don't need no stinkin' studies :-)
I frequently listen to the same interview on NPR twice: first at say, 6:20 am when the news is breaking, and then at the 8:20am rebroadcast.
The first interview is live, sometimes with significant satellite delays between the two parties. The sound quality is fine. But the pauses between question and answer (waiting for the satellite propagation) sometimes make the responder seem a little "slow witted" - as if they have to struggle to compose their response.
But the rebroadcast gets "tuned up" by NPR audio folks, and those pauses get edited out. I was amazed how the conversation takes on a completely different flavor: any negative impression goes away without that latency.
So, what lesson do I learn from this? Pure latency *does* affect the nature of the conversation - it may not be fluent and seamless if there's a satellite link's worth of latency involved.
Although not being exhibited in this case, I can believe that jitter plays worse havoc on a conversation. I'll also bet that induced latency is a good proxy for jitter.
Rich
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 11:49 ` Sebastian Moeller
@ 2015-04-28 12:24 ` Mikael Abrahamsson
2015-04-28 13:44 ` Sebastian Moeller
0 siblings, 1 reply; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-04-28 12:24 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1539 bytes --]
On Tue, 28 Apr 2015, Sebastian Moeller wrote:
> From "Table 4.1 Delay Specifications” of that link we basically
> have a recapitulation of the ITU-T G.114 source, one-way mouth to ear
> latency thresholds for acceptable voip performance. The rest of the link
> discusses additional sources of latency and should allow to come up with
> a reasonable estimate how much of the latency budget can be spend on the
> transit. So in my mind an decent thresholds would be (150ms
> mouth-to-ear-delay - sender-processing - receiver-processing) * 2. Then
> again I think the discussion turned to relating buffer-bloat inured
> latency as jitter source, so the thresholds should be framed in a
> jitter-budget, not pure latency ;).
Yes, it's all about mouth-to-ear and then back again. I have historically
been involved a few times in analyzing end-to-end latency when customer
complaints came in about delay; it seemed that customers started
complaining around 450-550 ms RTT (mouth-network-ear-mouth-network-ear).
This usually was a result of multiple PDV (Packet Delay Variation, a.k.a.
jitter) buffers due to media conversions on the voice path, for instance when
there was VoIP-TDM-VoIP-ATM-VoIP and potentially even more conversions due
to VoIP/PSTN/Mobile interaction.
So this is one reason I am interested in the bufferbloat movement: with
less bufferbloat one can get away with smaller PDV buffers,
which means less end-to-end delay for realtime applications.
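A tiny sketch of why those conversions hurt, with invented per-segment jitter
figures: each conversion point needs its own PDV/de-jitter buffer roughly as
deep as the jitter of the segment feeding it, so the delays stack up.
    # Sketch: per-conversion PDV buffers stacking up along a voice path.
    # (segment, jitter it feeds into the next PDV buffer, in ms) -- invented
    path = [("VoIP access", 30), ("TDM hop", 5), ("VoIP core", 20), ("mobile leg", 40)]
    propagation_ms = 40  # assumed end-to-end propagation
    pdv_buffers_ms = sum(jitter for _, jitter in path)  # each conversion re-buffers
    print("sum of PDV buffers (ms):", pdv_buffers_ms)                            # 95
    print("one-way delay incl. buffers (ms):", propagation_ms + pdv_buffers_ms)  # 135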
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 12:24 ` Mikael Abrahamsson
@ 2015-04-28 13:44 ` Sebastian Moeller
2015-04-28 19:09 ` Rick Jones
0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2015-04-28 13:44 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
Hi Mikael,
On Apr 28, 2015, at 14:24 , Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Tue, 28 Apr 2015, Sebastian Moeller wrote:
>
>> From "Table 4.1 Delay Specifications” of that link we basically have a recapitulation of the ITU-T G.114 source, one-way mouth to ear latency thresholds for acceptable voip performance. The rest of the link discusses additional sources of latency and should allow to come up with a reasonable estimate how much of the latency budget can be spend on the transit. So in my mind an decent thresholds would be (150ms mouth-to-ear-delay - sender-processing - receiver-processing) * 2. Then again I think the discussion turned to relating buffer-bloat inured latency as jitter source, so the thresholds should be framed in a jitter-budget, not pure latency ;).
>
> Yes, it's all about mouth-to-ear and then back again. I have historically been involved a few times in analyzing end-to-end latency when customer complaints came in about delay, it seemed that customers started complaining around 450-550 ms RTT (mouth-network-ear-mouth-network-ear).
Ah, this fits with the ITU Figure 1 data: at ~250ms one-way delay they switch from “users very satisfied” to “users satisfied”, also showing that the ITU had very patient subjects in their tests… So if we need to allow for sampling and de-jittering at both ends costing, say, 50ms, we end up with an acceptable threshold of ~400ms network RTT for decent voip conversations. Actually, let's assume the sender takes 30ms for sampling and packetizing, and the receiver takes the actual jitter in ms for its de-jittering filter/buffer; then we can draw the threshold as a function of the maximum latency increase under load...
Do you have numbers for acceptable jitter levels?
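Sketching that function with the figures quoted above (250ms one-way target,
30ms sender-side sampling), and treating the measured latency increase under
load as the jitter the receiver has to buffer out:
    # Sketch: acceptable network RTT as a function of bufferbloat-induced latency.
    ONE_WAY_TARGET_MS = 250    # "users satisfied" threshold quoted above
    SENDER_SAMPLING_MS = 30    # sampling + packetization figure quoted above
    def max_network_rtt_ms(induced_latency_ms):
        # induced latency ~ jitter ~ the de-jitter buffer the receiver must add
        one_way_budget = ONE_WAY_TARGET_MS - SENDER_SAMPLING_MS - induced_latency_ms
        return max(0, 2 * one_way_budget)
    for bloat in (0, 30, 100, 200):
        print("induced latency", bloat, "ms -> max network RTT",
              max_network_rtt_ms(bloat), "ms")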
>
> This usually was a result of multiple PDV (Packet Delay Variation, a.k.a jitter) buffers due media conversions on the voice path,
This sucks.
> for instance when there was VoIP-TDM-VoIP-ATM-VoIP and potentially even more conversions due to VoIP/PSTN/Mobile interaction.
I hope the future will cut this down to at most one transition, or preferably none ;) (with both PSTN and TDM slowly going the way of the Dodo).
Best Regards
Sebastian
>
> So this is one reason I am interested in the bufferbloat movement, because with less bufferbloat then one can get away with smaller PDV buffers, which means less end-to-end delay for realtime applications.
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 7:18 ` Sebastian Moeller
2015-04-28 8:01 ` David Lang
@ 2015-04-28 14:02 ` Dave Taht
1 sibling, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-28 14:02 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
On Tue, Apr 28, 2015 at 12:18 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Dave,
>
> On Apr 27, 2015, at 18:39 , Dave Taht <dave.taht@gmail.com> wrote:
>
>> On Fri, Apr 24, 2015 at 11:03 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>> Hi Simon, hi List
>>>
>>> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>>>
>>>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>>>
>>> If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
>>
>> At one level I am utterly prepared to set new (And lower) standards
>> for latency, and not necessarily pay attention to compromise driven
>> standards processes established in the 70s and 80s, but to the actual
>> user experience numbers that jim cited in the fq+aqm manefesto on his
>> blog.
>
> I am not sure I git the right one, could you please post a link to the document you are referring to?
I tend to refer to this as the fq+aqm "manifesto":
https://gettys.wordpress.com/2013/07/10/low-latency-requires-smart-queuing-traditional-aqm-is-not-enough/
Although Jim takes too long to get to the FQ portion of it. This was
because the uphill battle with the IETF was all about e2e vs. AQM
techniques, with FQ hardly on the table at all when we started.
Also I view many of the numbers he cited as *outer bounds* for
latency. While some might claim a band can make good music with 30ms
latency, I generally have to stay within 8 feet or less of the
drummer....
>My personal issue with new standards is that it is going to be harder to convince others that these are real and not simply selected to push our agenda; hence using other people's numbers, preferably numbers backed up by research ;)
Sebastian pointed out to me privately about the old ATM dispute:
"The way I heard the story, it was France pushing for 32 bytes
as this would allow a national net without the need for echo
cancelation, while the US already required echo cancelation and wanted
64 bytes. In true Solomonic fashion 48 bytes was selected, pleasing no
one ;) (see http://cntic03.hit.bme.hu/meres/ATMFAQ/d7.htm). Nice
story."
>I also note that the ITU numbers I dragged into the discussion are specified as mouth-to-ear (one-way) delay, so for intermediate buffering the thresholds need to be lower to allow for the sampling interval (I think typically 10ms for the usual codecs G.711 and G.722) plus further sender and receiver processing; so I guess for the ITU thresholds we should subtract say 30ms for processing and then double it to go from one-way delay to RTT. Now I am amazed how large the resulting RTTs actually are, so I assume I need to scrutinize the psychophysics experiments that hopefully underlie those numbers...
The analogy I use when discussing this with people in the real world
goes like this: "Here we are discussing this around a lunch table. A
single millisecond is about a foot, and I am about 3 feet from you, so
the ideal latency for conversation is much, much less than the maximum
laid out by multiple standards for voice. Shall we try to have this
conversation from 30 ms/30 feet apart?"
Less latency = more intimacy. How many here have had a whispered
conversation into a lover's ear? Would it have been anywhere near as
good if you were across the hall?
Far be it from me to project internet latency reductions as being key
to achieving world peace and better mutual understanding[1], but this
simple comparison of real world latencies to established standards
makes a ton of sense to me and everyone I have tried this comparison
on.
The existing standards for voice were driven by what was achievable at
the time, more than they were driven by psychoacoustics. I am glad
that the opus voice codec can get as low as 2.7ms latency, and sad
that we have to capture whole frames (~16ms) nowadays for video.
Perhaps we will see scanline video grabbers re-emerge as a viable
videoconferencing tool one day.
>
>>
>> I consider induced latencies of 30ms as a "green" band because that is
>> the outer limit of the range modern aqm technologies can achieve (fq
>> can get closer to 0). There was a lot of debate about 20ms being the
>> right figure for induced latency and/or jitter, a year or two back,
>> and we settled on 30ms for both, so that number is already a
>> compromise figure.
>
> Ah, I think someone brought this up already, do we need to make allowances for slow links? If a full packet traversal is already 16ms can we really expect 30ms? And should we even care, I mean, a slow link is a slow link and will have some drawbacks maybe we should just expose those instead of rationalizing them away? On the other hand I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm as long as the link behaves well nothing needs changing)
>
>>
>> It is highly likely that folk here are not aware of the extra-ordinary
>> amount of debate that went into deciding the ultimate ATM cell size
>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>> was basically a good size for the local continental distance and echo
>> cancellation stuff, at the time.
>>
>> In the case of voip, jitter is actually more important than latency.
>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>
> Ah, and here is were I understand why my simplistic model from above fails; induced latency will contribute significantly to jitter and hence is a good proxy for link-suitability for real-time applications. So I agree using the induced latency as measure to base the color bands from sounds like a good approach.
Yea! Let's do that!
[1] http://the-edge.blogspot.com/2003_07_27_the-edge_archive.html#105975402040143728
>
>
>>
>>
>>> Best Regards
>>> Sebastian
>>>
>>>
>>>>
>>>> Simon
>>>>
>>>> Sent with AquaMail for Android
>>>> http://www.aqua-mail.com
>>>>
>>>>
>>>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>>>
>>>>> simon all your numbers are too large by at least a factor of 2. I
>>>>> think also you are thinking about total latency, rather than induced
>>>>> latency and jitter.
>>>>>
>>>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>>>
>>>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>>>> you want, and a latency increase of 30ms is a proxy for also holding
>>>>> jitter that low.
>>>>>
>>>>>
>>>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>>>> things like
>>>>>>
>>>>>> 100ms - VoIP applications work well
>>>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>>>> although users may not notice this.
>>>
>>> The only way to detect whether a conversation is natural is if users notice, I would say...
>>>
>>>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>>>
>>>>>> 0-50ms - web pages load snappily
>>>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>>>> highest bandwidth links
>>>>>> 1000ms - web pages load significantly slower than they should, taking
>>>>>> several extra seconds to appear, even on the highest bandwidth links
>>>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>>>
>>>>>> Gaming.... some kind of guide here....
>>>>>>
>>>>>> Simon
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>>>>
>>>>>>> Hi Toke,
>>>>>>>
>>>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>>>
>>>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>>>
>>>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>>>> severe "bike-shedding”
>>>>>>>>
>>>>>>>> Since you're literally asking for it... ;)
>>>>>>>>
>>>>>>>>
>>>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>>>>
>>>>>>> Oh, I can get behind that easily, I just thought basing the limits
>>>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>>>> which applications might run well on his link. Sure this means that people
>>>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>>>> something to be desired. That said if the alternative is no telephony I
>>>>>>> would take 1 second one-way delay any day ;).
>>>>>>> What I liked about fixed thresholds is that the test would give a
>>>>>>> good indication what kind of uses are going to work well on the link under
>>>>>>> load, given that during load both base and induced latency come into play. I
>>>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>>>
>>>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>>>
>>>>>>>>
>>>>>>>> I very much think that raising peoples expectations and being quite
>>>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>>>> results when we see them is reasonable!
>>>>>>>
>>>>>>> Okay so this would turn into:
>>>>>>>
>>>>>>> base latency to base latency + 30 ms: green
>>>>>>> base latency + 31 ms to base latency + 100 ms: yellow
>>>>>>> base latency + 101 ms to base latency + 200 ms: orange?
>>>>>>> base latency + 201 ms to base latency + 500 ms: red
>>>>>>> base latency + 501 ms to base latency + 1000 ms: fire
>>>>>>> base latency + 1001 ms to infinity:
>>>>>>> fire & brimstone
>>>>>>>
>>>>>>> correct?
>>>>>>>
>>>>>>>
>>>>>>>> -Toke
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Bloat mailing list
>>>>>>> Bloat@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Bloat mailing list
>>>>>> Bloat@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Dave Täht
>>>>> Open Networking needs **Open Source Hardware**
>>>>>
>>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>>
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 11:04 ` Mikael Abrahamsson
2015-04-28 11:49 ` Sebastian Moeller
@ 2015-04-28 14:06 ` Dave Taht
1 sibling, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-28 14:06 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
On Tue, Apr 28, 2015 at 4:04 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Tue, 28 Apr 2015, David Lang wrote:
>
>> Voice is actually remarkably tolerant of pure latency. While 60ms of
>> jitter makes a connection almost unusalbe, a few hundred ms of consistant
>> latency isn't a problem. IIRC (from my college days when ATM was the new,
>> hot technology) you have to get up to around a second of latency before
>> pure-consistant latency starts to break things.
>
>
> I would say most people start to get trouble when talking to each other when
> the RTT exceeds around 500-600ms.
>
> I mostly agree with
> http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/5125-delay-details.html
> but RTT of over 500ms is not fun. You basically can't have a heated
> argument/discussion when the RTT is higher than this :P
Thx for busting me up this morning!
But what you say is not strictly true. When the RTT goes way, way, way
up (as in email) it becomes much more possible to have an unresolvable
heated argument that cannot be shut down without invocation of
Godwin's law.
Short RTTs (as in personal meetings - and perhaps, one day, in a more
bloat-free universe without annoying jitter) make it both more
possible to have a heated argument... and a resolution.
A shared beer helps, too. :)
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 12:09 ` Rich Brown
@ 2015-04-28 15:26 ` David Lang
0 siblings, 0 replies; 183+ messages in thread
From: David Lang @ 2015-04-28 15:26 UTC (permalink / raw)
To: Rich Brown; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 2073 bytes --]
On Tue, 28 Apr 2015, Rich Brown wrote:
> On Apr 28, 2015, at 4:38 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> Well, what I want to see is a study, preferably psychophysics not modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds de;ays if need be, but the goal should be fluent and seamless conversation not interleaved monologues. Thanks for giving a bound for jitter, do you have any reference for perceptional jitter thresholds or some such?
>
> An anecdote (we don't need no stinkin' studies :-)
>
> I frequently listen to the same interview on NPR twice: first at say, 6:20 am when the news is breaking, and then at the 8:20am rebroadcast.
>
> The first interview is live, sometimes with significant satellite delays between the two parties. The sound quality is fine. But the pauses between question and answer (waiting for the satellite propagation) sometimes make the responder seem a little "slow witted" - as if they have to struggle to compose their response.
>
> But the rebroadcast gets "tuned up" by NPR audio folks, and those pauses get edited out. I was amazed how the conversation takes on a completely different flavor: any negative impression goes away without that latency.
>
> So, what lesson do I learn from this? Pure latency *does* affect the nature of the conversation - it may not be fluent and seamless if there's a satellite link's worth of latency involved.
>
> Although not being exhibited in this case, I can believe that jitter plays worse havoc on a conversation. I'll also bet that induced latency is a good proxy for jitter.
Satellite round-trip latency is on the order of 1 second, which is at the far
end of what can be tolerated for VoIP.
Go back to the '80s and '90s, when the phone companies were looking at converting
from POTS long-distance lines to digital (with ATM), and there was a lot of work
done at the time on voice communication and what 'sounds good'. This is a lot
of what drove the ATM design: predictable latency.
David Lang
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 8:38 ` Sebastian Moeller
2015-04-28 12:09 ` Rich Brown
@ 2015-04-28 15:39 ` David Lang
1 sibling, 0 replies; 183+ messages in thread
From: David Lang @ 2015-04-28 15:39 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 5376 bytes --]
On Tue, 28 Apr 2015, Sebastian Moeller wrote:
> Hi David,
>
> On Apr 28, 2015, at 10:01 , David Lang <david@lang.hm> wrote:
>
>> On Tue, 28 Apr 2015, Sebastian Moeller wrote:
>>
>>>>
>>>> I consider induced latencies of 30ms as a "green" band because that is
>>>> the outer limit of the range modern aqm technologies can achieve (fq
>>>> can get closer to 0). There was a lot of debate about 20ms being the
>>>> right figure for induced latency and/or jitter, a year or two back,
>>>> and we settled on 30ms for both, so that number is already a
>>>> compromise figure.
>>>
>>> Ah, I think someone brought this up already, do we need to make allowances for slow links? If a full packet traversal is already 16ms can we really expect 30ms? And should we even care, I mean, a slow link is a slow link and will have some drawbacks maybe we should just expose those instead of rationalizing them away? On the other hand I tend to think that in the end it is all about the cumulative performance of the link for most users, i.e. if the link allows glitch-free voip while heavy up- and downloads go on, normal users should not care one iota what the induced latency actually is (aqm or no aqm as long as the link behaves well nothing needs changing)
>>>
>>>>
>>>> It is highly likely that folk here are not aware of the extra-ordinary
>>>> amount of debate that went into deciding the ultimate ATM cell size
>>>> back in the day. The eu wanted 32 bytes, the US 48, both because that
>>>> was basically a good size for the local continental distance and echo
>>>> cancellation stuff, at the time.
>>>>
>>>> In the case of voip, jitter is actually more important than latency.
>>>> Modern codecs and coding techniques can tolerate 30ms of jitter, just
>>>> barely, without sound artifacts. >60ms, boom, crackle, hiss.
>>>
>>> Ah, and here is were I understand why my simplistic model from above fails; induced latency will contribute significantly to jitter and hence is a good proxy for link-suitability for real-time applications. So I agree using the induced latency as measure to base the color bands from sounds like a good approach.
>>>
>>
>> Voice is actually remarkably tolerant of pure latency. While 60ms of jitter makes a connection almost unusalbe, a few hundred ms of consistant latency isn't a problem. IIRC (from my college days when ATM was the new, hot technology) you have to get up to around a second of latency before pure-consistant latency starts to break things.
>
> Well, what I want to see is a study, preferably psychophysics not modeling ;), showing the different latency “tolerances” of humans. I am certain that humans can adjust to even dozens of seconds de;ays if need be, but the goal should be fluent and seamless conversation not interleaved monologues. Thanks for giving a bound for jitter, do you have any reference for perceptional jitter thresholds or some such?
lots of this sort of work was done back in the late '80s when ATM was being
developed and long-distance digital networks were first being deployed.
>
>>
>> Gaming and high frequency trading care about the minimum latency a LOT. but most other things are far more sentitive to jitter than pure latency. [1]
>
> Sure, but it is easy to “loose” latency but impossible to reclaim, so we should aim for lowest latency ;) . Now as long as jitter has a bound one can trade jitter for latency, by simply buffering more at the receiver thereby ironing out (a part of the) the jitter while introducing additional latency. One reason why I still thing that absolute latency thresholds have some value as they would allow to assess how much of a “budget” one has to flatten out jitter, but I digress. I also think now, that conflating absolute latency and buffer bloat will not really help (unless everybody understands induced latency by heart ;) )….
I agree we should be aiming for the lowest latency we can reasonably maintain,
but this topic started with the question of where to put the 'green' band on a
latency test, and of labeling a point as 'VoIP stops working'.
Base latency + 30-60 ms of buffer-induced latency is a reasonable limit, with a
cap somewhere in the 300-500 ms range (and VoIP can survive with a cap close
to 1 sec, as long as the buffer-induced latency that causes jitter remains
small).
As we are looking at what labels to put on the graphs, we need to decide if we
want to pick examples based on jitter (i.e. relative to base latency) or
absolutes (total latency).
Given the wide variation in base latency due to different technologies, I think
it would be best to use relative latency examples if we can, as sketched below.
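A sketch of what grading on relative (induced) latency could look like; the
band edges are just the ones floated earlier in this thread (30/100 ms of
induced latency, an absolute cap around 500 ms), not any standard, and the
example inputs are invented:
    # Sketch: grade a speedtest on induced latency, with an absolute-RTT cap.
    def grade(base_rtt_ms, loaded_rtt_ms):
        induced = loaded_rtt_ms - base_rtt_ms      # the "relative" part
        if loaded_rtt_ms > 500:                    # absolute cap from the 300-500 ms range
            return "red (total RTT too high for interactive use)"
        if induced <= 30:
            return "green"
        if induced <= 100:
            return "yellow"
        return "red (bufferbloat)"
    print(grade(20, 250))    # bloated cable line      -> red (bufferbloat)
    print(grade(600, 620))   # well-managed satellite  -> red (total RTT too high)
    print(grade(40, 65))     # sane AQM on DSL         -> green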
Changing topic slightly, I wonder if it would make sense to have an optional
second column on the results page that shows a 'known good' or 'best known'
report, to show the user what they could be getting from their connection if they
were using the right equipment?
David Lang
>>
>> The trouble with bufferbloat induced latency is that it is highly variable based on exactly how much other data is in the queue, so under the wrong conditions, all latency caused by buffering shows up as jitter.
>
> That is how I understood Dave’s mail, thanks for confirming that.
>
> Best Regards
> Sebastian
>
>>
>> David Lang
>>
>> [1] pure latency will degrade the experience for many things, but usually in a fairly graceful manner.
>>
>
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 8:19 ` Toke Høiland-Jørgensen
@ 2015-04-28 15:42 ` David Lang
0 siblings, 0 replies; 183+ messages in thread
From: David Lang @ 2015-04-28 15:42 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1564 bytes --]
On Tue, 28 Apr 2015, Toke Høiland-Jørgensen wrote:
> David Lang <david@lang.hm> writes:
>
>> Voice is actually remarkably tolerant of pure latency. While 60ms of
>> jitter makes a connection almost unusalbe, a few hundred ms of
>> consistant latency isn't a problem. IIRC (from my college days when
>> ATM was the new, hot technology) you have to get up to around a second
>> of latency before pure-consistant latency starts to break things.
>
> Well isn't that more a case of "the human brain will compensate for the
> latency". Sure, you *can* talk to someone with half a second of delay,
> but it's bloody *annoying*. :P
We aren't disagreeing here. "A few hundred ms of consistent latency" starts
to top out around the half-second range.
But if we are labeling something "VoIP breaks here", then it needs to be broken,
not just annoying to some people.
David Lang
> That, for me, is the main reason to go with lower figures. I don't want
> to just be able to physically talk with someone without the codec
> breaking, I want to be able to *enjoy* the experience and not be totally
> exhausted by latency fatigue afterwards.
>
> One of the things that really struck a chord with me was hearing the
> people from the LoLa project
> (http://www.conservatorio.trieste.it/artistica/ricerca/progetto-lola-low-latency/lola-case-study.pdf)
> talk about how using their big fancy concert video conferencing system
> to just talk to each other, it was like having a real face-to-face
> conversation with none of the annoyances of regular video chat.
>
> -Toke
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-28 13:44 ` Sebastian Moeller
@ 2015-04-28 19:09 ` Rick Jones
0 siblings, 0 replies; 183+ messages in thread
From: Rick Jones @ 2015-04-28 19:09 UTC (permalink / raw)
To: bloat
On 04/28/2015 06:44 AM, Sebastian Moeller wrote:
> Ah, this fits with the ITU figure 1 data, at ~250ms one-way delay
> they switch from “users very satisfied” to “users satisfied”, also
> showing that the ITU had very patient subjects in their tests…
And/Or didn't want to upset constituents sending phone calls via GEO
satellites...
rick jones
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-25 6:03 ` Sebastian Moeller
2015-04-27 16:39 ` Dave Taht
@ 2015-05-06 5:08 ` Simon Barber
2015-05-06 8:50 ` Sebastian Moeller
1 sibling, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-05-06 5:08 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
Hi Sebastian,
My numbers are what I've personally come up with after working for many
years with VoIP - they have no other basis. One thing is that you have
to compare apples to apples - the ITU numbers are for acoustic one-way
delay. The poor state of the jitter buffer implementations that almost every
VoIP app or device has means that to hit these acoustic delay numbers
you need significantly lower network delays. Also note that these
numbers are worst case, which must include a trip halfway around the globe
- if you can hit the numbers with half-globe propagation then you will
hit much better numbers for 'local calls'.
Simon
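A rough worked example of the apples-to-apples point, under assumed figures
(half the globe is roughly 20,000 km, light in fibre covers about 200 km per
ms, and the codec plus an imperfect jitter buffer eat a further chunk):
    # Sketch: how much of an acoustic one-way target is left for queueing
    # on a worst-case (half-globe) call. All figures are rough assumptions.
    ACOUSTIC_TARGET_MS = 250   # one-way mouth-to-ear target (assumed, ITU-ish)
    HALF_GLOBE_KM = 20_000
    FIBRE_KM_PER_MS = 200      # roughly 2/3 c in glass
    CODEC_FRAMING_MS = 20      # packetization/look-ahead (assumed)
    JITTER_BUFFER_MS = 60      # a not-great real-world jitter buffer (assumed)
    propagation_ms = HALF_GLOBE_KM / FIBRE_KM_PER_MS  # ~100 ms
    left_for_queueing = (ACOUSTIC_TARGET_MS - propagation_ms
                         - CODEC_FRAMING_MS - JITTER_BUFFER_MS)
    print("half-globe propagation (ms):", propagation_ms)
    print("budget left for queueing/bloat (ms):", left_for_queueing)  # ~70 ms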
On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
> Hi Simon, hi List
>
> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>
>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
> If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
>
> Best Regards
> Sebastian
>
>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>>
>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>>> simon all your numbers are too large by at least a factor of 2. I
>>> think also you are thinking about total latency, rather than induced
>>> latency and jitter.
>>>
>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>
>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>> you want, and a latency increase of 30ms is a proxy for also holding
>>> jitter that low.
>>>
>>>
>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>> things like
>>>>
>>>> 100ms - VoIP applications work well
>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>> although users may not notice this.
> The only way to detect whether a conversation is natural is if users notice, I would say...
>
>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>
>>>> 0-50ms - web pages load snappily
>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>> highest bandwidth links
>>>> 1000ms - web pages load significantly slower than they should, taking
>>>> several extra seconds to appear, even on the highest bandwidth links
>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>
>>>> Gaming.... some kind of guide here....
>>>>
>>>> Simon
>>>>
>>>>
>>>>
>>>>
>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>> Hi Toke,
>>>>>
>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>
>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>
>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>> severe "bike-shedding”
>>>>>> Since you're literally asking for it... ;)
>>>>>>
>>>>>>
>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>> Oh, I can get behind that easily, I just thought basing the limits
>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>> which applications might run well on his link. Sure this means that people
>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>> something to be desired. That said if the alternative is no telephony I
>>>>> would take 1 second one-way delay any day ;).
>>>>> What I liked about fixed thresholds is that the test would give a
>>>>> good indication what kind of uses are going to work well on the link under
>>>>> load, given that during load both base and induced latency come into play. I
>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>
>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>
>>>>>>
>>>>>> I very much think that raising peoples expectations and being quite
>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>> results when we see them is reasonable!
>>>>> Okay so this would turn into:
>>>>>
>>>>> base latency to base latency + 30 ms: green
>>>>> base latency + 31 ms to base latency + 100 ms: yellow
>>>>> base latency + 101 ms to base latency + 200 ms: orange?
>>>>> base latency + 201 ms to base latency + 500 ms: red
>>>>> base latency + 501 ms to base latency + 1000 ms: fire
>>>>> base latency + 1001 ms to infinity:
>>>>> fire & brimstone
>>>>>
>>>>> correct?
>>>>>
>>>>>
>>>>>> -Toke
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 5:08 ` Simon Barber
@ 2015-05-06 8:50 ` Sebastian Moeller
2015-05-06 15:30 ` Jim Gettys
0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2015-05-06 8:50 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
Hi Simon,
On May 6, 2015, at 07:08 , Simon Barber <simon@superduper.net> wrote:
> Hi Sebastian,
>
> My numbers are what I've personally come up with after working for many years with VoIP - they have no other basis.
I did not intend to belittle such numbers at all, I just wanted to propose that we either use generally accepted, scientifically measured numbers or make such measurements ourselves.
> One thing is that you have to compare apples to apples - the ITU numbers are for acoustic one way delay.
True, and this is why we can easily estimate the delay cost of the different stages of the whole voip one-way pipeline to deduce how much latency budget we have for the network (aka bufferbloat on the way), while still basing our numbers on some reference for mouth-to-ear delay. I think we can conservatively estimate the latency cost of the sampling; sender processing and receiver processing (outside of the de-jitter buffering) seem harder to estimate reliably, to my untrained eye.
> The poor state of jitter buffer implementations that almost every VoIP app or device has means that to hit these acoustic delay numbers you need significantly lower network delays.
I fully agree, and if we can estimate this I think we can justify deductions from the mouth-to-ear budget. I would, as a first approximation, assume that what we call the latency increase under load is tightly correlated with jitter, so we could take our “bloat measurement” in ms and directly deduct it from the budget (or, if we want to accept occasional voice degradation, we can pick a sufficiently high percentile, but that is an implementation detail).
> Also note that these numbers are worst case, which must include trip halfway around the globe - if you can hit the numbers with half globe propagation then you will hit much better numbers for 'local calls’.
We could turn this around by estimating out to what distance voip quality will be good/decent/acceptable/laughable…
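A sketch of both ideas, with invented sample data: take a high percentile of
the measured latency increase under load as the jitter to budget for, deduct
it (plus fixed processing) from the mouth-to-ear budget, and translate what is
left into a rough reachable distance at fibre speed. All constants are assumed.
    # Sketch: from a bloat measurement to "how far can I call and still sound OK".
    bloat_samples_ms = [5, 12, 8, 40, 15, 90, 10, 25, 7, 60]  # invented samples
    MOUTH_TO_EAR_BUDGET_MS = 250   # one-way target (assumed)
    FIXED_PROCESSING_MS = 30       # sampling + codec + playout (assumed)
    FIBRE_KM_PER_MS = 200          # roughly 2/3 c
    ordered = sorted(bloat_samples_ms)
    p90_bloat = ordered[int(0.9 * (len(ordered) - 1))]  # accept occasional degradation
    one_way_left = MOUTH_TO_EAR_BUDGET_MS - FIXED_PROCESSING_MS - p90_bloat
    print("90th-percentile bloat (ms):", p90_bloat)
    print("one-way budget left for propagation (ms):", one_way_left)
    print("rough max distance for decent voip (km):", one_way_left * FIBRE_KM_PER_MS)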
Best Regards
Sebastian
>
> Simon
>
>
> On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
>> Hi Simon, hi List
>>
>> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
>>
>>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
>> If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
>>
>> Best Regards
>> Sebastian
>>
>>
>>> Simon
>>>
>>> Sent with AquaMail for Android
>>> http://www.aqua-mail.com
>>>
>>>
>>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>>> simon all your numbers are too large by at least a factor of 2. I
>>>> think also you are thinking about total latency, rather than induced
>>>> latency and jitter.
>>>>
>>>> Please see my earlier email laying out the bands. And gettys' manifesto.
>>>>
>>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
>>>> you want, and a latency increase of 30ms is a proxy for also holding
>>>> jitter that low.
>>>>
>>>>
>>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
>>>>> I think it might be useful to have a 'latency guide' for users. It would say
>>>>> things like
>>>>>
>>>>> 100ms - VoIP applications work well
>>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
>>>>> although users may not notice this.
>> The only way to detect whether a conversation is natural is if users notice, I would say...
>>
>>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
>>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
>>>>> 2000ms - VoIP unusable for most interactive conversations.
>>>>>
>>>>> 0-50ms - web pages load snappily
>>>>> 250ms - web pages can often take an extra second to appear, even on the
>>>>> highest bandwidth links
>>>>> 1000ms - web pages load significantly slower than they should, taking
>>>>> several extra seconds to appear, even on the highest bandwidth links
>>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
>>>>> seconds of delays for pages to load, even on the highest bandwidth links.
>>>>>
>>>>> Gaming.... some kind of guide here....
>>>>>
>>>>> Simon
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
>>>>>> Hi Toke,
>>>>>>
>>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>>>>
>>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
>>>>>>>
>>>>>>>> I know this is not perfect and the numbers will probably require
>>>>>>>> severe "bike-shedding”
>>>>>>> Since you're literally asking for it... ;)
>>>>>>>
>>>>>>>
>>>>>>> In this case we're talking about *added* latency. So the ambition should
>>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
>>>>>>> that proper application of a good queue management algorithm can keep it
>>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
>>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
>>>>>> Oh, I can get behind that easily, I just thought basing the limits
>>>>>> on externally relevant total latency thresholds would directly tell the user
>>>>>> which applications might run well on his link. Sure this means that people
>>>>>> on a satellite link most likely will miss out the acceptable voip threshold
>>>>>> by their base-latency alone, but guess what telephony via satellite leaves
>>>>>> something to be desired. That said if the alternative is no telephony I
>>>>>> would take 1 second one-way delay any day ;).
>>>>>> What I liked about fixed thresholds is that the test would give a
>>>>>> good indication what kind of uses are going to work well on the link under
>>>>>> load, given that during load both base and induced latency come into play. I
>>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
>>>>>> certain that remote X11 will require a massively lower RTT unless one likes
>>>>>> to think of remote desktop as an oil tanker simulator ;) )
>>>>>>
>>>>>>> The other increments I have less opinions about, but 100 ms does seem to
>>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
>>>>>>> reds somewhere above that, and range up into the deep red / purple /
>>>>>>> black with skulls and fiery death as we go nearer and above one second?
>>>>>>>
>>>>>>>
>>>>>>> I very much think that raising peoples expectations and being quite
>>>>>>> ambitious about what to expect is an important part of this. Of course
>>>>>>> the base latency is going to vary, but the added latency shouldn't. And
>>>>>>> sine we have the technology to make sure it doesn't, calling out bad
>>>>>>> results when we see them is reasonable!
>>>>>> Okay so this would turn into:
>>>>>>
>>>>>> base latency to base latency + 30 ms: green
>>>>>> base latency + 31 ms to base latency + 100 ms: yellow
>>>>>> base latency + 101 ms to base latency + 200 ms: orange?
>>>>>> base latency + 201 ms to base latency + 500 ms: red
>>>>>> base latency + 501 ms to base latency + 1000 ms: fire
>>>>>> base latency + 1001 ms to infinity:
>>>>>> fire & brimstone
>>>>>>
>>>>>> correct?
>>>>>>
>>>>>>
>>>>>>> -Toke
>>>>>> _______________________________________________
>>>>>> Bloat mailing list
>>>>>> Bloat@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>> --
>>>> Dave Täht
>>>> Open Networking needs **Open Source Hardware**
>>>>
>>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 8:50 ` Sebastian Moeller
@ 2015-05-06 15:30 ` Jim Gettys
2015-05-06 18:03 ` Sebastian Moeller
2015-05-06 20:25 ` Jonathan Morton
0 siblings, 2 replies; 183+ messages in thread
From: Jim Gettys @ 2015-05-06 15:30 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 9871 bytes --]
On Wed, May 6, 2015 at 4:50 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Simon,
>
> On May 6, 2015, at 07:08 , Simon Barber <simon@superduper.net> wrote:
>
> > Hi Sebastian,
> >
> > My numbers are what I've personally come up with after working for many
> years with VoIP - they have no other basis.
>
> I did not intend to be-little such numbers at all, I just wanted
> to propose that we either use generally accepted scientifically measured
> numbers or make such measurements our self.
>
> > One thing is that you have to compare apples to apples - the ITU numbers
> are for acoustic one way delay.
>
> True, and this is why we easily can estimate the delay cost of different
> stages of the whole voip one-way pipeline to deuce how much latent budget
> we have for the network (aka buffer bloat on the way), but still bases our
> numbers on some reference for mouth-to-ear-delay. I think we can
> conservatively estimate the latency cost of the sampling, sender processing
> and receiver processing (outside of the de-jitter buffering) seem harder to
> estimate reliably, to my untrained eye.
>
> > The poor state of jitter buffer implementations that almost every VoIP
> app or device has means that to hit these acoustic delay numbers you need
> significantly lower network delays.
>
> I fully agree, and if we can estimate this I think we can justify
> deductions from the mouth-to-ear budget. I would as first approximation
> assume that what we call latency under load increase to be tightly
> correlated with jitter, so we could take our “bloat-measurement” in ms an
> directly deduct it from the budget (or if we want to accept occasional
> voice degradation we can pick a sufficiently high percentile, but that is
> implementation detail).
>
> > Also note that these numbers are worst case, which must include trip
> halfway around the globe - if you can hit the numbers with half globe
> propagation then you will hit much better numbers for 'local calls’.
>
> We could turn this around by estimating to what distance voip
> quality will be good/decent/acceptable/lughable…
>
>
>
Mean RTT is almost useless for VOIP and teleconferencing. What matters is
the RTT + jitter; a VOIP or teleconferencing application cannot function at
a given latency unless the "drop outs" caused by late packets are rare enough
not to be obnoxious to human perception; there are a number of techniques
to hide such late packet dropouts, but all of them (short of FEC) damage the
audio stream.
So ideally, not only do you measure the delay, you also measure the jitter
to be able to figure out a realistic operating point for such applications.
- Jim
>
>
>
> >
> > Simon
> >
> >
> > On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
> >> Hi Simon, hi List
> >>
> >> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
> >>
> >>> Certainly the VoIP numbers are for peak total latency, and while
> Justin is measuring total latency because he is only taking a few samples
> the peak values will be a little higher.
> >> If your voip number are for peak total latency they need
> literature citations to back them up, as they are way shorter than what the
> ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not
> "married” to the ITU numbers but I think we should use generally accepted
> numbers here and not bake our own thresholds (and for all I know your
> numbers are fine, I just don’t know where they are coming from ;) )
> >>
> >> Best Regards
> >> Sebastian
> >>
> >>
> >>> Simon
> >>>
> >>> Sent with AquaMail for Android
> >>> http://www.aqua-mail.com
> >>>
> >>>
> >>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
> >>>
> >>>> simon all your numbers are too large by at least a factor of 2. I
> >>>> think also you are thinking about total latency, rather than induced
> >>>> latency and jitter.
> >>>>
> >>>> Please see my earlier email laying out the bands. And gettys'
> manifesto.
> >>>>
> >>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
> >>>> you want, and a latency increase of 30ms is a proxy for also holding
> >>>> jitter that low.
> >>>>
> >>>>
> >>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net>
> wrote:
> >>>>> I think it might be useful to have a 'latency guide' for users. It
> would say
> >>>>> things like
> >>>>>
> >>>>> 100ms - VoIP applications work well
> >>>>> 250ms - VoIP applications - conversation is not as natural as it
> could be,
> >>>>> although users may not notice this.
> >> The only way to detect whether a conversation is natural is if
> users notice, I would say...
> >>
> >>>>> 500ms - VoIP applications begin to have awkward pauses in
> conversation.
> >>>>> 1000ms - VoIP applications have significant annoying pauses in
> conversation.
> >>>>> 2000ms - VoIP unusable for most interactive conversations.
> >>>>>
> >>>>> 0-50ms - web pages load snappily
> >>>>> 250ms - web pages can often take an extra second to appear, even on
> the
> >>>>> highest bandwidth links
> >>>>> 1000ms - web pages load significantly slower than they should, taking
> >>>>> several extra seconds to appear, even on the highest bandwidth links
> >>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even
> 10s of
> >>>>> seconds of delays for pages to load, even on the highest bandwidth
> links.
> >>>>>
> >>>>> Gaming.... some kind of guide here....
> >>>>>
> >>>>> Simon
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> >>>>>> Hi Toke,
> >>>>>>
> >>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk>
> wrote:
> >>>>>>
> >>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
> >>>>>>>
> >>>>>>>> I know this is not perfect and the numbers will probably require
> >>>>>>>> severe "bike-shedding”
> >>>>>>> Since you're literally asking for it... ;)
> >>>>>>>
> >>>>>>>
> >>>>>>> In this case we're talking about *added* latency. So the ambition
> should
> >>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we
> know
> >>>>>>> that proper application of a good queue management algorithm can
> keep it
> >>>>>>> pretty close to this. Certainly under 20-30 ms of added latency.
> So from
> >>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to
> 30 ms.
> >>>>>> Oh, I can get behind that easily, I just thought basing the
> limits
> >>>>>> on externally relevant total latency thresholds would directly tell
> the user
> >>>>>> which applications might run well on his link. Sure this means that
> people
> >>>>>> on a satellite link most likely will miss out the acceptable voip
> threshold
> >>>>>> by their base-latency alone, but guess what telephony via satellite
> leaves
> >>>>>> something to be desired. That said if the alternative is no
> telephony I
> >>>>>> would take 1 second one-way delay any day ;).
> >>>>>> What I liked about fixed thresholds is that the test would
> give a
> >>>>>> good indication what kind of uses are going to work well on the
> link under
> >>>>>> load, given that during load both base and induced latency come
> into play. I
> >>>>>> agree that 300ms as first threshold is rather unambiguous though
> (and I am
> >>>>>> certain that remote X11 will require a massively lower RTT unless
> one likes
> >>>>>> to think of remote desktop as an oil tanker simulator ;) )
> >>>>>>
> >>>>>>> The other increments I have less opinions about, but 100 ms does
> seem to
> >>>>>>> be a nice round number, so do yellow from 30-100 ms, then start
> with the
> >>>>>>> reds somewhere above that, and range up into the deep red / purple
> /
> >>>>>>> black with skulls and fiery death as we go nearer and above one
> second?
> >>>>>>>
> >>>>>>>
> >>>>>>> I very much think that raising peoples expectations and being quite
> >>>>>>> ambitious about what to expect is an important part of this. Of
> course
> >>>>>>> the base latency is going to vary, but the added latency
> shouldn't. And
> >>>>>>> sine we have the technology to make sure it doesn't, calling out
> bad
> >>>>>>> results when we see them is reasonable!
> >>>>>> Okay so this would turn into:
> >>>>>>
> >>>>>> base latency to base latency + 30 ms:
> green
> >>>>>> base latency + 31 ms to base latency + 100 ms: yellow
> >>>>>> base latency + 101 ms to base latency + 200 ms: orange?
> >>>>>> base latency + 201 ms to base latency + 500 ms: red
> >>>>>> base latency + 501 ms to base latency + 1000 ms: fire
> >>>>>> base latency + 1001 ms to infinity:
> >>>>>> fire & brimstone
> >>>>>>
> >>>>>> correct?
> >>>>>>
> >>>>>>
> >>>>>>> -Toke
> >>>>>> _______________________________________________
> >>>>>> Bloat mailing list
> >>>>>> Bloat@lists.bufferbloat.net
> >>>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>>
> >>>>> _______________________________________________
> >>>>> Bloat mailing list
> >>>>> Bloat@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>
> >>>>
> >>>> --
> >>>> Dave Täht
> >>>> Open Networking needs **Open Source Hardware**
> >>>>
> >>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> >>>
> >>> _______________________________________________
> >>> Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 14751 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 15:30 ` Jim Gettys
@ 2015-05-06 18:03 ` Sebastian Moeller
2015-05-06 20:25 ` Jonathan Morton
1 sibling, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-05-06 18:03 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
Hi Jim, hi List,
On May 6, 2015, at 17:30 , Jim Gettys <jg@freedesktop.org> wrote:
>
>
> On Wed, May 6, 2015 at 4:50 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Simon,
>
> On May 6, 2015, at 07:08 , Simon Barber <simon@superduper.net> wrote:
>
> > Hi Sebastian,
> >
> > My numbers are what I've personally come up with after working for many years with VoIP - they have no other basis.
>
> I did not intend to be-little such numbers at all, I just wanted to propose that we either use generally accepted scientifically measured numbers or make such measurements our self.
>
> > One thing is that you have to compare apples to apples - the ITU numbers are for acoustic one way delay.
>
> True, and this is why we easily can estimate the delay cost of different stages of the whole voip one-way pipeline to deuce how much latent budget we have for the network (aka buffer bloat on the way), but still bases our numbers on some reference for mouth-to-ear-delay. I think we can conservatively estimate the latency cost of the sampling, sender processing and receiver processing (outside of the de-jitter buffering) seem harder to estimate reliably, to my untrained eye.
>
> > The poor state of jitter buffer implementations that almost every VoIP app or device has means that to hit these acoustic delay numbers you need significantly lower network delays.
>
> I fully agree, and if we can estimate this I think we can justify deductions from the mouth-to-ear budget. I would as first approximation assume that what we call latency under load increase to be tightly correlated with jitter, so we could take our “bloat-measurement” in ms an directly deduct it from the budget (or if we want to accept occasional voice degradation we can pick a sufficiently high percentile, but that is implementation detail).
>
> > Also note that these numbers are worst case, which must include trip halfway around the globe - if you can hit the numbers with half globe propagation then you will hit much better numbers for 'local calls’.
>
> We could turn this around by estimating to what distance voip quality will be good/decent/acceptable/lughable…
>
>
>
> Mean RTT is almost useless for VOIP and teleconferencing. What matters is the RTT + jitter; a VOIP or teleconferencing application cannot function at a given latency unless the "drop outs" caused by late packets is low enough to not be obnoxious to human perception; there are a number of techniques to hide such late packet dropouts but all of them (short of FEC) damage the audio stream.
>
> So ideally, not only do you measure the delay, you also measure the jitter to be able to figure out a realistic operating point for such applications.
Yes, I fully endorse this! What I tried to propose, in a slightly convoluted way, was to use the fact that we have a handle on at least one major jitter source, namely the induced latency under load, so we can take both the mean RTT and a proxy for the jitter into account. We can basically say that, given a measured jitter (and a properly dimensioned de-jitter buffer), we can estimate how far a given one-way mouth-to-ear budget will carry a voip call. E.g.: let's take the 150ms ITU number just for a start, subtract an empirically estimated max induced latency of say 60ms (to account for the required de-jitter buffer) as well as 20ms sample-time per voip packet (and just ignore all other processing overhead), and we end up with 150 - 60 - 20 = 70ms, which at 5ms per 1000 km means 70/5 * 1000 = 14000 km of reach; that is still decent, and it also shows that shorter distances leave correspondingly more headroom. Or, to turn this argument around, the same system with an induced latency/jitter of 10ms will carry (150 - 10 - 20)/5 * 1000 = 24000 km.
TL;DR: with a measured latency under load we have a decent estimator for the jitter, and we can, for example, express the gain of beating bufferbloat as increased reach (be it voip call distance, or, for gamers, the distance to servers that still gives acceptable reactivity, or some such).
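To make the arithmetic explicit, a minimal sketch (the 150 ms budget, 20 ms packetization and ~5 ms per 1000 km are the assumptions from the paragraph above; the function name is only illustrative):

    # Rough voip "reach" estimate as sketched above; all times in milliseconds.
    MOUTH_TO_EAR_BUDGET_MS = 150.0   # ITU-T G.114 guideline used as the starting point
    PACKETIZATION_MS = 20.0          # one voice packet worth of samples
    MS_PER_1000_KM = 5.0             # assumed one-way propagation delay per 1000 km

    def voip_reach_km(induced_latency_ms):
        # Deduct packetization and the measured induced latency (used as a proxy
        # for the required de-jitter buffer) from the one-way budget, then
        # convert the remainder into distance.
        remaining_ms = MOUTH_TO_EAR_BUDGET_MS - PACKETIZATION_MS - induced_latency_ms
        return max(remaining_ms, 0.0) / MS_PER_1000_KM * 1000.0

    print(voip_reach_km(60))   # -> 14000.0 km, matching the 150 - 60 - 20 = 70 ms example
    print(voip_reach_km(10))   # -> 24000.0 km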
Best Regards
Sebastian
> - Jim
>
>
>
>
>
>
> >
> > Simon
> >
> >
> > On 4/24/2015 11:03 PM, Sebastian Moeller wrote:
> >> Hi Simon, hi List
> >>
> >> On Apr 25, 2015, at 06:26 , Simon Barber <simon@superduper.net> wrote:
> >>
> >>> Certainly the VoIP numbers are for peak total latency, and while Justin is measuring total latency because he is only taking a few samples the peak values will be a little higher.
> >> If your voip number are for peak total latency they need literature citations to back them up, as they are way shorter than what the ITU recommends for one-way-latency (see ITU-T G.114, Fig. 1). I am not "married” to the ITU numbers but I think we should use generally accepted numbers here and not bake our own thresholds (and for all I know your numbers are fine, I just don’t know where they are coming from ;) )
> >>
> >> Best Regards
> >> Sebastian
> >>
> >>
> >>> Simon
> >>>
> >>> Sent with AquaMail for Android
> >>> http://www.aqua-mail.com
> >>>
> >>>
> >>> On April 24, 2015 9:04:45 PM Dave Taht <dave.taht@gmail.com> wrote:
> >>>
> >>>> simon all your numbers are too large by at least a factor of 2. I
> >>>> think also you are thinking about total latency, rather than induced
> >>>> latency and jitter.
> >>>>
> >>>> Please see my earlier email laying out the bands. And gettys' manifesto.
> >>>>
> >>>> If you are thinking in terms of voip, less than 30ms *jitter* is what
> >>>> you want, and a latency increase of 30ms is a proxy for also holding
> >>>> jitter that low.
> >>>>
> >>>>
> >>>> On Fri, Apr 24, 2015 at 8:15 PM, Simon Barber <simon@superduper.net> wrote:
> >>>>> I think it might be useful to have a 'latency guide' for users. It would say
> >>>>> things like
> >>>>>
> >>>>> 100ms - VoIP applications work well
> >>>>> 250ms - VoIP applications - conversation is not as natural as it could be,
> >>>>> although users may not notice this.
> >> The only way to detect whether a conversation is natural is if users notice, I would say...
> >>
> >>>>> 500ms - VoIP applications begin to have awkward pauses in conversation.
> >>>>> 1000ms - VoIP applications have significant annoying pauses in conversation.
> >>>>> 2000ms - VoIP unusable for most interactive conversations.
> >>>>>
> >>>>> 0-50ms - web pages load snappily
> >>>>> 250ms - web pages can often take an extra second to appear, even on the
> >>>>> highest bandwidth links
> >>>>> 1000ms - web pages load significantly slower than they should, taking
> >>>>> several extra seconds to appear, even on the highest bandwidth links
> >>>>> 2000ms+ - web browsing is heavily slowed, with many seconds or even 10s of
> >>>>> seconds of delays for pages to load, even on the highest bandwidth links.
> >>>>>
> >>>>> Gaming.... some kind of guide here....
> >>>>>
> >>>>> Simon
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 4/24/2015 1:55 AM, Sebastian Moeller wrote:
> >>>>>> Hi Toke,
> >>>>>>
> >>>>>> On Apr 24, 2015, at 10:29 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> >>>>>>
> >>>>>>> Sebastian Moeller <moeller0@gmx.de> writes:
> >>>>>>>
> >>>>>>>> I know this is not perfect and the numbers will probably require
> >>>>>>>> severe "bike-shedding”
> >>>>>>> Since you're literally asking for it... ;)
> >>>>>>>
> >>>>>>>
> >>>>>>> In this case we're talking about *added* latency. So the ambition should
> >>>>>>> be zero, or so close to it as to be indiscernible. Furthermore, we know
> >>>>>>> that proper application of a good queue management algorithm can keep it
> >>>>>>> pretty close to this. Certainly under 20-30 ms of added latency. So from
> >>>>>>> this, IMO the 'green' or 'excellent' score should be from zero to 30 ms.
> >>>>>> Oh, I can get behind that easily, I just thought basing the limits
> >>>>>> on externally relevant total latency thresholds would directly tell the user
> >>>>>> which applications might run well on his link. Sure this means that people
> >>>>>> on a satellite link most likely will miss out the acceptable voip threshold
> >>>>>> by their base-latency alone, but guess what telephony via satellite leaves
> >>>>>> something to be desired. That said if the alternative is no telephony I
> >>>>>> would take 1 second one-way delay any day ;).
> >>>>>> What I liked about fixed thresholds is that the test would give a
> >>>>>> good indication what kind of uses are going to work well on the link under
> >>>>>> load, given that during load both base and induced latency come into play. I
> >>>>>> agree that 300ms as first threshold is rather unambiguous though (and I am
> >>>>>> certain that remote X11 will require a massively lower RTT unless one likes
> >>>>>> to think of remote desktop as an oil tanker simulator ;) )
> >>>>>>
> >>>>>>> The other increments I have less opinions about, but 100 ms does seem to
> >>>>>>> be a nice round number, so do yellow from 30-100 ms, then start with the
> >>>>>>> reds somewhere above that, and range up into the deep red / purple /
> >>>>>>> black with skulls and fiery death as we go nearer and above one second?
> >>>>>>>
> >>>>>>>
> >>>>>>> I very much think that raising peoples expectations and being quite
> >>>>>>> ambitious about what to expect is an important part of this. Of course
> >>>>>>> the base latency is going to vary, but the added latency shouldn't. And
> >>>>>>> sine we have the technology to make sure it doesn't, calling out bad
> >>>>>>> results when we see them is reasonable!
> >>>>>> Okay so this would turn into:
> >>>>>>
> >>>>>> base latency to base latency + 30 ms: green
> >>>>>> base latency + 31 ms to base latency + 100 ms: yellow
> >>>>>> base latency + 101 ms to base latency + 200 ms: orange?
> >>>>>> base latency + 201 ms to base latency + 500 ms: red
> >>>>>> base latency + 501 ms to base latency + 1000 ms: fire
> >>>>>> base latency + 1001 ms to infinity:
> >>>>>> fire & brimstone
> >>>>>>
> >>>>>> correct?
> >>>>>>
> >>>>>>
> >>>>>>> -Toke
> >>>>>> _______________________________________________
> >>>>>> Bloat mailing list
> >>>>>> Bloat@lists.bufferbloat.net
> >>>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>>
> >>>>> _______________________________________________
> >>>>> Bloat mailing list
> >>>>> Bloat@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>>
> >>>>
> >>>> --
> >>>> Dave Täht
> >>>> Open Networking needs **Open Source Hardware**
> >>>>
> >>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> >>>
> >>> _______________________________________________
> >>> Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 15:30 ` Jim Gettys
2015-05-06 18:03 ` Sebastian Moeller
@ 2015-05-06 20:25 ` Jonathan Morton
2015-05-06 20:43 ` Toke Høiland-Jørgensen
` (2 more replies)
1 sibling, 3 replies; 183+ messages in thread
From: Jonathan Morton @ 2015-05-06 20:25 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 576 bytes --]
So, as a proposed methodology, how does this sound:
Determine a reasonable ballpark figure for typical codec and jitter-buffer
delay (one way). Fix this as a constant value for the benchmark.
Measure the baseline network delays (round trip) to various reference
points worldwide.
Measure the maximum induced delays in each direction.
For each reference point, sum two sets of constant delays, the baseline
network delay, and both directions' induced delays.
Compare these totals to twice the ITU benchmark figures, rate accordingly,
and plot on a map.
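In rough code terms, something like this (just a sketch; the 50 ms codec/jitter-buffer constant and the rating bands are placeholder assumptions, not agreed values):

    # Hypothetical per-reference-point rating following the steps above.
    CODEC_AND_JITTER_BUFFER_MS = 50.0   # one-way constant, fixed for the benchmark
    ITU_ONE_WAY_BUDGET_MS = 150.0       # compared as 2x, since we measure round trips

    def rate_reference_point(baseline_rtt_ms, induced_up_ms, induced_down_ms):
        # Two copies of the constant delay (one per end), plus the baseline RTT,
        # plus both directions' induced delays.
        total = (2 * CODEC_AND_JITTER_BUFFER_MS + baseline_rtt_ms
                 + induced_up_ms + induced_down_ms)
        budget = 2 * ITU_ONE_WAY_BUDGET_MS
        if total <= budget:
            return "good"
        if total <= 1.5 * budget:
            return "marginal"
        return "poor"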
- Jonathan Morton
[-- Attachment #2: Type: text/html, Size: 699 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 20:25 ` Jonathan Morton
@ 2015-05-06 20:43 ` Toke Høiland-Jørgensen
2015-05-07 7:33 ` Sebastian Moeller
2015-05-07 4:29 ` Mikael Abrahamsson
2015-05-07 6:19 ` Sebastian Moeller
2 siblings, 1 reply; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-05-06 20:43 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Jonathan Morton <chromatix99@gmail.com> writes:
> Compare these totals to twice the ITU benchmark figures, rate
> accordingly, and plot on a map.
A nice way of visualising this can be 'radius of reach within n
milliseconds'. Or, 'number of people reachable within n ms'. This paper
uses that (or something very similar) to visualise the benefits of
speed-of-light internet:
http://web.engr.illinois.edu/~singla2/papers/hotnets14.pdf
That same paper uses 30 ms as an 'instant response' number, btw, citing
this: http://plato.stanford.edu/entries/consciousness-temporal/empirical-findings.html
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 20:25 ` Jonathan Morton
2015-05-06 20:43 ` Toke Høiland-Jørgensen
@ 2015-05-07 4:29 ` Mikael Abrahamsson
2015-05-07 7:08 ` jb
2015-05-07 6:19 ` Sebastian Moeller
2 siblings, 1 reply; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 4:29 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
On Wed, 6 May 2015, Jonathan Morton wrote:
> So, as a proposed methodology, how does this sound:
>
> Determine a reasonable ballpark figure for typical codec and jitter-buffer
> delay (one way). Fix this as a constant value for the benchmark.
Commercial grade VoIP systems running in a controlled environment
typically (in my experience) come with a 40ms PDV (Packet Delay Variation,
let's not call it jitter, the timing people get upset if you call it
jitter) buffer. These systems typically do not work well over the Internet,
as we all know here, because 40ms is quite a low PDV for a FIFO-based Internet
access. Applications actually designed to work on the Internet have PDV
buffers that adapt according to what PDV is seen, so they can both
increase and decrease in size over the duration of a call.
I'd say a reasonable ballpark figure for VoIP and video conferencing
PDV is in the 50-100ms range or so, where lower of course is
better. It's basically impossible to have really low PDV on a 1 megabit/s
link, because a full-size 1500 byte packet takes about 12ms to
transmit, but it's perfectly feasible to keep it under 10-20ms as the
link speed increases. If we say that 1 megabit/s (a typical ADSL up speed) is
the lower bound of speed where one can expect VoIP to work together with
other Internet traffic, then 50-100ms should be technically attainable if
the vendor/operator actually tries to reduce bufferbloat/PDV.
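The serialization-delay arithmetic behind that statement is simply (a quick sanity check, ignoring framing overhead):

    def serialization_delay_ms(packet_bytes, link_mbit_per_s):
        # Time to clock one packet onto the wire, ignoring framing overhead.
        return packet_bytes * 8 / (link_mbit_per_s * 1e6) * 1e3

    print(serialization_delay_ms(1500, 1))    # ~12 ms on a 1 megabit/s uplink
    print(serialization_delay_ms(1500, 10))   # ~1.2 ms at 10 megabit/s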
> Measure the maximum induced delays in each direction.
Depending on the length of the test, it might make sense to aim for 95th
or 99th percentile, ie throw away the one or few worst values as these
might be outliers. But generally I agree with your proposed terminology.
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 20:25 ` Jonathan Morton
2015-05-06 20:43 ` Toke Høiland-Jørgensen
2015-05-07 4:29 ` Mikael Abrahamsson
@ 2015-05-07 6:19 ` Sebastian Moeller
2 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-05-07 6:19 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Hi Jonathan,
On May 6, 2015, at 22:25 , Jonathan Morton <chromatix99@gmail.com> wrote:
> So, as a proposed methodology, how does this sound:
>
> Determine a reasonable ballpark figure for typical codec and jitter-buffer delay (one way). Fix this as a constant value for the benchmark.
But we can do better: assuming adaptive de-jitter buffers (and they had better be adaptive), we can take the induced latency per direction as a first approximation of the required de-jitter buffer size.
>
> Measure the baseline network delays (round trip) to various reference points worldwide.
>
> Measure the maximum induced delays in each direction.
>
> For each reference point, sum two sets of constant delays, the baseline network delay, and both directions' induced delays.
I think we should not count the de-jitter buffer and the actual PDV twice; as far as I understand, the principle of de-jittering is to introduce a buffer deep enough to smooth out the real, variable packet latency, so at best we should count max(induced latency per direction, de-jitter buffer depth per direction). The induced latency (or a suitably high percentile, if we aim for good enough instead of perfect) is thus the best estimator we have for the jitter-induced delay. But this is not my line of work, so I could be out to lunch here...
>
> Compare these totals to twice the ITU benchmark figures, rate accordingly, and plot on a map.
I like the map idea (and I think I have seen something like this recently, I think visualizing propagation speed in fiber). Now, any map based just on actual distance on the earth’s surface is going to give a lower bound, but that should still be a decent estimate (unless something nefarious like http://research.dyn.com/2013/11/mitm-internet-hijacking/ is going on, in which case all bets are off ;) )
Best Regards
Sebastian
>
> - Jonathan Morton
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 4:29 ` Mikael Abrahamsson
@ 2015-05-07 7:08 ` jb
2015-05-07 7:18 ` Jonathan Morton
2015-05-07 7:19 ` Mikael Abrahamsson
0 siblings, 2 replies; 183+ messages in thread
From: jb @ 2015-05-07 7:08 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Jonathan Morton, bloat
[-- Attachment #1: Type: text/plain, Size: 2744 bytes --]
I am working on a multi-location jitter test (sorry PDV!) and it is showing
a lot of promise.
For the purposes of reporting jitter, what kind of measurement time horizon
is acceptable, and what should the +/- output actually be based on, statistically?
For example: is one minute or more of jitter measurements, with the +/- being
the 2nd standard deviation, reasonable, or is there some generally accepted
definition?
ping reports an "mdev", which is
SQRT(SUM(RTT*RTT) / N - (SUM(RTT)/N)^2)
but I've also seen jitter defined as the maximum and minimum RTT around the
average; however, that seems very influenced by a single outlier measurement.
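For comparison, the candidate definitions above can be computed like this (a sketch, with made-up sample values):

    import math

    def mdev(rtts):
        # ping's "mdev": sqrt of (mean of squares minus square of the mean),
        # i.e. the population standard deviation of the samples.
        n = len(rtts)
        mean = sum(rtts) / n
        return math.sqrt(sum(r * r for r in rtts) / n - mean * mean)

    def spread_around_mean(rtts):
        # "maximum and minimum RTT around the average"; dominated by any single outlier.
        mean = sum(rtts) / len(rtts)
        return max(rtts) - mean, mean - min(rtts)

    samples = [40, 41, 39, 42, 40, 180, 41]   # one bufferbloat spike
    print(mdev(samples))                      # ~49 ms; ~1 ms without the 180 ms outlier
    print(spread_around_mean(samples))        # (~120 ms, ~21 ms)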
thanks
On Thu, May 7, 2015 at 2:29 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Wed, 6 May 2015, Jonathan Morton wrote:
>
> So, as a proposed methodology, how does this sound:
>>
>> Determine a reasonable ballpark figure for typical codec and jitter-buffer
>> delay (one way). Fix this as a constant value for the benchmark.
>>
>
> Commercial grade VoIP systems running in a controlled environment
> typically (in my experience) come with 40ms PDV (Packet Delay Variation,
> let's not call it jitter, the timing people get upset if you call it
> jitter) buffer. These systems typically do not work well over the Internet
> as we here all know, 40ms is quite low PDV on a FIFO based Internet access.
> Applications actually designed to work on the Internet have PDV buffers
> that adapt according to what PDV is seen, and so they can both increase and
> decrease in size over the time of a call.
>
> I'd say ballpark reasonable figure for VoIP and video conferencing of
> reasonable PDV is in the 50-100ms range or so, where lower of course is
> better. It's basically impossible to have really low PDV on a 1 megabit/s
> link because a full size 1500 byte packet will take close to 10ms to
> transmit, but it's perfectly feasable to keep it under 10-20ms when the
> link speed increases. If we say that 1 megabit/s (typical ADSL up speed)is
> the lower bound of speed where one can expect VoIP to work together with
> other Internet traffic, then 50-100ms should be technically attainable if
> the vendor/operator actually tries to reduce bufferbloat/PDV.
>
> Measure the maximum induced delays in each direction.
>>
>
> Depending on the length of the test, it might make sense to aim for 95th
> or 99th percentile, ie throw away the one or few worst values as these
> might be outliers. But generally I agree with your proposed terminology.
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 3666 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 7:08 ` jb
@ 2015-05-07 7:18 ` Jonathan Morton
2015-05-07 7:24 ` Mikael Abrahamsson
2015-05-07 7:37 ` [Bloat] DSLReports Speed Test has latency measurement built-in Sebastian Moeller
2015-05-07 7:19 ` Mikael Abrahamsson
1 sibling, 2 replies; 183+ messages in thread
From: Jonathan Morton @ 2015-05-07 7:18 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 745 bytes --]
It may depend on the application's tolerance to packet loss. A packet
delayed further than the jitter buffer's tolerance counts as lost, so *IF*
jitter is randomly distributed, jitter can be traded off against loss. For
those purposes, standard deviation may be a valid metric.
However the more common characteristic is that delay is sometimes low (link
idle) and sometimes high (buffer full) and rarely in between. In other
words, delay samples are not statistically independent; loss due to jitter
is bursty, and real-time applications like VoIP can't cope with that. For
that reason, and due to your low temporal sampling rate, you should take
the peak delay observed under load and compare it to the average during
idle.
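In its simplest form, something like (a sketch):

    def induced_peak_ms(idle_rtts_ms, loaded_rtts_ms):
        # Peak delay observed under load, relative to the average during idle.
        return max(loaded_rtts_ms) - sum(idle_rtts_ms) / len(idle_rtts_ms)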
- Jonathan Morton
[-- Attachment #2: Type: text/html, Size: 816 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 7:08 ` jb
2015-05-07 7:18 ` Jonathan Morton
@ 2015-05-07 7:19 ` Mikael Abrahamsson
1 sibling, 0 replies; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 7:19 UTC (permalink / raw)
To: jb; +Cc: Jonathan Morton, bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1182 bytes --]
On Thu, 7 May 2015, jb wrote:
> I am working on a multi-location jitter test (sorry PDV!) and it is
> showing a lot of promise. For the purposes of reporting jitter, what
> kind of time measurement horizon is acceptable and what is the +/-
> output actually based on, statistically ?
>
> For example - is one minute or more of jitter measurements, with the +/-
> being the 2rd std deviation, reasonable or is there some generally
> accepted definition ?
>
> ping reports an "mdev" which is
> SQRT(SUM(RTT*RTT) / N – (SUM(RTT)/N)^2)
> but I've seen jitter defined as maximum and minimum RTT around the average
> however that seems very influenced by one outlier measurement.
There is no single PDV definition; all of the ones you listed are
perfectly valid.
If you send one packet every 20ms (simulating a G.711 voip call with fairly
common characteristics) at the same time as you send other traffic, and
then present the max, 99th percentile, 95th percentile and average PDV, I
think all of those values are valuable. For a novice user, I would probably
show the 99th and/or 95th percentile PDV value relative to the baseline.
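A sketch of such a report, measuring each sample's PDV against the lowest sample seen (nearest-rank-style percentiles; the details are of course up for debate):

    def pdv_report(rtts_ms):
        # PDV of each sample relative to the observed floor, plus summary figures.
        floor = min(rtts_ms)
        pdv = sorted(r - floor for r in rtts_ms)

        def percentile(p):
            # simple nearest-rank style lookup
            idx = min(len(pdv) - 1, int(round(p / 100.0 * len(pdv))) - 1)
            return pdv[max(idx, 0)]

        return {
            "max": pdv[-1],
            "p99": percentile(99),
            "p95": percentile(95),
            "avg": sum(pdv) / len(pdv),
        }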
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 7:18 ` Jonathan Morton
@ 2015-05-07 7:24 ` Mikael Abrahamsson
2015-05-07 7:40 ` Sebastian Moeller
2015-05-07 7:37 ` [Bloat] DSLReports Speed Test has latency measurement built-in Sebastian Moeller
1 sibling, 1 reply; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 7:24 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
On Thu, 7 May 2015, Jonathan Morton wrote:
> However the more common characteristic is that delay is sometimes low
> (link idle) and sometimes high (buffer full) and rarely in between. In
> other words, delay samples are not statistically independent; loss due
> to jitter is bursty, and real-time applications like VoIP can't cope
> with that. For that reason, and due to your low temporal sampling rate,
> you should take the peak delay observed under load and compare it to the
> average during idle.
Well, some applications will stop playing if the playout buffer is empty;
if a packet arrives late they just start playing again and then increase
the PDV buffer to whatever gap was observed, and if the PDV buffer has
sustained fill, they start playing it out faster or skipping packets to play
down the PDV buffer fill again.
So you'll observe silence or cutouts, but you'll still hear all the sound;
after such an event, though, your mouth-ear-mouth-ear delay has increased.
As far as I can tell, for instance Skype has a lot of different ways to
cope with changing characteristics of the path, which work a lot better
than a 10 year old classic PSTN-style G.711-over-IP style system with
static 40ms PDV buffers, which behave exactly as you describe.
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-06 20:43 ` Toke Høiland-Jørgensen
@ 2015-05-07 7:33 ` Sebastian Moeller
0 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-05-07 7:33 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, bloat
Hi Toke,
On May 6, 2015, at 22:43 , Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Jonathan Morton <chromatix99@gmail.com> writes:
>
>> Compare these totals to twice the ITU benchmark figures, rate
>> accordingly, and plot on a map.
>
> A nice way of visualising this can be 'radius of reach within n
> milliseconds'. Or, 'number of people reachable within n ms'. This paper
> uses that (or something very similar) to visualise the benefits of
> speed-of-light internet:
> http://web.engr.illinois.edu/~singla2/papers/hotnets14.pdf
>
> That same paper uses 30 ms as an 'instant response' number, btw, citing
> this: http://plato.stanford.edu/entries/consciousness-temporal/empirical-findings.html
This number does not mean what the authors of that paper think it does (assuming that my interpretation is correct)… they at least should have read their reference 7 in full. Yes, 30ms will count as instantaneous, but it is far from the upper threshold.
To illustrate, the reference basically shows that if two successive events are spaced further than 30 ms apart, they will (most likely) be interpreted as two distinct events instead of one event with a temporal extent. To relate this to networks: if one were to send successive frames of video without buffering, these 30ms would be the time permissible for transmission and presentation of successive frames without people perceiving glitches or a slide show (you would think, though motion perception would still be of odd movement).
BUT if we think about the related phenomenon of flicker-fusion frequency, it becomes clear that this might well depend on the actual stimuli and the surrounding luminosity. I think no one is proposing to under-buffer so severely that gaps >= 30ms occur, so this number does not seem too relevant in my eyes.
I think the more relevant question is what delay between an action and the response people will tolerate and find acceptable. I guess I should do a little literature research.
Best Regards
Sebastian
>
> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 7:18 ` Jonathan Morton
2015-05-07 7:24 ` Mikael Abrahamsson
@ 2015-05-07 7:37 ` Sebastian Moeller
1 sibling, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-05-07 7:37 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Hi Jonathan,
On May 7, 2015, at 09:18 , Jonathan Morton <chromatix99@gmail.com> wrote:
> It may depend on the application's tolerance to packet loss. A packet delayed further than the jitter buffer's tolerance counts as lost, so *IF* jitter is randomly distributed, jitter can be traded off against loss. For those purposes, standard deviation may be a valid metric.
All valid, but I think that the induced latency does not follow a normal distribution: it has a lower bound, the minimum RTT caused by the “speed of light” (I simplify), but no real upper bound (I think we have examples of several seconds), so standard deviation or confidence intervals might not be applicable (at least not formally).
Best Regards
Sebastian
>
> However the more common characteristic is that delay is sometimes low (link idle) and sometimes high (buffer full) and rarely in between. In other words, delay samples are not statistically independent; loss due to jitter is bursty, and real-time applications like VoIP can't cope with that. For that reason, and due to your low temporal sampling rate, you should take the peak delay observed under load and compare it to the average during idle.
>
> - Jonathan Morton
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 7:24 ` Mikael Abrahamsson
@ 2015-05-07 7:40 ` Sebastian Moeller
2015-05-07 9:16 ` Mikael Abrahamsson
0 siblings, 1 reply; 183+ messages in thread
From: Sebastian Moeller @ 2015-05-07 7:40 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Jonathan Morton, bloat
Hi Mikael,
On May 7, 2015, at 09:24 , Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, Jonathan Morton wrote:
>
>> However the more common characteristic is that delay is sometimes low (link idle) and sometimes high (buffer full) and rarely in between. In other words, delay samples are not statistically independent; loss due to jitter is bursty, and real-time applications like VoIP can't cope with that. For that reason, and due to your low temporal sampling rate, you should take the peak delay observed under load and compare it to the average during idle.
>
> Well, some applications will stop playing if the playout-buffer is empty, and if the packet arrives late, just start playing again and then increase the PDV buffer to whatever gap was observed, and if the PDV buffer has sustained fill, start playing it faster or skipping packets to play down the PDV buffer fill again.
>
> So you'll observe silence or cutouts, but you'll still hear all sound but after this event, your mouth-ear-mouth-ear delay has now increased.
>
> As far as I can tell, for instance Skype has a lot of different ways to cope with changing characteristics of the path, which work a lot better than a 10 year old classic PSTN-style G.711-over-IP style system with static 40ms PDV buffers, which behave exactly as you describe.
Is this 40ms sort of set in stone? If so, we have a new indicator for bad bufferbloat: if the induced latency is > 40 ms, the link is unsuitable for decent voip (using old equipment). Is the newer voip stuff that telcos roll out currently any smarter?
Best Regards
Sebastian
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 7:40 ` Sebastian Moeller
@ 2015-05-07 9:16 ` Mikael Abrahamsson
2015-05-07 10:44 ` jb
0 siblings, 1 reply; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 9:16 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
On Thu, 7 May 2015, Sebastian Moeller wrote:
> Is this 40ms sort of set in stone? If so we have a new indicator
> for bad buffer-bloat if inured latency > 40 ms link is unsuitable for
> decent voip (using old equipment). Is the newer voip stuff that telcos
> roll out currently any smarter?
The 40ms is fairly typical for what I encountered 10 years ago. To deploy
them there was a requirement to have QoS (basically low-latency queuing on
Cisco) for DSCP EF traffic, otherwise things didn't work on the 0.5-2
megabit/s connections that were common back then.
I'd say that for anything people are trying to deploy now for use on the Internet
without QoS, 40ms just won't work and has never really worked. You need
adaptive PDV buffers, and they need to be able to handle hundreds of ms of
PDV.
If you look at this old document from Cisco (10 years old):
http://www.ciscopress.com/articles/article.asp?p=357102
"Voice (Bearer Traffic)
The following list summarizes the key QoS requirements and recommendations
for voice (bearer traffic):
Voice traffic should be marked to DSCP EF per the QoS Baseline and RFC
3246.
Loss should be no more than 1 percent.
One-way latency (mouth to ear) should be no more than 150 ms.
Average one-way jitter should be targeted at less than 30 ms.
A range of 21 to 320 kbps of guaranteed priority bandwidth is required per
call (depending on the sampling rate, the VoIP codec, and Layer 2 media
overhead).
Voice quality directly is affected by all three QoS quality factors: loss,
latency, and jitter."
This requirement kind of reflects the requirements of the VoIP systems of
the day, with their 40ms PDV buffers. There is also a section a page or so down
about "Jitter buffers", with a recommendation to use adaptive
jitter buffers, which I didn't encounter back then but really hope are a
lot more common today.
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 9:16 ` Mikael Abrahamsson
@ 2015-05-07 10:44 ` jb
2015-05-07 11:36 ` Sebastian Moeller
` (2 more replies)
0 siblings, 3 replies; 183+ messages in thread
From: jb @ 2015-05-07 10:44 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 3318 bytes --]
There is a web-socket-based jitter tester now. It is at a very early stage but
works ok.
http://www.dslreports.com/speedtest?radar=1
So the latency displayed is the mean latency from a rolling 60-sample
buffer; the minimum latency is also displayed.
The +/- PDV value is the mean difference between sequential pings in
that same rolling buffer. It is actually quite similar to the std. dev. (not
shown).
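In other words, roughly this (just a sketch of the computation, taking the mean absolute difference between consecutive samples; not the actual site code):

    from collections import deque

    class RollingLatency:
        # Rolling window over the last 60 samples, as described above.
        def __init__(self, size=60):
            self.samples = deque(maxlen=size)

        def add(self, rtt_ms):
            self.samples.append(rtt_ms)

        def stats(self):
            s = list(self.samples)
            if not s:
                return None
            mean = sum(s) / len(s)
            minimum = min(s)
            # "+/- PDV": mean difference between sequential pings in the window
            diffs = [abs(b - a) for a, b in zip(s, s[1:])]
            pdv = sum(diffs) / len(diffs) if diffs else 0.0
            return mean, minimum, pdv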
Anyway, because it is talking to 21 servers or whatever, it is not doing
high-frequency pinging; I think it's about 2 Hz per server (which is about 50
packets per second and not much bandwidth).
My thought is one might click on a server and focus in on that,
and then it could go to a higher frequency. Since it is still TCP, I've got
lingering doubts that simulating a 20ms stream with TCP bursts is the same as
UDP; definitely in the case of packet loss it would not be.
There is no way to "load" your connection from this tool; you could open
another page and run a speed test, of course.
I'm still working on it, but since you guys are talking about RTT and jitter,
I thought I'd throw it into the topic.
On Thu, May 7, 2015 at 7:16 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, Sebastian Moeller wrote:
>
> Is this 40ms sort of set in stone? If so we have a new indicator
>> for bad buffer-bloat if inured latency > 40 ms link is unsuitable for
>> decent voip (using old equipment). Is the newer voip stuff that telcos roll
>> out currently any smarter?
>>
>
> The 40ms is fairly typical for what I encountered 10 years ago. To deploy
> them there was a requirement to have QoS (basically low-latency queuing on
> Cisco) for DSCP EF traffic, otherwise things didn't work on the 0.5-2
> megabit/s connections that were common back then.
>
> I'd say anything people are trying to deploy now for use on the Internet
> without QoS, 40ms just won't work and has never really worked. You need
> adaptive PDV-buffers and they need to be able to handle hundreds of ms of
> PDV.
>
> If you look at this old document from Cisco (10 years old):
>
> http://www.ciscopress.com/articles/article.asp?p=357102
>
> "Voice (Bearer Traffic)
>
> The following list summarizes the key QoS requirements and recommendations
> for voice (bearer traffic):
>
> Voice traffic should be marked to DSCP EF per the QoS Baseline and RFC
> 3246.
>
> Loss should be no more than 1 percent.
>
> One-way latency (mouth to ear) should be no more than 150 ms.
>
> Average one-way jitter should be targeted at less than 30 ms.
>
> A range of 21 to 320 kbps of guaranteed priority bandwidth is required per
> call (depending on the sampling rate, the VoIP codec, and Layer 2 media
> overhead).
>
> Voice quality directly is affected by all three QoS quality factors: loss,
> latency, and jitter."
>
> This requirement kind of reflects the requirements of the VoIP systems of
> the day with 40ms PDV buffer. There is also a section a page down or so
> about "Jitter buffers" where there is a recommendation to have adaptive
> jitter buffers, which I didn't encounter back then but I really hope is a
> lot more common today.
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 4534 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 10:44 ` jb
@ 2015-05-07 11:36 ` Sebastian Moeller
2015-05-07 11:44 ` Mikael Abrahamsson
2015-05-08 13:20 ` [Bloat] DSLReports Jitter/PDV test Rich Brown
2 siblings, 0 replies; 183+ messages in thread
From: Sebastian Moeller @ 2015-05-07 11:36 UTC (permalink / raw)
To: jb; +Cc: bloat
Hi jb,
On May 7, 2015, at 12:44 , jb <justin@dslr.net> wrote:
> There is a web socket based jitter tester now. It is very early stage but works ok.
>
> http://www.dslreports.com/speedtest?radar=1
Looks great.
>
> So the latency displayed is the mean latency from a rolling 60 sample buffer,
> Minimum latency is also displayed.
> and the +/- PDV value is the mean difference between sequential pings in
> that same rolling buffer. It is quite similar to the std.dev actually (not shown).
So it takes RTT(N+1) - RTT(N)? But if, due to bufferbloat, the latency goes up for several hundreds of ms or even several seconds, would this not register as low PDV? Would it not be better to take the difference with the minimum? And maybe even remember the minimum for a longer period than the 60-sample window? I guess no network path is guaranteed to be stable over time, but if re-routing is rare, maybe even keep the same minimum as long as the tool is running? You could still report some aggregate, like the mean deviation of the sample buffer, just not taking the difference between consecutive samples (which sort of feels like giving the change in PDV rather than the PDV itself, but as always I am a layman in these matters)...
>
> Anyway because it is talking to 21 servers or whatever it is not doing high
> frequency pinging, I think its about 2hz per server (which is about 50 packets
> per second and not much bandwidth).
>
> My thought is one might click on a server and focus in on that,
That sounds pretty useful, especially given the already relatively broad global coverage ;)
> then it could go to a higher frequency. Since it is still TCP, I've got lingering
> doubts that simulating 20ms stream with tcp bursts is the same as UDP,
> definitely in the case of packet loss, it would not be.'
>
> There is no way to "load" your connection from this tool, you could open another
> page and run a speed test of course.
Could this be used to select a server for the bandwidth test?
Best Regards
Sebastian
>
> I'm still working on it, but since you guys are talking RTT and Jitter thought I'd
> throw it into the topic.
>
> On Thu, May 7, 2015 at 7:16 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, Sebastian Moeller wrote:
>
> Is this 40ms sort of set in stone? If so we have a new indicator for bad buffer-bloat if inured latency > 40 ms link is unsuitable for decent voip (using old equipment). Is the newer voip stuff that telcos roll out currently any smarter?
>
> The 40ms is fairly typical for what I encountered 10 years ago. To deploy them there was a requirement to have QoS (basically low-latency queuing on Cisco) for DSCP EF traffic, otherwise things didn't work on the 0.5-2 megabit/s connections that were common back then.
>
> I'd say anything people are trying to deploy now for use on the Internet without QoS, 40ms just won't work and has never really worked. You need adaptive PDV-buffers and they need to be able to handle hundreds of ms of PDV.
>
> If you look at this old document from Cisco (10 years old):
>
> http://www.ciscopress.com/articles/article.asp?p=357102
>
> "Voice (Bearer Traffic)
>
> The following list summarizes the key QoS requirements and recommendations for voice (bearer traffic):
>
> Voice traffic should be marked to DSCP EF per the QoS Baseline and RFC 3246.
>
> Loss should be no more than 1 percent.
>
> One-way latency (mouth to ear) should be no more than 150 ms.
>
> Average one-way jitter should be targeted at less than 30 ms.
>
> A range of 21 to 320 kbps of guaranteed priority bandwidth is required per call (depending on the sampling rate, the VoIP codec, and Layer 2 media overhead).
>
> Voice quality directly is affected by all three QoS quality factors: loss, latency, and jitter."
>
> This requirement kind of reflects the requirements of the VoIP systems of the day with 40ms PDV buffer. There is also a section a page down or so about "Jitter buffers" where there is a recommendation to have adaptive jitter buffers, which I didn't encounter back then but I really hope is a lot more common today.
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 10:44 ` jb
2015-05-07 11:36 ` Sebastian Moeller
@ 2015-05-07 11:44 ` Mikael Abrahamsson
2015-05-07 13:10 ` Jim Gettys
2015-05-07 13:14 ` jb
2015-05-08 13:20 ` [Bloat] DSLReports Jitter/PDV test Rich Brown
2 siblings, 2 replies; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 11:44 UTC (permalink / raw)
To: jb; +Cc: bloat
On Thu, 7 May 2015, jb wrote:
> There is a web socket based jitter tester now. It is very early stage but
> works ok.
>
> http://www.dslreports.com/speedtest?radar=1
>
> So the latency displayed is the mean latency from a rolling 60 sample
> buffer, Minimum latency is also displayed. and the +/- PDV value is the
> mean difference between sequential pings in that same rolling buffer. It
> is quite similar to the std.dev actually (not shown).
So I think there are two schools here: either you take the average and display
+/- from that, or (what I think I prefer) you take the lowest of the last 100
samples (or something like that) and then display PDV from that "floor" value,
i.e. PDV can never be negative, it can only be positive.
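Something like this is what I have in mind, as a rough Python sketch (the
100-sample window and the names are just placeholders, not anything the
test actually does):

from collections import deque

class FloorPDV:
    # Track PDV as (sample - floor), where floor is the lowest RTT seen
    # in a rolling window, so the reported PDV can never be negative.
    def __init__(self, window=100):
        self.samples = deque(maxlen=window)   # rolling RTT samples in ms

    def add(self, rtt_ms):
        self.samples.append(rtt_ms)

    def floor(self):
        return min(self.samples)              # best RTT in the window

    def pdv(self):
        f = self.floor()
        # mean positive deviation above the floor value
        return sum(s - f for s in self.samples) / len(self.samples)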
Apart from that, the above multi-place RTT test is really really nice,
thanks for doing this!
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 11:44 ` Mikael Abrahamsson
@ 2015-05-07 13:10 ` Jim Gettys
2015-05-07 13:18 ` Mikael Abrahamsson
2015-05-07 13:14 ` jb
1 sibling, 1 reply; 183+ messages in thread
From: Jim Gettys @ 2015-05-07 13:10 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 1836 bytes --]
What I don't know is how rapidly VoIP applications will adjust their
latency + jitter window (the operating point they choose for their
operation). They can't adjust it instantly: if they did, the transitions
from one operating point to another would cause problems, so that
adjustment certainly won't be made quickly.
So the time period over which one computes jitter statistics should
probably be related to that behavior.
Ideally, we need to get someone involved in WebRTC to help with this, to
present statistics that may be useful to end users to predict the behavior
of their service.
I'll see if I can get someone working on that to join the discussion.
- Jim
On Thu, May 7, 2015 at 7:44 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, jb wrote:
>
> There is a web socket based jitter tester now. It is very early stage but
>> works ok.
>>
>> http://www.dslreports.com/speedtest?radar=1
>>
>> So the latency displayed is the mean latency from a rolling 60 sample
>> buffer, Minimum latency is also displayed. and the +/- PDV value is the
>> mean difference between sequential pings in that same rolling buffer. It is
>> quite similar to the std.dev actually (not shown).
>>
>
> So I think there are two schools here, either you take average and display
> + / - from that, but I think I prefer to take the lowest of the last 100
> samples (or something), and then display PDV from that "floor" value, ie
> PDV can't ever be negative, it can only be positive.
>
> Apart from that, the above multi-place RTT test is really really nice,
> thanks for doing this!
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 3183 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 11:44 ` Mikael Abrahamsson
2015-05-07 13:10 ` Jim Gettys
@ 2015-05-07 13:14 ` jb
2015-05-07 13:26 ` Neil Davies
2015-05-07 14:45 ` Simon Barber
1 sibling, 2 replies; 183+ messages in thread
From: jb @ 2015-05-07 13:14 UTC (permalink / raw)
To: Mikael Abrahamsson, bloat
[-- Attachment #1: Type: text/plain, Size: 1598 bytes --]
I thought that would be more sane too. I see it mentioned online that PDV is a
gaussian distribution (around the mean), but it looks more like half a bell
curve, with most numbers near the lowest latency seen and getting
progressively worse with less frequency.
At least for DSL connections on good ISPs that scenario seems more frequent:
you "usually" get the best latency and "sometimes" get spikes or fuzz on
top of it.
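If one wanted to model it, a one-sided distribution (a hard floor plus a
right-hand tail) matches what I see better than a Gaussian around the mean.
Purely for illustration, with made-up parameters:

import random

def synth_rtt_ms(floor_ms=20.0, mean_fuzz_ms=3.0,
                 spike_prob=0.02, spike_ms=80.0):
    # Toy model of DSL-style RTTs: a hard floor, small exponential "fuzz"
    # on top of it, and occasional large spikes -- one-sided, not Gaussian.
    rtt = floor_ms + random.expovariate(1.0 / mean_fuzz_ms)
    if random.random() < spike_prob:
        rtt += spike_ms
    return rtt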
By the way, after I posted I discovered Firefox has an issue with this test,
so I had to block it with a message; my apologies if anyone wasted time trying
it with FF. Hopefully I can figure out why.
On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, jb wrote:
>
> There is a web socket based jitter tester now. It is very early stage but
>> works ok.
>>
>> http://www.dslreports.com/speedtest?radar=1
>>
>> So the latency displayed is the mean latency from a rolling 60 sample
>> buffer, Minimum latency is also displayed. and the +/- PDV value is the
>> mean difference between sequential pings in that same rolling buffer. It is
>> quite similar to the std.dev actually (not shown).
>>
>
> So I think there are two schools here, either you take average and display
> + / - from that, but I think I prefer to take the lowest of the last 100
> samples (or something), and then display PDV from that "floor" value, ie
> PDV can't ever be negative, it can only be positive.
>
> Apart from that, the above multi-place RTT test is really really nice,
> thanks for doing this!
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>
[-- Attachment #2: Type: text/html, Size: 2329 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 13:10 ` Jim Gettys
@ 2015-05-07 13:18 ` Mikael Abrahamsson
0 siblings, 0 replies; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-05-07 13:18 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
On Thu, 7 May 2015, Jim Gettys wrote:
> Ideally, we need to get someone involved in WebRTC to help with this, to
> present statistics that may be useful to end users to predict the
> behavior of their service.
If nothing else, I would really like to be able to expose the realtime
application and its network experience, to the user.
For the kind of classic PSTNoverIP system I mentioned before, it was
usually possible to collect statistics such as:
Packet loss (packet was lost completely)
Packet re-ordering (packets arrived out of order)
Packet PDV buffer miss (packet arrived too late to be played on time)
I guess it's possible to get PDV buffer underrun or overrun (depending on
how one sees it), if I get a bunch of PDV buffer misses and then I halt
play-out to wait for the PDV buffer to fill up, and then I get 200ms worth
of packets at once and I don't have 200ms worth of buffer, then I throw
away sound due to that...
So it all depends on the whole machinery and how it acts; you need
different statistics. How to present this in a useful manner to the user
is a very interesting problem, but it would be nice if most VoIP
applications at least had a "status window" where these values could be
seen in a graph or something similar to "task manager" in Windows.
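Even a simple per-call counter set, exposed in that kind of status window,
would go a long way. A rough sketch (the field names are mine, not from any
particular VoIP stack):

from dataclasses import dataclass

@dataclass
class CallStats:
    # Receive-side counters a VoIP app could expose to the user.
    received: int = 0
    lost: int = 0        # packet never arrived
    reordered: int = 0   # packet arrived out of sequence order
    late: int = 0        # packet arrived too late for the PDV buffer

    def loss_pct(self):
        total = self.received + self.lost
        return 100.0 * self.lost / total if total else 0.0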
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 13:14 ` jb
@ 2015-05-07 13:26 ` Neil Davies
2015-05-07 14:45 ` Simon Barber
1 sibling, 0 replies; 183+ messages in thread
From: Neil Davies @ 2015-05-07 13:26 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2962 bytes --]
On 7 May 2015, at 14:14, jb <justin@dslr.net> wrote:
> I thought would be more sane too. I see mentioned online that PDV is a
> gaussian distribution (around mean) but it looks more like half a bell curve, with most numbers near the the lowest latency seen, and getting progressively worse with
> less frequency.
That's someone describing the typical mathematical formulation (motivated by noise models in signal propagation), not the reality experienced over DSL links.
> At least for DSL connections on good ISPs that scenario seems more frequent.
> You "usually" get the best latency and "sometimes" get spikes or fuzz on top of it.
"Good ISPs" (let's, for the moment define good this way) are ones in which the variability induced by transit accross them is small and bounded - BT Wholesale (access network) has - in our experience - delivers packets (after you've removed the effects of distance and packet size) from the customer to the retail ISP with <5ms delay variation (~0%loss) and from the retail ISP to the customer <15ms delay variation <0.1% loss. The delay appears to be uniformly distributed.
The major (in such a scenario) cause of delay/loss is the instantaneous overdriving of the last mile capacity - that takes the typical pattern of rapid growth followed by slow decay that would expected for a queue fill/empty cycle at that point in the network (in that case the BRAS)
An example (not quite what described above - but one that illustrates the isssues) can be found here; http://www.slideshare.net/mgeddes/advanced-network-performance-measurement
Neil
>
> by the way after I posted I discovered Firefox has an issue with this test so I had
> to block it with a message, my apologies if anyone wasted time trying it with FF.
> Hopefully i can figure out why.
>
>
> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 7 May 2015, jb wrote:
>
> There is a web socket based jitter tester now. It is very early stage but
> works ok.
>
> http://www.dslreports.com/speedtest?radar=1
>
> So the latency displayed is the mean latency from a rolling 60 sample buffer, Minimum latency is also displayed. and the +/- PDV value is the mean difference between sequential pings in that same rolling buffer. It is quite similar to the std.dev actually (not shown).
>
> So I think there are two schools here, either you take average and display + / - from that, but I think I prefer to take the lowest of the last 100 samples (or something), and then display PDV from that "floor" value, ie PDV can't ever be negative, it can only be positive.
>
> Apart from that, the above multi-place RTT test is really really nice, thanks for doing this!
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
[-- Attachment #2: Type: text/html, Size: 4350 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 13:14 ` jb
2015-05-07 13:26 ` Neil Davies
@ 2015-05-07 14:45 ` Simon Barber
2015-05-07 22:27 ` Dave Taht
1 sibling, 1 reply; 183+ messages in thread
From: Simon Barber @ 2015-05-07 14:45 UTC (permalink / raw)
To: jb, Mikael Abrahamsson, bloat
[-- Attachment #1: Type: text/plain, Size: 3916 bytes --]
The key figure for VoIP is maximum latency, or perhaps somewhere around
99th percentile. Voice packets cannot be played out if they are late, so
how late they are is the only thing that matters. If many packets are early
but more than a very small number are late, then the jitter buffer has to
adjust to handle the late packets. Adjusting the jitter buffer disrupts the
conversation, so ideally adjustments are infrequent. When maximum latency
suddenly increases it becomes necessary to increase the buffer fairly
quickly to avoid a dropout in the conversation. Buffer reductions can be
hidden by waiting for gaps in conversation. People get used to the acoustic
round trip latency and learn how quickly to expect a reply from the other
person (unless latency is really too high), but adjustments interfere with
this learned expectation, so make it hard to interpret why the other person
has paused. Thus adjustments to the buffering should be as infrequent as
possible.
Codel measures and tracks minimum latency in its inner 'interval' loop. For
VoIP the maximum is what counts. You can call it minimum + jitter, but the
maximum is the important thing (not the absolute maximum, since a very
small number of late packets are tolerable, but perhaps the 99th percentile).
During a conversation it will take some time at the start to learn the
characteristics of the link, but ideally the jitter buffer algorithm will
quickly get to a place where few adjustments are made. The more
conservative the buffer (higher delay above minimum) the less likely a
future adjustment will be needed, hence a tendency towards larger buffers
(and more delay).
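Roughly what I mean, as a sketch (an illustration only, not any real
jitter buffer algorithm; the 99th percentile is just the knob under
discussion):

def jitter_buffer_target_ms(delay_samples_ms, percentile=0.99):
    # Pick a play-out delay so ~99% of recent packets arrive in time:
    # the buffer depth needed above the minimum observed one-way delay.
    s = sorted(delay_samples_ms)
    idx = min(len(s) - 1, int(percentile * len(s)))
    return s[idx] - s[0]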
Priority queueing is perfect for VoIP, since it can keep the jitter at a
single hop down to the transmission time for a single maximum size packet.
Fair Queueing will often achieve the same thing, since VoIP streams are
often the lowest bandwidth ongoing stream on the link.
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
> I thought would be more sane too. I see mentioned online that PDV is a
> gaussian distribution (around mean) but it looks more like half a bell
> curve, with most numbers near the the lowest latency seen, and getting
> progressively worse with
> less frequency.
> At least for DSL connections on good ISPs that scenario seems more frequent.
> You "usually" get the best latency and "sometimes" get spikes or fuzz on
> top of it.
>
> by the way after I posted I discovered Firefox has an issue with this test
> so I had
> to block it with a message, my apologies if anyone wasted time trying it
> with FF.
> Hopefully i can figure out why.
>
>
> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>
> > On Thu, 7 May 2015, jb wrote:
> >
> > There is a web socket based jitter tester now. It is very early stage but
> >> works ok.
> >>
> >> http://www.dslreports.com/speedtest?radar=1
> >>
> >> So the latency displayed is the mean latency from a rolling 60 sample
> >> buffer, Minimum latency is also displayed. and the +/- PDV value is the
> >> mean difference between sequential pings in that same rolling buffer. It is
> >> quite similar to the std.dev actually (not shown).
> >>
> >
> > So I think there are two schools here, either you take average and display
> > + / - from that, but I think I prefer to take the lowest of the last 100
> > samples (or something), and then display PDV from that "floor" value, ie
> > PDV can't ever be negative, it can only be positive.
> >
> > Apart from that, the above multi-place RTT test is really really nice,
> > thanks for doing this!
> >
> >
> > --
> > Mikael Abrahamsson email: swmike@swm.pp.se
> >
>
>
>
> ----------
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 5330 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 14:45 ` Simon Barber
@ 2015-05-07 22:27 ` Dave Taht
2015-05-07 22:45 ` Dave Taht
2015-05-07 23:09 ` Dave Taht
0 siblings, 2 replies; 183+ messages in thread
From: Dave Taht @ 2015-05-07 22:27 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net> wrote:
> The key figure for VoIP is maximum latency, or perhaps somewhere around 99th
> percentile. Voice packets cannot be played out if they are late, so how late
> they are is the only thing that matters. If many packets are early but more
> than a very small number are late, then the jitter buffer has to adjust to
> handle the late packets. Adjusting the jitter buffer disrupts the
> conversation, so ideally adjustments are infrequent. When maximum latency
> suddenly increases it becomes necessary to increase the buffer fairly
> quickly to avoid a dropout in the conversation. Buffer reductions can be
> hidden by waiting for gaps in conversation. People get used to the acoustic
> round trip latency and learn how quickly to expect a reply from the other
> person (unless latency is really too high), but adjustments interfere with
> this learned expectation, so make it hard to interpret why the other person
> has paused. Thus adjustments to the buffering should be as infrequent as
> possible.
>
> Codel measures and tracks minimum latency in its inner 'interval' loop. For
> VoIP the maximum is what counts. You can call it minimum + jitter, but the
> maximum is the important thing (not the absolute maximum, since a very small
> number of late packets are tolerable, but perhaps the 99th percentile).
>
> During a conversation it will take some time at the start to learn the
> characteristics of the link, but ideally the jitter buffer algorithm will
> quickly get to a place where few adjustments are made. The more conservative
> the buffer (higher delay above minimum) the less likely a future adjustment
> will be needed, hence a tendency towards larger buffers (and more delay).
>
> Priority queueing is perfect for VoIP, since it can keep the jitter at a
> single hop down to the transmission time for a single maximum size packet.
> Fair Queueing will often achieve the same thing, since VoIP streams are
> often the lowest bandwidth ongoing stream on the link.
Unfortunately this is more nuanced than this. Not for the first time
do I wish that email contained math, and/or we had put together a paper
for this containing the relevant math. I do have a spreadsheet lying
around here somewhere...
In the case of a drop tail queue, jitter is a function of the total
amount of data outstanding on the link by all the flows. A single
big fat flow experiencing a drop will drop its buffer occupancy
(and thus its effect on other flows) by a lot on the next RTT. However
a lot of fat flows will drop by less if drops are few. Total delay
is the sum of all packets outstanding on the link.
In the case of stochastic packet-fair queuing jitter is a function
of the total number of bytes in each packet outstanding on the sum
of the total number of flows. The total delay is the sum of the
bytes delivered per packet per flow.
In the case of DRR, jitter is a function of the total number of bytes
allowed by the quantum per flow outstanding on the link. The total
delay experienced by the flow is a function of the amounts of
bytes delivered with the number of flows.
In the case of fq_codel, jitter is a function of the total number
of bytes allowed by the quantum per flow outstanding on the link,
with the sparse optimization pushing flows with no queue
in the available window to the front. Furthermore
codel acts to shorten the lengths of the queues overall.
fq_codel's delay, when the arriving packet of a new flow can be serviced
in less time than the total of the other flows' quantums, is a function
of the total number of flows that are not also building queues. When
the total service time for all flows exceeds both the interval the voip
packet is delivered in AND the quantum under which the algorithm
is delivering, fq_codel degrades to DRR behavior. (In other words,
given enough queuing flows or enough new flows, you can steadily
accrue delay on a voip flow under fq_codel.) Predicting jitter is
really hard to do here, but it is still pretty minimal compared to the
alternatives above.
in the above 3 cases, hash collisions permute the result. Cake and
fq_pie have a lot less collisions.
I am generally sanguine about this along the edge. From the internet,
packets cannot be easily classified, yet most edge networks have more
bandwidth in that direction and are thus able to fit WAY more flows in
under 10ms. Outbound, from the home or small business, some
classification can be effectively used in an X-tier shaper (or cake) to
ensure better priority (still with fair) queuing for this special
class of application - not that this is an issue under most home
workloads. We think. We really need to do more benchmarking of web and
dash traffic loads.
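Back of the envelope, the DRR-degraded case looks something like this (my
own toy numbers, not from a paper, and assuming one MTU-sized quantum per
backlogged flow per round):

def drr_round_delay_ms(backlogged_flows, quantum_bytes=1514,
                       link_mbps=20.0):
    # Worst-case wait for a sparse (e.g. voip) packet that just missed
    # its turn: every backlogged flow sends one quantum ahead of it.
    bytes_ahead = backlogged_flows * quantum_bytes
    return bytes_ahead * 8 / (link_mbps * 1e6) * 1000.0

# e.g. 50 queue-building flows on a 20 Mbit/s uplink:
# drr_round_delay_ms(50) is roughly 30 ms of added delay/jitter.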
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>
>> I thought would be more sane too. I see mentioned online that PDV is a
>> gaussian distribution (around mean) but it looks more like half a bell
>> curve, with most numbers near the the lowest latency seen, and getting
>> progressively worse with
>> less frequency.
>> At least for DSL connections on good ISPs that scenario seems more
>> frequent.
>> You "usually" get the best latency and "sometimes" get spikes or fuzz on
>> top of it.
>>
>> by the way after I posted I discovered Firefox has an issue with this test
>> so I had
>> to block it with a message, my apologies if anyone wasted time trying it
>> with FF.
>> Hopefully i can figure out why.
>>
>>
>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>> wrote:
>>>
>>> On Thu, 7 May 2015, jb wrote:
>>>
>>>> There is a web socket based jitter tester now. It is very early stage
>>>> but
>>>> works ok.
>>>>
>>>> http://www.dslreports.com/speedtest?radar=1
>>>>
>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is the mean
>>>> difference between sequential pings in that same rolling buffer. It is quite
>>>> similar to the std.dev actually (not shown).
>>>
>>>
>>> So I think there are two schools here, either you take average and
>>> display + / - from that, but I think I prefer to take the lowest of the last
>>> 100 samples (or something), and then display PDV from that "floor" value, ie
>>> PDV can't ever be negative, it can only be positive.
>>>
>>> Apart from that, the above multi-place RTT test is really really nice,
>>> thanks for doing this!
>>>
>>>
>>> --
>>> Mikael Abrahamsson email: swmike@swm.pp.se
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 22:27 ` Dave Taht
@ 2015-05-07 22:45 ` Dave Taht
2015-05-07 23:09 ` Dave Taht
1 sibling, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-05-07 22:45 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net> wrote:
>> The key figure for VoIP is maximum latency, or perhaps somewhere around 99th
>> percentile. Voice packets cannot be played out if they are late, so how late
>> they are is the only thing that matters. If many packets are early but more
>> than a very small number are late, then the jitter buffer has to adjust to
>> handle the late packets. Adjusting the jitter buffer disrupts the
>> conversation, so ideally adjustments are infrequent. When maximum latency
>> suddenly increases it becomes necessary to increase the buffer fairly
>> quickly to avoid a dropout in the conversation. Buffer reductions can be
>> hidden by waiting for gaps in conversation. People get used to the acoustic
>> round trip latency and learn how quickly to expect a reply from the other
>> person (unless latency is really too high), but adjustments interfere with
>> this learned expectation, so make it hard to interpret why the other person
>> has paused. Thus adjustments to the buffering should be as infrequent as
>> possible.
>>
>> Codel measures and tracks minimum latency in its inner 'interval' loop. For
>> VoIP the maximum is what counts. You can call it minimum + jitter, but the
>> maximum is the important thing (not the absolute maximum, since a very small
>> number of late packets are tolerable, but perhaps the 99th percentile).
>>
>> During a conversation it will take some time at the start to learn the
>> characteristics of the link, but ideally the jitter buffer algorithm will
>> quickly get to a place where few adjustments are made. The more conservative
>> the buffer (higher delay above minimum) the less likely a future adjustment
>> will be needed, hence a tendency towards larger buffers (and more delay).
>>
>> Priority queueing is perfect for VoIP, since it can keep the jitter at a
>> single hop down to the transmission time for a single maximum size packet.
>> Fair Queueing will often achieve the same thing, since VoIP streams are
>> often the lowest bandwidth ongoing stream on the link.
>
> Unfortunately this is more nuanced than this. Not for the first time
> do I wish that email contained math, and/or we had put together a paper
> for this containing the relevant math. I do have a spreadsheet lying
> around here somewhere...
>
> In the case of a drop tail queue, jitter is a function of the total
> amount of data outstanding on the link by all the flows. A single
> big fat flow experiencing a drop will drop it's buffer occupancy
> (and thus effect on other flows) by a lot on the next RTT. However
> a lot of fat flows will drop by less if drops are few. Total delay
> is the sum of all packets outstanding on the link.
>
> In the case of stochastic packet-fair queuing jitter is a function
> of the total number of bytes in each packet outstanding on the sum
> of the total number of flows. The total delay is the sum of the
> bytes delivered per packet per flow.
>
> In the case of DRR, jitter is a function of the total number of bytes
> allowed by the quantum per flow outstanding on the link. The total
> delay experienced by the flow is a function of the amounts of
> bytes delivered with the number of flows.
>
> In the case of fq_codel, jitter is a function of of the total number
> of bytes allowed by the quantum per flow outstanding on the link,
> with the sparse optimization pushing flows with no queue
> queue in the available window to the front. Furthermore
> codel acts to shorten the lengths of the queues overall.
>
> fq_codel's delay: when the arriving in new flow packet can be serviced
> in less time than the total number of flows' quantums, is a function
> of the total number of flows that are not also building queues. When
> the total service time for all flows exceeds the interval the voip
> packet is delivered in, and AND the quantum under which the algorithm
> is delivering, fq_codel degrades to DRR behavior. (in other words,
> given enough queuing flows or enough new flows, you can steadily
> accrue delay on a voip flow under fq_codel). Predicting jitter is
> really hard to do here, but still pretty minimal compared to the
> alternatives above.
>
> in the above 3 cases, hash collisions permute the result. Cake and
> fq_pie have a lot less collisions.
>
> I am generally sanguine about this along the edge - from the internet
> packets cannot be easily classified, yet most edge networks have more
> bandwidth from that direction, thus able to fit WAY more flows in
> under 10ms, and outbound, from the home or small business, some
> classification can be effectively used in a X tier shaper (or cake) to
> ensure better priority (still with fair) queuing for this special
> class of application - not that under most home workloads that this is
> an issue. We think. We really need to do more benchmarking of web and
> dash traffic loads.
I note also that I fought for (and lost) an argument to make it possible
for webrtc applications to use one port for video and another for voice.
This would have provided a useful e2e clock to measure video congestion
against on a minimal interval, in an FQ'd environment in particular, and
perhaps have led to lower latency videoconferencing, more rapid ramp ups
of frame rate or quality, etc.
The arguments for minimizing port use, the enormous difficulty in
establishing two clear paths for those ports all the time, and the
difficulty of lip sync with two separate flows won the day. I decided
to wait until we had the fq_codel derived stuff more worked out before
playing with the browsers themselves.
I would also have liked ECN adopted for webrtc's primary frame bursts,
but the early proposal for that (in "Nada") was dropped, last
I looked. Again, this could be revisited.
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>>
>>> I thought would be more sane too. I see mentioned online that PDV is a
>>> gaussian distribution (around mean) but it looks more like half a bell
>>> curve, with most numbers near the the lowest latency seen, and getting
>>> progressively worse with
>>> less frequency.
>>> At least for DSL connections on good ISPs that scenario seems more
>>> frequent.
>>> You "usually" get the best latency and "sometimes" get spikes or fuzz on
>>> top of it.
>>>
>>> by the way after I posted I discovered Firefox has an issue with this test
>>> so I had
>>> to block it with a message, my apologies if anyone wasted time trying it
>>> with FF.
>>> Hopefully i can figure out why.
>>>
>>>
>>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>>> wrote:
>>>>
>>>> On Thu, 7 May 2015, jb wrote:
>>>>
>>>>> There is a web socket based jitter tester now. It is very early stage
>>>>> but
>>>>> works ok.
>>>>>
>>>>> http://www.dslreports.com/speedtest?radar=1
>>>>>
>>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is the mean
>>>>> difference between sequential pings in that same rolling buffer. It is quite
>>>>> similar to the std.dev actually (not shown).
>>>>
>>>>
>>>> So I think there are two schools here, either you take average and
>>>> display + / - from that, but I think I prefer to take the lowest of the last
>>>> 100 samples (or something), and then display PDV from that "floor" value, ie
>>>> PDV can't ever be negative, it can only be positive.
>>>>
>>>> Apart from that, the above multi-place RTT test is really really nice,
>>>> thanks for doing this!
>>>>
>>>>
>>>> --
>>>> Mikael Abrahamsson email: swmike@swm.pp.se
>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 22:27 ` Dave Taht
2015-05-07 22:45 ` Dave Taht
@ 2015-05-07 23:09 ` Dave Taht
2015-05-08 2:05 ` jb
2015-05-08 3:54 ` Eric Dumazet
1 sibling, 2 replies; 183+ messages in thread
From: Dave Taht @ 2015-05-07 23:09 UTC (permalink / raw)
To: Simon Barber; +Cc: bloat
On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net> wrote:
>> The key figure for VoIP is maximum latency, or perhaps somewhere around 99th
>> percentile. Voice packets cannot be played out if they are late, so how late
>> they are is the only thing that matters. If many packets are early but more
>> than a very small number are late, then the jitter buffer has to adjust to
>> handle the late packets. Adjusting the jitter buffer disrupts the
>> conversation, so ideally adjustments are infrequent. When maximum latency
>> suddenly increases it becomes necessary to increase the buffer fairly
>> quickly to avoid a dropout in the conversation. Buffer reductions can be
>> hidden by waiting for gaps in conversation. People get used to the acoustic
>> round trip latency and learn how quickly to expect a reply from the other
>> person (unless latency is really too high), but adjustments interfere with
>> this learned expectation, so make it hard to interpret why the other person
>> has paused. Thus adjustments to the buffering should be as infrequent as
>> possible.
>>
>> Codel measures and tracks minimum latency in its inner 'interval' loop. For
>> VoIP the maximum is what counts. You can call it minimum + jitter, but the
>> maximum is the important thing (not the absolute maximum, since a very small
>> number of late packets are tolerable, but perhaps the 99th percentile).
>>
>> During a conversation it will take some time at the start to learn the
>> characteristics of the link, but ideally the jitter buffer algorithm will
>> quickly get to a place where few adjustments are made. The more conservative
>> the buffer (higher delay above minimum) the less likely a future adjustment
>> will be needed, hence a tendency towards larger buffers (and more delay).
>>
>> Priority queueing is perfect for VoIP, since it can keep the jitter at a
>> single hop down to the transmission time for a single maximum size packet.
>> Fair Queueing will often achieve the same thing, since VoIP streams are
>> often the lowest bandwidth ongoing stream on the link.
>
> Unfortunately this is more nuanced than this. Not for the first time
> do I wish that email contained math, and/or we had put together a paper
> for this containing the relevant math. I do have a spreadsheet lying
> around here somewhere...
>
> In the case of a drop tail queue, jitter is a function of the total
> amount of data outstanding on the link by all the flows. A single
> big fat flow experiencing a drop will drop it's buffer occupancy
> (and thus effect on other flows) by a lot on the next RTT. However
> a lot of fat flows will drop by less if drops are few. Total delay
> is the sum of all packets outstanding on the link.
>
> In the case of stochastic packet-fair queuing jitter is a function
> of the total number of bytes in each packet outstanding on the sum
> of the total number of flows. The total delay is the sum of the
> bytes delivered per packet per flow.
>
> In the case of DRR, jitter is a function of the total number of bytes
> allowed by the quantum per flow outstanding on the link. The total
> delay experienced by the flow is a function of the amounts of
> bytes delivered with the number of flows.
>
> In the case of fq_codel, jitter is a function of of the total number
> of bytes allowed by the quantum per flow outstanding on the link,
> with the sparse optimization pushing flows with no queue
> queue in the available window to the front. Furthermore
> codel acts to shorten the lengths of the queues overall.
>
> fq_codel's delay: when the arriving in new flow packet can be serviced
> in less time than the total number of flows' quantums, is a function
> of the total number of flows that are not also building queues. When
> the total service time for all flows exceeds the interval the voip
> packet is delivered in, and AND the quantum under which the algorithm
> is delivering, fq_codel degrades to DRR behavior. (in other words,
> given enough queuing flows or enough new flows, you can steadily
> accrue delay on a voip flow under fq_codel). Predicting jitter is
> really hard to do here, but still pretty minimal compared to the
> alternatives above.
And to complexify it further: if the total flows' service time exceeds
the interval on which the voip flow is being delivered, the voip flow
can end up delivering an fq_codel quantum's worth of packets back to back.
Boy, I wish I could explain all this better, and/or observe the results
on real jitter buffers in real apps.
>
> in the above 3 cases, hash collisions permute the result. Cake and
> fq_pie have a lot less collisions.
Which is not necessarily a panacea either. Perfect flow isolation
(cake) to hundreds of flows might be in some cases worse than
suffering hash collisions (fq_codel) for some workloads. sch_fq and
fq_pie have *perfect* flow isolation and I worry about the effects of
tons and tons of short flows (think ddos attacks) - I am comforted by
collisions! I also tend to think there is an ideal ratio of flows
allowed without queue management versus available bandwidth that we
don't know yet, and that for larger numbers of flows we should be
inheriting more global environmental state (the state of the link and
all queues) than we currently do when initializing both cake and
fq_codel queues.
Recently I did some tests of 450+ flows (details on the cake mailing
list) against sch_fq which got hopelessly buried (10000 packets in
queue). cake and fq_pie did a lot better.
> I am generally sanguine about this along the edge - from the internet
> packets cannot be easily classified, yet most edge networks have more
> bandwidth from that direction, thus able to fit WAY more flows in
> under 10ms, and outbound, from the home or small business, some
> classification can be effectively used in a X tier shaper (or cake) to
> ensure better priority (still with fair) queuing for this special
> class of application - not that under most home workloads that this is
> an issue. We think. We really need to do more benchmarking of web and
> dash traffic loads.
>
>> Simon
>>
>> Sent with AquaMail for Android
>> http://www.aqua-mail.com
>>
>> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>>
>>> I thought would be more sane too. I see mentioned online that PDV is a
>>> gaussian distribution (around mean) but it looks more like half a bell
>>> curve, with most numbers near the the lowest latency seen, and getting
>>> progressively worse with
>>> less frequency.
>>> At least for DSL connections on good ISPs that scenario seems more
>>> frequent.
>>> You "usually" get the best latency and "sometimes" get spikes or fuzz on
>>> top of it.
>>>
>>> by the way after I posted I discovered Firefox has an issue with this test
>>> so I had
>>> to block it with a message, my apologies if anyone wasted time trying it
>>> with FF.
>>> Hopefully i can figure out why.
>>>
>>>
>>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>>> wrote:
>>>>
>>>> On Thu, 7 May 2015, jb wrote:
>>>>
>>>>> There is a web socket based jitter tester now. It is very early stage
>>>>> but
>>>>> works ok.
>>>>>
>>>>> http://www.dslreports.com/speedtest?radar=1
>>>>>
>>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is the mean
>>>>> difference between sequential pings in that same rolling buffer. It is quite
>>>>> similar to the std.dev actually (not shown).
>>>>
>>>>
>>>> So I think there are two schools here, either you take average and
>>>> display + / - from that, but I think I prefer to take the lowest of the last
>>>> 100 samples (or something), and then display PDV from that "floor" value, ie
>>>> PDV can't ever be negative, it can only be positive.
>>>>
>>>> Apart from that, the above multi-place RTT test is really really nice,
>>>> thanks for doing this!
>>>>
>>>>
>>>> --
>>>> Mikael Abrahamsson email: swmike@swm.pp.se
>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 23:09 ` Dave Taht
@ 2015-05-08 2:05 ` jb
2015-05-08 4:16 ` David Lang
2015-05-08 3:54 ` Eric Dumazet
1 sibling, 1 reply; 183+ messages in thread
From: jb @ 2015-05-08 2:05 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 10597 bytes --]
I've made some changes and now this test displays the "PDV" column as
simply the recent average increase over the best latency seen, as usually the
best latency seen is pretty stable. (It should also work in Firefox
now.)
In addition, every 30 seconds, a grade is printed next to a timestamp.
I know how we all like grades :) The grade is based on the average of all
the PDVs, and ranges from A+ (5 milliseconds or less) down to F for fail.
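Roughly this shape of mapping (only the A+ cutoff of 5 ms above is real;
the intermediate thresholds in this sketch are invented placeholders, not
what the site actually uses):

def pdv_grade(avg_pdv_ms):
    # Map the average PDV (ms) to a letter grade. Only the A+ cutoff
    # (<= 5 ms) is from the real test; the rest are placeholders.
    for cutoff, grade in [(5, "A+"), (10, "A"), (20, "B"),
                          (40, "C"), (80, "D")]:
        if avg_pdv_ms <= cutoff:
            return grade
    return "F"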
I'm not 100% happy with this PDV figure, a stellar connection - and no
internet
congestion - will show a low number that is stable and an A+ grade. A
connection
with jitter will show a PDV that is half the average jitter amplitude. So
far so good.
But a connection with almost no jitter, yet with visibly higher than
minimal latency, will show a failing grade. And if this is a jitter / packet
delay variation type test, I'm not sure about this situation. One could say
it is a very good connection, but because it is 30ms higher than just one
revealed optimal ping, it might get a "D". Not sure how common this state
of things could be though.
Also since it is a global test a component of the grade is also internet
backbone congestion, and not necessarily an ISP or equipment issue.
On Fri, May 8, 2015 at 9:09 AM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
> > On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net>
> wrote:
> >> The key figure for VoIP is maximum latency, or perhaps somewhere around
> 99th
> >> percentile. Voice packets cannot be played out if they are late, so how
> late
> >> they are is the only thing that matters. If many packets are early but
> more
> >> than a very small number are late, then the jitter buffer has to adjust
> to
> >> handle the late packets. Adjusting the jitter buffer disrupts the
> >> conversation, so ideally adjustments are infrequent. When maximum
> latency
> >> suddenly increases it becomes necessary to increase the buffer fairly
> >> quickly to avoid a dropout in the conversation. Buffer reductions can be
> >> hidden by waiting for gaps in conversation. People get used to the
> acoustic
> >> round trip latency and learn how quickly to expect a reply from the
> other
> >> person (unless latency is really too high), but adjustments interfere
> with
> >> this learned expectation, so make it hard to interpret why the other
> person
> >> has paused. Thus adjustments to the buffering should be as infrequent as
> >> possible.
> >>
> >> Codel measures and tracks minimum latency in its inner 'interval' loop.
> For
> >> VoIP the maximum is what counts. You can call it minimum + jitter, but
> the
> >> maximum is the important thing (not the absolute maximum, since a very
> small
> >> number of late packets are tolerable, but perhaps the 99th percentile).
> >>
> >> During a conversation it will take some time at the start to learn the
> >> characteristics of the link, but ideally the jitter buffer algorithm
> will
> >> quickly get to a place where few adjustments are made. The more
> conservative
> >> the buffer (higher delay above minimum) the less likely a future
> adjustment
> >> will be needed, hence a tendency towards larger buffers (and more
> delay).
> >>
> >> Priority queueing is perfect for VoIP, since it can keep the jitter at a
> >> single hop down to the transmission time for a single maximum size
> packet.
> >> Fair Queueing will often achieve the same thing, since VoIP streams are
> >> often the lowest bandwidth ongoing stream on the link.
> >
> > Unfortunately this is more nuanced than this. Not for the first time
> > do I wish that email contained math, and/or we had put together a paper
> > for this containing the relevant math. I do have a spreadsheet lying
> > around here somewhere...
> >
> > In the case of a drop tail queue, jitter is a function of the total
> > amount of data outstanding on the link by all the flows. A single
> > big fat flow experiencing a drop will drop it's buffer occupancy
> > (and thus effect on other flows) by a lot on the next RTT. However
> > a lot of fat flows will drop by less if drops are few. Total delay
> > is the sum of all packets outstanding on the link.
> >
> > In the case of stochastic packet-fair queuing jitter is a function
> > of the total number of bytes in each packet outstanding on the sum
> > of the total number of flows. The total delay is the sum of the
> > bytes delivered per packet per flow.
> >
> > In the case of DRR, jitter is a function of the total number of bytes
> > allowed by the quantum per flow outstanding on the link. The total
> > delay experienced by the flow is a function of the amounts of
> > bytes delivered with the number of flows.
> >
> > In the case of fq_codel, jitter is a function of of the total number
> > of bytes allowed by the quantum per flow outstanding on the link,
> > with the sparse optimization pushing flows with no queue
> > queue in the available window to the front. Furthermore
> > codel acts to shorten the lengths of the queues overall.
> >
> > fq_codel's delay: when the arriving in new flow packet can be serviced
> > in less time than the total number of flows' quantums, is a function
> > of the total number of flows that are not also building queues. When
> > the total service time for all flows exceeds the interval the voip
> > packet is delivered in, and AND the quantum under which the algorithm
> > is delivering, fq_codel degrades to DRR behavior. (in other words,
> > given enough queuing flows or enough new flows, you can steadily
> > accrue delay on a voip flow under fq_codel). Predicting jitter is
> > really hard to do here, but still pretty minimal compared to the
> > alternatives above.
>
> And to complexify it further if the total flows' service time exceeds
> the interval on which the voip flow is being delivered, the voip flow
> can deliver a fq_codel quantum's worth of packets back to back.
>
> Boy I wish I could explain all this better, and/or observe the results
> on real jitter buffers in real apps.
>
> >
> > in the above 3 cases, hash collisions permute the result. Cake and
> > fq_pie have a lot less collisions.
>
> Which is not necessarily a panacea either. perfect flow isolation
> (cake) to hundreds of flows might be in some cases worse that
> suffering hash collisions (fq_codel) for some workloads. sch_fq and
> fq_pie have *perfect* flow isolation and I worry about the effects of
> tons and tons of short flows (think ddos attacks) - I am comforted by
> colliisions! and tend to think there is an ideal ratio of flows
> allowed without queue management verses available bandwidth that we
> don't know yet - as well as think for larger numbers of flows we
> should be inheriting more global environmental (state of the link and
> all queues) than we currently do in initializing both cake and
> fq_codel queues.
>
> Recently I did some tests of 450+ flows (details on the cake mailing
> list) against sch_fq which got hopelessly buried (10000 packets in
> queue). cake and fq_pie did a lot better.
>
> > I am generally sanguine about this along the edge - from the internet
> > packets cannot be easily classified, yet most edge networks have more
> > bandwidth from that direction, thus able to fit WAY more flows in
> > under 10ms, and outbound, from the home or small business, some
> > classification can be effectively used in a X tier shaper (or cake) to
> > ensure better priority (still with fair) queuing for this special
> > class of application - not that under most home workloads that this is
> > an issue. We think. We really need to do more benchmarking of web and
> > dash traffic loads.
> >
> >> Simon
> >>
> >> Sent with AquaMail for Android
> >> http://www.aqua-mail.com
> >>
> >> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
> >>>
> >>> I thought would be more sane too. I see mentioned online that PDV is a
> >>> gaussian distribution (around mean) but it looks more like half a bell
> >>> curve, with most numbers near the the lowest latency seen, and getting
> >>> progressively worse with
> >>> less frequency.
> >>> At least for DSL connections on good ISPs that scenario seems more
> >>> frequent.
> >>> You "usually" get the best latency and "sometimes" get spikes or fuzz
> on
> >>> top of it.
> >>>
> >>> by the way after I posted I discovered Firefox has an issue with this
> test
> >>> so I had
> >>> to block it with a message, my apologies if anyone wasted time trying
> it
> >>> with FF.
> >>> Hopefully i can figure out why.
> >>>
> >>>
> >>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
> >>> wrote:
> >>>>
> >>>> On Thu, 7 May 2015, jb wrote:
> >>>>
> >>>>> There is a web socket based jitter tester now. It is very early stage
> >>>>> but
> >>>>> works ok.
> >>>>>
> >>>>> http://www.dslreports.com/speedtest?radar=1
> >>>>>
> >>>>> So the latency displayed is the mean latency from a rolling 60 sample
> >>>>> buffer, Minimum latency is also displayed. and the +/- PDV value is
> the mean
> >>>>> difference between sequential pings in that same rolling buffer. It
> is quite
> >>>>> similar to the std.dev actually (not shown).
> >>>>
> >>>>
> >>>> So I think there are two schools here, either you take average and
> >>>> display + / - from that, but I think I prefer to take the lowest of
> the last
> >>>> 100 samples (or something), and then display PDV from that "floor"
> value, ie
> >>>> PDV can't ever be negative, it can only be positive.
> >>>>
> >>>> Apart from that, the above multi-place RTT test is really really nice,
> >>>> thanks for doing this!
> >>>>
> >>>>
> >>>> --
> >>>> Mikael Abrahamsson email: swmike@swm.pp.se
> >>>
> >>>
> >>> _______________________________________________
> >>> Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >>>
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >
> >
> >
> > --
> > Dave Täht
> > Open Networking needs **Open Source Hardware**
> >
> > https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
[-- Attachment #2: Type: text/html, Size: 13425 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-07 23:09 ` Dave Taht
2015-05-08 2:05 ` jb
@ 2015-05-08 3:54 ` Eric Dumazet
2015-05-08 4:20 ` Dave Taht
1 sibling, 1 reply; 183+ messages in thread
From: Eric Dumazet @ 2015-05-08 3:54 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
On Thu, 2015-05-07 at 16:09 -0700, Dave Taht wrote:
> Recently I did some tests of 450+ flows (details on the cake mailing
> list) against sch_fq which got hopelessly buried (10000 packets in
> queue). cake and fq_pie did a lot better.
Seriously, comparing sch_fq against fq_pie or fq_codel or cake is quite
strange.
sch_fq has no CoDel part, it doesn't drop packets, unless you hit some
limit.
The first intent for fq was hosts, to implement TCP pacing at low cost.
Maybe you need a hybrid, and it is very possible to do that.
I recently made one change in sch_fq where non-local flows can be hashed
in a stochastic way. You could eventually add CoDel capability to such
flows.
http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=06eb395fa9856b5a87cf7d80baee2a0ed3cdb9d7
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-08 2:05 ` jb
@ 2015-05-08 4:16 ` David Lang
0 siblings, 0 replies; 183+ messages in thread
From: David Lang @ 2015-05-08 4:16 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/Plain, Size: 10787 bytes --]
On Fri, 8 May 2015, jb wrote:
> I've made some changes and now this test displays the "PDV" column as
> simply the recent average increase on the best latency seen, as usually the
> best latency seen is pretty stable. (It also should work in firefox too
> now).
>
> In addition, every 30 seconds, a grade is printed next to a timestamp.
> I know how we all like grades :) the grade is based on the average of all
> the PDVs, and ranges from A+ (5 milliseconds or less) down to F for fail.
>
> I'm not 100% happy with this PDV figure, a stellar connection - and no
> internet congestion - will show a low number that is stable and an A+ grade. A
> connection with jitter will show a PDV that is half the average jitter
> amplitude. So far so good.
>
> But a connection with almost no jitter, but that has visibly higher than
> minimal latency, will show a failing grade. And if this is a jitter / packet
> delay variation type test, I'm not sure about this situation. One could say it
> is a very good connection but because it is 30ms higher than just one revealed
> optimal ping, yet it might get a "D". Not sure how common this state of things
> could be though.
This is why the grade should be based more on the ability to induce jitter (the
additional latency under load) than on the absolute number.
100ms worth of buffer-induced latency on a 20ms connection should score far
worse than 20ms worth of induced latency on a 100ms connection.
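i.e. something along these lines (a sketch only; the thresholds are made
up to illustrate the idea, not a concrete proposal):

def induced_latency_grade(idle_rtt_ms, loaded_rtt_ms):
    # Grade on the latency the link *adds* under load, not the absolute RTT.
    induced = max(0.0, loaded_rtt_ms - idle_rtt_ms)
    for cutoff, grade in [(10, "A+"), (30, "A"), (60, "B"),
                          (90, "C"), (150, "D")]:
        if induced <= cutoff:
            return grade
    return "F"

# 20ms idle + 100ms induced -> "D"; 100ms idle + 20ms induced -> "A"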
David Lang
> Also since it is a global test a component of the grade is also internet
> backbone congestion, and not necessarily an ISP or equipment issue.
>
>
> On Fri, May 8, 2015 at 9:09 AM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> On Thu, May 7, 2015 at 3:27 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>> On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon@superduper.net>
>> wrote:
>>>> The key figure for VoIP is maximum latency, or perhaps somewhere around
>> 99th
>>>> percentile. Voice packets cannot be played out if they are late, so how
>> late
>>>> they are is the only thing that matters. If many packets are early but
>> more
>>>> than a very small number are late, then the jitter buffer has to adjust
>> to
>>>> handle the late packets. Adjusting the jitter buffer disrupts the
>>>> conversation, so ideally adjustments are infrequent. When maximum
>> latency
>>>> suddenly increases it becomes necessary to increase the buffer fairly
>>>> quickly to avoid a dropout in the conversation. Buffer reductions can be
>>>> hidden by waiting for gaps in conversation. People get used to the
>> acoustic
>>>> round trip latency and learn how quickly to expect a reply from the
>> other
>>>> person (unless latency is really too high), but adjustments interfere
>> with
>>>> this learned expectation, so make it hard to interpret why the other
>> person
>>>> has paused. Thus adjustments to the buffering should be as infrequent as
>>>> possible.
>>>>
>>>> Codel measures and tracks minimum latency in its inner 'interval' loop.
>> For
>>>> VoIP the maximum is what counts. You can call it minimum + jitter, but
>> the
>>>> maximum is the important thing (not the absolute maximum, since a very
>> small
>>>> number of late packets are tolerable, but perhaps the 99th percentile).
>>>>
>>>> During a conversation it will take some time at the start to learn the
>>>> characteristics of the link, but ideally the jitter buffer algorithm
>> will
>>>> quickly get to a place where few adjustments are made. The more
>> conservative
>>>> the buffer (higher delay above minimum) the less likely a future
>> adjustment
>>>> will be needed, hence a tendency towards larger buffers (and more
>> delay).
>>>>
>>>> Priority queueing is perfect for VoIP, since it can keep the jitter at a
>>>> single hop down to the transmission time for a single maximum size
>> packet.
>>>> Fair Queueing will often achieve the same thing, since VoIP streams are
>>>> often the lowest bandwidth ongoing stream on the link.
>>>
>>> Unfortunately this is more nuanced than this. Not for the first time
>>> do I wish that email contained math, and/or we had put together a paper
>>> for this containing the relevant math. I do have a spreadsheet lying
>>> around here somewhere...
>>>
>>> In the case of a drop tail queue, jitter is a function of the total
>>> amount of data outstanding on the link by all the flows. A single
>>> big fat flow experiencing a drop will drop it's buffer occupancy
>>> (and thus effect on other flows) by a lot on the next RTT. However
>>> a lot of fat flows will drop by less if drops are few. Total delay
>>> is the sum of all packets outstanding on the link.
>>>
>>> In the case of stochastic packet-fair queuing jitter is a function
>>> of the total number of bytes in each packet outstanding on the sum
>>> of the total number of flows. The total delay is the sum of the
>>> bytes delivered per packet per flow.
>>>
>>> In the case of DRR, jitter is a function of the total number of bytes
>>> allowed by the quantum per flow outstanding on the link. The total
>>> delay experienced by the flow is a function of the amounts of
>>> bytes delivered with the number of flows.
>>>
>>> In the case of fq_codel, jitter is a function of of the total number
>>> of bytes allowed by the quantum per flow outstanding on the link,
>>> with the sparse optimization pushing flows with no queue
>>> queue in the available window to the front. Furthermore
>>> codel acts to shorten the lengths of the queues overall.
>>>
>>> fq_codel's delay: when a packet arriving on a new flow can be serviced
>>> in less time than the total number of flows' quantums, it is a function
>>> of the total number of flows that are not also building queues. When
>>> the total service time for all flows exceeds the interval the voip
>>> packet is delivered in, AND the quantum under which the algorithm
>>> is delivering, fq_codel degrades to DRR behavior. (in other words,
>>> given enough queuing flows or enough new flows, you can steadily
>>> accrue delay on a voip flow under fq_codel). Predicting jitter is
>>> really hard to do here, but still pretty minimal compared to the
>>> alternatives above.
>>
>> And to complexify it further if the total flows' service time exceeds
>> the interval on which the voip flow is being delivered, the voip flow
>> can deliver a fq_codel quantum's worth of packets back to back.
>>
>> Boy I wish I could explain all this better, and/or observe the results
>> on real jitter buffers in real apps.
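
[Editorial aside: as a rough aid to intuition - my own back-of-the-envelope, not from the thread - the bounds being described can be put into a few lines. The quantum, packet size and link rate below are assumptions, and the fq_codel case is only the idealized sparse-flow behaviour.]

def drr_worst_wait_ms(n_backlogged, quantum_bytes=1514, link_mbps=20.0):
    """Upper bound: one quantum from each backlogged flow is served first."""
    bits = n_backlogged * quantum_bytes * 8
    return bits / (link_mbps * 1e3)   # Mbit/s -> bits per ms

def fq_codel_sparse_wait_ms(n_sparse_ahead, quantum_bytes=1514, link_mbps=20.0):
    """A sparse (e.g. VoIP) flow roughly waits only behind other sparse flows
    and the packet on the wire, while it stays under a quantum per round."""
    bits = (n_sparse_ahead + 1) * quantum_bytes * 8
    return bits / (link_mbps * 1e3)

if __name__ == "__main__":
    for flows in (5, 50, 450):
        print(flows, "backlogged flows:",
              round(drr_worst_wait_ms(flows), 1), "ms DRR vs",
              round(fq_codel_sparse_wait_ms(2), 1),
              "ms fq_codel (2 sparse flows ahead)")
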
>>
>>>
>>> In the above 3 cases, hash collisions permute the result. Cake and
>>> fq_pie have a lot fewer collisions.
>>
>> Which is not necessarily a panacea either. Perfect flow isolation
>> (cake) to hundreds of flows might in some cases be worse than
>> suffering hash collisions (fq_codel) for some workloads. sch_fq and
>> fq_pie have *perfect* flow isolation and I worry about the effects of
>> tons and tons of short flows (think ddos attacks) - I am comforted by
>> collisions! I also tend to think there is an ideal ratio of flows
>> allowed without queue management versus available bandwidth that we
>> don't know yet - as well as think that for larger numbers of flows we
>> should be inheriting more global environmental state (the state of the
>> link and all queues) than we currently do when initializing both cake
>> and fq_codel queues.
>>
>> Recently I did some tests of 450+ flows (details on the cake mailing
>> list) against sch_fq which got hopelessly buried (10000 packets in
>> queue). cake and fq_pie did a lot better.
>>
>>> I am generally sanguine about this along the edge. From the internet,
>>> packets cannot be easily classified, yet most edge networks have more
>>> bandwidth in that direction, and are thus able to fit WAY more flows in
>>> under 10ms. Outbound, from the home or small business, some
>>> classification can be effectively used in an X-tier shaper (or cake) to
>>> ensure better priority (still with fair) queuing for this special
>>> class of application - not that this is an issue under most home
>>> workloads. We think. We really need to do more benchmarking of web and
>>> dash traffic loads.
>>>
>>>> Simon
>>>>
>>>> Sent with AquaMail for Android
>>>> http://www.aqua-mail.com
>>>>
>>>> On May 7, 2015 6:16:00 AM jb <justin@dslr.net> wrote:
>>>>>
>>>>> I thought that would be more sane too. I see mentioned online that PDV is a
>>>>> gaussian distribution (around mean) but it looks more like half a bell
>>>>> curve, with most numbers near the lowest latency seen, and getting
>>>>> progressively worse with
>>>>> less frequency.
>>>>> At least for DSL connections on good ISPs that scenario seems more
>>>>> frequent.
>>>>> You "usually" get the best latency and "sometimes" get spikes or fuzz
>> on
>>>>> top of it.
>>>>>
>>>>> by the way after I posted I discovered Firefox has an issue with this
>> test
>>>>> so I had
>>>>> to block it with a message, my apologies if anyone wasted time trying
>> it
>>>>> with FF.
>>>>> Hopefully I can figure out why.
>>>>>
>>>>>
>>>>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike@swm.pp.se>
>>>>> wrote:
>>>>>>
>>>>>> On Thu, 7 May 2015, jb wrote:
>>>>>>
>>>>>>> There is a web socket based jitter tester now. It is very early stage
>>>>>>> but
>>>>>>> works ok.
>>>>>>>
>>>>>>> http://www.dslreports.com/speedtest?radar=1
>>>>>>>
>>>>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>>>>> buffer. Minimum latency is also displayed, and the +/- PDV value is
>> the mean
>>>>>>> difference between sequential pings in that same rolling buffer. It
>> is quite
>>>>>>> similar to the std.dev actually (not shown).
>>>>>>
>>>>>>
>>>>>> So I think there are two schools here: either you take the average and
>>>>>> display +/- from that, or (as I prefer) take the lowest of the last
>>>>>> 100 samples (or something), and then display PDV from that "floor"
>>>>>> value, i.e. PDV can't ever be negative, it can only be positive.
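
[Editorial aside: for what it's worth, a small sketch - not the site's actual code - of the two schools side by side: the +/- figure as the mean absolute difference between sequential pings in a rolling 60-sample buffer, versus the one-sided PDV measured up from the floor (lowest recent sample).]

from collections import deque

class RollingPdv:
    def __init__(self, window=60):
        self.samples = deque(maxlen=window)   # recent RTTs in ms

    def add(self, rtt_ms):
        self.samples.append(rtt_ms)

    def sequential_pdv(self):
        """Mean |difference| between consecutive pings (the +/- figure)."""
        s = list(self.samples)
        if len(s) < 2:
            return 0.0
        return sum(abs(b - a) for a, b in zip(s, s[1:])) / (len(s) - 1)

    def floor_pdv(self):
        """Mean delay above the lowest sample; never negative."""
        s = list(self.samples)
        if not s:
            return 0.0
        floor = min(s)
        return sum(x - floor for x in s) / len(s)

# example: a mostly-flat link with occasional spikes
pdv = RollingPdv()
for rtt in [40, 41, 40, 43, 40, 70, 41, 40, 42, 90, 40]:
    pdv.add(rtt)
print(round(pdv.sequential_pdv(), 1), "ms sequential,",
      round(pdv.floor_pdv(), 1), "ms above floor")
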
>>>>>>
>>>>>> Apart from that, the above multi-place RTT test is really really nice,
>>>>>> thanks for doing this!
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Mikael Abrahamsson email: swmike@swm.pp.se
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>
>>>>
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>
>>>
>>>
>>> --
>>> Dave Täht
>>> Open Networking needs **Open Source Hardware**
>>>
>>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>
[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-05-08 3:54 ` Eric Dumazet
@ 2015-05-08 4:20 ` Dave Taht
0 siblings, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-05-08 4:20 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
On Thu, May 7, 2015 at 8:54 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Thu, 2015-05-07 at 16:09 -0700, Dave Taht wrote:
>
>> Recently I did some tests of 450+ flows (details on the cake mailing
>> list) against sch_fq which got hopelessly buried (10000 packets in
>> queue). cake and fq_pie did a lot better.
>
> Seriously, comparing sch_fq against fq_pie or fq_codel or cake is quite
> strange.
I have been beating back the folk that think fq is all you need for almost
as long as the pure aqm'rs. Sometimes it makes me do crazy stuff like
that test... or use pfifo_fast... just to have the data.
In the 450 flow test, I was actually trying to exercise cake's ultimate
deal-with-the-8way-set-associative collision code path, and ran the
other qdiscs for giggles. I did not expect sch_fq to hit a backlog of
10k packets, frankly.
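
[Editorial aside: for anyone who wants to watch that kind of backlog build up themselves, a rough sketch - assuming a Linux box with tc installed and the right interface name - that samples the qdisc backlog once a second while a load like the 450-flow test runs.]

import re, subprocess, time

def backlog_packets(dev="eth0"):
    """Parse 'backlog <bytes>b <packets>p' lines from `tc -s qdisc show`."""
    out = subprocess.run(["tc", "-s", "qdisc", "show", "dev", dev],
                         capture_output=True, text=True, check=True).stdout
    return sum(int(m) for m in re.findall(r"backlog \d+b (\d+)p", out))

if __name__ == "__main__":
    for _ in range(30):                       # watch for 30 seconds
        print(time.strftime("%H:%M:%S"), backlog_packets(), "packets queued")
        time.sleep(1)
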
> sch_fq has no CoDel part, it doesn't drop packets, unless you hit some
> limit.
> First intent for fq was for hosts, to implement TCP pacing at low cost.
that was a test on a 1gige server running those qdiscs, not a router.
I am sure the pacing bit works well when the host does not saturate
its own card, but when it does, oh, my!
>
> Maybe you need an hybrid, and this is very possible to do that.
fq_pie did well, as did cake.
> I did recently one change in sch_fq. where non local flows can be hashed
> in a stochastic way. You could eventually add CoDel capability to such
> flows.
>
> http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=06eb395fa9856b5a87cf7d80baee2a0ed3cdb9d7
I am aware of that patch.
My own take on things was that TSQ needs to be more aware of the total
number of flows in this case.
>
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Jitter/PDV test
2015-05-07 10:44 ` jb
2015-05-07 11:36 ` Sebastian Moeller
2015-05-07 11:44 ` Mikael Abrahamsson
@ 2015-05-08 13:20 ` Rich Brown
2015-05-08 14:22 ` jb
2 siblings, 1 reply; 183+ messages in thread
From: Rich Brown @ 2015-05-08 13:20 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 711 bytes --]
On May 7, 2015, at 6:44 AM, jb <justin@dslr.net> wrote:
> There is a web socket based jitter tester now. It is very early stage but works ok.
>
> http://www.dslreports.com/speedtest?radar=1
I was surprised to see how *good* a test the websocket could be. It appears to add only a couple msec over the ICMP Echo timings.
Here are a few samples from the web page. The first four columns show the values from the PDV test; the final column is the min/avg/max/stddev from ping.
NY, USA 162.248.95.144 41 +3ms 38.677/40.405/43.192/1.269 ms
CO, USA 72.5.102.138 80 +3ms 79.305/81.531/85.514/1.540 ms
LA, USA 162.248.93.162 108 +5ms 105.225/106.540/108.358/0.877 ms
Nice work, Justin!
Rich
[-- Attachment #2: Type: text/html, Size: 3006 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Jitter/PDV test
2015-05-08 13:20 ` [Bloat] DSLReports Jitter/PDV test Rich Brown
@ 2015-05-08 14:22 ` jb
0 siblings, 0 replies; 183+ messages in thread
From: jb @ 2015-05-08 14:22 UTC (permalink / raw)
To: Rich Brown; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 1151 bytes --]
I was surprised as well. I wasn't that impressed with web sockets for a
while, then realised it was the server side holding them back.
They are also interesting because you can play with asymmetry. For example:
ping with 1 byte up but 1k down, or vice versa. You can't do that with
ICMP.
They also don't seem to be too demanding on CPU.
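
[Editorial aside: a minimal sketch of that asymmetric-payload idea, using the third-party Python "websockets" package against a hypothetical echo endpoint that pads its reply to the size named in the request. The URL and message convention here are invented, not the dslreports protocol.]

import asyncio, time
import websockets   # pip install websockets

ECHO_URL = "wss://example.invalid/echo"   # hypothetical echo endpoint

async def ws_rtt(reply_bytes=1024, count=20):
    """Tiny request up, padded reply down - the asymmetry ICMP can't do."""
    rtts = []
    async with websockets.connect(ECHO_URL) as ws:
        for _ in range(count):
            t0 = time.monotonic()
            await ws.send(str(reply_bytes))   # ask the server for reply_bytes back
            await ws.recv()
            rtts.append((time.monotonic() - t0) * 1000)
    return min(rtts), sum(rtts) / len(rtts)

if __name__ == "__main__":
    lo, avg = asyncio.run(ws_rtt())
    print(f"min {lo:.1f} ms, mean {avg:.1f} ms")
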
On Fri, May 8, 2015 at 11:20 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>
> On May 7, 2015, at 6:44 AM, jb <justin@dslr.net> wrote:
>
> There is a web socket based jitter tester now. It is very early stage but
> works ok.
>
> http://www.dslreports.com/speedtest?radar=1
>
>
> I was surprised to see how *good* a test the websocket could be. It
> appears to add only a couple msec over the ICMP Echo timings.
>
> Here are a few samples from the web page. The first four columns show the
> values from the PDV test; the final column is the min/avg/max/stddev from
> ping.
>
> NY, USA 162.248.95.144 41 +3ms 38.677/40.405/43.192/1.269 ms
>
> CO, USA 72.5.102.138 80 +3ms 79.305/81.531/85.514/1.540 ms
>
> LA, USA 162.248.93.162 108 +5ms 105.225/106.540/108.358/0.877 ms
>
> Nice work, Justin!
>
> Rich
>
[-- Attachment #2: Type: text/html, Size: 3221 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 5:00 ` jb
@ 2015-04-27 16:28 ` Dave Taht
0 siblings, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-27 16:28 UTC (permalink / raw)
To: jb; +Cc: bloat
On Thu, Apr 23, 2015 at 10:00 PM, jb <justin@dslr.net> wrote:
> The problem with metronome pinging is when there is a stall, they all pile
> up, then when they are released, you get this illusion of a 45 degree slope
> of diminishing pings. They all came back at the same instant (the 30 second
> mark), but were all sent at the ticks along the X-Axis. So from 15 to 30
> seconds was basically a total stall.
Ugh. To clarify what you wrote here: is it "the problem with metronome
pinging *over TCP/IP* is when you get a stall..."?
I keep thinking that leveraging a webrtc (udp) for this test would be good...
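
[Editorial aside: the 45-degree artifact is easy to reproduce on paper. A tiny sketch with synthetic numbers - no network involved - shows why pings sent on a metronome but released together plot as a descending slope.]

TICK_MS = 500
STALL_START_MS, STALL_END_MS = 15_000, 30_000   # 15s stall, as in the test above

send_times = list(range(0, 35_000, TICK_MS))
rows = []
for t in send_times:
    base_rtt = 50
    if STALL_START_MS <= t < STALL_END_MS:
        arrival = STALL_END_MS + base_rtt      # everything queued behind the stall
    else:
        arrival = t + base_rtt
    rows.append((t / 1000, arrival - t))       # (send time in s, measured RTT in ms)

for sent, rtt in rows[28:34]:
    print(f"sent at {sent:5.1f}s -> RTT {rtt:6.0f} ms")
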
> I'm not sure what to do with that information, just there it is.
> Oh and the latency was so high, everything else is invisible, and even the
> red barrier, which I arbitrarily set to end at 9999 ms, is exceeded.
PITA, isn't it?
I have had a variety of (mostly extraordinary amounts of bufferbloat)
failures with this test thus far, and I have this fear that users (and
your back end db) will discard them as noise - when they really
aren't. Is there/will there be a way to look hard at the failures?
> Regarding the other suggestions on the list, I'll certainly try to
> incorporate all of them. I'm the sole programmer on this and spend half of
> each day doing other stuff too. So I only get an hour or two to add features
> or improve existing ones. If something is suggested but not implemented, or
> noted but not fixed it is just my time management failing.
>
> I've a continuing problem with browsers under Linux getting bogged down when
> asked to do these little charts so if there is no gauge visible during the
> test, that is the reason, but hopefully the measurements are still being
> taken, and will be in the results.
>
> I agree that browsers are not ideal for testing, although most now seem ok
> at delegating to the OS most of the job of receiving or sending, and do not
> appear to be writing to disk. But I think a desktop OS is always going to give
> priority to interactive applications over "background" network activity.
>
> However it does seem that connection speeds are improving at a similar, but
> not faster, rate to new hardware, so perhaps it will never be an issue.
> There is always the option of linking two or more devices with browsers
> behind the same IP, and synchronising a test so all start at the same
> instant. I don't know about everyone else, but we have two iMacs, one
> tablet, two iPhones, and a laptop in the house. Even if I get the National
> Broadband Network fiber to the home this year, and a 1 gig port - and pigs
> fly - it'll be no problem overwhelming the port with the gadgets on hand,
> and the browsers they come with.
>
> On Fri, Apr 24, 2015 at 2:04 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>
>> Summary result from my hotel, with 11 seconds of bloat.
>>
>> http://www.dslreports.com/speedtest/353034
>>
>> On Thu, Apr 23, 2015 at 8:39 PM, Dave Taht <dave.taht@gmail.com> wrote:
>> > On Thu, Apr 23, 2015 at 8:20 PM, Jim Gettys <jg@freedesktop.org> wrote:
>> >> I love the test, and thanks for the video!
>> >>
>> >> There is an interesting problem: some paths have for all intents and
>> >> purposes infinite buffering, so you can end up with not just seconds,
>> >> but
>> >> even minutes of bloat. The current interplanetary record for
>> >> bufferbloat is
>> >> GoGo inflight, at 760(!) seconds of buffering (using netperf-wrapper,
>> >> RRUL
>> >> test, on several flights); Mars is closer to Earth than that for part
>> >> of its
>> >> orbit. I've seen 60 seconds or so on some XFinity WiFi and similar
>> >> amounts
>> >> of bloat on some cellular systems. Exactly how quickly one might fill
>> >> such
>> >> buffers depends on the details of load parts of a test.
>> >>
>> >> Please don't try the netperf-wrapper test on GoGo; all the users on the
>> >> plane will suffer, and their Squid proxy dies entirely. And the
>> >> government
>> >> just asked people to report "hackers" on airplanes.... Would that GoGo
>> >> listen to the mail and tweets I sent them to try to report the problem
>> >> to
>> >> them.... If anyone on the list happens to know someone from GoGo, I'd
>> >> like
>> >> to hear about it.
>> >
>> > I have also sent mail and tweets to no effect.
>> >
>> > I hereby donate 1k to the "bufferbloat testing vs gogo-in-flight legal
>> > defense fund". Anyone that gets busted by testing for bufferbloat on
>> > an airplane using these new tools or the rrul test can tap me for
>> > that. Anyone else willing to chip in?[1]
>> >
>> > I note that tweeting in the air after such a test might be impossible
>> > (on at least one bloat test done so far the connection never came
>> > back) so you'd probably have to tweet something like
>> >
>> > "I am about to test for bufferbloat on my flight. If I do not tweet
>> > again for the next 4 hours, I blew up gogo-in-flight, and expect to be
>> > met by secret service agents on landing with no sense of humor about
>> > how network congestion control is supposed to work."
>> >
>> > FIRST. (and shrink the above to 140 pithy characters)
>> >
>> > [1] I guess this makes me liable for inciting someone to do a network
>> > test, also, which I hope is not illegal (?). I personally don't want
>> > to do the test as I have better things to do than rewrite Walden and
>> > am not fond of roommates named "bubba".
>> >
>> > ... but I admit to being tempted.
>> >
>> >> On Thu, Apr 23, 2015 at 7:40 PM, Dave Taht <dave.taht@gmail.com> wrote:
>> >>>
>> >>> On Thu, Apr 23, 2015 at 6:58 PM, Rich Brown <richb.hanover@gmail.com>
>> >>> wrote:
>> >>> >
>> >>> > On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
>> >>> >
>> >>> >> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown
>> >>> >> <richb.hanover@gmail.com>
>> >>> >> wrote:
>> >>> >>> Hi Justin,
>> >>> >>>
>> >>> >>> The newest Speed Test is great! It is more convincing than I even
>> >>> >>> thought it would be. These comments are focused on the "theater"
>> >>> >>> of the
>> >>> >>> measurements, so that they are unambiguous, and that people can
>> >>> >>> figure out
>> >>> >>> what's happening
>> >>> >>>
>> >>> >>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to
>> >>> >>> illustrate my points. NB: I turned fq_codel off for this demo, so
>> >>> >>> that the
>> >>> >>> results would be more extreme.
>> >>> >>>
>> >>> >>> 1) It would be great to label the gauge as "Latency (msec)" I love
>> >>> >>> the
>> >>> >>> term "bufferbloat" as much as the next guy, but the Speed Test
>> >>> >>> page should
>> >>> >>> call the measurement what it really is. (The help page can explain
>> >>> >>> that the
>> >>> >>> latency is almost certainly caused by bufferbloat, but that should
>> >>> >>> be the
>> >>> >>> place it's mentioned.)
>> >>> >>
>> >>> >> I would prefer that it say "bufferbloat (lag in msec)" there,
>> >>> >
>> >>> > OK - I change my opinion. I'll be fussy and say it should be
>> >>> > "Bufferbloat (lag) in msec"
>> >>>
>> >>> Works for me.
>> >>>
>> >>> >
>> >>> >> ... rather
>> >>> >> than make people look up another word buried in the doc. Sending
>> >>> >> people to the right thing, at the getgo, is important - looking for
>> >>> >> "lag" on the internet takes you to a lot of wrong places,
>> >>> >> misinformation, and snake oil. So perhaps in doc page I would have
>> >>> >> an
>> >>> >> explanation of lag as it relates to bufferbloat and other possible
>> >>> >> causes of these behaviors.
>> >>> >>
>> >>> >> I also do not see the gauge in my linux firefox, that you are
>> >>> >> showing
>> >>> >> on youtube. Am I using a wrong link. I LOVE this gauge, however.
>> >>> >
>> >>> > I see this as well (Firefox in Linux). It seems OK in other browser
>> >>> > combinations. (I have *not* done the full set of variations...)
>> >>> >
>> >>> > If this is a matter that FF won't show that gizmo, perhaps there
>> >>> > could
>> >>> > be a rectangle (like the blue/red ones) for Latency that shows:
>> >>> >
>> >>> > Latency
>> >>> > Down: min/avg/max
>> >>> > Up: min/avg/max
>> >>> >
>> >>> >> Lastly, the static radar plot of pings occupies center stage yet
>> >>> >> does
>> >>> >> not do anything later in the test. Either animating it to show the
>> >>> >> bloat, or moving it off of center stage and the new bloat gauge to
>> >>> >> center stage after it sounds the net, would be good.
>> >>> >
>> >>> > I have also wondered about whether we should find a way to add
>> >>> > further
>> >>> > value to the radar display. I have not yet come up with useful
>> >>> > suggestions,
>> >>> > though.
>> >>>
>> >>> Stick it center stage and call it the "Bloat-o-Meter?"
>> >>>
>> >>> >> bufferbloat as a single word is quite googlable to good resources,
>> >>> >> and
>> >>> >> there is some activity on fixing up wikipedia going on that I like
>> >>> >> a
>> >>> >> lot.
>> >>> >>
>> >>> >>>
>> >>> >>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am
>> >>> >>> guessing that it's showing incremental latency above the nominal
>> >>> >>> value
>> >>> >>> measured during the initial setup. I recommend that the gauge
>> >>> >>> always show
>> >>> >>> actual latency. Thus the gauge could start at 45 msec (0:11 in the
>> >>> >>> video)
>> >>> >>> then change during the measurements.
>> >>> >>>
>> >>> >>> 3) I was a bit confused by the behavior of the gauge before/after
>> >>> >>> the
>> >>> >>> test. I'd like it to change only when something else is
>> >>> >>> moving in the
>> >>> >>> window. Here are some suggestions for what would make it clearer:
>> >>> >>> - The gauge should not change until the graph starts
>> >>> >>> moving. I
>> >>> >>> found it confusing to see the latency jump up at 0:13 just before
>> >>> >>> the blue
>> >>> >>> download chart started, or at 0:28 before the upload chart started
>> >>> >>> at 0:31.
>> >>> >>> - Between the download and upload tests, the gauge should
>> >>> >>> drop
>> >>> >>> back to the nominal measured values. I think it does.
>> >>> >>> - After the test, the gauge should also drop back to the
>> >>> >>> nominal measured value. It seems stuck at 4928 msec (0:55).
>> >>> >>
>> >>> >> We had/have a lot of this problem in netperf-wrapper - a lot of
>> >>> >> data
>> >>> >> tends to accumulate at the end of the test(s) and pollute the last
>> >>> >> few
>> >>> >> data points in bloated scenarios. You have to wait for the queues
>> >>> >> to
>> >>> >> drain to get a "clean" test - although this begins to show what
>> >>> >> actually happen when the link is buried in both directions.
>> >>> >>
>> >>> >> Is there any chance to add a simultaneous up+down+ping test at the
>> >>> >> conclusion?
>> >>> >
>> >>> > This confuses the "speed test" notion of this site. Since the flow
>> >>> > of
>> >>> > ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I
>> >>> > am
>> >>> > concerned that people would wonder why their upload bandwidth
>> >>> > suddenly went
>> >>> > down dramatically...
>> >>>
>> >>> To me, that would help. Far too many think that data just arrives by
>> >>> magic and doesn't have an ack clock.
>> >>>
>> >>> > Given that other speed test sites only show upload/download, I would
>> >>> > vote to keep that format here. Perhaps there could be an
>> >>> > option/preference/setting to do up/down/ping .
>> >>> >
>> >>> >>> 4) I like the way the latency gauge changes color during the test.
>> >>> >>> It's OK for it to use the color to indicate an "opinion". Are you
>> >>> >>> happy with
>> >>> >>> the thresholds for yellow & red colors?
>> >>> >>
>> >>> >> It is not clear to me what they are.
>> >>> >>
>> >>> >>> 5) The gauge makes it appear that moderate latency - 765 msec
>> >>> >>> (0:29) -
>> >
>> > --
>> > Dave Täht
>> > Open Networking needs **Open Source Hardware**
>> >
>> > https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 21:45 ` Rich Brown
@ 2015-04-25 1:14 ` jb
0 siblings, 0 replies; 183+ messages in thread
From: jb @ 2015-04-25 1:14 UTC (permalink / raw)
To: Rich Brown, bloat
[-- Attachment #1: Type: text/plain, Size: 1166 bytes --]
I'd better at least do the suggested color banding, and wait till the
network gets to quiescence between phases.
Currently there is a 5 second delay between the download and the start of the
upload; however, it is just automatic: it does not wait for the latency to
drop back and everything to drain. However, even if the upload starts off
with issues from the download, it does not change the speed result.
But since we don't want to see any download bloat being tagged as upload
bloat, it isn't ideal as it stands.
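
[Editorial aside: a minimal sketch of what that wait-for-quiescence gate between phases could look like, assuming a hypothetical probe_latency_ms() callback that returns one fresh RTT sample; the thresholds are placeholders, not the site's values.]

import time

def wait_for_quiescence(probe_latency_ms, baseline_ms,
                        margin_ms=10.0, settle_probes=3, timeout_s=15.0):
    """Block until a few consecutive probes fall back near the idle baseline,
    or give up after timeout_s (and report that the queue never drained)."""
    deadline = time.monotonic() + timeout_s
    good = 0
    while time.monotonic() < deadline:
        if probe_latency_ms() <= baseline_ms + margin_ms:
            good += 1
            if good >= settle_probes:
                return True        # safe to start the upload phase
        else:
            good = 0
        time.sleep(0.25)
    return False                   # still bloated; flag it in the results
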
I can also extend the upload phase as incoming bandwidth is free.
Another enhancement coming will be the ability to select the congestion
choice on the server side. This will be reserved just for advanced users
but it would be neat to play with.
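
[Editorial aside: for reference, on Linux the usual way to offer that per-connection choice is the TCP_CONGESTION socket option. A rough sketch with a recent Python on Linux; the port and algorithm names are just examples, and only algorithms the kernel has loaded will be accepted.]

import socket

def set_cc(sock, algo="cubic"):
    """Pick the congestion control algorithm for one accepted connection.
    Available names are listed in
    /proc/sys/net/ipv4/tcp_available_congestion_control."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))
srv.listen(8)
conn, addr = srv.accept()
set_cc(conn, "westwood")      # e.g. let an advanced user request westwood
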
On Sat, Apr 25, 2015 at 7:45 AM, Rich Brown <richb.hanover@gmail.com> wrote:
> I just updated the Quick Test for Bufferbloat page to list DSLReports
> first (and call it the "Easy Test for Bufferbloat"). See
> http://www.bufferbloat.net/projects/cerowrt/wiki/Quick_Test_for_Bufferbloat
>
> The page still describes the ping & speed test procedure if people want to
> use the old/harder test procedure.
>
> Rich
>
[-- Attachment #2: Type: text/html, Size: 1814 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 20:19 ` jb
2015-04-23 20:39 ` Dave Taht
@ 2015-04-24 21:45 ` Rich Brown
2015-04-25 1:14 ` jb
1 sibling, 1 reply; 183+ messages in thread
From: Rich Brown @ 2015-04-24 21:45 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 323 bytes --]
I just updated the Quick Test for Bufferbloat page to list DSLReports first (and call it the "Easy Test for Bufferbloat"). See http://www.bufferbloat.net/projects/cerowrt/wiki/Quick_Test_for_Bufferbloat
The page still describes the ping & speed test procedure if people want to use the old/harder test procedure.
Rich
[-- Attachment #2: Type: text/html, Size: 599 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 4:17 ` Dave Taht
@ 2015-04-24 16:13 ` Rick Jones
0 siblings, 0 replies; 183+ messages in thread
From: Rick Jones @ 2015-04-24 16:13 UTC (permalink / raw)
To: bloat
On 04/23/2015 09:17 PM, Dave Taht wrote:
> OK, I have had a little more fun fooling with this.
>
> A huge problem all speedtest-like results have in general is that the
> test does not run long enough. About 20 seconds is needed for either
> up or down to reach maximum bloat, and many networks have been
> "optimized" to look good on the "normal" speedtests which are shorter
> than that. It appears this test only runs for about 15 seconds, and
> you can clearly see from this result
>
> http://www.dslreports.com/speedtest/353034
>
> that we have clogged up the link in one direction which has not
> cleared in time for the next test to start.
Perhaps then that phase of the test shouldn't be considered complete?
In a netperf TCP_STREAM or TCP_MAERTS test, the test isn't over until
there is an indication the last byte has arrived. As such, what one
might request to be a 30 second test can end-up with say a 34 second
elapsed time. That is I suppose along the lines of your proposal C.
> C) waiting for the bloat to clear would be a good idea before starting
> the next test.... and showing "waiting for the bloat to clear" as
> part of the result would be good also.
rick jones
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 3:39 ` Dave Taht
2015-04-24 4:04 ` Dave Taht
@ 2015-04-24 16:09 ` Rick Jones
1 sibling, 0 replies; 183+ messages in thread
From: Rick Jones @ 2015-04-24 16:09 UTC (permalink / raw)
To: bloat
On 04/23/2015 08:39 PM, Dave Taht wrote:
> I have also sent mail and tweets to no effect.
>
> I hereby donate 1k to the "bufferbloat testing vs gogo-in-flight legal
> defense fund". Anyone that gets busted by testing for bufferbloat on
> an airplane using these new tools or the rrul test can tap me for
> that. Anyone else willing to chip in?[1]
>
> I note that tweeting in the air after such a test might be impossible
> (on at least one bloat test done so far the connection never came
> back) so you'd probably have to tweet something like
>
> "I am about to test for bufferbloat on my flight. If I do not tweet
> again for the next 4 hours, I blew up gogo-in-flight, and expect to be
> met by secret service agents on landing with no sense of humor about
> how network congestion control is supposed to work."
>
> FIRST. (and shrink the above to 140 pithy characters)
>
> [1] I guess this makes me liable for inciting someone to do a network
> test, also, which I hope is not illegal (?). I personally don't want
> to do the test as I have better things to do than rewrite Walden and
> am not fond of roommates named "bubba".
>
> ... but I admit to being tempted.
Please don't. I don't want netperf classified as a munition :)
Plausible deniability might be your friend. "All I was doing was trying
to [upload my 1 GB powerpoint presentation I just finished | download
the 1 GB powerpoint presentation I had to work on ] and it seemed to be
taking forever, so I was worried something was wrong and decided to ping
to check connectivity. That is when I noticed the latency was so high..."
rick jones
Anyone who wants to read the story about how netperf took out a
corporate video security system - twice - and why the netperf UDP_STREAM
test is no longer over a routable socket by default should feel free to
contact me.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 1:58 ` Rich Brown
2015-04-24 2:40 ` Dave Taht
@ 2015-04-24 13:49 ` Pedro Tumusok
1 sibling, 0 replies; 183+ messages in thread
From: Pedro Tumusok @ 2015-04-24 13:49 UTC (permalink / raw)
To: Rich Brown; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2133 bytes --]
On Fri, Apr 24, 2015 at 3:58 AM, Rich Brown <richb.hanover@gmail.com> wrote:
>
> On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
> >
> > We had/have a lot of this problem in netperf-wrapper - a lot of data
> > tends to accumulate at the end of the test(s) and pollute the last few
> > data points in bloated scenarios. You have to wait for the queues to
> > drain to get a "clean" test - although this begins to show what
> > actually happen when the link is buried in both directions.
> >
> > Is there any chance to add a simultaneous up+down+ping test at the
> conclusion?
>
> This confuses the "speed test" notion of this site. Since the flow of
> ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I am
> concerned that people would wonder why their upload bandwidth suddenly went
> down dramatically...
>
> Given that other speed test sites only show upload/download, I would vote
> to keep that format here. Perhaps there could be an
> option/preference/setting to do up/down/ping .
>
>
But isn't the point to have the best/most relevant speed test in the business?
How can you innovate or do something new if you make sure you do not
deviate too much from the competition?
Most users like that shiny new-gadget feeling. So the test could do the
normal down and then up, tell you those numbers are the ideal numbers you
can get, and then do the down/up+ping test, explaining that in reality it
is closer to the service you are actually getting for your money.
As for ms thresholds, my $0.02 thinks that there should be some "real
world" correlation. Most people know how long 1 second is, but have no
idea how it impacts stuff. Especially since most people think 1 second is
very quick and most likely negligible. Showing milliseconds probably does
not ring a bell to most people.
So put in pointers like: at this latency (bufferbloat induced or not) VoIP
will not work anymore; at this latency online gaming means you have to be
psychic to win; and at this next level, you probably should think about
making a cup of coffee while you wait for your favorite webpage to display.
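
[Editorial aside: one way to phrase those pointers in the UI is a simple banding function; the cut-offs below are purely illustrative - loosely inspired by the common ~150 ms guidance for interactive voice - not anything the site has adopted.]

def latency_verdict(added_latency_ms):
    """Map bufferbloat (added latency under load) to a plain-language hint.
    Thresholds are illustrative, not calibrated."""
    if added_latency_ms < 30:
        return "barely noticeable for anything"
    if added_latency_ms < 100:
        return "fine for browsing; VoIP and gaming start to feel it"
    if added_latency_ms < 400:
        return "VoIP talk-over and laggy gaming likely"
    if added_latency_ms < 2000:
        return "interactive use painful; go make that coffee"
    return "effectively unusable while loaded"

for ms in (20, 80, 300, 1500, 5000):
    print(ms, "ms ->", latency_verdict(ms))
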
Pedro
[-- Attachment #2: Type: text/html, Size: 2782 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 4:04 ` Dave Taht
2015-04-24 4:17 ` Dave Taht
@ 2015-04-24 5:00 ` jb
2015-04-27 16:28 ` Dave Taht
1 sibling, 1 reply; 183+ messages in thread
From: jb @ 2015-04-24 5:00 UTC (permalink / raw)
To: Dave Taht, bloat
[-- Attachment #1: Type: text/plain, Size: 11306 bytes --]
The problem with metronome pinging is when there is a stall, they all pile
up, then when they are released, you get this illusion of a 45 degree slope
of diminishing pings. They all came back at the same instant (the 30 second
mark), but were all sent at the ticks along the X-Axis. So from 15 to 30
seconds was basically a total stall.
I'm not sure what to do with that information, just there it is.
Oh and the latency was so high, everything else is invisible, and even the
red barrier, which I arbitrarily set to end at 9999 ms, is exceeded.
Regarding the other suggestions on the list, I'll certainly try to
incorporate all of them. I'm the sole programmer on this and spend half of
each day doing other stuff too. So I only get an hour or two to add
features or improve existing ones. If something is suggested but not
implemented, or noted but not fixed it is just my time management failing.
I've a continuing problem with browsers under Linux getting bogged down
when asked to do these little charts so if there is no gauge visible during
the test, that is the reason, but hopefully the measurements are still
being taken, and will be in the results.
I agree that browsers are not ideal for testing, although most now seem ok
at delegating to the OS most of the job of receiving or sending, and do not
appear to be writing to disk. But I think a desktop OS is always going to give
priority to interactive applications over "background" network activity.
However it does seem that connection speeds are improving at a similar, but
not faster, rate to new hardware, so perhaps it will never be an issue.
There is always the option of linking two or more devices with browsers
behind the same IP, and synchronising a test so all start at the same
instant. I don't know about everyone else, but we have two iMacs, one
tablet, two iPhones, and a laptop in the house. Even if I get the National
Broadband Network fiber to the home this year, and a 1 gig port - and pigs
fly - it'll be no problem overwhelming the port with the gadgets on hand,
and the browsers they come with.
On Fri, Apr 24, 2015 at 2:04 PM, Dave Taht <dave.taht@gmail.com> wrote:
> Summary result from my hotel, with 11 seconds of bloat.
>
> http://www.dslreports.com/speedtest/353034
>
> On Thu, Apr 23, 2015 at 8:39 PM, Dave Taht <dave.taht@gmail.com> wrote:
> > On Thu, Apr 23, 2015 at 8:20 PM, Jim Gettys <jg@freedesktop.org> wrote:
> >> I love the test, and thanks for the video!
> >>
> >> There is an interesting problem: some paths have for all intents and
> >> purposes infinite buffering, so you can end up with not just seconds,
> but
> >> even minutes of bloat. The current interplanetary record for
> bufferbloat is
> >> GoGo inflight, at 760(!) seconds of buffering (using netperf-wrapper,
> RRUL
> >> test, on several flights); Mars is closer to Earth than that for part
> of its
> >> orbit. I've seen 60 seconds or so on some XFinity WiFi and similar
> amounts
> >> of bloat on some cellular systems. Exactly how quickly one might fill
> such
> >> buffers depends on the details of load parts of a test.
> >>
> >> Please don't try the netperf-wrapper test on GoGo; all the users on the
> >> plane will suffer, and their Squid proxy dies entirely. And the
> government
> >> just asked people to report "hackers" on airplanes.... Would that GoGo
> >> listen to the mail and tweets I sent them to try to report the problem
> to
> >> them.... If anyone on the list happens to know someone from GoGo, I'd
> like
> >> to hear about it.
> >
> > I have also sent mail and tweets to no effect.
> >
> > I hereby donate 1k to the "bufferbloat testing vs gogo-in-flight legal
> > defense fund". Anyone that gets busted by testing for bufferbloat on
> > an airplane using these new tools or the rrul test can tap me for
> > that. Anyone else willing to chip in?[1]
> >
> > I note that tweeting in the air after such a test might be impossible
> > (on at least one bloat test done so far the connection never came
> > back) so you'd probably have to tweet something like
> >
> > "I am about to test for bufferbloat on my flight. If I do not tweet
> > again for the next 4 hours, I blew up gogo-in-flight, and expect to be
> > met by secret service agents on landing with no sense of humor about
> > how network congestion control is supposed to work."
> >
> > FIRST. (and shrink the above to 140 pithy characters)
> >
> > [1] I guess this makes me liable for inciting someone to do a network
> > test, also, which I hope is not illegal (?). I personally don't want
> > to do the test as I have better things to do than rewrite Walden and
> > am not fond of roommates named "bubba".
> >
> > ... but I admit to being tempted.
> >
> >> On Thu, Apr 23, 2015 at 7:40 PM, Dave Taht <dave.taht@gmail.com> wrote:
> >>>
> >>> On Thu, Apr 23, 2015 at 6:58 PM, Rich Brown <richb.hanover@gmail.com>
> >>> wrote:
> >>> >
> >>> > On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
> >>> >
> >>> >> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <
> richb.hanover@gmail.com>
> >>> >> wrote:
> >>> >>> Hi Justin,
> >>> >>>
> >>> >>> The newest Speed Test is great! It is more convincing than I even
> >>> >>> thought it would be. These comments are focused on the "theater"
> of the
> >>> >>> measurements, so that they are unambiguous, and that people can
> figure out
> >>> >>> what's happening
> >>> >>>
> >>> >>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to
> >>> >>> illustrate my points. NB: I turned fq_codel off for this demo, so
> that the
> >>> >>> results would be more extreme.
> >>> >>>
> >>> >>> 1) It would be great to label the gauge as "Latency (msec)" I love
> the
> >>> >>> term "bufferbloat" as much as the next guy, but the Speed Test
> page should
> >>> >>> call the measurement what it really is. (The help page can explain
> that the
> >>> >>> latency is almost certainly caused by bufferbloat, but that should
> be the
> >>> >>> place it's mentioned.)
> >>> >>
> >>> >> I would prefer that it say "bufferbloat (lag in msec)" there,
> >>> >
> >>> > OK - I change my opinion. I'll be fussy and say it should be
> >>> > "Bufferbloat (lag) in msec"
> >>>
> >>> Works for me.
> >>>
> >>> >
> >>> >> ... rather
> >>> >> than make people look up another word buried in the doc. Sending
> >>> >> people to the right thing, at the getgo, is important - looking for
> >>> >> "lag" on the internet takes you to a lot of wrong places,
> >>> >> misinformation, and snake oil. So perhaps in doc page I would have
> an
> >>> >> explanation of lag as it relates to bufferbloat and other possible
> >>> >> causes of these behaviors.
> >>> >>
> >>> >> I also do not see the gauge in my linux firefox, that you are
> showing
> >>> >> on youtube. Am I using a wrong link. I LOVE this gauge, however.
> >>> >
> >>> > I see this as well (Firefox in Linux). It seems OK in other browser
> >>> > combinations. (I have *not* done the full set of variations...)
> >>> >
> >>> > If this is a matter that FF won't show that gizmo, perhaps there
> could
> >>> > be a rectangle (like the blue/red ones) for Latency that shows:
> >>> >
> >>> > Latency
> >>> > Down: min/avg/max
> >>> > Up: min/avg/max
> >>> >
> >>> >> Lastly, the static radar plot of pings occupies center stage yet
> does
> >>> >> not do anything later in the test. Either animating it to show the
> >>> >> bloat, or moving it off of center stage and the new bloat gauge to
> >>> >> center stage after it sounds the net, would be good.
> >>> >
> >>> > I have also wondered about whether we should find a way to add
> further
> >>> > value to the radar display. I have not yet come up with useful
> suggestions,
> >>> > though.
> >>>
> >>> Stick it center stage and call it the "Bloat-o-Meter?"
> >>>
> >>> >> bufferbloat as a single word is quite googlable to good resources,
> and
> >>> >> there is some activity on fixing up wikipedia going on that I like a
> >>> >> lot.
> >>> >>
> >>> >>>
> >>> >>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am
> >>> >>> guessing that it's showing incremental latency above the nominal
> value
> >>> >>> measured during the initial setup. I recommend that the gauge
> always show
> >>> >>> actual latency. Thus the gauge could start at 45 msec (0:11 in the
> video)
> >>> >>> then change during the measurements.
> >>> >>>
> >>> >>> 3) I was a bit confused by the behavior of the gauge before/after
> the
> >>> >>> test. I'd like it to change only when something else is
> moving in the
> >>> >>> window. Here are some suggestions for what would make it clearer:
> >>> >>> - The gauge should not change until the graph starts
> moving. I
> >>> >>> found it confusing to see the latency jump up at 0:13 just before
> the blue
> >>> >>> download chart started, or at 0:28 before the upload chart started
> at 0:31.
> >>> >>> - Between the download and upload tests, the gauge should
> drop
> >>> >>> back to the nominal measured values. I think it does.
> >>> >>> - After the test, the gauge should also drop back to the
> >>> >>> nominal measured value. It seems stuck at 4928 msec (0:55).
> >>> >>
> >>> >> We had/have a lot of this problem in netperf-wrapper - a lot of data
> >>> >> tends to accumulate at the end of the test(s) and pollute the last
> few
> >>> >> data points in bloated scenarios. You have to wait for the queues to
> >>> >> drain to get a "clean" test - although this begins to show what
> >>> >> actually happen when the link is buried in both directions.
> >>> >>
> >>> >> Is there any chance to add a simultaneous up+down+ping test at the
> >>> >> conclusion?
> >>> >
> >>> > This confuses the "speed test" notion of this site. Since the flow of
> >>> > ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I
> am
> >>> > concerned that people would wonder why their upload bandwidth
> suddenly went
> >>> > down dramatically...
> >>>
> >>> To me, that would help. Far too many think that data just arrives by
> >>> magic and doesn't have an ack clock.
> >>>
> >>> > Given that other speed test sites only show upload/download, I would
> >>> > vote to keep that format here. Perhaps there could be an
> >>> > option/preference/setting to do up/down/ping .
> >>> >
> >>> >>> 4) I like the way the latency gauge changes color during the test.
> >>> >>> It's OK for it to use the color to indicate an "opinion". Are you
> happy with
> >>> >>> the thresholds for yellow & red colors?
> >>> >>
> >>> >> It is not clear to me what they are.
> >>> >>
> >>> >>> 5) The gauge makes it appear that moderate latency - 765 msec
> (0:29) -
> >
> > --
> > Dave Täht
> > Open Networking needs **Open Source Hardware**
> >
> > https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 14855 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 4:16 ` Mikael Abrahamsson
@ 2015-04-24 4:24 ` Dave Taht
0 siblings, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-24 4:24 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
On Thu, Apr 23, 2015 at 9:16 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 23 Apr 2015, Mikael Abrahamsson wrote:
>
>> It's actually remarkably un-bloated...
>
>
> I re-did the test again at 6 in the morning,
> http://www.dslreports.com/speedtest/353094 , and it's still not bloated.
>
> I'm actually very happy for my connection from a bloat point of view, I can
> do an scp at 50 megabit/s towards an Internet host and my increased latency
> is usually in the 5-20ms range and I can't really tell whilst doing
> interactive things that this scp is actually going on. Same in the
> downstream direction.
ssh(scp) has windowing issues at higher rates of its own. Or at
least, it used to.
If I could get you to do a netperf-wrapper rrul_be test on the same
link, I would believe you are getting a valid result. Remember that
one machine running a test program may not be able to mess up the
network, but most people/families/offices have way more people and way
more machines.
I have successfully got netperf-wrapper to build on OSX on macports,
but not brew.
Alternatively, load up the link with a bunch of netperfs while running
a ping. Since it is hard to get 250Mbit service, here's a rough
equivalent of the
rrul test aimed at multiple sites.
ping 8.8.8.8 > ping.log &
netperf -H netperf-eu.bufferbloat.net -l 300 -t TCP_MAERTS &
netperf -H netperf-eu.bufferbloat.net -l 300 -t TCP_MAERTS &
netperf -H netperf-west.bufferbloat.net -l 300 -t TCP_MAERTS &
netperf -H netperf-east.bufferbloat.net -l 300 -t TCP_MAERTS &
netperf -H netperf-eu.bufferbloat.net -l 300 -t TCP_STREAM &
netperf -H netperf-eu.bufferbloat.net -l 300 -t TCP_STREAM &
netperf -H netperf-west.bufferbloat.net -l 300 -t TCP_STREAM &
netperf -H netperf-east.bufferbloat.net -l 300 -t TCP_STREAM &
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 4:04 ` Dave Taht
@ 2015-04-24 4:17 ` Dave Taht
2015-04-24 16:13 ` Rick Jones
2015-04-24 5:00 ` jb
1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-24 4:17 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
OK, I have had a little more fun fooling with this.
A huge problem all speedtest-like results have in general is that the
test does not run long enough. About 20 seconds is needed for either
up or down to reach maximum bloat, and many networks have been
"optimized" to look good on the "normal" speedtests which are shorter
than that. It appears this test only runs for about 15 seconds, and
you can clearly see from this result
http://www.dslreports.com/speedtest/353034
that we have clogged up the link in one direction which has not
cleared in time for the next test to start.
While consumer patience is limited, I would
A) recommend increasing the duration of the upload and download tests
by at least 5, maybe 10 seconds. I note that this change, made
universally across more tests, would also make for better tests.
Everyone's impression of how their connection is working is shaped by
speedtest not running long enough.
B) However, the reason why I designed the 60 second long rrul
(up+down+ping) test was that it detects maximum bloat in minimal time,
usually peaking in about 20 seconds on every access technology we have
at reasonable RTTs. I would rather like a test like that in this
dslreports suite.
I keep hammering away at the idea that bloat happens in both
directions - which is most easily created by bittorrent-like behavior
- but just as easily created by someone, or their family, or their
officemates doing an upload and download on the same link. Fixing the
CMTSes and head-ends also needs to happen. No matter how much we can
improve matters with an inbound shaper on cpe/home routers, doing it
right on the head-ends is MUCH nicer, lower jitter, etc.
C) waiting for the bloat to clear would be a good idea before starting
the next test.... and showing "waiting for the bloat to clear" as
part of the result would be good also.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 18:04 ` Mikael Abrahamsson
` (2 preceding siblings ...)
2015-04-23 21:44 ` Rich Brown
@ 2015-04-24 4:16 ` Mikael Abrahamsson
2015-04-24 4:24 ` Dave Taht
3 siblings, 1 reply; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-04-24 4:16 UTC (permalink / raw)
To: bloat
On Thu, 23 Apr 2015, Mikael Abrahamsson wrote:
> It's actually remarkably un-bloated...
I re-did the test again at 6 in the morning,
http://www.dslreports.com/speedtest/353094 , and it's still not bloated.
I'm actually very happy for my connection from a bloat point of view, I
can do an scp at 50 megabit/s towards an Internet host and my increased
latency is usually in the 5-20ms range and I can't really tell whilst
doing interactive things that this scp is actually going on. Same in the
downstream direction.
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 3:39 ` Dave Taht
@ 2015-04-24 4:04 ` Dave Taht
2015-04-24 4:17 ` Dave Taht
2015-04-24 5:00 ` jb
2015-04-24 16:09 ` Rick Jones
1 sibling, 2 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-24 4:04 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
Summary result from my hotel, with 11 seconds of bloat.
http://www.dslreports.com/speedtest/353034
On Thu, Apr 23, 2015 at 8:39 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, Apr 23, 2015 at 8:20 PM, Jim Gettys <jg@freedesktop.org> wrote:
>> I love the test, and thanks for the video!
>>
>> There is an interesting problem: some paths have for all intents and
>> purposes infinite buffering, so you can end up with not just seconds, but
>> even minutes of bloat. The current interplanetary record for bufferbloat is
>> GoGo inflight, at 760(!) seconds of buffering (using netperf-wrapper, RRUL
>> test, on several flights); Mars is closer to Earth than that for part of its
>> orbit. I've seen 60 seconds or so on some XFinity WiFi and similar amounts
>> of bloat on some cellular systems. Exactly how quickly one might fill such
>> buffers depends on the details of load parts of a test.
>>
>> Please don't try the netperf-wrapper test on GoGo; all the users on the
>> plane will suffer, and their Squid proxy dies entirely. And the government
>> just asked people to report "hackers" on airplanes.... Would that GoGo
>> listen to the mail and tweets I sent them to try to report the problem to
>> them.... If anyone on the list happens to know someone from GoGo, I'd like
>> to hear about it.
>
> I have also sent mail and tweets to no effect.
>
> I hereby donate 1k to the "bufferbloat testing vs gogo-in-flight legal
> defense fund". Anyone that gets busted by testing for bufferbloat on
> an airplane using these new tools or the rrul test can tap me for
> that. Anyone else willing to chip in?[1]
>
> I note that tweeting in the air after such a test might be impossible
> (on at least one bloat test done so far the connection never came
> back) so you'd probably have to tweet something like
>
> "I am about to test for bufferbloat on my flight. If I do not tweet
> again for the next 4 hours, I blew up gogo-in-flight, and expect to be
> met by secret service agents on landing with no sense of humor about
> how network congestion control is supposed to work."
>
> FIRST. (and shrink the above to 140 pithy characters)
>
> [1] I guess this makes me liable for inciting someone to do a network
> test, also, which I hope is not illegal (?). I personally don't want
> to do the test as I have better things to do than rewrite Walden and
> am not fond of roommates named "bubba".
>
> ... but I admit to being tempted.
>
>> On Thu, Apr 23, 2015 at 7:40 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>> On Thu, Apr 23, 2015 at 6:58 PM, Rich Brown <richb.hanover@gmail.com>
>>> wrote:
>>> >
>>> > On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>> >
>>> >> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <richb.hanover@gmail.com>
>>> >> wrote:
>>> >>> Hi Justin,
>>> >>>
>>> >>> The newest Speed Test is great! It is more convincing than I even
>>> >>> thought it would be. These comments are focused on the "theater" of the
>>> >>> measurements, so that they are unambiguous, and that people can figure out
>>> >>> what's happening
>>> >>>
>>> >>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to
>>> >>> illustrate my points. NB: I turned fq_codel off for this demo, so that the
>>> >>> results would be more extreme.
>>> >>>
>>> >>> 1) It would be great to label the gauge as "Latency (msec)" I love the
>>> >>> term "bufferbloat" as much as the next guy, but the Speed Test page should
>>> >>> call the measurement what it really is. (The help page can explain that the
>>> >>> latency is almost certainly caused by bufferbloat, but that should be the
>>> >>> place it's mentioned.)
>>> >>
>>> >> I would prefer that it say "bufferbloat (lag in msec)" there,
>>> >
>>> > OK - I change my opinion. I'll be fussy and say it should be
>>> > "Bufferbloat (lag) in msec"
>>>
>>> Works for me.
>>>
>>> >
>>> >> ... rather
>>> >> than make people look up another word buried in the doc. Sending
>>> >> people to the right thing, at the getgo, is important - looking for
>>> >> "lag" on the internet takes you to a lot of wrong places,
>>> >> misinformation, and snake oil. So perhaps in doc page I would have an
>>> >> explanation of lag as it relates to bufferbloat and other possible
>>> >> causes of these behaviors.
>>> >>
>>> >> I also do not see the gauge in my linux firefox, that you are showing
>>> >> on youtube. Am I using a wrong link. I LOVE this gauge, however.
>>> >
>>> > I see this as well (Firefox in Linux). It seems OK in other browser
>>> > combinations. (I have *not* done the full set of variations...)
>>> >
>>> > If this is a matter that FF won't show that gizmo, perhaps there could
>>> > be a rectangle (like the blue/red ones) for Latency that shows:
>>> >
>>> > Latency
>>> > Down: min/avg/max
>>> > Up: min/avg/max
>>> >
>>> >> Lastly, the static radar plot of pings occupies center stage yet does
>>> >> not do anything later in the test. Either animating it to show the
>>> >> bloat, or moving it off of center stage and the new bloat gauge to
>>> >> center stage after it sounds the net, would be good.
>>> >
>>> > I have also wondered about whether we should find a way to add further
>>> > value to the radar display. I have not yet come up with useful suggestions,
>>> > though.
>>>
>>> Stick it center stage and call it the "Bloat-o-Meter?"
>>>
>>> >> bufferbloat as a single word is quite googlable to good resources, and
>>> >> there is some activity on fixing up wikipedia going on that I like a
>>> >> lot.
>>> >>
>>> >>>
>>> >>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am
>>> >>> guessing that it's showing incremental latency above the nominal value
>>> >>> measured during the initial setup. I recommend that the gauge always show
>>> >>> actual latency. Thus the gauge could start at 45 msec (0:11 in the video)
>>> >>> then change during the measurements.
>>> >>>
>>> >>> 3) I was a bit confused by the behavior of the gauge before/after the
>>> >>> test. I'd like it to change only when something else is moving in the
>>> >>> window. Here are some suggestions for what would make it clearer:
>>> >>> - The gauge should not change until the graph starts moving. I
>>> >>> found it confusing to see the latency jump up at 0:13 just before the blue
>>> >>> download chart started, or at 0:28 before the upload chart started at 0:31.
>>> >>> - Between the download and upload tests, the gauge should drop
>>> >>> back to the nominal measured values. I think it does.
>>> >>> - After the test, the gauge should also drop back to the
>>> >>> nominal measured value. It seems stuck at 4928 msec (0:55).
>>> >>
>>> >> We had/have a lot of this problem in netperf-wrapper - a lot of data
>>> >> tends to accumulate at the end of the test(s) and pollute the last few
>>> >> data points in bloated scenarios. You have to wait for the queues to
>>> >> drain to get a "clean" test - although this begins to show what
>>> >> actually happen when the link is buried in both directions.
>>> >>
>>> >> Is there any chance to add a simultaneous up+down+ping test at the
>>> >> conclusion?
>>> >
>>> > This confuses the "speed test" notion of this site. Since the flow of
>>> > ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I am
>>> > concerned that people would wonder why their upload bandwidth suddenly went
>>> > down dramatically...
>>>
>>> To me, that would help. Far too many think that data just arrives by
>>> magic and doesn't have an ack clock.
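
[Editorial aside: a quick back-of-the-envelope on that ack clock point, with made-up but typical numbers - 1500-byte segments, one delayed ACK per two segments, roughly 64 bytes per ACK on the wire.]

def ack_upstream_mbps(down_mbps, mss=1500, acks_per_segment=0.5, ack_bytes=64):
    segments_per_s = down_mbps * 1e6 / (mss * 8)
    return segments_per_s * acks_per_segment * ack_bytes * 8 / 1e6

# e.g. a 50/3 Mbit/s cable or ADSL2+-ish link while downloading at full rate
down, up = 50.0, 3.0
acks = ack_upstream_mbps(down)
print(f"~{acks:.2f} Mbit/s of ACKs, {100 * acks / up:.0f}% of a {up} Mbit/s uplink")
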
>>>
>>> > Given that other speed test sites only show upload/download, I would
>>> > vote to keep that format here. Perhaps there could be an
>>> > option/preference/setting to do up/down/ping .
>>> >
>>> >>> 4) I like the way the latency gauge changes color during the test.
>>> >>> It's OK for it to use the color to indicate an "opinion". Are you happy with
>>> >>> the thresholds for yellow & red colors?
>>> >>
>>> >> It is not clear to me what they are.
>>> >>
>>> >>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) -
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 3:20 ` Jim Gettys
@ 2015-04-24 3:39 ` Dave Taht
2015-04-24 4:04 ` Dave Taht
2015-04-24 16:09 ` Rick Jones
0 siblings, 2 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-24 3:39 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
On Thu, Apr 23, 2015 at 8:20 PM, Jim Gettys <jg@freedesktop.org> wrote:
> I love the test, and thanks for the video!
>
> There is an interesting problem: some paths have for all intents and
> purposes infinite buffering, so you can end up with not just seconds, but
> even minutes of bloat. The current interplanetary record for bufferbloat is
> GoGo inflight, at 760(!) seconds of buffering (using netperf-wrapper, RRUL
> test, on several flights); Mars is closer to Earth than that for part of its
> orbit. I've seen 60 seconds or so on some XFinity WiFi and similar amounts
> of bloat on some cellular systems. Exactly how quickly one might fill such
> buffers depends on the details of the load phases of a test.
>
> Please don't try the netperf-wrapper test on GoGo; all the users on the
> plane will suffer, and their Squid proxy dies entirely. And the government
> just asked people to report "hackers" on airplanes.... Would that GoGo
> listen to the mail and tweets I sent them to try to report the problem to
> them.... If anyone on the list happens to know someone from GoGo, I'd like
> to hear about it.
I have also sent mail and tweets to no effect.
I hereby donate 1k to the "bufferbloat testing vs gogo-in-flight legal
defense fund". Anyone that gets busted by testing for bufferbloat on
an airplane using these new tools or the rrul test can tap me for
that. Anyone else willing to chip in?[1]
I note that tweeting in the air after such a test might be impossible
(on at least one bloat test done so far the connection never came
back) so you'd probably have to tweet something like
"I am about to test for bufferbloat on my flight. If I do not tweet
again for the next 4 hours, I blew up gogo-in-flight, and expect to be
met by secret service agents on landing with no sense of humor about
how network congestion control is supposed to work."
FIRST. (and shrink the above to 140 pithy characters)
[1] I guess this makes me liable for inciting someone to do a network
test, also, which I hope is not illegal (?). I personally don't want
to do the test as I have better things to do than rewrite Walden and
am not fond of roommates named "Bubba".
... but I admit to being tempted.
> On Thu, Apr 23, 2015 at 7:40 PM, Dave Taht <dave.taht@gmail.com> wrote:
>>
>> On Thu, Apr 23, 2015 at 6:58 PM, Rich Brown <richb.hanover@gmail.com>
>> wrote:
>> >
>> > On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
>> >
>> >> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <richb.hanover@gmail.com>
>> >> wrote:
>> >>> Hi Justin,
>> >>>
>> >>> The newest Speed Test is great! It is more convincing than I even
>> >>> thought it would be. These comments are focused on the "theater" of the
>> >>> measurements, so that they are unambiguous, and that people can figure out
>> >>> what's happening
>> >>>
>> >>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to
>> >>> illustrate my points. NB: I turned fq_codel off for this demo, so that the
>> >>> results would be more extreme.
>> >>>
>> >>> 1) It would be great to label the gauge as "Latency (msec)" I love the
>> >>> term "bufferbloat" as much as the next guy, but the Speed Test page should
>> >>> call the measurement what it really is. (The help page can explain that the
>> >>> latency is almost certainly caused by bufferbloat, but that should be the
>> >>> place it's mentioned.)
>> >>
>> >> I would prefer that it say "bufferbloat (lag in msec)" there,
>> >
>> > OK - I change my opinion. I'll be fussy and say it should be
>> > "Bufferbloat (lag) in msec"
>>
>> Works for me.
>>
>> >
>> >> ... rather
>> >> than make people look up another word buried in the doc. Sending
>> >> people to the right thing, at the getgo, is important - looking for
>> >> "lag" on the internet takes you to a lot of wrong places,
>> >> misinformation, and snake oil. So perhaps in doc page I would have an
>> >> explanation of lag as it relates to bufferbloat and other possible
>> >> causes of these behaviors.
>> >>
>> >> I also do not see the gauge in my linux firefox, that you are showing
>> >> on youtube. Am I using the wrong link? I LOVE this gauge, however.
>> >
>> > I see this as well (Firefox in Linux). It seems OK in other browser
>> > combinations. (I have *not* done the full set of variations...)
>> >
>> > If this is a matter that FF won't show that gizmo, perhaps there could
>> > be a rectangle (like the blue/red ones) for Latency that shows:
>> >
>> > Latency
>> > Down: min/avg/max
>> > Up: min/avg/max
>> >
>> >> Lastly, the static radar plot of pings occupies center stage yet does
>> >> not do anything later in the test. Either animating it to show the
>> >> bloat, or moving it off of center stage and the new bloat gauge to
>> >> center stage after it sounds the net, would be good.
>> >
>> > I have also wondered about whether we should find a way to add further
>> > value to the radar display. I have not yet come up with useful suggestions,
>> > though.
>>
>> Stick it center stage and call it the "Bloat-o-Meter?"
>>
>> >> bufferbloat as a single word is quite googlable to good resources, and
>> >> there is some activity on fixing up wikipedia going on that I like a
>> >> lot.
>> >>
>> >>>
>> >>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am
>> >>> guessing that it's showing incremental latency above the nominal value
>> >>> measured during the initial setup. I recommend that the gauge always show
>> >>> actual latency. Thus the gauge could start at 45 msec (0:11 in the video)
>> >>> then change during the measurements.
>> >>>
>> >>> 3) I was a bit confused by the behavior of the gauge before/after the
>> >>> test. I'd like it to change only when something else is moving in the
>> >>> window. Here are some suggestions for what would make it clearer:
>> >>> - The gauge should not change until the graph starts moving. I
>> >>> found it confusing to see the latency jump up at 0:13 just before the blue
>> >>> download chart started, or at 0:28 before the upload chart started at 0:31.
>> >>> - Between the download and upload tests, the gauge should drop
>> >>> back to the nominal measured values. I think it does.
>> >>> - After the test, the gauge should also drop back to the
>> >>> nominal measured value. It seems stuck at 4928 msec (0:55).
>> >>
>> >> We had/have a lot of this problem in netperf-wrapper - a lot of data
>> >> tends to accumulate at the end of the test(s) and pollute the last few
>> >> data points in bloated scenarios. You have to wait for the queues to
>> >> drain to get a "clean" test - although this begins to show what
>> >> actually happens when the link is buried in both directions.
>> >>
>> >> Is there any chance to add a simultaneous up+down+ping test at the
>> >> conclusion?
>> >
>> > This confuses the "speed test" notion of this site. Since the flow of
>> > ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I am
>> concerned that people would wonder why their upload bandwidth suddenly went
>> > down dramatically...
>>
>> To me, that would help. Far too many think that data just arrives by
>> magic and doesn't have an ack clock.
>>
>> > Given that other speed test sites only show upload/download, I would
>> > vote to keep that format here. Perhaps there could be an
>> > option/preference/setting to do up/down/ping .
>> >
>> >>> 4) I like the way the latency gauge changes color during the test.
>> >>> It's OK for it to use the color to indicate an "opinion". Are you happy with
>> >>> the thresholds for yellow & red colors?
>> >>
>> >> It is not clear to me what they are.
>> >>
>> >>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) -
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 2:40 ` Dave Taht
@ 2015-04-24 3:20 ` Jim Gettys
2015-04-24 3:39 ` Dave Taht
0 siblings, 1 reply; 183+ messages in thread
From: Jim Gettys @ 2015-04-24 3:20 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 11205 bytes --]
I love the test, and thanks for the video!
There is an interesting problem: some paths have for all intents and
purposes infinite buffering, so you can end up with not just seconds, but
even minutes of bloat. The current interplanetary record for bufferbloat
is GoGo inflight, at 760(!) seconds of buffering (using netperf-wrapper,
RRUL test, on several flights); Mars is closer to Earth than that for part
of its orbit. I've seen 60 seconds or so on some XFinity WiFi and similar
amounts of bloat on some cellular systems. Exactly how quickly one might
fill such buffers depends on the details of the load phases of a test.
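To put a rough number on it (illustrative figures only, since I don't
know GoGo's actual bottleneck rate): queueing delay is queued bytes
divided by drain rate, so at a ~500 kbit/s bottleneck, 760 seconds of
delay corresponds to about 760 x 500,000 / 8 = roughly 47 MB sitting in
the buffer.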
Please don't try the netperf-wrapper test on GoGo; all the users on the
plane will suffer, and their Squid proxy dies entirely. And the government
just asked people to report "hackers" on airplanes.... Would that GoGo
listen to the mail and tweets I sent them to try to report the problem to
them.... If anyone on the list happens to know someone from GoGo, I'd like
to hear about it.
But I agree that a log display hides just how bad things are; in some
cases, rescaling the display probably needs to happen, possibly more than
once as the buffers fill during the tests.
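A sketch of the kind of rescaling I mean (made-up helper, not anything
the page actually does): bump the gauge's full-scale value whenever a
sample gets near the current ceiling.
function nextFullScaleMs(currentMaxMs: number, sampleMs: number): number {
  let max = currentMaxMs;
  while (sampleMs > 0.8 * max) max *= 2; // may rescale more than once per test
  return max;
}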
- Jim
On Thu, Apr 23, 2015 at 7:40 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, Apr 23, 2015 at 6:58 PM, Rich Brown <richb.hanover@gmail.com>
> wrote:
> >
> > On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
> >
> >> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <richb.hanover@gmail.com>
> wrote:
> >>> Hi Justin,
> >>>
> >>> The newest Speed Test is great! It is more convincing than I even
> thought it would be. These comments are focused on the "theater" of the
> measurements, so that they are unambiguous, and that people can figure out
> what's happening
> >>>
> >>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to
> illustrate my points. NB: I turned fq_codel off for this demo, so that the
> results would be more extreme.
> >>>
> >>> 1) It would be great to label the gauge as "Latency (msec)" I love the
> term "bufferbloat" as much as the next guy, but the Speed Test page should
> call the measurement what it really is. (The help page can explain that the
> latency is almost certainly caused by bufferbloat, but that should be the
> place it's mentioned.)
> >>
> >> I would prefer that it say "bufferbloat (lag in msec)" there,
> >
> > OK - I change my opinion. I'll be fussy and say it should be
> "Bufferbloat (lag) in msec"
>
> Works for me.
>
> >
> >> ... rather
> >> than make people look up another word buried in the doc. Sending
> >> people to the right thing, at the getgo, is important - looking for
> >> "lag" on the internet takes you to a lot of wrong places,
> >> misinformation, and snake oil. So perhaps in doc page I would have an
> >> explanation of lag as it relates to bufferbloat and other possible
> >> causes of these behaviors.
> >>
> >> I also do not see the gauge in my linux firefox, that you are showing
> >> on youtube. Am I using the wrong link? I LOVE this gauge, however.
> >
> > I see this as well (Firefox in Linux). It seems OK in other browser
> combinations. (I have *not* done the full set of variations...)
> >
> > If this is a matter that FF won't show that gizmo, perhaps there could
> be a rectangle (like the blue/red ones) for Latency that shows:
> >
> > Latency
> > Down: min/avg/max
> > Up: min/avg/max
> >
> >> Lastly, the static radar plot of pings occupies center stage yet does
> >> not do anything later in the test. Either animating it to show the
> >> bloat, or moving it off of center stage and the new bloat gauge to
> >> center stage after it sounds the net, would be good.
> >
> > I have also wondered about whether we should find a way to add further
> value to the radar display. I have not yet come up with useful suggestions,
> though.
>
> Stick it center stage and call it the "Bloat-o-Meter?"
>
> >> bufferbloat as a single word is quite googlable to good resources, and
> >> there is some activity on fixing up wikipedia going on that I like a
> >> lot.
> >>
> >>>
> >>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am
> guessing that it's showing incremental latency above the nominal value
> measured during the initial setup. I recommend that the gauge always show
> actual latency. Thus the gauge could start at 45 msec (0:11 in the video)
> then change during the measurements.
> >>>
> >>> 3) I was a bit confused by the behavior of the gauge before/after the
> test. I'd like it to change only when something else is moving in the
> window. Here are some suggestions for what would make it clearer:
> >>> - The gauge should not change until the graph starts moving. I
> found it confusing to see the latency jump up at 0:13 just before the blue
> download chart started, or at 0:28 before the upload chart started at 0:31.
> >>> - Between the download and upload tests, the gauge should drop
> back to the nominal measured values. I think it does.
> >>> - After the test, the gauge should also drop back to the
> nominal measured value. It seems stuck at 4928 msec (0:55).
> >>
> >> We had/have a lot of this problem in netperf-wrapper - a lot of data
> >> tends to accumulate at the end of the test(s) and pollute the last few
> >> data points in bloated scenarios. You have to wait for the queues to
> >> drain to get a "clean" test - although this begins to show what
> >> actually happens when the link is buried in both directions.
> >>
> >> Is there any chance to add a simultaneous up+down+ping test at the
> conclusion?
> >
> > This confuses the "speed test" notion of this site. Since the flow of
> ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I am
> concerned that people would wonder why their upload bandwidth suddenly went
> down dramatically...
>
> To me, that would help. Far too many think that data just arrives by
> magic and doesn't have an ack clock.
>
> > Given that other speed test sites only show upload/download, I would
> vote to keep that format here. Perhaps there could be an
> option/preference/setting to do up/down/ping .
> >
> >>> 4) I like the way the latency gauge changes color during the test.
> It's OK for it to use the color to indicate an "opinion". Are you happy
> with the thresholds for yellow & red colors?
> >>
> >> It is not clear to me what they are.
> >>
> >>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) -
> is the same as when the value goes to 1768 msec (0:31), and also when it
> goes to 4,447 msec (0:35), etc. It might make more sense to have the
> chart's full-scale at something like 10 seconds during the test. The scale
> could be logarithmic, so that "normal" values occupy up to a third or half
> of scale, and bad values get pretty close to the top end. Horrible latency
> - greater than 10 sec, say - should peg the indicator at full scale.
> >>
> >> I am generally resistant to log scales as misleading an untrained eye.
> >> In this case I can certainly see the gauge behaving almost as
> >> described above, except that I would nearly flatline the gauge at >
> >> 250ms, and add indicators for higher rates at the outer edges of the
> >> graph.
> >
> > I am suggesting a slightly different representation. Instead of
> flatlining (I assume that you mean full-scale indicator of the gauge) at
> 250msec (which I agree is bad), perhaps the gauge could use the levels I
> sent in a previous message...
>
> Nearly full scale (graph in red), with a few additional lines just
> past that with the snarky annotations. (think of a classic automobile
> rpm tach)
>
> >
> >> I agree that the results graph should never be logarithmic - it hides
> the bad
> >> news of high latency.
> >>
> >> However, the gauge that shows instantaneous latency could be
> logarithmic. I was
> >> reacting to the appearance of slamming against the limit at 765 msec,
> then not making it
> >> more evident when latency jumped to 1768 msec, then to 4447 msec.
>
> Well the actual number above the beyond redlined gauge makes for a
> good screenshot.
>
> >> Imagine the same gauge, with the following gradations at these clock
> positions,
> >> with the bar colored to match:
> >>
> >> 0 msec - 9:00 (straight to the left)
> >> 25 msec - 10:00
> >> 100 msec - 11:00
> >> 250 msec - 12:00
> >> 1,000 msec - 1:00
> >> 3,000 msec - 2:00
> >> 10,000+ msec - 3:00 (straight to the right)
> >
> >> I can see staying below 30ms induced latency as "green", below 100ms
> >> as "blue", below 250ms as yellow, and > 250ms as red, and a line
> >> marking "ridiculous" at > 1sec, and "insane" at 2sec would be good.
> >> Other pithy markings at the end of the tach would be fun. For example,
> >> gogo in flight has the interplanetary record for bufferbloat,
> >> something like 760 seconds the last time we tried it, so a 3rd line on
> >> the tach for earth-mars distances would be amusing.
> >
> > These color schemes sound good. It could indeed be amusing to have
> editorial comments ("ludicrous bloat") for the worst situations.
> >
> >> In the long term, somehow detecting if FQ is in play would be good,
> >> but I have no idea how to do that in a browser.
> >>
> >>> 6) On the Results page (1:20), I like the red background behind the
> latency values. I don't understand why the grey bars at the right end of
> the chart are so high. Is the latency still decreasing as the queue drains?
> Perhaps the ping tests should run longer until it gets closer to the
> nominal value.
> >>
> >> I would suspect the queues are still draining, filled with missing
> >> acknowledgements, etc, etc. waiting until the ping returned closer to
> >> normal before starting the next phase of the test would help.
> >
> > This sounds right to me.
> >
> >>> This is such a great tool. Thanks!
> >>
> >> I am very, very, very delighted also. I hope that with tools like
> >> these in more users hands, AND the data collected from them, that we
> >> can make a logarithmic jump in the number of users, devices, and ISPs
> >> that have good bandwidth and low latency in the near future.
> >>
> >> Thank you very, very much for the work.
> >>
> >> As a side note, justin, have you fixed your own bloat at home?
> >>
> >>>
> >>> Rich
> >>> _______________________________________________
> >>> Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >>
> >>
> >> --
> >> Dave Täht
> >> Open Networking needs **Open Source Hardware**
> >>
> >> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> >
> > Rich
> >
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 14119 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-24 1:58 ` Rich Brown
@ 2015-04-24 2:40 ` Dave Taht
2015-04-24 3:20 ` Jim Gettys
2015-04-24 13:49 ` Pedro Tumusok
1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-24 2:40 UTC (permalink / raw)
To: Rich Brown; +Cc: bloat
On Thu, Apr 23, 2015 at 6:58 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>
> On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>>> Hi Justin,
>>>
>>> The newest Speed Test is great! It is more convincing than I even thought it would be. These comments are focused on the "theater" of the measurements, so that they are unambiguous, and that people can figure out what's happening
>>>
>>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to illustrate my points. NB: I turned fq_codel off for this demo, so that the results would be more extreme.
>>>
>>> 1) It would be great to label the gauge as "Latency (msec)" I love the term "bufferbloat" as much as the next guy, but the Speed Test page should call the measurement what it really is. (The help page can explain that the latency is almost certainly caused by bufferbloat, but that should be the place it's mentioned.)
>>
>> I would prefer that it say "bufferbloat (lag in msec)" there,
>
> OK - I change my opinion. I'll be fussy and say it should be "Bufferbloat (lag) in msec"
Works for me.
>
>> ... rather
>> than make people look up another word buried in the doc. Sending
>> people to the right thing, at the getgo, is important - looking for
>> "lag" on the internet takes you to a lot of wrong places,
>> misinformation, and snake oil. So perhaps in doc page I would have an
>> explanation of lag as it relates to bufferbloat and other possible
>> causes of these behaviors.
>>
>> I also do not see the gauge in my linux firefox, that you are showing
>> on youtube. Am I using the wrong link? I LOVE this gauge, however.
>
> I see this as well (Firefox in Linux). It seems OK in other browser combinations. (I have *not* done the full set of variations...)
>
> If this is a matter that FF won't show that gizmo, perhaps there could be a rectangle (like the blue/red ones) for Latency that shows:
>
> Latency
> Down: min/avg/max
> Up: min/avg/max
>
>> Lastly, the static radar plot of pings occupies center stage yet does
>> not do anything later in the test. Either animating it to show the
>> bloat, or moving it off of center stage and the new bloat gauge to
>> center stage after it sounds the net, would be good.
>
> I have also wondered about whether we should find a way to add further value to the radar display. I have not yet come up with useful suggestions, though.
Stick it center stage and call it the "Bloat-o-Meter?"
>> bufferbloat as a single word is quite googlable to good resources, and
>> there is some activity on fixing up wikipedia going on that I like a
>> lot.
>>
>>>
>>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am guessing that it's showing incremental latency above the nominal value measured during the initial setup. I recommend that the gauge always show actual latency. Thus the gauge could start at 45 msec (0:11 in the video) then change during the measurements.
>>>
>> 3) I was a bit confused by the behavior of the gauge before/after the test. I'd like it to change only when something else is moving in the window. Here are some suggestions for what would make it clearer:
>>> - The gauge should not change until the graph starts moving. I found it confusing to see the latency jump up at 0:13 just before the blue download chart started, or at 0:28 before the upload chart started at 0:31.
>>> - Between the download and upload tests, the gauge should drop back to the nominal measured values. I think it does.
>>> - After the test, the gauge should also drop back to the nominal measured value. It seems stuck at 4928 msec (0:55).
>>
>> We had/have a lot of this problem in netperf-wrapper - a lot of data
>> tends to accumulate at the end of the test(s) and pollute the last few
>> data points in bloated scenarios. You have to wait for the queues to
>> drain to get a "clean" test - although this begins to show what
>> actually happens when the link is buried in both directions.
>>
>> Is there any chance to add a simultaneous up+down+ping test at the conclusion?
>
> This confuses the "speed test" notion of this site. Since the flow of ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I am concerned that people would wonder why their upload bandwidth suddenly went down dramatically...
To me, that would help. Far too many think that data just arrives by
magic and doesn't have an ack clock.
> Given that other speed test sites only show upload/download, I would vote to keep that format here. Perhaps there could be an option/preference/setting to do up/down/ping .
>
>>> 4) I like the way the latency gauge changes color during the test. It's OK for it to use the color to indicate an "opinion". Are you happy with the thresholds for yellow & red colors?
>>
>> It is not clear to me what they are.
>>
>>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the same as when the value goes to 1768 msec (0:31), and also when it goes to 4,447 msec (0:35), etc. It might make more sense to have the chart's full-scale at something like 10 seconds during the test. The scale could be logarithmic, so that "normal" values occupy up to a third or half of scale, and bad values get pretty close to the top end. Horrible latency - greater than 10 sec, say - should peg the indicator at full scale.
>>
>> I am generally resistant to log scales as misleading an untrained eye.
>> In this case I can certainly see the gauge behaving almost as
>> described above, except that I would nearly flatline the gauge at >
>> 250ms, and add indicators for higher rates at the outer edges of the
>> graph.
>
> I am suggesting a slightly different representation. Instead of flatlining (I assume that you mean full-scale indicator of the gauge) at 250msec (which I agree is bad), perhaps the gauge could use the levels I sent in a previous message...
Nearly full scale (graph in red), with a few additional lines just
past that with the snarky annotations. (think of a classic automobile
rpm tach)
>
>> I agree that the results graph should never be logarithmic - it hides the bad
>> news of high latency.
>>
>> However, the gauge that shows instantaneous latency could be logarithmic. I was
>> reacting to the appearance of slamming against the limit at 765 msec, then not making it
>> more evident when latency jumped to 1768 msec, then to 4447 msec.
Well, the actual number shown above the beyond-redlined gauge makes
for a good screenshot.
>> Imagine the same gauge, with the following gradations at these clock positions,
>> with the bar colored to match:
>>
>> 0 msec - 9:00 (straight to the left)
>> 25 msec - 10:00
>> 100 msec - 11:00
>> 250 msec - 12:00
>> 1,000 msec - 1:00
>> 3,000 msec - 2:00
>> 10,000+ msec - 3:00 (straight to the right)
>
>> I can see staying below 30ms induced latency as "green", below 100ms
>> as "blue", below 250ms as yellow, and > 250ms as red, and a line
>> marking "ridiculous" at > 1sec, and "insane" at 2sec would be good.
>> Other pithy markings at the end of the tach would be fun. For example,
>> gogo in flight has the interplanetary record for bufferbloat,
>> something like 760 seconds the last time we tried it, so a 3rd line on
>> the tach for earth-mars distances would be amusing.
>
> These color schemes sound good. It could indeed be amusing to have editorial comments ("ludicrous bloat") for the worst situations.
>
>> In the long term, somehow detecting if FQ is in play would be good,
>> but I have no idea how to do that in a browser.
>>
>>> 6) On the Results page (1:20), I like the red background behind the latency values. I don't understand why the grey bars at the right end of the chart are so high. Is the latency still decreasing as the queue drains? Perhaps the ping tests should run longer until it gets closer to the nominal value.
>>
>> I would suspect the queues are still draining, filled with missing
>> acknowledgements, etc, etc. waiting until the ping returned closer to
>> normal before starting the next phase of the test would help.
>
> This sounds right to me.
>
>>> This is such a great tool. Thanks!
>>
>> I am very, very, very delighted also. I hope that with tools like
>> these in more users hands, AND the data collected from them, that we
>> can make a logarithmic jump in the number of users, devices, and ISPs
>> that have good bandwidth and low latency in the near future.
>>
>> Thank you very, very much for the work.
>>
>> As a side note, justin, have you fixed your own bloat at home?
>>
>>>
>>> Rich
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> Dave Täht
>> Open Networking needs **Open Source Hardware**
>>
>> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
> Rich
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 22:22 ` Dave Taht
2015-04-23 22:29 ` Dave Taht
@ 2015-04-24 1:58 ` Rich Brown
2015-04-24 2:40 ` Dave Taht
2015-04-24 13:49 ` Pedro Tumusok
1 sibling, 2 replies; 183+ messages in thread
From: Rich Brown @ 2015-04-24 1:58 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
On Apr 23, 2015, at 6:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>> Hi Justin,
>>
>> The newest Speed Test is great! It is more convincing than I even thought it would be. These comments are focused on the "theater" of the measurements, so that they are unambiguous, and that people can figure out what's happening
>>
>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to illustrate my points. NB: I turned fq_codel off for this demo, so that the results would be more extreme.
>>
>> 1) It would be great to label the gauge as "Latency (msec)" I love the term "bufferbloat" as much as the next guy, but the Speed Test page should call the measurement what it really is. (The help page can explain that the latency is almost certainly caused by bufferbloat, but that should be the place it's mentioned.)
>
> I would prefer that it say "bufferbloat (lag in msec)" there,
OK - I change my opinion. I'll be fussy and say it should be "Bufferbloat (lag) in msec"
> ... rather
> than make people look up another word buried in the doc. Sending
> people to the right thing, at the getgo, is important - looking for
> "lag" on the internet takes you to a lot of wrong places,
> misinformation, and snake oil. So perhaps in doc page I would have an
> explanation of lag as it relates to bufferbloat and other possible
> causes of these behaviors.
>
> I also do not see the gauge in my linux firefox, that you are showing
> on youtube. Am I using the wrong link? I LOVE this gauge, however.
I see this as well (Firefox in Linux). It seems OK in other browser combinations. (I have *not* done the full set of variations...)
If this is a matter that FF won't show that gizmo, perhaps there could be a rectangle (like the blue/red ones) for Latency that shows:
Latency
Down: min/avg/max
Up: min/avg/max
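(A sketch, with made-up names, of what could feed that rectangle from
the ping samples already collected during each phase:)
function summarizeMs(samples: number[]): string {
  const min = Math.min(...samples);
  const max = Math.max(...samples);
  const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
  return `${Math.round(min)}/${Math.round(avg)}/${Math.round(max)}`; // min/avg/max
}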
> Lastly, the static radar plot of pings occupies center stage yet does
> not do anything later in the test. Either animating it to show the
> bloat, or moving it off of center stage and the new bloat gauge to
> center stage after it sounds the net, would be good.
I have also wondered about whether we should find a way to add further value to the radar display. I have not yet come up with useful suggestions, though.
>
> bufferbloat as a single word is quite googlable to good resources, and
> there is some activity on fixing up wikipedia going on that I like a
> lot.
>
>>
>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am guessing that it's showing incremental latency above the nominal value measured during the initial setup. I recommend that the gauge always show actual latency. Thus the gauge could start at 45 msec (0:11 in the video) then change during the measurements.
>>
>> 3) I was a bit confused by the behavior of the gauge before/after the test. I'd like it to change only when something else is moving in the window. Here are some suggestions for what would make it clearer:
>> - The gauge should not change until the graph starts moving. I found it confusing to see the latency jump up at 0:13 just before the blue download chart started, or at 0:28 before the upload chart started at 0:31.
>> - Between the download and upload tests, the gauge should drop back to the nominal measured values. I think it does.
>> - After the test, the gauge should also drop back to the nominal measured value. It seems stuck at 4928 msec (0:55).
>
> We had/have a lot of this problem in netperf-wrapper - a lot of data
> tends to accumulate at the end of the test(s) and pollute the last few
> data points in bloated scenarios. You have to wait for the queues to
> drain to get a "clean" test - although this begins to show what
> actually happens when the link is buried in both directions.
>
> Is there any chance to add a simultaneous up+down+ping test at the conclusion?
This confuses the "speed test" notion of this site. Since the flow of ack's can eat up 25% of the bandwidth of a slow, asymmetric link, I am concerned that people would wonder why their upload bandwidth suddenly went down dramatically...
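(Illustrative numbers only: on a 10/1 Mbit/s link, ACKing every other
1500-byte segment sends a ~54-byte ACK per 3000 bytes received, so a
full-rate download generates about 10 Mbit/s x 54/3000 = ~0.18 Mbit/s of
upstream ACK traffic, i.e. nearly a fifth of the uplink, and more on
more asymmetric links.)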
Given that other speed test sites only show upload/download, I would vote to keep that format here. Perhaps there could be an option/preference/setting to do up/down/ping .
>> 4) I like the way the latency gauge changes color during the test. It's OK for it to use the color to indicate an "opinion". Are you happy with the thresholds for yellow & red colors?
>
> It is not clear to me what they are.
>
>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the same as when the value goes to 1768 msec (0:31), and also when it goes to 4,447 msec (0:35), etc. It might make more sense to have the chart's full-scale at something like 10 seconds during the test. The scale could be logarithmic, so that "normal" values occupy up to a third or half of scale, and bad values get pretty close to the top end. Horrible latency - greater than 10 sec, say - should peg the indicator at full scale.
>
> I am generally resistant to log scales as misleading an untrained eye.
> In this case I can certainly see the gauge behaving almost as
> described above, except that I would nearly flatline the gauge at >
> 250ms, and add indicators for higher rates at the outer edges of the
> graph.
I am suggesting a slightly different representation. Instead of flatlining (I assume that you mean full-scale indicator of the gauge) at 250msec (which I agree is bad), perhaps the gauge could use the levels I sent in a previous message...
> I agree that the results graph should never be logarithmic - it hides the bad
> news of high latency.
>
> However, the gauge that shows instantaneous latency could be logarithmic. I was
> reacting to the appearance of slamming against the limit at 765 msec, then not making it
> more evident when latency jumped to 1768 msec, then to 4447 msec.
>
> Imagine the same gauge, with the following gradations at these clock positions,
> with the bar colored to match:
>
> 0 msec - 9:00 (straight to the left)
> 25 msec - 10:00
> 100 msec - 11:00
> 250 msec - 12:00
> 1,000 msec - 1:00
> 3,000 msec - 2:00
> 10,000+ msec - 3:00 (straight to the right)
> I can see staying below 30ms induced latency as "green", below 100ms
> as "blue", below 250ms as yellow, and > 250ms as red, and a line
> marking "ridiculous" at > 1sec, and "insane" at 2sec would be good.
> Other pithy markings at the end of the tach would be fun. For example,
> gogo in flight has the interplanetary record for bufferbloat,
> something like 760 seconds the last time we tried it, so a 3rd line on
> the tach for earth-mars distances would be amusing.
These color schemes sound good. It could indeed be amusing to have editorial comments ("ludicrous bloat") for the worst situations.
> In the long term, somehow detecting if FQ is in play would be good,
> but I have no idea how to do that in a browser.
>
>> 6) On the Results page (1:20), I like the red background behind the latency values. I don't understand why the grey bars at the right end of the chart are so high. Is the latency still decreasing as the queue drains? Perhaps the ping tests should run longer until it gets closer to the nominal value.
>
> I would suspect the queues are still draining, filled with missing
> acknowledgements, etc, etc. waiting until the ping returned closer to
> normal before starting the next phase of the test would help.
This sounds right to me.
>> This is such a great tool. Thanks!
>
> I am very, very, very delighted also. I hope that with tools like
> these in more users hands, AND the data collected from them, that we
> can make a logarithmic jump in the number of users, devices, and ISPs
> that have good bandwidth and low latency in the near future.
>
> Thank you very, very much for the work.
>
> As a side note, justin, have you fixed your own bloat at home?
>
>>
>> Rich
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
Rich
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 22:51 ` David Lang
@ 2015-04-24 1:38 ` Rich Brown
0 siblings, 0 replies; 183+ messages in thread
From: Rich Brown @ 2015-04-24 1:38 UTC (permalink / raw)
To: David Lang; +Cc: bloat
>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the same as when the value goes to 1768 msec (0:31), and also when it goes to 4,447 msec (0:35), etc. It might make more sense to have the chart's full-scale at something like 10 seconds during the test. The scale could be logarithmic, so that "normal" values occupy up to a third or half of scale, and bad values get pretty close to the top end. Horrible latency - greater than 10 sec, say - should peg the indicator at full scale.
>
> the graph started out logarithmic and it was changed because that made it less obvious to people when the latency was significantly higher (most people are not used to evaluating log scale graphs)
I agree that the results graph should never be logarithmic - it hides the bad news of high latency.
However, the gauge that shows instantaneous latency could be logarithmic. I was reacting to the appearance of slamming against the limit at 765 msec, then not making it more evident when latency jumped to 1768 msec, then to 4447 msec.
Imagine the same gauge, with the following gradations at these clock positions, with the bar colored to match:
0 msec - 9:00 (straight to the left)
25 msec - 10:00
100 msec - 11:00
250 msec - 12:00
1,000 msec - 1:00
3,000 msec - 2:00
10,000+ msec - 3:00 (straight to the right)
Would that make sense?
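For concreteness, a sketch of that mapping (made-up code, not the
actual page): each 30-degree slice of the dial covers a wider latency
range than the last, so the scale ends up roughly logarithmic without
having to put zero on a log axis.
const BREAKPOINTS_MS = [0, 25, 100, 250, 1000, 3000, 10000]; // 9:00 .. 3:00
function latencyToNeedleAngle(ms: number): number {
  const segments = BREAKPOINTS_MS.length - 1;     // six 30-degree slices
  if (ms >= BREAKPOINTS_MS[segments]) return 180; // peg at full scale (3:00)
  for (let i = 0; i < segments; i++) {
    const lo = BREAKPOINTS_MS[i];
    const hi = BREAKPOINTS_MS[i + 1];
    if (ms <= hi) {
      // interpolate within the slice; 0 degrees = 9:00, 180 degrees = 3:00
      return (i + (ms - lo) / (hi - lo)) * (180 / segments);
    }
  }
  return 180;
}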
Rich
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 21:44 ` Rich Brown
2015-04-23 22:22 ` Dave Taht
@ 2015-04-23 22:51 ` David Lang
2015-04-24 1:38 ` Rich Brown
1 sibling, 1 reply; 183+ messages in thread
From: David Lang @ 2015-04-23 22:51 UTC (permalink / raw)
To: Rich Brown; +Cc: bloat
On Thu, 23 Apr 2015, Rich Brown wrote:
> Hi Justin,
>
> The newest Speed Test is great! It is more convincing than I even thought it would be. These comments are focused on the "theater" of the measurements, so that they are unambiguous, and that people can figure out what's happening
>
> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to illustrate my points. NB: I turned fq_codel off for this demo, so that the results would be more extreme.
>
> 1) It would be great to label the gauge as "Latency (msec)" I love the term "bufferbloat" as much as the next guy, but the Speed Test page should call the measurement what it really is. (The help page can explain that the latency is almost certainly caused by bufferbloat, but that should be the place it's mentioned.)
>
> 2) I can't explain why the latency gauge starts at 1-3 msec. I am guessing that it's showing incremental latency above the nominal value measured during the initial setup. I recommend that the gauge always show actual latency. Thus the gauge could start at 45 msec (0:11 in the video) then change during the measurements.
>
> 3) I was a bit confused by the behavior of the gauge before/after the test. I'd like it to change only when something else is moving in the window. Here are some suggestions for what would make it clearer:
> - The gauge should not change until the graph starts moving. I found it confusing to see the latency jump up at 0:13 just before the blue download chart started, or at 0:28 before the upload chart started at 0:31.
> - Between the download and upload tests, the gauge should drop back to the nominal measured values. I think it does.
> - After the test, the gauge should also drop back to the nominal measured value. It seems stuck at 4928 msec (0:55).
>
> 4) I like the way the latency gauge changes color during the test. It's OK for
> it to use the color to indicate an "opinion". Are you happy with the
> thresholds for yellow & red colors?
>
> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the
> same as when the value goes to 1768 msec (0:31), and also when it goes to
> 4,447 msec (0:35), etc. It might make more sense to have the chart's
> full-scale at something like 10 seconds during the test. The scale could be
> logarithmic, so that "normal" values occupy up to a third or half of scale,
> and bad values get pretty close to the top end. Horrible latency - greater
> than 10 sec, say - should peg the indicator at full scale.
the graph started out logarithmic and it was changed because that made it less
obvious to people when the latency was significantly higher (most people are not
used to evaluating log scale graphs)
> 6) On the Results page (1:20), I like the red background behind the latency
> values. I don't understand why the grey bars at the right end of the chart are
> so high. Is the latency still decreasing as the queue drains? Perhaps the ping
> tests should run longer until it gets closer to the nominal value.
the client is no longer sending data, but the data hasn't arrived, so the
latency for the data that does arrive is going to get longer as time goes by (if
you stopped sending data at time X then the data arriving at time X+2 sec is
going to show higher latency than the data that arrived at time X+1 sec because
it's taken an additional second to arrive)
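(For example, with illustrative numbers: if the bottleneck drains at 1
Mbit/s and 1 MB is still queued when the sender stops, the tail of that
data keeps arriving for another ~8 seconds, and those last samples show
up to ~8 seconds of latency even though the load has already ended.)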
David Lang
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 22:22 ` Dave Taht
@ 2015-04-23 22:29 ` Dave Taht
2015-04-24 1:58 ` Rich Brown
1 sibling, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-23 22:29 UTC (permalink / raw)
To: Rich Brown, jb; +Cc: bloat
Justifications for the gradations of color and thresholds I just
suggested you can find in the "real time applications" section of:
https://gettys.wordpress.com/2013/07/10/low-latency-requires-smart-queuing-traditional-aqm-is-not-enough/
This might be a good set of phrases to insert lines on the tach for.
I note that jitter is important to track too, and I figure this test
gets good data on that but am visually challenged enough to not know a
good way to represent that, either.
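For what it's worth, one cheap way to summarize it from the same ping
samples would be an RFC 3550-style smoothed estimator (a sketch only,
applied here to RTTs rather than one-way transit times):
function updateJitter(prevJitterMs: number, prevRttMs: number, rttMs: number): number {
  const d = Math.abs(rttMs - prevRttMs);         // change between successive samples
  return prevJitterMs + (d - prevJitterMs) / 16; // exponentially smoothed
}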
On Thu, Apr 23, 2015 at 3:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>> Hi Justin,
>>
>> The newest Speed Test is great! It is more convincing than I even thought it would be. These comments are focused on the "theater" of the measurements, so that they are unambiguous, and that people can figure out what's happening
>>
>> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to illustrate my points. NB: I turned fq_codel off for this demo, so that the results would be more extreme.
>>
>> 1) It would be great to label the gauge as "Latency (msec)" I love the term "bufferbloat" as much as the next guy, but the Speed Test page should call the measurement what it really is. (The help page can explain that the latency is almost certainly caused by bufferbloat, but that should be the place it's mentioned.)
>
> I would prefer that it say "bufferbloat (lag in msec)" there, rather
> than make people look up another word buried in the doc. Sending
> people to the right thing, at the getgo, is important - looking for
> "lag" on the internet takes you to a lot of wrong places,
> misinformation, and snake oil. So perhaps in doc page I would have an
> explanation of lag as it relates to bufferbloat and other possible
> causes of these behaviors.
>
> I also do not see the gauge in my linux firefox, that you are showing
> on youtube. Am I using the wrong link? I LOVE this gauge, however.
>
> Lastly, the static radar plot of pings occupies center stage yet does
> not do anything later in the test. Either animating it to show the
> bloat, or moving it off of center stage and the new bloat gauge to
> center stage after it sounds the net, would be good.
>
> bufferbloat as a single word is quite googlable to good resources, and
> there is some activity on fixing up wikipedia going on that I like a
> lot.
>
>>
>> 2) I can't explain why the latency gauge starts at 1-3 msec. I am guessing that it's showing incremental latency above the nominal value measured during the initial setup. I recommend that the gauge always show actual latency. Thus the gauge could start at 45 msec (0:11 in the video) then change during the measurements.
>>
>> 3) I was a bit confused by the behavior of the gauge before/after the test. I'd like it to change only when something else is moving in the window. Here are some suggestions for what would make it clearer:
>> - The gauge should not change until the graph starts moving. I found it confusing to see the latency jump up at 0:13 just before the blue download chart started, or at 0:28 before the upload chart started at 0:31.
>> - Between the download and upload tests, the gauge should drop back to the nominal measured values. I think it does.
>> - After the test, the gauge should also drop back to the nominal measured value. It seems stuck at 4928 msec (0:55).
>
> We had/have a lot of this problem in netperf-wrapper - a lot of data
> tends to accumulate at the end of the test(s) and pollute the last few
> data points in bloated scenarios. You have to wait for the queues to
> drain to get a "clean" test - although this begins to show what
> actually happens when the link is buried in both directions.
>
> Is there any chance to add a simultaneous up+down+ping test at the conclusion?
>
>> 4) I like the way the latency gauge changes color during the test. It's OK for it to use the color to indicate an "opinion". Are you happy with the thresholds for yellow & red colors?
>
> It is not clear to me what they are.
>
>> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the same as when the value goes to 1768 msec (0:31), and also when it goes to 4,447 msec (0:35), etc. It might make more sense to have the chart's full-scale at something like 10 seconds during the test. The scale could be logarithmic, so that "normal" values occupy up to a third or half of scale, and bad values get pretty close to the top end. Horrible latency - greater than 10 sec, say - should peg the indicator at full scale.
>
> I am generally resistant to log scales as misleading an untrained eye.
> In this case I can certainly see the gauge behaving almost as
> described above, except that I would nearly flatline the gauge at >
> 250ms, and add indicators for higher rates at the outer edges of the
> graph.
>
> I can see staying below 30ms induced latency as "green", below 100ms
> as "blue", below 250ms as yellow, and > 250ms as red, and a line
> marking "ridiculous" at > 1sec, and "insane" at 2sec would be good.
> Other pithy markings at the end of the tach would be fun. For example,
> gogo in flight has the interplanetary record for bufferbloat,
> something like 760 seconds the last time we tried it, so a 3rd line on
> the tach for earth-mars distances would be amusing.
>
> In the long term, somehow detecting if FQ is in play would be good,
> but I have no idea how to do that in a browser.
>
>> 6) On the Results page (1:20), I like the red background behind the latency values. I don't understand why the grey bars at the right end of the chart are so high. Is the latency still decreasing as the queue drains? Perhaps the ping tests should run longer until it gets closer to the nominal value.
>
> I would suspect the queues are still draining, filled with missing
> acknowledgements, etc, etc. waiting until the ping returned closer to
> normal before starting the next phase of the test would help.
>
>> This is such a great tool. Thanks!
>
> I am very, very, very delighted also. I hope that with tools like
> these in more users hands, AND the data collected from them, that we
> can make a logarithmic jump in the number of users, devices, and ISPs
> that have good bandwidth and low latency in the near future.
>
> Thank you very, very much for the work.
>
> As a side note, justin, have you fixed your own bloat at home?
>
>>
>> Rich
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 21:44 ` Rich Brown
@ 2015-04-23 22:22 ` Dave Taht
2015-04-23 22:29 ` Dave Taht
2015-04-24 1:58 ` Rich Brown
2015-04-23 22:51 ` David Lang
1 sibling, 2 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-23 22:22 UTC (permalink / raw)
To: Rich Brown, jb; +Cc: bloat
On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown <richb.hanover@gmail.com> wrote:
> Hi Justin,
>
> The newest Speed Test is great! It is more convincing than I even thought it would be. These comments are focused on the "theater" of the measurements, so that they are unambiguous, and that people can figure out what's happening
>
> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to illustrate my points. NB: I turned fq_codel off for this demo, so that the results would be more extreme.
>
> 1) It would be great to label the gauge as "Latency (msec)" I love the term "bufferbloat" as much as the next guy, but the Speed Test page should call the measurement what it really is. (The help page can explain that the latency is almost certainly caused by bufferbloat, but that should be the place it's mentioned.)
I would prefer that it say "bufferbloat (lag in msec)" there, rather
than make people look up another word buried in the doc. Sending
people to the right thing, at the getgo, is important - looking for
"lag" on the internet takes you to a lot of wrong places,
misinformation, and snake oil. So perhaps in doc page I would have an
explanation of lag as it relates to bufferbloat and other possible
causes of these behaviors.
I also do not see the gauge in my linux firefox, that you are showing
on youtube. Am I using the wrong link? I LOVE this gauge, however.
Lastly, the static radar plot of pings occupies center stage yet does
not do anything later in the test. Either animating it to show the
bloat, or moving it off of center stage and the new bloat gauge to
center stage after it sounds the net, would be good.
bufferbloat as a single word is quite googlable to good resources, and
there is some activity on fixing up wikipedia going on that I like a
lot.
>
> 2) I can't explain why the latency gauge starts at 1-3 msec. I am guessing that it's showing incremental latency above the nominal value measured during the initial setup. I recommend that the gauge always show actual latency. Thus the gauge could start at 45 msec (0:11 in the video) then change during the measurements.
>
> 3) I was a bit confused by the behavior of the gauge before/after the test. I'd like it to change only when something else is moving in the window. Here are some suggestions for what would make it clearer:
> - The gauge should not change until the graph starts moving. I found it confusing to see the latency jump up at 0:13 just before the blue download chart started, or at 0:28 before the upload chart started at 0:31.
> - Between the download and upload tests, the gauge should drop back to the nominal measured values. I think it does.
> - After the test, the gauge should also drop back to the nominal measured value. It seems stuck at 4928 msec (0:55).
We had/have a lot of this problem in netperf-wrapper - a lot of data
tends to accumulate at the end of the test(s) and pollute the last few
data points in bloated scenarios. You have to wait for the queues to
drain to get a "clean" test - although this begins to show what
actually happens when the link is buried in both directions.
Is there any chance to add a simultaneous up+down+ping test at the conclusion?
> 4) I like the way the latency gauge changes color during the test. It's OK for it to use the color to indicate an "opinion". Are you happy with the thresholds for yellow & red colors?
It is not clear to me what they are.
> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the same as when the value goes to 1768 msec (0:31), and also when it goes to 4,447 msec (0:35), etc. It might make more sense to have the chart's full-scale at something like 10 seconds during the test. The scale could be logarithmic, so that "normal" values occupy up to a third or half of scale, and bad values get pretty close to the top end. Horrible latency - greater than 10 sec, say - should peg the indicator at full scale.
I am generally resistant to log scales as misleading an untrained eye.
In this case I can certainly see the gauge behaving almost as
described above, except that I would nearly flatline the gauge at >
250ms, and add indicators for higher rates at the outer edges of the
graph.
I can see staying below 30ms induced latency as "green", below 100ms
as "blue", below 250ms as yellow, and > 250ms as red, and a line
marking "ridiculous" at > 1sec, and "insane" at 2sec would be good.
Other pithy markings at the end of the tach would be fun. For example,
gogo in flight has the interplanetary record for bufferbloat,
something like 760 seconds the last time we tried it, so a 3rd line on
the tach for earth-mars distances would be amusing.
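Roughly what I have in mind, as a sketch of those thresholds (nothing
the site actually implements):
function bloatColor(inducedMs: number): { color: string; note?: string } {
  if (inducedMs < 30) return { color: "green" };
  if (inducedMs < 100) return { color: "blue" };
  if (inducedMs < 250) return { color: "yellow" };
  if (inducedMs > 2000) return { color: "red", note: "insane" };
  if (inducedMs > 1000) return { color: "red", note: "ridiculous" };
  return { color: "red" };
}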
In the long term, somehow detecting if FQ is in play would be good,
but I have no idea how to do that in a browser.
> 6) On the Results page (1:20), I like the red background behind the latency values. I don't understand why the grey bars at the right end of the chart are so high. Is the latency still decreasing as the queue drains? Perhaps the ping tests should run longer until it gets closer to the nominal value.
I would suspect the queues are still draining, filled with missing
acknowledgements, etc. Waiting until the ping returned closer to
normal before starting the next phase of the test would help.
> This is such a great tool. Thanks!
I am very, very, very delighted also. I hope that with tools like
these in more users hands, AND the data collected from them, that we
can make a logarithmic jump in the number of users, devices, and ISPs
that have good bandwidth and low latency in the near future.
Thank you very, very much for the work.
As a side note, justin, have you fixed your own bloat at home?
>
> Rich
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 18:04 ` Mikael Abrahamsson
2015-04-23 18:08 ` Jonathan Morton
2015-04-23 20:19 ` jb
@ 2015-04-23 21:44 ` Rich Brown
2015-04-23 22:22 ` Dave Taht
2015-04-23 22:51 ` David Lang
2015-04-24 4:16 ` Mikael Abrahamsson
3 siblings, 2 replies; 183+ messages in thread
From: Rich Brown @ 2015-04-23 21:44 UTC (permalink / raw)
To: bloat
Hi Justin,
The newest Speed Test is great! It is more convincing than I even thought it would be. These comments are focused on the "theater" of the measurements, so that they are unambiguous and people can figure out what's happening.
I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to illustrate my points. NB: I turned fq_codel off for this demo, so that the results would be more extreme.
1) It would be great to label the gauge as "Latency (msec)". I love the term "bufferbloat" as much as the next guy, but the Speed Test page should call the measurement what it really is. (The help page can explain that the latency is almost certainly caused by bufferbloat, but that should be the place it's mentioned.)
2) I can't explain why the latency gauge starts at 1-3 msec. I am guessing that it's showing incremental latency above the nominal value measured during the initial setup. I recommend that the gauge always show actual latency. Thus the gauge could start at 45 msec (0:11 in the video) then change during the measurements.
3) I was a bit confused by the behavior of the gauge before/after the test. I'd like it to change only when something else is moving in the window. Here are some suggestions for what would make it clearer:
- The gauge should not change until the graph starts moving. I found it confusing to see the latency jump up at 0:13 just before the blue download chart started, or at 0:28 before the upload chart started at 0:31.
- Between the download and upload tests, the gauge should drop back to the nominal measured values. I think it does.
- After the test, the gauge should also drop back to the nominal measured value. It seems stuck at 4928 msec (0:55).
4) I like the way the latency gauge changes color during the test. It's OK for it to use the color to indicate an "opinion". Are you happy with the thresholds for yellow & red colors?
5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the same as when the value goes to 1768 msec (0:31), and also when it goes to 4,447 msec (0:35), etc. It might make more sense to have the chart's full-scale at something like 10 seconds during the test. The scale could be logarithmic, so that "normal" values occupy up to a third or half of scale, and bad values get pretty close to the top end. Horrible latency - greater than 10 sec, say - should peg the indicator at full scale.
6) On the Results page (1:20), I like the red background behind the latency values. I don't understand why the grey bars at the right end of the chart are so high. Is the latency still decreasing as the queue drains? Perhaps the ping tests should run longer until it gets closer to the nominal value.
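To make point 5 concrete, here is a small sketch of a logarithmic gauge mapping with a 10-second full scale; the function and its parameters are illustrative, not how the site actually draws its gauge:

```python
import math

def gauge_position(latency_ms, baseline_ms, full_scale_ms=10_000):
    """Return a 0.0-1.0 needle position.  The idle baseline sits at the
    bottom of the scale and anything at or above 10 s pegs the needle,
    so 765 ms, 1768 ms and 4447 ms land at visibly different positions."""
    latency_ms = max(latency_ms, baseline_ms)
    span = math.log10(full_scale_ms / baseline_ms)
    return min(math.log10(latency_ms / baseline_ms) / span, 1.0)
```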
This is such a great tool. Thanks!
Rich
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 20:19 ` jb
@ 2015-04-23 20:39 ` Dave Taht
2015-04-24 21:45 ` Rich Brown
1 sibling, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-23 20:39 UTC (permalink / raw)
To: jb; +Cc: bloat
On Thu, Apr 23, 2015 at 1:19 PM, jb <justin@dslr.net> wrote:
>> It's actually remarkably un-bloated...
>
> I have seen a number of remarkably unbloated comcast tests
> including this one at gigabit symmetric:
>
> http://www.dslreports.com/speedtest/348305
>
> which I know is someone testing their future products.
>
> Unfortunately I have no visibility as to the hardware in use in each test,
> and should really ask when such a result floats past. Put up a big box
> begging them to fill out a little survey.
Well, our concern with the browser based tests was that - as of 4
years ago - your typical browser and pc could not drive the network to
saturation at speeds much greater than 20Mbit reliably. Neither could
netalyzer.
Thus, netperf-wrapper - now good to 40Gbit - was born.
So some work on determining the real cpu usage and network capability of
the client machine for these tests would be nice. I certainly do see
bloat on the comcast 110 and 200Mbit services almost as severe as the
lower end services using the netperf-wrapper tests.
>
> On Fri, Apr 24, 2015 at 4:04 AM, Mikael Abrahamsson <swmike@swm.pp.se>
> wrote:
>>
>> On Thu, 23 Apr 2015, Dave Taht wrote:
>>
>>> justin:
>>>
>>> thx for nuking the log scale. that makes the bloat much more visible
>>> here (typical cablemodem)
>>>
>>> http://www.dslreports.com/speedtest/322800
>>>
>>> I am puzzled as to my post fq_codel result here at T+40 and will have
>>> to repeat...
>>>
>>> http://www.dslreports.com/speedtest/322992
>>
>>
>> This is my DOCSIS3 250/50 connection:
>>
>> http://www.dslreports.com/speedtest/349547
>>
>> During the evenings (as it was when I tested now) I seem to not get over
>> 100-150 megabit/s downstream, indicating it's saturating on the coax segment
>> at peak usage. I should probably get a newer modem, mine is a few years old.
>>
>> It's actually remarkably un-bloated...
>>
>> --
>> Mikael Abrahamsson email: swmike@swm.pp.se
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 18:04 ` Mikael Abrahamsson
2015-04-23 18:08 ` Jonathan Morton
@ 2015-04-23 20:19 ` jb
2015-04-23 20:39 ` Dave Taht
2015-04-24 21:45 ` Rich Brown
2015-04-23 21:44 ` Rich Brown
2015-04-24 4:16 ` Mikael Abrahamsson
3 siblings, 2 replies; 183+ messages in thread
From: jb @ 2015-04-23 20:19 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 1438 bytes --]
> It's actually remarkably un-bloated...
I have seen a number of remarkably unbloated comcast tests
including this one at gigabit symmetric:
http://www.dslreports.com/speedtest/348305
which I know is someone testing their future products.
Unfortunately I have no visibility as to the hardware in use in each test,
and should really ask when such a result floats past. Perhaps put up a big
box begging them to fill out a little survey.
On Fri, Apr 24, 2015 at 4:04 AM, Mikael Abrahamsson <swmike@swm.pp.se>
wrote:
> On Thu, 23 Apr 2015, Dave Taht wrote:
>
> justin:
>>
>> thx for nuking the log scale. that makes the bloat much more visible
>> here (typical cablemodem)
>>
>> http://www.dslreports.com/speedtest/322800
>>
>> I am puzzled as to my post fq_codel result here at T+40 and will have
>> to repeat...
>>
>> http://www.dslreports.com/speedtest/322992
>>
>
> This is my DOCSIS3 250/50 connection:
>
> http://www.dslreports.com/speedtest/349547
>
> During the evenings (as it was when I tested now) I seem to not get over
> 100-150 megabit/s downstream, indicating it's saturating on the coax
> segment at peak usage. I should probably get a newer modem, mine is a few
> years old.
>
> It's actually remarkably un-bloated...
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 2753 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 18:04 ` Mikael Abrahamsson
@ 2015-04-23 18:08 ` Jonathan Morton
2015-04-23 20:19 ` jb
` (2 subsequent siblings)
3 siblings, 0 replies; 183+ messages in thread
From: Jonathan Morton @ 2015-04-23 18:08 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 177 bytes --]
Probably un-bloated only because it's old, and now running close to its
design speed. A newer modem will be designed to cope with higher future
speeds, so...
- Jonathan Morton
[-- Attachment #2: Type: text/html, Size: 220 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-23 17:03 ` Dave Taht
@ 2015-04-23 18:04 ` Mikael Abrahamsson
2015-04-23 18:08 ` Jonathan Morton
` (3 more replies)
0 siblings, 4 replies; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-04-23 18:04 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
On Thu, 23 Apr 2015, Dave Taht wrote:
> justin:
>
> thx for nuking the log scale. that makes the bloat much more visible
> here (typical cablemodem)
>
> http://www.dslreports.com/speedtest/322800
>
> I am puzzled as to my post fq_codel result here at T+40 and will have
> to repeat...
>
> http://www.dslreports.com/speedtest/322992
This is my DOCSIS3 250/50 connection:
http://www.dslreports.com/speedtest/349547
During the evenings (as it was when I tested now) I seem to not get over
100-150 megabit/s downstream, indicating it's saturating on the coax
segment at peak usage. I should probably get a newer modem, mine is a few
years old.
It's actually remarkably un-bloated...
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 19:22 ` Dave Taht
@ 2015-04-23 17:03 ` Dave Taht
2015-04-23 18:04 ` Mikael Abrahamsson
0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-23 17:03 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
justin:
thx for nuking the log scale. that makes the bloat much more visible
here (typical cablemodem)
http://www.dslreports.com/speedtest/322800
I am puzzled as to my post fq_codel result here at T+40 and will have
to repeat...
http://www.dslreports.com/speedtest/322992
On Sun, Apr 19, 2015 at 12:22 PM, Dave Taht <dave.taht@gmail.com> wrote:
> So my "acid test", taken on a linux box over ethernet, through a
> cerowrt box with sqm-scripts turned off. This is my gf's comcast
> "blast" service, which is rated for 55mbits down and 5.5mbits up. The
> new speedtest does indeed show results that have the typical bloat
> (nearly a second) a cable modem has when tested solely for up and
> down, separately.
>
> http://www.dslreports.com/speedtest/322800
>
> A) I definitely am not particularly huge on defaulting to a log scale
> for this graph. Or rather, I would be huge on the graph defaulting to
> a linear scale AND huge when you get numbers as bad as this. :)
>
> B) is there a way to specify ipv6?
>
> 2) So I did a follow on speedtest with fq_codel enabled to shape with
> sqm. it may be that my upload shaper is a bit over what is desirable
> for this link, and I need to repeat.
>
> http://www.dslreports.com/speedtest/322992
>
> 3) For giggles, this one is with ecn enabled both on the shaper and on
> the tcp I am using, showing that this speedtest site will use ecn when
> enabled:
>
> http://www.dslreports.com/speedtest/323101
>
> Some ecn mark results on the shaper for that:
>
> http://pastebin.com/sniePC1M
>
> Both these tests are showing some latency spikes so it does look like
> I should tune down the shaper a bit.
>
> 3) The rrul, rrul_be, tcp_upload, and tcp_download test data from
> netperf-wrapper under both these circumstances (no ecn) is up at:
>
> http://snapon.lab.bufferbloat.net/~d/lorna-ethernet.tgz
>
> Feel free to create graphs and comparisons to suit selves.
>
> A few graphs:
> cdf plot of latency (have to use a log scale!!) compared between the
> shaped and unshaped alternatives...
>
> http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison.png
>
> what the overbuffered download looks like:
>
> http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison_download_bloated.png
>
> The shaped fq_codeled equivalent:
> http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison_download_fq_codeled.png
>
> (log scale comparison again - 10ms of induced latency vs 400+! Aggh)
>
> And a graph of probably minimal usefulness comparing the behavior of
> the shaped vs unshaped download flow configurations, for both ipv4 and
> ipv6.
>
> http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison_download_compared.png
>
> On Sun, Apr 19, 2015 at 10:43 AM, Dave Taht <dave.taht@gmail.com> wrote:
>> I am going to do an acid test today. The line I tested last night is a
>> comcast cable line (with htb+fq_codel on the link). So I plan to plug
>> in the ethernet on both my mac and linux laptops and repeat the
>> comparison with the shaper on and off, with both linux and osx.
>>
>> the *really funny* part of this is that I do not have a single extra
>> ethernet cable in my gf's SF apartment, and the less funny part of
>> this is the nearest radio shack is now closed....
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-20 15:51 ` Dave Taht
@ 2015-04-20 16:15 ` Dave Taht
0 siblings, 0 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-20 16:15 UTC (permalink / raw)
To: David Lang; +Cc: bloat
This video (using ipfire) shows speedof.me results unshaped and shaped.
https://www.youtube.com/watch?v=XB3w09VZn7g&feature=youtu.be
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-20 14:51 ` David Lang
@ 2015-04-20 15:51 ` Dave Taht
2015-04-20 16:15 ` Dave Taht
0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-20 15:51 UTC (permalink / raw)
To: David Lang; +Cc: bloat
1) I have 7 dual stack linode vm servers around the world you could
use - they are in england, japan, newark, atlanta, california, and one
other place. They are configured to use sch_fq (but I don't know what
is on the bare metal), and have other analysis tools on them, but you
would be welcome to use those.
2) There is a tcp socket option (something_lowat) which makes a big
difference in the amount of stuff that gets buffered in the lower
portions of the stack - would be useful in the server, and I think,
but am not sure, it is turned on in a few of the latest browsers on
some platforms. Stuart cheshire gave a talk about it recently that
should end up on youtube soon.
3) I like smokeping's plotting facility for more clearly showing the
range of outliers -
4) and do not cut things off at the 95th percentile, ever. 98% is more
reasonable - but I still prefer cdf plots to capture the entire range
of the data.
The analogy I use here is what if your steering wheel failed to
respond one time in 20 on your way to work - how long would you live?
(and then, what if everyone's steering wheel failed one time in 20?)
Another way of thinking about it is what if your voip call went to
hell twice a minute?
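To make the percentile point concrete, a small illustrative sketch of a nearest-rank percentile summary; cutting a plot at the 95th percentile hides exactly the tail samples that a CDF (or a 98%/max summary) would still show:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def tail_summary(samples):
    # The 95% cut drops the worst 1-in-20 samples; 98%, 99.9% and the
    # maximum keep progressively more of the tail that a CDF plot shows.
    return {p: percentile(samples, p) for p in (50, 95, 98, 99.9, 100)}
```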
On Mon, Apr 20, 2015 at 7:51 AM, David Lang <david@lang.hm> wrote:
> On Mon, 20 Apr 2015, jb wrote:
>
>> 2. The test does not do latency pinging on 3G and GPRS
>> because of a concern I had that with slower lines (a lot of 3G results are
>> less than half a megabit) the pings would make the speed measured
>> unreliable. And/or, a slow android phone would be being asked to do too
>> much. I'll do some tests with and without pinging on a 56kbit shaped line
>> and see if there is a difference. If there is not, I can enable it.
>
>
> remember that people can be testing from a laptop tethered to a phone.
>
>> 3. The graph of latency post-test is log X-axis at the moment
>> because one spike can render the whole graph almost useless with axis
>> scaling. What I might do is X-Axis breaking, and see if that looks ok.
>> Alternatively, a two panel graph, one with 0-200ms axis, the other in full
>> perspective. Or just live with the spike. Or scale to the 95% highest
>> number and let the other 5% crop. There are tool-tips after all to show
>> the
>> actual numbers.
>
>
> I think that showing the spike is significant because that latency spike
> would impact the user. It's not false data.
>
> showing two graphs may be the way to go.
>
> I don't think showing the 95th percentile is right. Again it is throwing
> away significant datapoints.
>
> I think summarizing the number as an average (with error bands) for each
> category of the test would be useful.
>
> David Lang
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 23:21 ` jb
@ 2015-04-20 14:51 ` David Lang
2015-04-20 15:51 ` Dave Taht
0 siblings, 1 reply; 183+ messages in thread
From: David Lang @ 2015-04-20 14:51 UTC (permalink / raw)
To: jb; +Cc: bloat
[-- Attachment #1: Type: TEXT/Plain, Size: 1370 bytes --]
On Mon, 20 Apr 2015, jb wrote:
> 2. The test does not do latency pinging on 3G and GPRS
> because of a concern I had that with slower lines (a lot of 3G results are
> less than half a megabit) the pings would make the speed measured
> unreliable. And/or, a slow android phone would be being asked to do too
> much. I'll do some tests with and without pinging on a 56kbit shaped line
> and see if there is a difference. If there is not, I can enable it.
remember that people can be testing from a laptop tethered to a phone.
> 3. The graph of latency post-test is log X-axis at the moment
> because one spike can render the whole graph almost useless with axis
> scaling. What I might do is X-Axis breaking, and see if that looks ok.
> Alternatively, a two panel graph, one with 0-200ms axis, the other in full
> perspective. Or just live with the spike. Or scale to the 95% highest
> number and let the other 5% crop. There are tool-tips after all to show the
> actual numbers.
I think that showing the spike is significant because that latency spike would
impact the user. It's not false data.
showing two graphs may be the way to go.
I don't think showing the 95th percentile is right. Again it is throwing away
significant datapoints.
I think summarizing the number as an average (with error bands) for each
category of the test would be useful.
David Lang
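A sketch of that kind of per-category summary, assuming the latency samples have already been grouped by test phase; the phase names and the one-standard-deviation band are illustrative choices, not anything the test implements:

```python
import statistics

def summarize_by_phase(samples_by_phase):
    """samples_by_phase: e.g. {"idle": [ms, ...], "download": [...], "upload": [...]}.
    Returns (mean, +/- band) per phase, using one standard deviation as the band."""
    summary = {}
    for phase, samples in samples_by_phase.items():
        mean = statistics.mean(samples)
        band = statistics.stdev(samples) if len(samples) > 1 else 0.0
        summary[phase] = (mean, band)
    return summary
```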
[-- Attachment #2: Type: TEXT/PLAIN, Size: 140 bytes --]
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 4:01 ` Dave Taht
@ 2015-04-20 14:33 ` Colin Dearborn
0 siblings, 0 replies; 183+ messages in thread
From: Colin Dearborn @ 2015-04-20 14:33 UTC (permalink / raw)
To: Dave Taht, Rich Brown; +Cc: cerowrt-devel, bloat
If you go to the results page, you get a "ping time during test" graph.
-----Original Message-----
From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Dave Taht
Sent: Saturday, April 18, 2015 10:02 PM
To: Rich Brown
Cc: cerowrt-devel; bloat
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
What I see here is the same old latency, upload, download series, not latency and bandwidth at the same time.
http://www.dslreports.com/speedtest/319616
On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com> wrote:
> Folks,
>
> I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
>
> Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
>
> The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
>
> You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>
> Enjoy!
>
> Rich
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
[not found] ` <CACQiMXbF9Uk3H=81at-Z9a2fdYKrRtRorSXRg5dBcPB8-aR4Cw@mail.gmail.com>
@ 2015-04-20 8:11 ` jb
0 siblings, 0 replies; 183+ messages in thread
From: jb @ 2015-04-20 8:11 UTC (permalink / raw)
To: Pedro Tumusok, bloat
[-- Attachment #1: Type: text/plain, Size: 4931 bytes --]
Whoops I better set that z-index correctly thanks.
It is interesting you mentioned gaming because 10 of the servers are from a
place that rents clan servers. They have to be in top-quality data centres
and not congest anything, because otherwise their customers abandon them
immediately, and they're always watching ping time and packet loss.
Always happy for long-term commitments to servers. I just need permanent
root on an ubuntu 14.10 virtual or real box, or centos I guess. ipv6
dual-stack would be good. Memory cpu and disk unimportant. A 1 gig/e port,
but average usage can be set to stay below whatever they prefer.
I'm going to do some kind of donor recognition thing, so if a donated
server is used it'll show something like a company name and URL. I just
haven't had to do that yet.
thanks
-Justin
On Mon, Apr 20, 2015 at 5:28 PM, Pedro Tumusok <pedro.tumusok@gmail.com>
wrote:
> I noticed on my tests that the label Ping time during test, was displayed
> on top of the tool tips, which meant I only had the y-axis to look at and
> had to guesstimate my ping time.
>
> Another step to help cure the bufferbloat, maybe drawing a few vertical
> threshold lines through the ping times.
> Visualizing that ping over x ms will make VoIP work badly, gaming will
> suffer etc.
> At least VoIP has some hard numbers we can use; gaming is more dependent
> upon the network code of the game and its client-side prediction I guess.
> But still anything over y ms in a fps game means you're dead before you
> even see your opponent.
>
> Are you looking for places to deploy servers? I got a couple of people
> here in Norway and Sweden, I can reach out to and ask about that. If yes,
> what requirements do you have?
>
> Pedro
>
> On Mon, Apr 20, 2015 at 9:00 AM, jb <justin@dslr.net> wrote:
>
>> IPv6 is now available as an option, you just select it in the preferences
>> pane.
>>
>> Unfortunately only one of the test servers (in Michigan) is native dual
>> stack so the test is then fixed to that location. In addition the latency
>> pinging during test is stays as ipv4 traffic, until I setup a web socket
>> server on the ipv6 server.
>>
>> All the amazon google and other cloud servers do not support ipv6. They
>> do support it as an edge network feature, like as a load balancing front
>> end, however the test needs custom server software and custom code, and
>> using a cloud proxy that must then talk to an ipv4 test server inside the
>> cloud is rather useless. It should be native all the way. So until I get
>> more native ipv6 servers, one location it is.
>>
>> Nevertheless as a proof of concept it works. Using the hurricane electric
>> ipv6 tunnel from my australian non ipv6 ISP, I get about 80% of the speed
>> that the local sydney ipv4 test server would give.
>>
>>
>> On Mon, Apr 20, 2015 at 1:15 PM, Aaron Wood <woody77@gmail.com> wrote:
>>
>>> Toke,
>>>
>>> I actually tend to see a bit higher latency with ICMP at the higher
>>> percentiles.
>>>
>>>
>>> http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html
>>>
>>> http://burntchrome.blogspot.com/2014/05/measured-bufferbloat-on-orangefr-dsl.html
>>>
>>> Although the biggest "boost" I've seen ICMP given was on Free.fr's
>>> network:
>>>
>>> http://burntchrome.blogspot.com/2014/01/bufferbloat-or-lack-thereof-on-freefr.html
>>>
>>> -Aaron
>>>
>>> On Sun, Apr 19, 2015 at 11:30 AM, Toke Høiland-Jørgensen <toke@toke.dk>
>>> wrote:
>>>
>>>> Jonathan Morton <chromatix99@gmail.com> writes:
>>>>
>>>> >> Why not? They can be a quite useful measure of how competing traffic
>>>> >> performs when bulk flows congest the link. Which for many
>>>> >> applications is more important then the latency experienced by the
>>>> >> bulk flow itself.
>>>> >
>>>> > One clear objection is that ICMP is often prioritised when UDP is not.
>>>> > So measuring with UDP gives a better indication in those cases.
>>>> > Measuring with a separate TCP flow, such as HTTPing, is better still
>>>> > by some measures, but most truly latency-sensitive traffic does use
>>>> > UDP.
>>>>
>>>> Sure, well I tend to do both. Can't recall ever actually seeing any
>>>> performance difference between the UDP and ICMP latency measurements,
>>>> though...
>>>>
>>>> -Toke
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>
>
> --
> Best regards / Mvh
> Jan Pedro Tumusok
>
>
[-- Attachment #2: Type: text/html, Size: 7500 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-20 3:15 ` Aaron Wood
@ 2015-04-20 7:00 ` jb
[not found] ` <CACQiMXbF9Uk3H=81at-Z9a2fdYKrRtRorSXRg5dBcPB8-aR4Cw@mail.gmail.com>
0 siblings, 1 reply; 183+ messages in thread
From: jb @ 2015-04-20 7:00 UTC (permalink / raw)
Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 2700 bytes --]
IPv6 is now available as an option, you just select it in the preferences
pane.
Unfortunately only one of the test servers (in Michigan) is native dual
stack so the test is then fixed to that location. In addition the latency
pinging during the test stays as ipv4 traffic, until I set up a web socket
server on the ipv6 server.
All the amazon google and other cloud servers do not support ipv6. They do
support it as an edge network feature, like as a load balancing front end,
however the test needs custom server software and custom code, and using a
cloud proxy that must then talk to an ipv4 test server inside the cloud is
rather useless. It should be native all the way. So until I get more native
ipv6 servers, one location it is.
Nevertheless as a proof of concept it works. Using the hurricane electric
ipv6 tunnel from my australian non ipv6 ISP, I get about 80% of the speed
that the local sydney ipv4 test server would give.
On Mon, Apr 20, 2015 at 1:15 PM, Aaron Wood <woody77@gmail.com> wrote:
> Toke,
>
> I actually tend to see a bit higher latency with ICMP at the higher
> percentiles.
>
>
> http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html
>
> http://burntchrome.blogspot.com/2014/05/measured-bufferbloat-on-orangefr-dsl.html
>
> Although the biggest "boost" I've seen ICMP given was on Free.fr's network:
>
> http://burntchrome.blogspot.com/2014/01/bufferbloat-or-lack-thereof-on-freefr.html
>
> -Aaron
>
> On Sun, Apr 19, 2015 at 11:30 AM, Toke Høiland-Jørgensen <toke@toke.dk>
> wrote:
>
>> Jonathan Morton <chromatix99@gmail.com> writes:
>>
>> >> Why not? They can be a quite useful measure of how competing traffic
>> >> performs when bulk flows congest the link. Which for many
>> >> applications is more important then the latency experienced by the
>> >> bulk flow itself.
>> >
>> > One clear objection is that ICMP is often prioritised when UDP is not.
>> > So measuring with UDP gives a better indication in those cases.
>> > Measuring with a separate TCP flow, such as HTTPing, is better still
>> > by some measures, but most truly latency-sensitive traffic does use
>> > UDP.
>>
>> Sure, well I tend to do both. Can't recall ever actually seeing any
>> performance difference between the UDP and ICMP latency measurements,
>> though...
>>
>> -Toke
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
[-- Attachment #2: Type: text/html, Size: 4429 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 18:30 ` Toke Høiland-Jørgensen
2015-04-19 19:15 ` Jonathan Morton
@ 2015-04-20 3:15 ` Aaron Wood
2015-04-20 7:00 ` jb
1 sibling, 1 reply; 183+ messages in thread
From: Aaron Wood @ 2015-04-20 3:15 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, bloat
[-- Attachment #1: Type: text/plain, Size: 1444 bytes --]
Toke,
I actually tend to see a bit higher latency with ICMP at the higher
percentiles.
http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html
http://burntchrome.blogspot.com/2014/05/measured-bufferbloat-on-orangefr-dsl.html
Although the biggest "boost" I've seen ICMP given was on Free.fr's network:
http://burntchrome.blogspot.com/2014/01/bufferbloat-or-lack-thereof-on-freefr.html
-Aaron
On Sun, Apr 19, 2015 at 11:30 AM, Toke Høiland-Jørgensen <toke@toke.dk>
wrote:
> Jonathan Morton <chromatix99@gmail.com> writes:
>
> >> Why not? They can be a quite useful measure of how competing traffic
> >> performs when bulk flows congest the link. Which for many
> >> applications is more important then the latency experienced by the
> >> bulk flow itself.
> >
> > One clear objection is that ICMP is often prioritised when UDP is not.
> > So measuring with UDP gives a better indication in those cases.
> > Measuring with a separate TCP flow, such as HTTPing, is better still
> > by some measures, but most truly latency-sensitive traffic does use
> > UDP.
>
> Sure, well I tend to do both. Can't recall ever actually seeing any
> performance difference between the UDP and ICMP latency measurements,
> though...
>
> -Toke
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 2540 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 21:57 ` Rich Brown
@ 2015-04-19 23:21 ` jb
2015-04-20 14:51 ` David Lang
0 siblings, 1 reply; 183+ messages in thread
From: jb @ 2015-04-19 23:21 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 5699 bytes --]
I just woke up.
I'm sure there are some questions I've missed; however, I'll just put some
info and the plans in here. If I missed something, please reply directly or
to the list.
1. The latency is done with web socket pings; they are lightweight. They
are done to dslreports.com, not to any of the participating servers in a
speed test. I could change this to pick a participating server but unless
there is a really good reason to, it is easier to worry about one web
socket server rather than 21+ of them. (A rough sketch of this kind of
measurement appears after this list.)
2. The test does not do latency pinging on 3G and GPRS
because of a concern I had that with slower lines (a lot of 3G results are
less than half a megabit) the pings would make the speed measured
unreliable. And/or, a slow android phone would be being asked to do too
much. I'll do some tests with and without pinging on a 56kbit shaped line
and see if there is a difference. If there is not, I can enable it.
3. The graph of latency post-test is log X-axis at the moment
because one spike can render the whole graph almost useless with axis
scaling. What I might do is X-Axis breaking, and see if that looks ok.
Alternatively, a two panel graph, one with 0-200ms axis, the other in full
perspective. Or just live with the spike. Or scale to the 95% highest
number and let the other 5% crop. There are tool-tips after all to show the
actual numbers.
4. The selection of speed test locations to use
I have been spending most time getting it right for USA users, and all
servers are mine, not donated ISP servers, so we're only at an early stage
for a testing network. Even so I think having a sever in your city, or ISP,
is no longer as critical as it would be with a single TCP stream. But if
you're on fiber and not in USA then it might take a bit longer before this
is solved. And I'm betting in some cases some ISPs will just not have every
inbound route capable of driving their fastest products to maximum, so
locations may have to be probed by speed, not latency...
5. Displaying latencies while the test is running
could be as simple as displaying some numbers. e.g., 95% confidence and
peak, in three categories: idle, up and down. Or it could be a calculation
showing number of packets in in flight (determined by current speed vs
current latency). Or it could be a bunch of small coloured boxes that light
up in a grid, indicating a queue, one per 1500 byte packet. Or it could be
a color-coded gauge that goes to yellow and red as the bandwidth delay
product rises. Probably there is a simple way to start, and a better way to
do it after some consideration.
6. The congestion window, re-transmits, and bandwidth per stream tables are
very early stage.
I don't think I'll be showing a giant table for too much longer. Instead
the numbers have to be summarised by location and only shown with another
click. Typically if an ISP has a great connection to location X and a poor
one to location Y, then all streams from X show high bandwidth, low RTT,
low RTT variability and appropriate congestion window. Whereas all streams
from Y show high re-transmits, high RTT and big congestion window and/or
slow speeds. The difference between these stats for a google fiber user in
Kansas city. and a poorer quality ISP is startling to say the least.
I also need to do something with PMTU/MSS and TCP option information for
cases where there is weird stuff there. Finally, on the server, tcptrace
can be run and report back much more detailed statistics but if you've used
tcptrace you know there is a huge amount of data, including xplot files,
that just one transfer will generate. But if necessary we can go there.
I try to keep in mind that 99.9% of the people using the tests just want an
upload and download number, and perhaps a statement that says compared to
their peers, things are great. Everything else provokes confusion.
That's all I can think of right now.
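As a companion to point 1, a minimal sketch of an application-level websocket ping loop, using the third-party `websockets` package; the echo URL is hypothetical and this is not the site's actual measurement code (it only assumes the server echoes whatever it receives):

```python
import asyncio
import time
import websockets  # third-party: pip install websockets

async def websocket_rtts(url, count=20):
    """Time `count` small send/echo round trips over a single websocket
    and return the RTT samples in milliseconds."""
    rtts_ms = []
    async with websockets.connect(url) as ws:
        for i in range(count):
            t0 = time.monotonic()
            await ws.send(str(i))
            await ws.recv()
            rtts_ms.append((time.monotonic() - t0) * 1000.0)
    return rtts_ms

# Example against a hypothetical echo endpoint:
# print(asyncio.run(websocket_rtts("wss://example.com/echo")))
```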
On Mon, Apr 20, 2015 at 7:57 AM, Rich Brown <richb.hanover@gmail.com> wrote:
> Hi folks,
>
> A couple comments re: the DSLReports Speed Test.
>
> 1) It's just becoming daylight for Justin, so he hasn't had a chance to
> respond to all these notes. :-)
>
> 2) In a private note earlier this week, he mentioned that he uses
> "websocket pings" which he believes are pretty speedy/low latency.
>
> 3) He does have plans to incorporate stats from the server end's TCP stack
> (cwnd, packet loss, retransmissions, etc.) in a future version of the speed
> test. I imagine it would help him to know what you'd like to see...
>
> Best,
>
> Rich
>
> On Apr 19, 2015, at 1:15 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>
> > On Sun, 19 Apr 2015, Toke Høiland-Jørgensen wrote:
> >
> >> The upload latency figures are definitely iffy, but the download ones
> seem to match roughly what I've measured myself on this link.
> >
> > Also, I don't trust parallel latency measures done by for instance ICMP
> ping. Yes, they indicate something, but what?
> >
> > We need insight into the TCP stack. So how can an application like
> dslreports that runs in a browser, get meaningful performance metrics on
> its measurement TCP-sessions from the OS TCP stack? This is a multi-layer
> problem and I don't see any meaningful progress in this area...
> >
> > --
> > Mikael Abrahamsson email:
> swmike@swm.pp.se_______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 6751 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 17:15 ` Mikael Abrahamsson
2015-04-19 17:43 ` Dave Taht
2015-04-19 17:45 ` Toke Høiland-Jørgensen
@ 2015-04-19 21:57 ` Rich Brown
2015-04-19 23:21 ` jb
2 siblings, 1 reply; 183+ messages in thread
From: Rich Brown @ 2015-04-19 21:57 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
Hi folks,
A couple comments re: the DSLReports Speed Test.
1) It's just becoming daylight for Justin, so he hasn't had a chance to respond to all these notes. :-)
2) In a private note earlier this week, he mentioned that he uses "websocket pings" which he believes are pretty speedy/low latency.
3) He does have plans to incorporate stats from the server end's TCP stack (cwnd, packet loss, retransmissions, etc.) in a future version of the speed test. I imagine it would help him to know what you'd like to see...
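On Linux, the usual hook for that kind of server-side stat is getsockopt(TCP_INFO). A rough sketch follows; the field offsets reflect my reading of the classic struct tcp_info layout (8 bytes of flag fields followed by an array of u32s), so treat it as an assumption about the running kernel rather than anything from this thread:

```python
import socket
import struct

TCP_INFO = getattr(socket, "TCP_INFO", 11)  # option value 11 on Linux

def tcp_info_snapshot(sock):
    """Pull a few fields of struct tcp_info for a connected TCP socket."""
    buf = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    u32 = struct.unpack_from("19I", buf, 8)   # u32 array starts at offset 8
    return {
        "unacked":   u32[4],
        "lost":      u32[6],
        "retrans":   u32[7],
        "rtt_ms":    u32[15] / 1000.0,        # tcpi_rtt is in microseconds
        "rttvar_ms": u32[16] / 1000.0,
        "snd_cwnd":  u32[18],                 # congestion window, in packets
    }
```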
Best,
Rich
On Apr 19, 2015, at 1:15 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Sun, 19 Apr 2015, Toke Høiland-Jørgensen wrote:
>
>> The upload latency figures are definitely iffy, but the download ones seem to match roughly what I've measured myself on this link.
>
> Also, I don't trust parallel latency measures done by for instance ICMP ping. Yes, they indicate something, but what?
>
> We need insight into the TCP stack. So how can an application like dslreports that runs in a browser, get meaningful performance metrics on its measurement TCP-sessions from the OS TCP stack? This is a multi-layer problem and I don't see any meaningful progress in this area...
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se_______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 17:43 ` Dave Taht
@ 2015-04-19 19:22 ` Dave Taht
2015-04-23 17:03 ` Dave Taht
0 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-19 19:22 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
So my "acid test", taken on a linux box over ethernet, through a
cerowrt box with sqm-scripts turned off. This is my gf's comcast
"blast" service, which is rated for 55mbits down and 5.5mbits up. The
new speedtest does indeed show results that have the typical bloat
(nearly a second) a cable modem has when tested solely for up and
down, separately.
http://www.dslreports.com/speedtest/322800
A) I definitely am not particularly huge on defaulting to a log scale
for this graph. Or rather, I would be huge on the graph defaulting to
a linear scale AND huge when you get numbers as bad as this. :)
B) is there a way to specify ipv6?
2) So I did a follow-on speedtest with fq_codel enabled to shape with
sqm. It may be that my upload shaper is a bit over what is desirable
for this link, and I need to repeat.
http://www.dslreports.com/speedtest/322992
3) For giggles, this one is with ecn enabled both on the shaper and on
the tcp I am using, showing that this speedtest site will use ecn when
enabled:
http://www.dslreports.com/speedtest/323101
Some ecn mark results on the shaper for that:
http://pastebin.com/sniePC1M
Both these tests are showing some latency spikes so it does look like
I should tune down the shaper a bit.
3) The rrul, rrul_be, tcp_upload, and tcp_download test data from
netperf-wrapper under both these circumstances (no ecn) is up at:
http://snapon.lab.bufferbloat.net/~d/lorna-ethernet.tgz
Feel free to create graphs and comparisons to suit selves.
A few graphs:
cdf plot of latency (have to use a log scale!!) compared between the
shaped and unshaped alternatives...
http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison.png
what the overbuffered download looks like:
http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison_download_bloated.png
The shaped fq_codeled equivalent:
http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison_download_fq_codeled.png
(log scale comparison again - 10ms of induced latency vs 400+! Aggh)
And a graph of probably minimal usefulness comparing the behavior of
the shaped vs unshaped download flow configurations, for both ipv4 and
ipv6.
http://snapon.lab.bufferbloat.net/~d/lorna-ethernet/comcast_55_speedtest_comparison_download_compared.png
On Sun, Apr 19, 2015 at 10:43 AM, Dave Taht <dave.taht@gmail.com> wrote:
> I am going to do an acid test today. The line I tested last night is a
> comcast cable line (with htb+fq_codel on the link). So I plan to plug
> in the ethernet on both my mac and linux laptops and repeat the
> comparison with the shaper on and off, with both linux and osx.
>
> the *really funny* part of this is that I do not have a single extra
> ethernet cable in my gf's SF apartment, and the less funny part of
> this is the nearest radio shack is now closed....
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 17:45 ` Toke Høiland-Jørgensen
2015-04-19 18:26 ` Jonathan Morton
@ 2015-04-19 19:19 ` Mikael Abrahamsson
1 sibling, 0 replies; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-04-19 19:19 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 952 bytes --]
On Sun, 19 Apr 2015, Toke Høiland-Jørgensen wrote:
> Why not? They can be a quite useful measure of how competing traffic
> performs when bulk flows congest the link. Which for many applications
> is more important then the latency experienced by the bulk flow
> itself...
I agree, but it's also easy to create a two queue system where ICMP is
sent in the second queue. So for stupid FIFOs, yes, they measure what you
want, but for anything else they don't.
> I do agree, though, that other types of measurements are also needed.
> Ideally we should have good latency characteristics for *both* competing
> traffic and the bulk flows themselves.
My thinking was that instead of using ICMP ping, for instance a simulated
UDP VoIP call could be done: send one packet every 100ms (yes, I know most
VoIP is 20ms or so) and use that. ICMP is not a good measurement; UDP
would be more valid.
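A minimal sketch of that kind of isochronous UDP probe, against a hypothetical UDP echo service (a real VoIP-style measurement would also look at one-way delay, jitter and loss, not just RTT):

```python
import socket
import time

def udp_probe(host, port, interval_s=0.1, count=100, timeout_s=0.5):
    """Send one small UDP packet every `interval_s` seconds to an echo
    service and record per-packet RTT in ms; lost replies become None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    rtts_ms = []
    for seq in range(count):
        t0 = time.monotonic()
        sock.sendto(seq.to_bytes(4, "big"), (host, port))
        try:
            sock.recvfrom(64)
            rtts_ms.append((time.monotonic() - t0) * 1000.0)
        except socket.timeout:
            rtts_ms.append(None)   # count as a lost probe
        time.sleep(max(0.0, interval_s - (time.monotonic() - t0)))
    return rtts_ms
```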
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 18:30 ` Toke Høiland-Jørgensen
@ 2015-04-19 19:15 ` Jonathan Morton
2015-04-20 3:15 ` Aaron Wood
1 sibling, 0 replies; 183+ messages in thread
From: Jonathan Morton @ 2015-04-19 19:15 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
> On 19 Apr, 2015, at 21:30, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
>>> Why not? They can be a quite useful measure of how competing traffic
>>> performs when bulk flows congest the link. Which for many
>>> applications is more important then the latency experienced by the
>>> bulk flow itself.
>>
>> One clear objection is that ICMP is often prioritised when UDP is not.
>> So measuring with UDP gives a better indication in those cases.
>> Measuring with a separate TCP flow, such as HTTPing, is better still
>> by some measures, but most truly latency-sensitive traffic does use
>> UDP.
>
> Sure, well I tend to do both. Can't recall ever actually seeing any
> performance difference between the UDP and ICMP latency measurements,
> though...
On a LAN, I usually see that ICMP gets a small bonus from being returned by the kernel rather than needing to wake up a userspace process. That might matter less when the target host is multicore; I’ve often been using an old Thinkpad as a target. In any case, it’s a constant factor without much dependence on network load.
The one ISP I know of on this side of the pond which does any prioritisation - there are many I don’t know about - prioritises small packets in general, though I don’t know where they set their threshold, specifically to help VoIP and gaming traffic. I don’t think they treat ICMP specially beyond that. But they’re a clueful one; there are many others less clueful.
Meanwhile, I’m presently (briefly) in a very rural part of Finland, where old 2G equipment (which I assume was redundant from early populated-area deployments) was relocated to improve coverage. Twenty miles from any decent-sized town, fifteen miles from the nearest 3G tower, and about 7.5 miles from any worthwhile shops (ie. that sell food). That means the local coverage is GPRS - not even EDGE. And I brought some of the more portable components of my lab with me. This should be fun…
- Jonathan Morton
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 18:26 ` Jonathan Morton
@ 2015-04-19 18:30 ` Toke Høiland-Jørgensen
2015-04-19 19:15 ` Jonathan Morton
2015-04-20 3:15 ` Aaron Wood
0 siblings, 2 replies; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-19 18:30 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Jonathan Morton <chromatix99@gmail.com> writes:
>> Why not? They can be a quite useful measure of how competing traffic
>> performs when bulk flows congest the link. Which for many
>> applications is more important then the latency experienced by the
>> bulk flow itself.
>
> One clear objection is that ICMP is often prioritised when UDP is not.
> So measuring with UDP gives a better indication in those cases.
> Measuring with a separate TCP flow, such as HTTPing, is better still
> by some measures, but most truly latency-sensitive traffic does use
> UDP.
Sure, well I tend to do both. Can't recall ever actually seeing any
performance difference between the UDP and ICMP latency measurements,
though...
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 17:45 ` Toke Høiland-Jørgensen
@ 2015-04-19 18:26 ` Jonathan Morton
2015-04-19 18:30 ` Toke Høiland-Jørgensen
2015-04-19 19:19 ` Mikael Abrahamsson
1 sibling, 1 reply; 183+ messages in thread
From: Jonathan Morton @ 2015-04-19 18:26 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
> On 19 Apr, 2015, at 20:45, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
> Why not? They can be a quite useful measure of how competing traffic
> performs when bulk flows congest the link. Which for many applications
> is more important then the latency experienced by the bulk flow
> itself.
One clear objection is that ICMP is often prioritised when UDP is not. So measuring with UDP gives a better indication in those cases. Measuring with a separate TCP flow, such as HTTPing, is better still by some measures, but most truly latency-sensitive traffic does use UDP.
- Jonathan Morton
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 17:15 ` Mikael Abrahamsson
2015-04-19 17:43 ` Dave Taht
@ 2015-04-19 17:45 ` Toke Høiland-Jørgensen
2015-04-19 18:26 ` Jonathan Morton
2015-04-19 19:19 ` Mikael Abrahamsson
2015-04-19 21:57 ` Rich Brown
2 siblings, 2 replies; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-19 17:45 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
Mikael Abrahamsson <swmike@swm.pp.se> writes:
> On Sun, 19 Apr 2015, Toke Høiland-Jørgensen wrote:
>
>> The upload latency figures are definitely iffy, but the download ones
>> seem to match roughly what I've measured myself on this link.
>
> Also, I don't trust parallel latency measures done by for instance
> ICMP ping. Yes, they indicate something, but what?
Why not? They can be a quite useful measure of how competing traffic
performs when bulk flows congest the link, which for many applications
is more important than the latency experienced by the bulk flow
itself...
I do agree, though, that other types of measurements are also needed.
Ideally we should have good latency characteristics for *both* competing
traffic and the bulk flows themselves.
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 17:15 ` Mikael Abrahamsson
@ 2015-04-19 17:43 ` Dave Taht
2015-04-19 19:22 ` Dave Taht
2015-04-19 17:45 ` Toke Høiland-Jørgensen
2015-04-19 21:57 ` Rich Brown
2 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-19 17:43 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: bloat
I am going to do an acid test today. The line I tested last night is a
comcast cable line (with htb+fq_codel on the link). So I plan to plug
in the ethernet on both my mac and linux laptops and repeat the
comparison with the shaper on and off, with both linux and osx.
the *really funny* part of this is that I do not have a single extra
ethernet cable in my gf's SF apartment, and the less funny part of
this is the nearest radio shack is now closed....
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 16:38 ` Toke Høiland-Jørgensen
@ 2015-04-19 17:15 ` Mikael Abrahamsson
2015-04-19 17:43 ` Dave Taht
` (2 more replies)
0 siblings, 3 replies; 183+ messages in thread
From: Mikael Abrahamsson @ 2015-04-19 17:15 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 644 bytes --]
On Sun, 19 Apr 2015, Toke Høiland-Jørgensen wrote:
> The upload latency figures are definitely iffy, but the download ones
> seem to match roughly what I've measured myself on this link.
Also, I don't trust parallel latency measures done by for instance ICMP
ping. Yes, they indicate something, but what?
We need insight into the TCP stack. So how can an application like
dslreports that runs in a browser, get meaningful performance metrics on
its measurement TCP-sessions from the OS TCP stack? This is a multi-layer
problem and I don't see any meaningful progress in this area...
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 15:38 ` Toke Høiland-Jørgensen
@ 2015-04-19 16:38 ` Toke Høiland-Jørgensen
2015-04-19 17:15 ` Mikael Abrahamsson
0 siblings, 1 reply; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-19 16:38 UTC (permalink / raw)
To: jb; +Cc: bloat
Toke Høiland-Jørgensen <toke@toke.dk> writes:
> Yup, works: http://www.dslreports.com/speedtest/321853 -- cool!
Oh, and here is a test with a FIFO queue and shaping turned off:
http://www.dslreports.com/speedtest/322208
The upload latency figures are definitely iffy, but the download ones
seem to match roughly what I've measured myself on this link.
Also, I just noticed the legend on the latency-under-load graph says
"Upoading" :)
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 13:53 ` jb
@ 2015-04-19 15:38 ` Toke Høiland-Jørgensen
2015-04-19 16:38 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-19 15:38 UTC (permalink / raw)
To: jb; +Cc: bloat
jb <justin@dslr.net> writes:
> But all is ok now, the current test does the latency thing
> irregardless.
Yup, works: http://www.dslreports.com/speedtest/321853 -- cool!
So from that it looks like it does the under-load ping to somewhere else
than the nearest endpoint? Also, can you tell what the cause of that
spike just as the upload starts is?
> (Unless you click 3g or GPRS).
Well as someone who likes to break networks, having these measurements
on 3g and GPRS as well would be interesting ;)
What's the reasoning behind turning them off for those?
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 13:10 ` Toke Høiland-Jørgensen
@ 2015-04-19 13:53 ` jb
2015-04-19 15:38 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 183+ messages in thread
From: jb @ 2015-04-19 13:53 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, bloat
[-- Attachment #1: Type: text/plain, Size: 852 bytes --]
Sorry, it's midnight here and I've got to that stage where my mistake ratio
rises above 50%. In the log it says "0.81s Forcing lo-fi mode due to slow
CPU"
which actually meant no animated graphs because the platform is _linux_
(forget Firefox), as often linux has a tenuous connection between gpu and
browsers. And thus no latency measurements.
But all is ok now, the current test does the latency thing regardless.
(Unless you click 3g or GPRS).
On Sun, Apr 19, 2015 at 11:10 PM, Toke Høiland-Jørgensen <toke@toke.dk>
wrote:
> jb <justin@dslr.net> writes:
>
> > Oh my apologies, I see from your log you are using firefox+linux.
>
> Nope, chromium. The first of the two links is mine. The other one I
> pasted from one of the previous mails on the list; and that *does* show
> the under-load latency measurements.
>
> -Toke
>
[-- Attachment #2: Type: text/html, Size: 1350 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 12:56 jb
@ 2015-04-19 13:10 ` Toke Høiland-Jørgensen
2015-04-19 13:53 ` jb
0 siblings, 1 reply; 183+ messages in thread
From: Toke Høiland-Jørgensen @ 2015-04-19 13:10 UTC (permalink / raw)
To: jb; +Cc: bloat
jb <justin@dslr.net> writes:
> Oh my apologies, I see from your log you are using firefox+linux.
Nope, chromium. The first of the two links is mine. The other one I
pasted from one of the previous mails on the list; and that *does* show
the under-load latency measurements.
-Toke
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
@ 2015-04-19 12:56 jb
2015-04-19 13:10 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 183+ messages in thread
From: jb @ 2015-04-19 12:56 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, bloat
[-- Attachment #1: Type: text/plain, Size: 1160 bytes --]
Oh my apologies, I see from your log you are using firefox+linux.
To cut a long story short, the measurement of latency while the test is
running is suppressed for firefox+linux because it was originally tied
to a graphics gauge, and Firefox has terrible canvas performance on
Linux.
Anyway, there is no longer any reason to suppress latency measurements
for this browser+OS combination, because I removed the gauge widget
today. Thanks for persisting with it. Shortly the test will measure
latency while it is running and show it in the result.
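To make that design choice concrete, a hedged sketch of gating only the
canvas gauge on browser/OS while the latency sampling stays on
unconditionally. The user-agent check and flag names are illustrative
assumptions, not the real implementation (TypeScript):

    // Sketch: gate only the canvas gauge on platform, never the measurement.
    const ua = navigator.userAgent;
    const isFirefoxOnLinux = /Firefox\//.test(ua) && /Linux/.test(ua);

    const showAnimatedGauge = !isFirefoxOnLinux; // the canvas-heavy widget
    const sampleLatencyUnderLoad = true;         // cheap websocket pings, always on

    if (sampleLatencyUnderLoad) {
      console.log("starting websocket RTT sampler");
    }
    if (showAnimatedGauge) {
      console.log("starting animated gauge");
    }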
On Sun, Apr 19, 2015 at 10:14 PM, Toke Høiland-Jørgensen <toke@toke.dk>
wrote:
> jb <justin@dslr.net> writes:
>
> > The graph below the upload and download is what is new. (unfortunately
> > you do have to be logged into the site to see this) it shows the
> > latency during the upload and download, color coded. (see attached
> > image).
>
> So where is that graph? I only see the regular up- and down graphs.
>
> http://www.dslreports.com/speedtest/320936
>
> It shows up for this result, though...
>
> http://www.dslreports.com/speedtest/319616
>
> -Toke
>
[-- Attachment #2: Type: text/html, Size: 1809 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 8:38 ` Dave Taht
@ 2015-04-19 12:21 ` jb
0 siblings, 0 replies; 183+ messages in thread
From: jb @ 2015-04-19 12:21 UTC (permalink / raw)
To: Dave Taht, bloat
[-- Attachment #1: Type: text/plain, Size: 4115 bytes --]
Hi Dave, is that rrul_be test stealing upload capacity? The Comcast
scatter graph shows some distinctly popular upload bands for their
different products (6, 12 and 24 Mbit), but your test shows a very low
upload.
Some people on Linux with Firefox got completely tripped up by the
realtime graphs: they seem to choke up the browser, and that ruins the
measured speed. I thought I'd made the test as light as possible on
linux+firefox, but maybe not?
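A common way to keep a realtime graph from choking the browser (and so
skewing the measured speed) is to record samples cheaply and cap the
repaint rate. A minimal sketch, assuming a caller-supplied draw() that
repaints a canvas; not the actual test code (TypeScript):

    // Sketch: record samples as fast as they arrive, repaint at most ~10 fps
    // so drawing does not steal CPU from the transfer itself.
    const points: number[] = [];
    let lastDraw = 0;

    function addSample(mbits: number, draw: (pts: number[]) => void): void {
      points.push(mbits);                    // cheap: just store the sample
      const now = performance.now();
      if (now - lastDraw >= 100) {           // at most one repaint per 100 ms
        lastDraw = now;
        requestAnimationFrame(() => draw(points));
      }
    }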
> This was a test taken *during* a 2 minute rrul_be test.
>
> http://www.dslreports.com/speedtest/320377
On Sun, Apr 19, 2015 at 6:38 PM, Dave Taht <dave.taht@gmail.com> wrote:
> This was a test taken *during* a 2 minute rrul_be test.
>
> http://www.dslreports.com/speedtest/320377
>
> Flent (formerly netperf-wrapper) data here:
> http://snapon.lab.bufferbloat.net/~d/lorna-wifi.tgz
>
> Puzzle over this!
> http://snapon.lab.bufferbloat.net/~d/lorna-wifi/reconcile_this.png and
> the rawer data, in comparison to this and the other new speedtest
> reports.
>
> There are a couple other tests of the same link in the same
> configuration (laptop on lap 10 feet from the access point through a
> wall) [1] in the same dir testing upload and download (without
> simultaneously running the new dslreport tests)
>
> CDF plots are nice. So are mountain plots.
>
> http://snapon.lab.bufferbloat.net/~d/lorna-wifi/wifi_download.png
>
>
>
>
>
>
> [1] I was trying for comfort^H^H^H^H^^H^H^H^Hrealism
>
> On Sun, Apr 19, 2015 at 1:29 AM, Dave Taht <dave.taht@gmail.com> wrote:
> > This test was taken on linux, about 20 feet and one room away from the
> > access point:
> >
> > http://www.dslreports.com/speedtest/320328
> >
> > This was taken on the same box, about 10 feet and one room from the
> > access point.
> >
> > http://www.dslreports.com/speedtest/320340
> >
> > In all cases, the uplink is a comcast box configured for 55Mbit down,
> > 5Mbit up and just to make it weird this is a two router configuration,
> > where the nearest hop is over a powerline box (TP600) before hitting
> > the net.
> >
> > I *like* that the test does not let you switch browser tabs (something
> > I do instinctively when something takes longer than 3 seconds.)
> >
> >
> >
> >
> > On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
> wrote:
> >> Folks,
> >>
> >> I am delighted to pass along the news that Justin has added latency
> measurements into the Speed Test at DSLReports.com.
> >>
> >> Go to: https://www.dslreports.com/speedtest and click the button for
> your Internet link. This controls the number of simultaneous connections
> that get established between your browser and the speedtest server. After
> you run the test, click the green "Results + Share" button to see detailed
> info. For the moment, you need to be logged in to see the latency results.
> There's a "register" link on each page.
> >>
> >> The speed test measures latency using websocket pings: Justin says that
> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just
> ran a test and got 48 msec latency from DSLReports, while ping gstatic.com
> gave 38-40 msec, so they're pretty fast.
> >>
> >> You can leave feedback on this page -
> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
> >>
> >> Enjoy!
> >>
> >> Rich
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> >
> > --
> > Dave Täht
> > Open Networking needs **Open Source Hardware**
> >
> > https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 6631 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 10:53 ` dikshie
@ 2015-04-19 12:11 ` jb
0 siblings, 0 replies; 183+ messages in thread
From: jb @ 2015-04-19 12:11 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 3798 bytes --]
Your test used a mixture of Taiwan, Japan and Dallas, Texas, which were
the three best locations.
Ideally the test should concentrate less on Dallas/Taiwan and more on
Japan, instead of picking randomly among them. I'll be adjusting the
strategy to be more optimal; then it is just a matter of adding servers.
Despite the non-optimal choice, as I mentioned, the test did handle a
near-gigabit connection at Sony, so the servers are not really lacking;
the remaining work is minimising the total latency, especially for
Windows clients.
16.83s stream0 mbits=0.52 (16% of 64k buffer) Dallas, TX, USA 1sec=0.56 5%
16.83s stream1 mbits=0.43 (13% of 64k buffer) Dallas, TX, USA 1sec=0.7 0%
16.83s stream2 mbits=1.94 (61% of 64k buffer) Dallas, TX, USA 1sec=1.62 1%
16.83s stream3 mbits=7.3 (39% of 64k buffer) Tokyo, Japan 1sec=8.77 3%
16.83s stream4 mbits=3.46 (35% of 64k buffer) Changhua, Taiwan 1sec=5.23 2%
16.83s stream5 mbits=1.36 (43% of 64k buffer) Dallas, TX, USA 1sec=1.85 1%
16.83s stream6 mbits=0.78 (24% of 64k buffer) Dallas, TX, USA 1sec=1.39 0%
16.83s stream7 mbits=1.62 (51% of 64k buffer) Dallas, TX, USA 1sec=0.83 1%
16.83s stream8 mbits=0.67 (21% of 64k buffer) Dallas, TX, USA 1sec=0.65 0%
16.83s stream9 mbits=8.64 (46% of 64k buffer) Tokyo, Japan 1sec=6.82 3%
16.83s stream10 mbits=3.81 (38% of 64k buffer) Changhua, Taiwan 1sec=3.11 2%
16.83s stream11 mbits=0.27 (8% of 64k buffer) Dallas, TX, USA 1sec=0.6 0%
16.83s stream12 mbits=1.67 (17% of 64k buffer) Changhua, Taiwan 1sec=3.24 1%
16.83s stream13 mbits=5.83 (31% of 64k buffer) Tokyo, Japan 1sec=6.04 3%
16.83s stream14 mbits=3.17 (32% of 64k buffer) Changhua, Taiwan 1sec=3.42 1%
16.83s stream15 mbits=3.46 (35% of 64k buffer) Changhua, Taiwan 1sec=2.75 2%
16.83s stream16 mbits=1.19 (37% of 64k buffer) Dallas, TX, USA 1sec=0.72 0%
16.83s stream17 mbits=6.07 (32% of 64k buffer) Tokyo, Japan 1sec=13.31 2%
16.83s stream18 mbits=0.8 (25% of 64k buffer) Dallas, TX, USA 1sec=1.39 0%
16.83s stream19 mbits=1.44 (45% of 64k buffer) Dallas, TX, USA 1sec=0.65 1%
16.83s stream20 mbits=4.11 (42% of 64k buffer) Changhua, Taiwan 1sec=2.37 2%
16.83s stream21 mbits=1.25 (39% of 64k buffer) Dallas, TX, USA 1sec=0.67 1%
16.83s stream22 mbits=6.13 (33% of 64k buffer) Tokyo, Japan 1sec=6.75 3%
16.83s stream23 mbits=4.12 (22% of 64k buffer) Tokyo, Japan 1sec=6.04 2%
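A hedged sketch of the weighting strategy described above: allocate the
parallel streams across candidate servers in inverse proportion to
their idle latency rather than uniformly at random. The server list,
RTT figures and function names are illustrative, not the site's actual
selection logic (TypeScript):

    // Sketch: allocate N parallel streams across servers, favouring the
    // lower-latency ones instead of picking uniformly at random.
    interface Server { name: string; rttMs: number; }

    function allocateStreams(servers: Server[], nStreams: number): string[] {
      const weights = servers.map(s => 1 / s.rttMs); // lower RTT => more streams
      const total = weights.reduce((a, b) => a + b, 0);
      const out: string[] = [];
      for (let i = 0; i < nStreams; i++) {
        let r = Math.random() * total;
        let idx = servers.length - 1;        // fallback for rounding error
        for (let j = 0; j < servers.length; j++) {
          r -= weights[j];
          if (r <= 0) { idx = j; break; }
        }
        out.push(servers[idx].name);
      }
      return out;
    }

    // Illustrative RTTs only, not measurements from this thread:
    console.log(allocateStreams(
      [{ name: "Tokyo", rttMs: 10 },
       { name: "Changhua", rttMs: 40 },
       { name: "Dallas", rttMs: 130 }], 24));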
On Sun, Apr 19, 2015 at 8:53 PM, dikshie <dikshie@gmail.com> wrote:
> On Sun, Apr 19, 2015 at 9:57 AM, Rich Brown <richb.hanover@gmail.com>
> wrote:
> > Folks,
> >
> > I am delighted to pass along the news that Justin has added latency
> measurements into the Speed Test at DSLReports.com.
> >
> > Go to: https://www.dslreports.com/speedtest and click the button for
> your Internet link. This controls the number of simultaneous connections
> that get established between your browser and the speedtest server. After
> you run the test, click the green "Results + Share" button to see detailed
> info. For the moment, you need to be logged in to see the latency results.
> There's a "register" link on each page.
> >
> > The speed test measures latency using websocket pings: Justin says that
> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just
> ran a test and got 48 msec latency from DSLReports, while ping gstatic.com
> gave 38-40 msec, so they're pretty fast.
> >
> > You can leave feedback on this page -
> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>
>
>
> here from JP:
> http://www.dslreports.com/speedtest/320711
>
> there are not many servers in JP.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 5169 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 9:17 ` MUSCARIELLO Luca IMT/OLN
@ 2015-04-19 12:03 ` jb
0 siblings, 0 replies; 183+ messages in thread
From: jb @ 2015-04-19 12:03 UTC (permalink / raw)
To: MUSCARIELLO Luca IMT/OLN, bloat
[-- Attachment #1: Type: text/plain, Size: 3062 bytes --]
I would not say the server situation in Europe is ideal, but it is
enough for this French result I saw float past in the last few days:
http://www.dslreports.com/speedtest/308767
757 megabit down, 191 megabit up.
If you want to see other near-gigabit results (mostly US, but others
too, including Sony), there is a page that collects them:
http://www.dslreports.com/speedtest/gigabit
However, the Wanadoo ISP might not have such great connections; I'd have
to look at it. What does your ISP-hosted test show for you?
There are servers at Google Europe, Amazon Ireland, and Internap
(Frankfurt).
On Sun, Apr 19, 2015 at 7:17 PM, MUSCARIELLO Luca IMT/OLN <
luca.muscariello@orange.com> wrote:
> Nice tool.
>
> Here is a test from France.
> http://www.dslreports.com/speedtest/320422
>
> There are not so many servers in Europe I guess.
>
>
>
>
> On 04/19/2015 10:29 AM, Dave Taht wrote:
>
>> This test was taken on linux, about 20 feet and one room away from the
>> access point:
>>
>> http://www.dslreports.com/speedtest/320328
>>
>> This was taken on the same box, about 10 feet and one room from the
>> access point.
>>
>> http://www.dslreports.com/speedtest/320340
>>
>> In all cases, the uplink is a comcast box configured for 55Mbit down,
>> 5Mbit up and just to make it weird this is a two router configuration,
>> where the nearest hop is over a powerline box (TP600) before hitting
>> the net.
>>
>> I *like* that the test does not let you switch browser tabs (something
>> I do instinctively when something takes longer than 3 seconds.)
>>
>>
>>
>>
>> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com>
>> wrote:
>>
>>> Folks,
>>>
>>> I am delighted to pass along the news that Justin has added latency
>>> measurements into the Speed Test at DSLReports.com.
>>>
>>> Go to: https://www.dslreports.com/speedtest and click the button for
>>> your Internet link. This controls the number of simultaneous connections
>>> that get established between your browser and the speedtest server. After
>>> you run the test, click the green "Results + Share" button to see detailed
>>> info. For the moment, you need to be logged in to see the latency results.
>>> There's a "register" link on each page.
>>>
>>> The speed test measures latency using websocket pings: Justin says that
>>> a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just
>>> ran a test and got 48 msec latency from DSLReports, while ping
>>> gstatic.com gave 38-40 msec, so they're pretty fast.
>>>
>>> You can leave feedback on this page -
>>> http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest
>>> - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>>
>>> Enjoy!
>>>
>>> Rich
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>>
>>
>>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 4766 bytes --]
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 0:57 Rich Brown
2015-04-19 4:01 ` Dave Taht
2015-04-19 8:29 ` Dave Taht
@ 2015-04-19 10:53 ` dikshie
2015-04-19 12:11 ` jb
2 siblings, 1 reply; 183+ messages in thread
From: dikshie @ 2015-04-19 10:53 UTC (permalink / raw)
To: bloat
On Sun, Apr 19, 2015 at 9:57 AM, Rich Brown <richb.hanover@gmail.com> wrote:
> Folks,
>
> I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
>
> Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
>
> The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
>
> You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
here from JP:
http://www.dslreports.com/speedtest/320711
there are not many servers in JP.
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 8:29 ` Dave Taht
2015-04-19 8:38 ` Dave Taht
@ 2015-04-19 9:17 ` MUSCARIELLO Luca IMT/OLN
2015-04-19 12:03 ` jb
1 sibling, 1 reply; 183+ messages in thread
From: MUSCARIELLO Luca IMT/OLN @ 2015-04-19 9:17 UTC (permalink / raw)
To: Dave Taht, Rich Brown; +Cc: cerowrt-devel, bloat
Nice tool.
Here is a test from France.
http://www.dslreports.com/speedtest/320422
There are not so many servers in Europe I guess.
On 04/19/2015 10:29 AM, Dave Taht wrote:
> This test was taken on linux, about 20 feet and one room away from the
> access point:
>
> http://www.dslreports.com/speedtest/320328
>
> This was taken on the same box, about 10 feet and one room from the
> access point.
>
> http://www.dslreports.com/speedtest/320340
>
> In all cases, the uplink is a comcast box configured for 55Mbit down,
> 5Mbit up and just to make it weird this is a two router configuration,
> where the nearest hop is over a powerline box (TP600) before hitting
> the net.
>
> I *like* that the test does not let you switch browser tabs (something
> I do instinctively when something takes longer than 3 seconds.)
>
>
>
>
> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>> Folks,
>>
>> I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
>>
>> Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
>>
>> The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
>>
>> You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>
>> Enjoy!
>>
>> Rich
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 8:29 ` Dave Taht
@ 2015-04-19 8:38 ` Dave Taht
2015-04-19 12:21 ` jb
2015-04-19 9:17 ` MUSCARIELLO Luca IMT/OLN
1 sibling, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-19 8:38 UTC (permalink / raw)
To: Rich Brown; +Cc: cerowrt-devel, bloat
This was a test taken *during* a 2 minute rrul_be test.
http://www.dslreports.com/speedtest/320377
Flent (formerly netperf-wrapper) data here:
http://snapon.lab.bufferbloat.net/~d/lorna-wifi.tgz
Puzzle over this!
http://snapon.lab.bufferbloat.net/~d/lorna-wifi/reconcile_this.png and
the rawer data, in comparison to this and the other new speedtest
reports.
There are a couple other tests of the same link in the same
configuration (laptop on lap 10 feet from the access point through a
wall) [1] in the same dir testing upload and download (without
simultaneously running the new dslreport tests)
CDF plots are nice. So are mountain plots.
http://snapon.lab.bufferbloat.net/~d/lorna-wifi/wifi_download.png
[1] I was trying for comfort^H^H^H^H^^H^H^H^Hrealism
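Since CDF plots come up here, a tiny sketch of turning raw latency
samples into empirical-CDF points suitable for such a plot; the sample
numbers are made up, not data from the linked runs (TypeScript):

    // Sketch: empirical CDF of latency samples, the kind of curve a
    // latency-CDF plot shows.
    function latencyCdf(samplesMs: number[]): Array<{ ms: number; p: number }> {
      const sorted = [...samplesMs].sort((a, b) => a - b);
      return sorted.map((ms, i) => ({ ms, p: (i + 1) / sorted.length }));
    }

    // Illustrative input, not data from the linked test runs:
    console.log(latencyCdf([42, 45, 51, 48, 120, 44, 300, 47]));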
On Sun, Apr 19, 2015 at 1:29 AM, Dave Taht <dave.taht@gmail.com> wrote:
> This test was taken on linux, about 20 feet and one room away from the
> access point:
>
> http://www.dslreports.com/speedtest/320328
>
> This was taken on the same box, about 10 feet and one room from the
> access point.
>
> http://www.dslreports.com/speedtest/320340
>
> In all cases, the uplink is a comcast box configured for 55Mbit down,
> 5Mbit up and just to make it weird this is a two router configuration,
> where the nearest hop is over a powerline box (TP600) before hitting
> the net.
>
> I *like* that the test does not let you switch browser tabs (something
> I do instinctively when something takes longer than 3 seconds.)
>
>
>
>
> On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>> Folks,
>>
>> I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
>>
>> Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
>>
>> The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
>>
>> You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>>
>> Enjoy!
>>
>> Rich
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 0:57 Rich Brown
2015-04-19 4:01 ` Dave Taht
@ 2015-04-19 8:29 ` Dave Taht
2015-04-19 8:38 ` Dave Taht
2015-04-19 9:17 ` MUSCARIELLO Luca IMT/OLN
2015-04-19 10:53 ` dikshie
2 siblings, 2 replies; 183+ messages in thread
From: Dave Taht @ 2015-04-19 8:29 UTC (permalink / raw)
To: Rich Brown; +Cc: cerowrt-devel, bloat
This test was taken on linux, about 20 feet and one room away from the
access point:
http://www.dslreports.com/speedtest/320328
This was taken on the same box, about 10 feet and one room from the
access point.
http://www.dslreports.com/speedtest/320340
In all cases, the uplink is a comcast box configured for 55Mbit down,
5Mbit up and just to make it weird this is a two router configuration,
where the nearest hop is over a powerline box (TP600) before hitting
the net.
I *like* that the test does not let you switch browser tabs (something
I do instinctively when something takes longer than 3 seconds.)
On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com> wrote:
> Folks,
>
> I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
>
> Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
>
> The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
>
> You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>
> Enjoy!
>
> Rich
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* Re: [Bloat] DSLReports Speed Test has latency measurement built-in
2015-04-19 0:57 Rich Brown
@ 2015-04-19 4:01 ` Dave Taht
2015-04-20 14:33 ` Colin Dearborn
2015-04-19 8:29 ` Dave Taht
2015-04-19 10:53 ` dikshie
2 siblings, 1 reply; 183+ messages in thread
From: Dave Taht @ 2015-04-19 4:01 UTC (permalink / raw)
To: Rich Brown; +Cc: cerowrt-devel, bloat
What I see here is the same old latency, upload, download series, not
latency and bandwidth at the same time.
http://www.dslreports.com/speedtest/319616
On Sat, Apr 18, 2015 at 5:57 PM, Rich Brown <richb.hanover@gmail.com> wrote:
> Folks,
>
> I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
>
> Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
>
> The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
>
> You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
>
> Enjoy!
>
> Rich
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
^ permalink raw reply [flat|nested] 183+ messages in thread
* [Bloat] DSLReports Speed Test has latency measurement built-in
@ 2015-04-19 0:57 Rich Brown
2015-04-19 4:01 ` Dave Taht
` (2 more replies)
0 siblings, 3 replies; 183+ messages in thread
From: Rich Brown @ 2015-04-19 0:57 UTC (permalink / raw)
To: cerowrt-devel, bloat
Folks,
I am delighted to pass along the news that Justin has added latency measurements into the Speed Test at DSLReports.com.
Go to: https://www.dslreports.com/speedtest and click the button for your Internet link. This controls the number of simultaneous connections that get established between your browser and the speedtest server. After you run the test, click the green "Results + Share" button to see detailed info. For the moment, you need to be logged in to see the latency results. There's a "register" link on each page.
The speed test measures latency using websocket pings: Justin says that a zero-latency link can give 1000 Hz - faster than a full HTTP ping. I just ran a test and got 48 msec latency from DSLReports, while ping gstatic.com gave 38-40 msec, so they're pretty fast.
You can leave feedback on this page - http://www.dslreports.com/forum/r29910594-FYI-for-general-feedback-on-the-new-speedtest - or wait 'til Justin creates a new Bufferbloat topic on the forums.
Enjoy!
Rich
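As a rough illustration of the websocket-ping idea, a minimal
browser-side sketch that periodically sends a timestamp over a
websocket echo and records the round trip; the endpoint URL and
sampling interval are placeholders, not the real DSLReports service
(TypeScript):

    // Sketch: sample round-trip latency over a websocket echo endpoint.
    // "wss://example.invalid/echo" is a placeholder, not the real server.
    function sampleLatency(url: string, intervalMs: number,
                           onSample: (rttMs: number) => void): () => void {
      const ws = new WebSocket(url);
      let timer: number | undefined;
      ws.onopen = () => {
        timer = window.setInterval(() => ws.send(String(performance.now())),
                                   intervalMs);
      };
      ws.onmessage = (ev) => {
        const sent = Number(ev.data);           // echo returns our timestamp
        if (!Number.isNaN(sent)) onSample(performance.now() - sent);
      };
      return () => {                            // call to stop sampling
        if (timer !== undefined) clearInterval(timer);
        ws.close();
      };
    }

    const samples: number[] = [];
    const stop = sampleLatency("wss://example.invalid/echo", 100,
                               rtt => samples.push(rtt));
    // ... run the upload/download, then: stop();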
^ permalink raw reply [flat|nested] 183+ messages in thread
end of thread, other threads:[~2015-05-08 14:22 UTC | newest]
Thread overview: 183+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-19 5:26 [Bloat] DSLReports Speed Test has latency measurement built-in jb
2015-04-19 7:36 ` David Lang
2015-04-19 7:48 ` David Lang
2015-04-19 9:33 ` jb
2015-04-19 10:45 ` David Lang
2015-04-19 8:28 ` Alex Burr
2015-04-19 10:20 ` Sebastian Moeller
2015-04-19 10:46 ` Jonathan Morton
2015-04-19 16:30 ` Sebastian Moeller
2015-04-19 17:41 ` Jonathan Morton
2015-04-19 19:40 ` Sebastian Moeller
2015-04-19 20:53 ` Jonathan Morton
2015-04-21 2:56 ` Simon Barber
2015-04-21 4:15 ` jb
2015-04-21 4:47 ` David Lang
2015-04-21 7:35 ` jb
2015-04-21 9:14 ` Steinar H. Gunderson
2015-04-21 14:20 ` David Lang
2015-04-21 14:25 ` David Lang
2015-04-21 14:28 ` David Lang
2015-04-21 22:13 ` jb
2015-04-21 22:39 ` Aaron Wood
2015-04-21 23:17 ` jb
2015-04-22 2:14 ` Simon Barber
2015-04-22 2:56 ` jb
2015-04-22 14:32 ` Simon Barber
2015-04-22 17:35 ` David Lang
2015-04-23 1:37 ` Simon Barber
2015-04-24 16:54 ` David Lang
2015-04-24 17:00 ` Rick Jones
2015-04-21 9:37 ` Jonathan Morton
2015-04-21 10:35 ` jb
2015-04-22 4:04 ` Steinar H. Gunderson
2015-04-22 4:28 ` Eric Dumazet
2015-04-22 8:51 ` [Bloat] RE : " luca.muscariello
2015-04-22 12:02 ` jb
2015-04-22 13:08 ` Jonathan Morton
[not found] ` <14ce17a7810.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
2015-04-22 14:15 ` Simon Barber
2015-04-22 13:50 ` [Bloat] " Eric Dumazet
2015-04-22 14:09 ` Steinar H. Gunderson
2015-04-22 15:26 ` [Bloat] RE : " luca.muscariello
2015-04-22 15:44 ` [Bloat] " Eric Dumazet
2015-04-22 16:35 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 17:16 ` Eric Dumazet
2015-04-22 17:24 ` Steinar H. Gunderson
2015-04-22 17:28 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 17:45 ` MUSCARIELLO Luca IMT/OLN
2015-04-23 5:27 ` MUSCARIELLO Luca IMT/OLN
2015-04-23 6:48 ` Eric Dumazet
[not found] ` <CAH3Ss96VwE_fWNMOMOY4AgaEnVFtCP3rPDHSudOcHxckSDNMqQ@mail.gmail.com>
2015-04-23 10:08 ` jb
2015-04-24 8:18 ` Sebastian Moeller
2015-04-24 8:29 ` Toke Høiland-Jørgensen
2015-04-24 8:55 ` Sebastian Moeller
2015-04-24 9:02 ` Toke Høiland-Jørgensen
2015-04-24 13:32 ` jb
2015-04-24 13:58 ` Toke Høiland-Jørgensen
2015-04-24 16:51 ` David Lang
2015-04-25 3:15 ` Simon Barber
2015-04-25 4:04 ` Dave Taht
2015-04-25 4:26 ` Simon Barber
2015-04-25 6:03 ` Sebastian Moeller
2015-04-27 16:39 ` Dave Taht
2015-04-28 7:18 ` Sebastian Moeller
2015-04-28 8:01 ` David Lang
2015-04-28 8:19 ` Toke Høiland-Jørgensen
2015-04-28 15:42 ` David Lang
2015-04-28 8:38 ` Sebastian Moeller
2015-04-28 12:09 ` Rich Brown
2015-04-28 15:26 ` David Lang
2015-04-28 15:39 ` David Lang
2015-04-28 11:04 ` Mikael Abrahamsson
2015-04-28 11:49 ` Sebastian Moeller
2015-04-28 12:24 ` Mikael Abrahamsson
2015-04-28 13:44 ` Sebastian Moeller
2015-04-28 19:09 ` Rick Jones
2015-04-28 14:06 ` Dave Taht
2015-04-28 14:02 ` Dave Taht
2015-05-06 5:08 ` Simon Barber
2015-05-06 8:50 ` Sebastian Moeller
2015-05-06 15:30 ` Jim Gettys
2015-05-06 18:03 ` Sebastian Moeller
2015-05-06 20:25 ` Jonathan Morton
2015-05-06 20:43 ` Toke Høiland-Jørgensen
2015-05-07 7:33 ` Sebastian Moeller
2015-05-07 4:29 ` Mikael Abrahamsson
2015-05-07 7:08 ` jb
2015-05-07 7:18 ` Jonathan Morton
2015-05-07 7:24 ` Mikael Abrahamsson
2015-05-07 7:40 ` Sebastian Moeller
2015-05-07 9:16 ` Mikael Abrahamsson
2015-05-07 10:44 ` jb
2015-05-07 11:36 ` Sebastian Moeller
2015-05-07 11:44 ` Mikael Abrahamsson
2015-05-07 13:10 ` Jim Gettys
2015-05-07 13:18 ` Mikael Abrahamsson
2015-05-07 13:14 ` jb
2015-05-07 13:26 ` Neil Davies
2015-05-07 14:45 ` Simon Barber
2015-05-07 22:27 ` Dave Taht
2015-05-07 22:45 ` Dave Taht
2015-05-07 23:09 ` Dave Taht
2015-05-08 2:05 ` jb
2015-05-08 4:16 ` David Lang
2015-05-08 3:54 ` Eric Dumazet
2015-05-08 4:20 ` Dave Taht
2015-05-08 13:20 ` [Bloat] DSLReports Jitter/PDV test Rich Brown
2015-05-08 14:22 ` jb
2015-05-07 7:37 ` [Bloat] DSLReports Speed Test has latency measurement built-in Sebastian Moeller
2015-05-07 7:19 ` Mikael Abrahamsson
2015-05-07 6:19 ` Sebastian Moeller
2015-04-25 3:23 ` Simon Barber
2015-04-24 15:20 ` Bill Ver Steeg (versteb)
2015-04-25 2:24 ` Simon Barber
2015-04-23 10:17 ` renaud sallantin
2015-04-23 14:10 ` Eric Dumazet
2015-04-23 14:38 ` renaud sallantin
2015-04-23 15:52 ` Jonathan Morton
2015-04-23 16:00 ` Simon Barber
2015-04-23 13:17 ` MUSCARIELLO Luca IMT/OLN
2015-04-22 18:22 ` Eric Dumazet
2015-04-22 18:39 ` [Bloat] Pacing --- was " MUSCARIELLO Luca IMT/OLN
2015-04-22 19:05 ` Jonathan Morton
2015-04-22 15:59 ` [Bloat] RE : " Steinar H. Gunderson
2015-04-22 16:16 ` Eric Dumazet
2015-04-22 16:19 ` Dave Taht
2015-04-22 17:15 ` Rick Jones
2015-04-19 12:14 ` [Bloat] " Toke Høiland-Jørgensen
-- strict thread matches above, loose matches on Subject: below --
2015-04-19 12:56 jb
2015-04-19 13:10 ` Toke Høiland-Jørgensen
2015-04-19 13:53 ` jb
2015-04-19 15:38 ` Toke Høiland-Jørgensen
2015-04-19 16:38 ` Toke Høiland-Jørgensen
2015-04-19 17:15 ` Mikael Abrahamsson
2015-04-19 17:43 ` Dave Taht
2015-04-19 19:22 ` Dave Taht
2015-04-23 17:03 ` Dave Taht
2015-04-23 18:04 ` Mikael Abrahamsson
2015-04-23 18:08 ` Jonathan Morton
2015-04-23 20:19 ` jb
2015-04-23 20:39 ` Dave Taht
2015-04-24 21:45 ` Rich Brown
2015-04-25 1:14 ` jb
2015-04-23 21:44 ` Rich Brown
2015-04-23 22:22 ` Dave Taht
2015-04-23 22:29 ` Dave Taht
2015-04-24 1:58 ` Rich Brown
2015-04-24 2:40 ` Dave Taht
2015-04-24 3:20 ` Jim Gettys
2015-04-24 3:39 ` Dave Taht
2015-04-24 4:04 ` Dave Taht
2015-04-24 4:17 ` Dave Taht
2015-04-24 16:13 ` Rick Jones
2015-04-24 5:00 ` jb
2015-04-27 16:28 ` Dave Taht
2015-04-24 16:09 ` Rick Jones
2015-04-24 13:49 ` Pedro Tumusok
2015-04-23 22:51 ` David Lang
2015-04-24 1:38 ` Rich Brown
2015-04-24 4:16 ` Mikael Abrahamsson
2015-04-24 4:24 ` Dave Taht
2015-04-19 17:45 ` Toke Høiland-Jørgensen
2015-04-19 18:26 ` Jonathan Morton
2015-04-19 18:30 ` Toke Høiland-Jørgensen
2015-04-19 19:15 ` Jonathan Morton
2015-04-20 3:15 ` Aaron Wood
2015-04-20 7:00 ` jb
[not found] ` <CACQiMXbF9Uk3H=81at-Z9a2fdYKrRtRorSXRg5dBcPB8-aR4Cw@mail.gmail.com>
2015-04-20 8:11 ` jb
2015-04-19 19:19 ` Mikael Abrahamsson
2015-04-19 21:57 ` Rich Brown
2015-04-19 23:21 ` jb
2015-04-20 14:51 ` David Lang
2015-04-20 15:51 ` Dave Taht
2015-04-20 16:15 ` Dave Taht
2015-04-19 0:57 Rich Brown
2015-04-19 4:01 ` Dave Taht
2015-04-20 14:33 ` Colin Dearborn
2015-04-19 8:29 ` Dave Taht
2015-04-19 8:38 ` Dave Taht
2015-04-19 12:21 ` jb
2015-04-19 9:17 ` MUSCARIELLO Luca IMT/OLN
2015-04-19 12:03 ` jb
2015-04-19 10:53 ` dikshie
2015-04-19 12:11 ` jb