* [Bloat] Bufferbloat test measurements (was: Open Source Speed Test)
@ 2016-08-27 11:46 Rich Brown
2016-08-27 15:19 ` Kathleen Nichols
0 siblings, 1 reply; 9+ messages in thread
From: Rich Brown @ 2016-08-27 11:46 UTC (permalink / raw)
To: bloat
It has always been my intent to define bufferbloat as *latency*. The first sentence on www.bufferbloat.net says, "Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too much data."
That definition focuses on observable/measurable values. It sidesteps objections I've seen on the forums, "How could $TEST measure the size of buffers?"
So what matters is whether the buffers (of any size) are filling up.
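In that observable-values spirit, the quantity a test can report is nothing more than latency-under-load minus idle latency. A minimal illustration (not any particular test's formula):

```python
from statistics import median

# Minimal illustration of bufferbloat as *latency*: the extra delay a
# loaded link shows over the same link when idle. No knowledge of
# buffer sizes is needed, only RTT samples in each state.
def bufferbloat_ms(idle_rtts_ms, loaded_rtts_ms):
    """Added latency (ms) under load, from two lists of RTT samples."""
    return median(loaded_rtts_ms) - median(idle_rtts_ms)
```

For example, idle pings around 11 ms that climb to around 200 ms during a saturating download indicate roughly 189 ms of bloat.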
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [Bloat] Bufferbloat test measurements (was: Open Source Speed Test)
2016-08-27 11:46 [Bloat] Bufferbloat test measurements (was: Open Source Speed Test) Rich Brown
@ 2016-08-27 15:19 ` Kathleen Nichols
2016-08-27 16:03 ` [Bloat] Bufferbloat test measurements Alan Jenkins
0 siblings, 1 reply; 9+ messages in thread
From: Kathleen Nichols @ 2016-08-27 15:19 UTC (permalink / raw)
To: bloat
Yeah.
I admit to muddying the waters because I think of the size of a buffer as
being in megabytes and the size of a queue (latency) as being in
milliseconds. I think the tests attempt to measure the worst possible
latency/queue that can occur on a path.
On 8/27/16 4:46 AM, Rich Brown wrote:
> It has always been my intent to define bufferbloat as *latency*. The
> first sentence on www.bufferbloat.net says, "Bufferbloat is the
> undesirable latency that comes from a router or other network
> equipment buffering too much data."
>
> That definition focuses on observable/measurable values. It sidesteps
> objections I've seen on the forums, "How could $TEST measure the size
> of buffers?"
>
> So what matters is whether the buffers (of any size) are filling up.
> _______________________________________________ Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Bufferbloat test measurements
2016-08-27 15:19 ` Kathleen Nichols
@ 2016-08-27 16:03 ` Alan Jenkins
2016-08-27 17:37 ` Kathleen Nichols
0 siblings, 1 reply; 9+ messages in thread
From: Alan Jenkins @ 2016-08-27 16:03 UTC (permalink / raw)
To: Kathleen Nichols, bloat
That's the simplest measure of bufferbloat though :).
Do you have a criticism in terms of dslreports.com? I think it's fairly
transparent, showing idle vs. download vs. upload. The headline
figures are an average, and you can look at all the data points. (You
can increase the measurement frequency if you're specifically interested
in that.)
[random selection from Google]
http://www.dslreports.com/speedtest/419540
http://forum.kitz.co.uk/index.php?topic=15620.0
It's more aggressive than a single file download, but that's not
deliberately to exacerbate bufferbloat. It's just designed to measure
performance of prolonged downloads / streaming, in a competitively short
test. "For our busy lives", as the overused saying goes.
(The initial summary only gives a grade. The figure wouldn't be one of
the headlines their ISP advertises. Saying "100ms" would confuse
people. And the tests they're used to / compare with, show idle latency
instead.)
On 27/08/16 16:19, Kathleen Nichols wrote:
> Yeah.
>
> I admit to muddying the waters because I think of the size of a buffer as
> being in megabytes and the size of a queue (latency) as being in
> milliseconds. I think the tests attempt to measure the worst possible
> latency/queue that can occur on a path.
>
> On 8/27/16 4:46 AM, Rich Brown wrote:
>> It has always been my intent to define bufferbloat as *latency*. The
>> first sentence on www.bufferbloat.net says, "Bufferbloat is the
>> undesirable latency that comes from a router or other network
>> equipment buffering too much data."
>>
>> That definition focuses on observable/measurable values. It sidesteps
>> objections I've seen on the forums, "How could $TEST measure the size
>> of buffers?"
>>
>> So what matters is whether the buffers (of any size) are filling up.
>> _______________________________________________ Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] Bufferbloat test measurements
2016-08-27 16:03 ` [Bloat] Bufferbloat test measurements Alan Jenkins
@ 2016-08-27 17:37 ` Kathleen Nichols
2016-08-27 19:18 ` Alan Jenkins
2016-08-27 23:39 ` jb
0 siblings, 2 replies; 9+ messages in thread
From: Kathleen Nichols @ 2016-08-27 17:37 UTC (permalink / raw)
To: Alan Jenkins, bloat
In-line below. Only for geeks.
On 8/27/16 9:03 AM, Alan Jenkins wrote:
>
> That's the simplest measure of bufferbloat though :).
Don't I know! :) Have spent a couple of years figuring out how to measure
experienced delay...
>
> Do you have a criticism in terms of dslreports.com? I think it's fairly
> transparent, showing idle v.s. download v.s. upload. The headline
> figures are an average, and you can look at all the data points. (You
> can increase the measurement frequency if you're specifically interested
> in that).
"Criticism" seems too harsh. The uplink and downlink speed stuff is great
and agrees with all other measures. The "bufferbloat" grade is, I think,
sort of confusing. Also, it's not clear where the queue the test builds up
is located. It could be in the ISP or the home.

So, I ran the test while I was also streaming a Netflix video. Under the
column "RTT/jitter Avg" the test lists values that range from 654 to 702
with +/- 5.2 to 20.8 ms (for the four servers). I couldn't figure out what
that means. I can see the delay ramp up over the test (the video stream
also follows that ramp, though its normal delay ranges from about 1 ms to
about 40 ms). If I take the RT delay experienced by the packets to/from
those servers, I get median values between 391 and 429 ms. The IQRs were
about 240 ms to 536-616 ms. The maximum values were all just over 700 ms,
agreeing with the dslreports number. But that number is listed as an
average, so I don't understand. Also, what is the jitter? I looked around
for info on the numbers but didn't find it. Probably I just missed some
obvious thing to click on.

But the thing is, I've been doing a lot of monitoring of my link and I
don't normally see those kinds of delays. In the note I put out a bit ago,
there is definitely this bursting behavior that is particularly indulged
in by Netflix and Google, but when our downstream bandwidth went up, those
bursts no longer caused such long transient delays (okay, duh). So I'm not
sure who should be tagged as the "responsible party" for the grade that
the test gives. Nor am I convinced that users have to assume it means they
are going to see those kinds of delays. But this is a general issue with
active measurement.

It's not a criticism of the test, but maybe of the presentation of the
result.
Kathie
>
> [random selection from Google]
> http://www.dslreports.com/speedtest/419540
> http://forum.kitz.co.uk/index.php?topic=15620.0
>
> It's more aggressive than a single file download, but that's not
> deliberately to exacerbate bufferbloat. It's just designed to measure
> performance of prolonged downloads / streaming, in a competitively short
> test. "For our busy lives", as the overused saying goes.
>
> (The initial summary only gives a grade. The figure wouldn't be one of
> the headlines their ISP advertises. Saying "100ms" would confuse
> people. And the tests they're used to / compare with, show idle latency
> instead.)
>
> On 27/08/16 16:19, Kathleen Nichols wrote:
>> Yeah.
>>
>> I admit to muddying the waters because I think of the size of a buffer as
>> being in megabytes and the size of a queue (latency) as being in
>> milliseconds. I think the tests attempt to measure the worst possible
>> latency/queue that can occur on a path.
>>
>> On 8/27/16 4:46 AM, Rich Brown wrote:
>>> It has always been my intent to define bufferbloat as *latency*. The
>>> first sentence on www.bufferbloat.net says, "Bufferbloat is the
>>> undesirable latency that comes from a router or other network
>>> equipment buffering too much data."
>>>
>>> That definition focuses on observable/measurable values. It sidesteps
>>> objections I've seen on the forums, "How could $TEST measure the size
>>> of buffers?"
>>>
>>> So what matters is whether the buffers (of any size) are filling up.
>>> _______________________________________________ Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Bufferbloat test measurements
2016-08-27 17:37 ` Kathleen Nichols
@ 2016-08-27 19:18 ` Alan Jenkins
2016-08-27 19:39 ` Alan Jenkins
2016-08-27 23:39 ` jb
1 sibling, 1 reply; 9+ messages in thread
From: Alan Jenkins @ 2016-08-27 19:18 UTC (permalink / raw)
To: Kathleen Nichols, bloat
On 27/08/16 18:37, Kathleen Nichols wrote:
> In-line below. Only for geeks.
Present.
> On 8/27/16 9:03 AM, Alan Jenkins wrote:
>> That's the simplest measure of bufferbloat though :).
> Don't I know! :) Have spent a couple of years figuring out how to measure
> experienced delay...
>> Do you have a criticism in terms of dslreports.com? I think it's fairly
>> transparent, showing idle v.s. download v.s. upload. The headline
>> figures are an average, and you can look at all the data points. (You
>> can increase the measurement frequency if you're specifically interested
>> in that).
> "Criticism" seems too harsh. The uplink and downlink speed stuff is great
> and agrees with all other measures. The "bufferbloat" grade is, I think,
> sort
> of confusing. Also, it's not clear where the queue the test builds up is
> located.
> It could be in ISP or home. So, I ran the test while I was also streaming a
> Netflix video. Under the column "RTT/jitter Avg" the test lists values that
> range from 654 to 702 with +/- 5.2 to 20.8 ms (for the four servers). I
> couldn't
> figure out what that means.
My assumption is the RTT is just read out from the TCP socket, i.e. it's
one of the kernel statistics.
http://stackoverflow.com/questions/16231600/fetching-the-tcp-rtt-in-linux/16232250#16232250
Looking in `ss.c` as suggested, the second figure shown by `ss` is
`rttvar`. And that's the kernel's measure of RTT variation. If my
assumption is right, that would tell us where the "jitter" figure comes
from as well.
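If the figures do come from the kernel's TCP_INFO, they can be read straight off a connected socket. A Linux-only sketch (the struct offsets for `tcpi_rtt`/`tcpi_rttvar` are my assumption from the standard Linux `struct tcp_info` layout; verify against your kernel headers):

```python
# Linux-only sketch: read the kernel's smoothed RTT and RTT variance
# (tcpi_rtt / tcpi_rttvar, both in microseconds) from a connected TCP
# socket via getsockopt(TCP_INFO). Assumed layout: 8 bytes of u8 fields,
# then u32 fields; tcpi_rtt is the 16th u32, i.e. byte offset 68.
import socket
import struct

def tcp_rtt_us(sock):
    """Return (rtt_us, rttvar_us) for a connected TCP socket."""
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    rtt, rttvar = struct.unpack_from("II", info, 68)
    return rtt, rttvar

if __name__ == "__main__":
    # Loopback demo: make a local connection and read its RTT stats.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    print(tcp_rtt_us(cli))
    cli.close(); conn.close(); srv.close()
```

On loopback the values are tiny; on a bloated link the same call shows the inflated RTT the kernel is tracking.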
> I can see the delay ramp up over the test
> (the video
> stream also follows that ramp though it's normal delay ranges from about
> 1ms to
> about 40ms). If I took the RT delay experienced by the packets to/from
> those servers,
> I got median values between 391 and 429 ms. The IQRs were about 240ms to
> 536-616ms. The maximum values were all just over 700ms, agreeing with
> the dslreports
> number. But that number is listed as an average so I don't understand?
> Also what is the
> jitter. I looked around for info on the numbers but I didn't find it.
> Probably I just totally
> missed some obvious thing to click on.
>
> But the thing is, I've been doing a lot of monitoring of my link and I
> don't normally see
> those kinds of delays. In the note I put out a bit ago, there is
> definitely this bursting
> behavior that is particularly indulged in by netflix and google but when
> our downstream
> bandwidth went up, those bursts no longer caused such long transient
> delays (okay, duh).
> So, I'm not sure who should be tagged as the "responsible party" for the
> grade that the
> test gives. Nor am I convinced that users have to assume that means they
> are going to see
> those kinds of delays. But this is a general issue with active measurement.
>
> It's not a criticism of the test, but maybe of the presentation of the
> result.
>
> Kathie
Thanks. I wasn't clear what you meant the first time, particularly
about the 40ms figure. Very comprehensive explanation of your point.
It's easy to rant about ubiquitous dumb FIFOs in consumer equipment
:). Whereas the queue on the ISP side of the link ("download") can be
more complex and varied. Mine gets good results on dslreports.com, but
the latency isn't always so good during torrent downloads.
I do have doubts about the highly "multi-threaded" test method.
dslreports lets you dial it down manually (Preferences). As you say, a
better real-world test is to just use your connection normally and run
smokeping... or an online "line monitor" like
http://www.thinkbroadband.com/ping :).
Alan
* Re: [Bloat] Bufferbloat test measurements
2016-08-27 19:18 ` Alan Jenkins
@ 2016-08-27 19:39 ` Alan Jenkins
0 siblings, 0 replies; 9+ messages in thread
From: Alan Jenkins @ 2016-08-27 19:39 UTC (permalink / raw)
To: Kathleen Nichols, bloat
On 27/08/16 20:18, Alan Jenkins wrote:
> On 27/08/16 18:37, Kathleen Nichols wrote:
> So, I ran the test while I was also streaming a
>> Netflix video. Under the column "RTT/jitter Avg" the test lists
>> values that
>> range from 654 to 702 with +/- 5.2 to 20.8 ms (for the four servers). I
>> couldn't
>> figure out what that means.
>
> My assumption is the RTT is just read out from the TCP socket, i.e.
> it's one of the kernel statistics.
>
> http://stackoverflow.com/questions/16231600/fetching-the-tcp-rtt-in-linux/16232250#16232250
>
>
> Looking in `ss.c` as suggested, the second figure shown by `ss` is
> `rttvar`. And that's the kernel's measure of RTT variation. If my
> assumption is right, that would tell us where the "jitter" figure
> comes from as well.
Sadly I don't think anyone's volunteered to restore the GMane web
interface yet. NNTP is still searchable though.
I'm not sure whether the author is saying they _are_ showing kernel
stats, or whether they ended up having to do their own RTT calculation.
(This is the Justin who runs dslreports.com).
-------- Forwarded Message --------
Subject: Re: delay-under-load really helps diagnose real world problems
Date: Sat, 25 Apr 2015 12:23:28 +1000
From: jb <justin-rsQtcOny2EM@public.gmane.org>
To: Matthew Ford <ford-pYXoxzOOsG8@public.gmane.org>, bloat
<bloat-JXvr2/1DY2fm6VMwtOF2vx4hnT+Y9+D1@public.gmane.org>
Newsgroups: gmane.network.routing.bufferbloat
References:
<AE7F97DB5FEE054088D82E836BD15BE9319C249E@xmb-aln-x05.cisco.com>
<22118EDD-F497-46F3-AC6A-A75C389DFBAB@isoc.org>
I have made the following changes a few hours ago:

Bloat latency stats now run on every connection except GPRS and 3G; if
you don't see them during the test (mobile), they should be there
afterwards.

Download phase waits for quiescent latency measurements, defined as
less than 2x the lowest ping seen, or it simply gives up waiting and
continues.
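The quiescence rule described above can be sketched as a simple loop (a hypothetical sketch, not the site's actual code; `probe_rtt_ms` stands in for whatever latency probe the test uses):

```python
# Hypothetical sketch of the quiescence wait: keep probing latency until
# a sample falls below 2x the lowest ping seen so far, or give up after
# a fixed number of probes and continue with the test anyway.
def wait_for_quiescence(probe_rtt_ms, lowest_ping_ms, factor=2.0, max_probes=20):
    """Return True once latency is quiescent, False if we gave up."""
    for _ in range(max_probes):
        if probe_rtt_ms() < factor * lowest_ping_ms:
            return True
    return False
```

For example, with a lowest ping of 30 ms, probes of 500, 300, 90 then 40 ms would settle on the fourth sample (40 < 60).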
The flow stats table has combined stats per server, so the megabits per
stream are summed and the other measurements are averaged. I'm not
entirely trusting of the RTT and RTT variance numbers from Linux; they
come from the TCP_INFO structure but are probably heavily biased toward
the end of the connection rather than the entire connection. However,
the retransmits are definitely OK and the congestion window packet
count looks about right too.

That's it.
[-- Attachment #2: Type: text/html, Size: 3993 bytes --]
* Re: [Bloat] Bufferbloat test measurements
2016-08-27 17:37 ` Kathleen Nichols
2016-08-27 19:18 ` Alan Jenkins
@ 2016-08-27 23:39 ` jb
2016-08-28 0:14 ` Kathleen Nichols
1 sibling, 1 reply; 9+ messages in thread
From: jb @ 2016-08-27 23:39 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: Alan Jenkins, bloat
Hi Kathleen,
On Sun, Aug 28, 2016 at 3:37 AM, Kathleen Nichols <nichols@pollere.com>
wrote:
> In-line below. Only for geeks.
>
> On 8/27/16 9:03 AM, Alan Jenkins wrote:
> >
> > That's the simplest measure of bufferbloat though :).
>
> Don't I know! :) Have spent a couple of years figuring out how to measure
> experienced delay...
> >
> > Do you have a criticism in terms of dslreports.com? I think it's fairly
> > transparent, showing idle v.s. download v.s. upload. The headline
> > figures are an average, and you can look at all the data points. (You
> > can increase the measurement frequency if you're specifically interested
> > in that).
>
> "Criticism" seems too harsh. The uplink and downlink speed stuff is great
> and agrees with all other measures. The "bufferbloat" grade is, I think,
> sort
> of confusing. Also, it's not clear where the queue the test builds up is
> located.
> It could be in ISP or home.
I haven't found any cases where the queue builds up in the ISP yet.
The people who get bad grades and do something about it always fix them
at their home router/modem, either by introducing the kinds of
queue-management stacks developed on this list, or by applying rougher
fixes such as capping speeds. When they do that, their grade goes to A+
and the latency does not rise during the fastest transfer rates their
connections run. Which is, after all, the easiest issue to demonstrate
with excess buffer sizes.

If you consider the ISP relative to an individual's connection, it is
unlikely that said individual's (necessarily capped) service running at
full speed can fill the shared, much higher speed buffers within the
ISP infrastructure, assuming the perfect world where ISPs are not
over-subscribed.
> So, I ran the test while I was also streaming a
> Netflix video. Under the column "RTT/jitter Avg" the test lists values that
> range from 654 to 702 with +/- 5.2 to 20.8 ms (for the four servers). I
> couldn't
> figure out what that means.
The jitter variation is a footnote, and not something that determines
the grade. I use it to discover connections that are poor or lossy, but
not to draw any conclusions about the bufferbloat grade. The entire
bufferbloat grade is the data in the graph that ties together the
latency experienced during idle, upload and download. If you turn on
'hires' bufferbloat in preferences, you get finer-grained sampling of
that latency.
> I can see the delay ramp up over the test
> (the video
> stream also follows that ramp though it's normal delay ranges from about
> 1ms to
> about 40ms). If I took the RT delay experienced by the packets to/from
> those servers,
> I got median values between 391 and 429 ms. The IQRs were about 240ms to
> 536-616ms. The maximum values were all just over 700ms, agreeing with
> the dslreports
> number. But that number is listed as an average so I don't understand?
> Also what is the
> jitter. I looked around for info on the numbers but I didn't find it.
> Probably I just totally
> missed some obvious thing to click on.
>
Again, the grade for bufferbloat is based on the latency over the
course of the (hopefully) full-speed download vs. the full-speed upload
vs. the latency during idle. The latency is measured to a different and
uninvolved server. Generally, if a user does not use their connection
to full capacity, latency is acceptable (but obviously the buffers are
still there, lurking). Bursts and so on can discover them again, but
the effects are transient.
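The actual dslreports grading formula isn't spelled out in this thread, but the idea of mapping idle-vs-loaded latency to a letter can be sketched like this (the thresholds below are illustrative assumptions, not the site's real ones):

```python
from statistics import median

# Hypothetical grading sketch: compare latency measured during load
# (download/upload) against idle latency, and map the added delay to a
# letter grade. Thresholds are illustrative, not dslreports' real ones.
def bloat_grade(idle_rtts_ms, loaded_rtts_ms):
    added = median(loaded_rtts_ms) - median(idle_rtts_ms)
    if added < 5:
        return "A+"
    if added < 30:
        return "A"
    if added < 60:
        return "B"
    if added < 200:
        return "C"
    if added < 400:
        return "D"
    return "F"
```

Under this sketch, a 1 Mbit/s upload whose latency climbs by a full second during the test lands squarely on "F", matching the behaviour described above.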
>
> But the thing is, I've been doing a lot of monitoring of my link and I
> don't normally see
> those kinds of delays. In the note I put out a bit ago, there is
> definitely this bursting
> behavior that is particularly indulged in by netflix and google but when
> our downstream
> bandwidth went up, those bursts no longer caused such long transient
> delays (okay, duh).
> So, I'm not sure who should be tagged as the "responsible party" for the
> grade that the
> test gives. Nor am I convinced that users have to assume that means they
> are going to see
> those kinds of delays. But this is a general issue with active measurement.
>
The responsible party is almost always the modem, especially DSL
modems, which have buffers that are completely wrong for the speed set
by the product or ISP. For instance, I still get an "F" grade because
during upload, with my very poor 1-megabit upload DSL sync, there is an
enormous buffer and the latency rises to over a second, making typing
anything impossible. The modem tends to be the gatekeeper for the
narrowest part of the flow of data and so is the first to be revealed
as the culprit by the tests. You can't really fix any bufferbloat
issues without first sizing that buffer right. Then, after that, there
may be other things to correct.

From looking at the data, the amount of excess latency tends to drop
with higher product speed. So people on fiber and the faster cable
connections don't see nearly such high latency uplifts. I imagine to
some extent this is because cable modem firmware has buffers sized for
what they expect the highest-speed connections to be, and so the faster
products tend to operate in the area where it isn't as severe. Although
one could argue that a 10 ms increase on a gigabit connection is just
as bothersome as a 1000 ms increase on a 1-megabit upload like mine,
and both deserve an "F" grade, the fact is the consumer with the
gigabit connection is never going to notice that kind of latency
increase unless they're running a stock-trading program at home!

There is various information on the site about how to fix an "F" grade,
and it is always a challenge because it tends to boil down to: get rid
of your ISP-provided modem, or pressure your ISP to introduce better
home equipment, because it is bound to be the root cause of the worst
of it, and you can't fix the rest without fixing that first. And that
is often a difficult message to deliver.
> It's not a criticism of the test, but maybe of the presentation of the
> result.
>
> Kathie
> >
> > [random selection from Google]
> > http://www.dslreports.com/speedtest/419540
> > http://forum.kitz.co.uk/index.php?topic=15620.0
> >
> > It's more aggressive than a single file download, but that's not
> > deliberately to exacerbate bufferbloat. It's just designed to measure
> > performance of prolonged downloads / streaming, in a competitively short
> > test. "For our busy lives", as the overused saying goes.
> >
> > (The initial summary only gives a grade. The figure wouldn't be one of
> > the headlines their ISP advertises. Saying "100ms" would confuse
> > people. And the tests they're used to / compare with, show idle latency
> > instead.)
> >
> > On 27/08/16 16:19, Kathleen Nichols wrote:
> >> Yeah.
> >>
> >> I admit to muddying the waters because I think of the size of a buffer
> as
> >> being in megabytes and the size of a queue (latency) as being in
> >> milliseconds. I think the tests attempt to measure the worst possible
> >> latency/queue that can occur on a path.
> >>
> >> On 8/27/16 4:46 AM, Rich Brown wrote:
> >>> It has always been my intent to define bufferbloat as *latency*. The
> >>> first sentence on www.bufferbloat.net says, "Bufferbloat is the
> >>> undesirable latency that comes from a router or other network
> >>> equipment buffering too much data."
> >>>
> >>> That definition focuses on observable/measurable values. It sidesteps
> >>> objections I've seen on the forums, "How could $TEST measure the size
> >>> of buffers?"
> >>>
> >>> So what matters is whether the buffers (of any size) are filling up.
> >>> _______________________________________________ Bloat mailing list
> >>> Bloat@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/bloat
> >>>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Bufferbloat test measurements
2016-08-27 23:39 ` jb
@ 2016-08-28 0:14 ` Kathleen Nichols
2016-08-28 2:23 ` jb
0 siblings, 1 reply; 9+ messages in thread
From: Kathleen Nichols @ 2016-08-28 0:14 UTC (permalink / raw)
To: jb; +Cc: bloat
Hi, Justin,
Thanks for the explanations. So the grade is for the user not the ISP?
I just have to point out that the below jumped out at me a bit. A user
can fully use the link bandwidth capacity and not have an unacceptable
latency. After all, that's the goal of AQM. But, yes, there are those
pesky lurking buffers in the path which the user might unhappily
use to their full capacity and then latency can be unacceptable.
Kathie
On 8/27/16 4:39 PM, jb wrote:
>
> Generally if a user does not use their connection to full capacity, latency
> is acceptable (but obviously the buffers are still there, lurking).
> Bursts and
> so on can discover them again but the effects are transient.
* Re: [Bloat] Bufferbloat test measurements
2016-08-28 0:14 ` Kathleen Nichols
@ 2016-08-28 2:23 ` jb
0 siblings, 0 replies; 9+ messages in thread
From: jb @ 2016-08-28 2:23 UTC (permalink / raw)
To: Kathleen Nichols; +Cc: bloat
The grade for one speed test represents something that the user may
already recognise as a problem, and may do something about. The
aggregation of grades can highlight ISPs that are afflicted with
end-user hardware that could be improved, I suppose. Is it even
possible to detect ISPs afflicted with buffer-related latency issues
within their infrastructure, in an environment where the people running
tests already have huge CPE or Wi-Fi buffers?

Yep, the ideal situation is for people to use their entire link
bandwidth while any additional stream stays almost as low in latency as
idle. That is what a grade of A+ highlights, and many people have got
there after seeing a poor grade and doing something about it.

Regarding capping of speeds: I'm just pointing out that a "cheap fix"
for some people has been to throttle especially the upstream bandwidth
(somehow) to just below the maximum upstream rate discovered, which
reduces the opportunity to fill a large upload buffer in the modem. It
is a kludge, but without replacing equipment or re-flashing firmware it
is sometimes the only option open to them.
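On a Linux router, such a cap can be applied with `tc` (a config sketch under assumptions: `eth0` is your WAN interface and the discovered upstream maximum is about 1 Mbit/s; treat the device name and rates as placeholders):

```shell
# Kludge sketch: shape egress to just below the measured upstream sync
# rate so the modem's oversized upload buffer never fills. The queue
# then forms here, where it stays short, instead of in the modem.
# eth0 and 900kbit are placeholders for your own WAN device and speed.
tc qdisc add dev eth0 root tbf rate 900kbit burst 1540 latency 50ms

# Inspect the shaper; remove it again with: tc qdisc del dev eth0 root
tc -s qdisc show dev eth0
```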
On Sun, Aug 28, 2016 at 10:14 AM, Kathleen Nichols <nichols@pollere.com>
wrote:
>
> Hi, Justin,
> Thanks for the explanations. So the grade is for the user not the ISP?
>
> I just have to point out that the below jumped out at me a bit. A user
> can fully use the link bandwidth capacity and not have an unacceptable
> latency. After all, that's the goal of AQM. But, yes, there are those
> pesky lurking buffers in the path which the user might unhappily
> use to their full capacity and then latency can be unacceptable.
>
> Kathie
>
> On 8/27/16 4:39 PM, jb wrote:
> >
> > Generally if a user does not use their connection to full capacity,
> latency
> > is acceptable (but obviously the buffers are still there, lurking).
> > Bursts and
> > so on can discover them again but the effects are transient.
>