General list for discussing Bufferbloat
 help / color / mirror / Atom feed
* [Bloat] Check out www.speedof.me - no Flash
@ 2014-07-20 13:19 Rich Brown
  2014-07-20 13:27 ` [Bloat] [Cerowrt-devel] " David P. Reed
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Rich Brown @ 2014-07-20 13:19 UTC (permalink / raw)
  To: cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 733 bytes --]

Doc Searls (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/) mentioned in passing that he uses a new speed test website. I checked it out, and it was very cool…

www.speedof.me is an all-HTML5 website that seems to make accurate measurements of the up and download speeds of your internet connection. It’s also very attractive, and the real-time plots of the speed show interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)

Now if we could get them to a) allow longer/bigger tests to circumvent PowerBoost, and b) include a latency measurement so people could point out their bufferbloated equipment. 

I'm going to send them a note. Anything else I should add?

Rich

[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 496 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel] Check out www.speedof.me - no Flash
  2014-07-20 13:19 [Bloat] Check out www.speedof.me - no Flash Rich Brown
@ 2014-07-20 13:27 ` David P. Reed
  2014-07-20 18:41 ` [Bloat] SIMET: nationwide bw/latency/jitter test effort in Brazil Henrique de Moraes Holschuh
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 24+ messages in thread
From: David P. Reed @ 2014-07-20 13:27 UTC (permalink / raw)
  To: Rich Brown, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 1223 bytes --]

Include Doc in the discussion you have... need his email address?

On Jul 20, 2014, Rich Brown <richb.hanover@gmail.com> wrote:
>Doc Searls
>(http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/)
>mentioned in passing that he uses a new speed test website. I checked
>it out, and it was very cool…
>
>www.speedof.me is an all-HTML5 website that seems to make accurate
>measurements of the up and download speeds of your internet connection.
>It’s also very attractive, and the real-time plots of the speed show
>interesting info. (screen shot at:
>http://richb-hanover.com/speedof-me/)
>
>Now if we could get them to a) allow longer/bigger tests to circumvent
>PowerBoost, and b) include a latency measurement so people could point
>out their bufferbloated equipment.
>
>I'm going to send them a note. Anything else I should add?
>
>Rich
>
>
>------------------------------------------------------------------------
>
>_______________________________________________
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.

[-- Attachment #2: Type: text/html, Size: 2080 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [Bloat] SIMET: nationwide bw/latency/jitter test effort in Brazil
  2014-07-20 13:19 [Bloat] Check out www.speedof.me - no Flash Rich Brown
  2014-07-20 13:27 ` [Bloat] [Cerowrt-devel] " David P. Reed
@ 2014-07-20 18:41 ` Henrique de Moraes Holschuh
  2014-07-23  5:36 ` [Bloat] Check out www.speedof.me - no Flash Alex Elsayed
       [not found] ` <03292B76-5273-4912-BB18-90E95C16A9F5@pnsol.com>
  3 siblings, 0 replies; 24+ messages in thread
From: Henrique de Moraes Holschuh @ 2014-07-20 18:41 UTC (permalink / raw)
  To: Rich Brown; +Cc: cerowrt-devel, bloat

On Sun, 20 Jul 2014, Rich Brown wrote:
> Now if we could get them to a) allow longer/bigger tests to circumvent
> PowerBoost, and b) include a latency measurement so people could point out
> their bufferbloated equipment. 

You may find this interesting:

http://simet.nic.br/

NIC.br has it deployed nation-wide in Brazil, with remote endpoint servers
in most public IXPs ("PTTMetro" IXPs, which are also managed by NIC.br).

The end user client is available over the web (java applet), and also as a
mobile app for Android and iOS.  There's also an openwrt-based firmware for
a very inexpensive home-router box ("SIMET box"), which they've been giving
out to the general public so as to increase their test coverage.

SIMET does bandwidth (TCP and UDP) as well as latency and jitter
measurements, but it doesn't attempt to measure bufferbloat so I never
thought of mentioning it around here.

One interesting detail is that SIMET measurements are trusted (as in "can
have legal standing in Brazil"): the SIMET system is certified[1] by
INMETRO, Brazil's national body for scientific, industrial and legal
metrology.

There's not much material in english about SIMET, unfortunately.

Outside of Brazil, I like the Berkeley ICSI Netalyzr
(http://netalyzr.icsi.berkeley.edu/).  VERY comprehensive measurements and
several network diagnostics... and it does measure worst-case steady-state
latency (aka bufferbloat).

[1] "certified" is not exactly correct.  I don't know an exact word in
english for "has been verified to be correctly calibrated in accordance with
the official measurement standards and procedures", in portuguese:
"aferido".

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-20 13:19 [Bloat] Check out www.speedof.me - no Flash Rich Brown
  2014-07-20 13:27 ` [Bloat] [Cerowrt-devel] " David P. Reed
  2014-07-20 18:41 ` [Bloat] SIMET: nationwide bw/latency/jitter test effort in Brazil Henrique de Moraes Holschuh
@ 2014-07-23  5:36 ` Alex Elsayed
       [not found] ` <03292B76-5273-4912-BB18-90E95C16A9F5@pnsol.com>
  3 siblings, 0 replies; 24+ messages in thread
From: Alex Elsayed @ 2014-07-23  5:36 UTC (permalink / raw)
  To: bloat; +Cc: cerowrt-devel

Rich Brown wrote:

> Doc Searls
> (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/)
> mentioned in passing that he uses a new speed test website. I checked it
> out, and it was very cool…
> 
> www.speedof.me is an all-HTML5 website that seems to make accurate
> measurements of the up and download speeds of your internet connection.
> It’s also very attractive, and the real-time plots of the speed show
> interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
> 
> Now if we could get them to a) allow longer/bigger tests to circumvent
> PowerBoost, and b) include a latency measurement so people could point out
> their bufferbloated equipment.
> 
> I'm going to send them a note. Anything else I should add?
> 
> Rich

So one of my friends ran this, and this might look familiar to some members 
of the list: http://speedof.me/show.php?img=140722143659-43302.png

(Comcast 50d/10u, Seattle-ish WA)


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
       [not found] ` <03292B76-5273-4912-BB18-90E95C16A9F5@pnsol.com>
@ 2014-07-25 12:09   ` Rich Brown
  2014-07-25 12:24     ` Neil Davies
  0 siblings, 1 reply; 24+ messages in thread
From: Rich Brown @ 2014-07-25 12:09 UTC (permalink / raw)
  To: Neil Davies; +Cc: cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 3170 bytes --]

Neil,

Thanks for the note and the observations. My thoughts:

1) I note that speedof.me does seem to overstate the speed results. At my home, it reports 5.98mbps down, and 638kbps up, while betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers similar to the betterspeedtest.net script.)

2) I think we're in agreement about the peak upload rate that you point out is too high. Their measurement code runs in the browser. It seems likely that the browser pumps out a few big packets before getting flow control information, thus giving the impression that they can send at a higher rate. This comports with the obvious decay that ramps toward the long-term rate. 

3) But that long-term speed should be at or below the theoretical long-term rate, not above it. 

Two experiments for you to try:

a) What does betterspeedtest.sh show? (It's in the latest CeroWrt, in /usr/lib/CeroWrtScripts, or get it from github: https://github.com/richb-hanover/CeroWrtScripts )

b) What does www.speedtest.net show?

I will add your question (about the inaccuracy) to the note that I want to send out to speedof.me this weekend. I will also ask that they include min/max latency measurements to their test, and an option to send for > 10 seconds to minimize any effect of PowerBoost...

Best regards,

Rich



On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com> wrote:

> Rich
> 
> You may want to check how accurate they are to start.
> 
> I just ran a “speed test” on my line (which I have complete control and visibility over the various network elements) and it reports an average “speed” (in the up direction) that is in excess of the capacity of the line, it reports the maximum rate at nearly twice the best possible rate of the ADSL connection.
> 
> Doesn’t matter how pretty it is, if its not accurate it is of no use. This is rather ironic as the web site claims it is the “smartest and most accurate”!
> 
> Neil
> 
> <speedof_me_14-07-25.png>
> 
> PS pretty clear to me what mistake they’ve made in the measurement process - its to do with incorrect inference and hence missing the buffering effects.
> 
> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com> wrote:
> 
>> Doc Searls (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/) mentioned in passing that he uses a new speed test website. I checked it out, and it was very cool…
>> 
>> www.speedof.me is an all-HTML5 website that seems to make accurate measurements of the up and download speeds of your internet connection. It’s also very attractive, and the real-time plots of the speed show interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
>> 
>> Now if we could get them to a) allow longer/bigger tests to circumvent PowerBoost, and b) include a latency measurement so people could point out their bufferbloated equipment. 
>> 
>> I'm going to send them a note. Anything else I should add?
>> 
>> Rich
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
> 


[-- Attachment #1.2: Type: text/html, Size: 4800 bytes --]

[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 496 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 12:09   ` Rich Brown
@ 2014-07-25 12:24     ` Neil Davies
  2014-07-25 14:17       ` Sebastian Moeller
  2014-07-25 15:46       ` [Bloat] " Rich Brown
  0 siblings, 2 replies; 24+ messages in thread
From: Neil Davies @ 2014-07-25 12:24 UTC (permalink / raw)
  To: Rich Brown; +Cc: cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 4570 bytes --]

Rich

I have a deep worry over this style of single point measurement - and hence speed - as an appropriate measure. We know, and have evidence, that throughput/utilisation is not a good proxy for the network delivering suitable quality of experience. We work with organisation (Telco’s, large system integrators etc) where we spend a lot of time having to “undo” the consequences of “maximising speed”. Just like there is more to life than work, there is more to QoE than speed.

For more specific comments see inline

On 25 Jul 2014, at 13:09, Rich Brown <richb.hanover@gmail.com> wrote:

> Neil,
> 
> Thanks for the note and the observations. My thoughts:
> 
> 1) I note that speedof.me does seem to overstate the speed results. At my home, it reports 5.98mbps down, and 638kbps up, while betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers similar to the betterspeedtest.net script.)
> 
> 2) I think we're in agreement about the peak upload rate that you point out is too high. Their measurement code runs in the browser. It seems likely that the browser pumps out a few big packets before getting flow control information, thus giving the impression that they can send at a higher rate. This comports with the obvious decay that ramps toward the long-term rate. 

I think that its simpler than that, it is measuring the rate at which it can push packets out the interface - its real time rate is precisely that - it can not be the rate being reported by the far end, it can never exceed the limiting link. The long term average (if it is like other speed testers we’ve had to look into) is being measured at the TCP/IP SDU level by measuring the difference in time between the first and last timestamps of data stream and dividing that into the total data sent. Their “over-estimate” is because there are packets buffered in the CPE that have left the machine but not arrived at the far end.

> 
> 3) But that long-term speed should be at or below the theoretical long-term rate, not above it. 

Agreed, but in this case knowing the sync rate already defines that maximum.

> 
> Two experiments for you to try:
> 
> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt, in /usr/lib/CeroWrtScripts, or get it from github: https://github.com/richb-hanover/CeroWrtScripts )
> 
> b) What does www.speedtest.net show?
> 
> I will add your question (about the inaccuracy) to the note that I want to send out to speedof.me this weekend. I will also ask that they include min/max latency measurements to their test, and an option to send for > 10 seconds to minimize any effect of PowerBoost...
> 
> Best regards,
> 
> Rich
> 
> 
> 
> On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com> wrote:
> 
>> Rich
>> 
>> You may want to check how accurate they are to start.
>> 
>> I just ran a “speed test” on my line (which I have complete control and visibility over the various network elements) and it reports an average “speed” (in the up direction) that is in excess of the capacity of the line, it reports the maximum rate at nearly twice the best possible rate of the ADSL connection.
>> 
>> Doesn’t matter how pretty it is, if its not accurate it is of no use. This is rather ironic as the web site claims it is the “smartest and most accurate”!
>> 
>> Neil
>> 
>> <speedof_me_14-07-25.png>
>> 
>> PS pretty clear to me what mistake they’ve made in the measurement process - its to do with incorrect inference and hence missing the buffering effects.
>> 
>> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com> wrote:
>> 
>>> Doc Searls (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/) mentioned in passing that he uses a new speed test website. I checked it out, and it was very cool…
>>> 
>>> www.speedof.me is an all-HTML5 website that seems to make accurate measurements of the up and download speeds of your internet connection. It’s also very attractive, and the real-time plots of the speed show interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
>>> 
>>> Now if we could get them to a) allow longer/bigger tests to circumvent PowerBoost, and b) include a latency measurement so people could point out their bufferbloated equipment. 
>>> 
>>> I'm going to send them a note. Anything else I should add?
>>> 
>>> Rich
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>> 
> 


[-- Attachment #1.2: Type: text/html, Size: 6827 bytes --]

[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 235 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 12:24     ` Neil Davies
@ 2014-07-25 14:17       ` Sebastian Moeller
  2014-07-25 14:25         ` Martin Geddes
                           ` (2 more replies)
  2014-07-25 15:46       ` [Bloat] " Rich Brown
  1 sibling, 3 replies; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-25 14:17 UTC (permalink / raw)
  To: Neil Davies; +Cc: cerowrt-devel, bloat

Hi Neil,


On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com> wrote:

> Rich
> 
> I have a deep worry over this style of single point measurement - and hence speed - as an appropriate measure.

	But how do you propose to measure the (bottleneck) link capacity then? It turns out for current CPE and CMTS/DSLAM equipment one typically can not relay on good QoE out of the box, since typically these devices do not use their (largish) buffers wisely. Instead the current remedy is to take back control over the bottleneck link by shaping the actually sent traffic to stay below the hardware link capacity thereby avoiding feeling the consequences of the over-buffering. But to do this is is quite helpful to get an educated guess what the bottleneck links capacity actually is. And for that purpose a speediest seems useful.


> We know, and have evidence, that throughput/utilisation is not a good proxy for the network delivering suitable quality of experience. We work with organisation (Telco’s, large system integrators etc) where we spend a lot of time having to “undo” the consequences of “maximising speed”. Just like there is more to life than work, there is more to QoE than speed.
> 
> For more specific comments see inline
> 
> On 25 Jul 2014, at 13:09, Rich Brown <richb.hanover@gmail.com> wrote:
> 
>> Neil,
>> 
>> Thanks for the note and the observations. My thoughts:
>> 
>> 1) I note that speedof.me does seem to overstate the speed results. At my home, it reports 5.98mbps down, and 638kbps up, while betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers similar to the betterspeedtest.net script.)
>> 
>> 2) I think we're in agreement about the peak upload rate that you point out is too high. Their measurement code runs in the browser. It seems likely that the browser pumps out a few big packets before getting flow control information, thus giving the impression that they can send at a higher rate. This comports with the obvious decay that ramps toward the long-term rate. 
> 
> I think that its simpler than that, it is measuring the rate at which it can push packets out the interface - its real time rate is precisely that - it can not be the rate being reported by the far end, it can never exceed the limiting link. The long term average (if it is like other speed testers we’ve had to look into) is being measured at the TCP/IP SDU level by measuring the difference in time between the first and last timestamps of data stream and dividing that into the total data sent. Their “over-estimate” is because there are packets buffered in the CPE that have left the machine but not arrived at the far end.

	Testing from an openwrt router located at a high-symmetric-bandwidth location shows that speedof.me does not scale higher than ~ 130 Mbps server to client and ~15Mbps client to server (on the same connection I can get 130Mbps S2C and ~80Mbps C2S, so the asymmetry in the speedof.me results is not caused by my local environment). 
	@Rich and Dave, this probably means that for the upper end of fiber and cable and VDSL connections speed of.me is not going to be a reliable speed measure… Side note www.sppedtest.net shows ~100Mbps S2C and ~100Mbps C2S, so might be better suited to high-upload links...

> 
>> 
>> 3) But that long-term speed should be at or below the theoretical long-term rate, not above it. 
> 
> Agreed, but in this case knowing the sync rate already defines that maximum.

	I fully agree, but for ADSL the sync rate also contains a lot of encapsulation, so the maximum achievable TCP rate is at best ~90% of link rate. Note for cerowrt’s SQM system the link rate is exactly the right number to start out with at that system can take the encapsulation into account. But even then it is somewhat unintuitive to deduce the expected good-put from the link rate.

> 
>> 
>> Two experiments for you to try:
>> 
>> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt, in /usr/lib/CeroWrtScripts, or get it from github: https://github.com/richb-hanover/CeroWrtScripts )
>> 
>> b) What does www.speedtest.net show?
>> 
>> I will add your question (about the inaccuracy) to the note that I want to send out to speedof.me this weekend. I will also ask that they include min/max latency measurements to their test, and an option to send for > 10 seconds to minimize any effect of PowerBoost…

	I think they do already, at least for the download bandwidth; they start with 128Kb and keep doubling the file size until a file takes longer than 8 seconds to transfer, they only claim to report the numbers from that last transferred file, so worst case with a stable link and a bandwidth > 16kbps ;), it has taken at least 12 seconds (4 plus 8) of measuring before the end of the plot, so the bandwidth of at least the last half of the download plot should be representative even assuming power boost. Caveat, I assume that power boost will not be reset by the transient lack of data transfer between the differently sized files (but since it should involve the same IPs and port# why should power boost reset itself?).

Best Regards
	Sebastian



>> 
>> Best regards,
>> 
>> Rich
>> 
>> 
>> 
>> On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com> wrote:
>> 
>>> Rich
>>> 
>>> You may want to check how accurate they are to start.
>>> 
>>> I just ran a “speed test” on my line (which I have complete control and visibility over the various network elements) and it reports an average “speed” (in the up direction) that is in excess of the capacity of the line, it reports the maximum rate at nearly twice the best possible rate of the ADSL connection.
>>> 
>>> Doesn’t matter how pretty it is, if its not accurate it is of no use. This is rather ironic as the web site claims it is the “smartest and most accurate”!
>>> 
>>> Neil
>>> 
>>> <speedof_me_14-07-25.png>
>>> 
>>> PS pretty clear to me what mistake they’ve made in the measurement process - its to do with incorrect inference and hence missing the buffering effects.
>>> 
>>> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com> wrote:
>>> 
>>>> Doc Searls (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/) mentioned in passing that he uses a new speed test website. I checked it out, and it was very cool…
>>>> 
>>>> www.speedof.me is an all-HTML5 website that seems to make accurate measurements of the up and download speeds of your internet connection. It’s also very attractive, and the real-time plots of the speed show interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
>>>> 
>>>> Now if we could get them to a) allow longer/bigger tests to circumvent PowerBoost, and b) include a latency measurement so people could point out their bufferbloated equipment. 
>>>> 
>>>> I'm going to send them a note. Anything else I should add?
>>>> 
>>>> Rich
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>> 
>> 
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 14:17       ` Sebastian Moeller
@ 2014-07-25 14:25         ` Martin Geddes
  2014-07-25 15:58           ` Sebastian Moeller
  2014-07-25 14:27         ` [Bloat] " Neil Davies
  2014-07-25 15:05         ` David P. Reed
  2 siblings, 1 reply; 24+ messages in thread
From: Martin Geddes @ 2014-07-25 14:25 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 9646 bytes --]

You may find the following useful background reading on the state of the
art in network measurement, and a primer on ΔQ (which is the property we
wish to measure).

First, start with this presentation: Network performance optimisation using
high-fidelity measures
<http://www.slideshare.net/mgeddes/network-performance-optimisation-using-highfidelity-measures>
Then read this one to decompose ΔQ into G, S and V: Fundamentals of network
performance engineering
<http://www.slideshare.net/mgeddes/fundamentals-of-network-performance-engineering-18548490>
Then read this one to get a bit more sense on what ΔQ is about: Introduction
to ΔQ and Network Performance Science (extracts)
<http://www.slideshare.net/mgeddes/introduction-to-q-extracts>

Then read these essays:

Foundation of Network Science
<http://www.martingeddes.com/think-tank/foundation-network-science/>
How to do network performance chemistry
<http://www.martingeddes.com/think-tank/network-performance-chemistry/>
How to X-ray a telecoms network
<http://www.martingeddes.com/x-ray-telecoms-network/>
There is no quality in averages: IPX case study
<http://www.martingeddes.com/ipx-quality-averages/>

Martin

*For fresh thinking about telecoms sign up for my free newsletter
<http://eepurl.com/dSkfz> or visit the Geddes Think Tank
<http://www.martingeddes.com/think-tank/>.*
LinkedIn <http://www.linkedin.com/in/mgeddes> Twitter
<https://twitter.com/martingeddes> Mobile: +44 7957 499219 Skype: mgeddes
Martin Geddes Consulting Ltd, Incorporated in Scotland, number SC275827 VAT
Number: 859 5634 72 Registered office: 17-19 East London Street, Edinburgh,
EH7 4BN



On 25 July 2014 15:17, Sebastian Moeller <moeller0@gmx.de> wrote:

> Hi Neil,
>
>
> On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com> wrote:
>
> > Rich
> >
> > I have a deep worry over this style of single point measurement - and
> hence speed - as an appropriate measure.
>
>         But how do you propose to measure the (bottleneck) link capacity
> then? It turns out for current CPE and CMTS/DSLAM equipment one typically
> can not relay on good QoE out of the box, since typically these devices do
> not use their (largish) buffers wisely. Instead the current remedy is to
> take back control over the bottleneck link by shaping the actually sent
> traffic to stay below the hardware link capacity thereby avoiding feeling
> the consequences of the over-buffering. But to do this is is quite helpful
> to get an educated guess what the bottleneck links capacity actually is.
> And for that purpose a speediest seems useful.
>
>
> > We know, and have evidence, that throughput/utilisation is not a good
> proxy for the network delivering suitable quality of experience. We work
> with organisation (Telco’s, large system integrators etc) where we spend a
> lot of time having to “undo” the consequences of “maximising speed”. Just
> like there is more to life than work, there is more to QoE than speed.
> >
> > For more specific comments see inline
> >
> > On 25 Jul 2014, at 13:09, Rich Brown <richb.hanover@gmail.com> wrote:
> >
> >> Neil,
> >>
> >> Thanks for the note and the observations. My thoughts:
> >>
> >> 1) I note that speedof.me does seem to overstate the speed results. At
> my home, it reports 5.98mbps down, and 638kbps up, while betterspeedtest.sh
> shows 5.49/0.61 mbps. (speedtest.net gives numbers similar to the
> betterspeedtest.net script.)
> >>
> >> 2) I think we're in agreement about the peak upload rate that you point
> out is too high. Their measurement code runs in the browser. It seems
> likely that the browser pumps out a few big packets before getting flow
> control information, thus giving the impression that they can send at a
> higher rate. This comports with the obvious decay that ramps toward the
> long-term rate.
> >
> > I think that its simpler than that, it is measuring the rate at which it
> can push packets out the interface - its real time rate is precisely that -
> it can not be the rate being reported by the far end, it can never exceed
> the limiting link. The long term average (if it is like other speed testers
> we’ve had to look into) is being measured at the TCP/IP SDU level by
> measuring the difference in time between the first and last timestamps of
> data stream and dividing that into the total data sent. Their
> “over-estimate” is because there are packets buffered in the CPE that have
> left the machine but not arrived at the far end.
>
>         Testing from an openwrt router located at a
> high-symmetric-bandwidth location shows that speedof.me does not scale
> higher than ~ 130 Mbps server to client and ~15Mbps client to server (on
> the same connection I can get 130Mbps S2C and ~80Mbps C2S, so the asymmetry
> in the speedof.me results is not caused by my local environment).
>         @Rich and Dave, this probably means that for the upper end of
> fiber and cable and VDSL connections speed of.me is not going to be a
> reliable speed measure… Side note www.sppedtest.net shows ~100Mbps S2C
> and ~100Mbps C2S, so might be better suited to high-upload links...
>
> >
> >>
> >> 3) But that long-term speed should be at or below the theoretical
> long-term rate, not above it.
> >
> > Agreed, but in this case knowing the sync rate already defines that
> maximum.
>
>         I fully agree, but for ADSL the sync rate also contains a lot of
> encapsulation, so the maximum achievable TCP rate is at best ~90% of link
> rate. Note for cerowrt’s SQM system the link rate is exactly the right
> number to start out with at that system can take the encapsulation into
> account. But even then it is somewhat unintuitive to deduce the expected
> good-put from the link rate.
>
> >
> >>
> >> Two experiments for you to try:
> >>
> >> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt, in
> /usr/lib/CeroWrtScripts, or get it from github:
> https://github.com/richb-hanover/CeroWrtScripts )
> >>
> >> b) What does www.speedtest.net show?
> >>
> >> I will add your question (about the inaccuracy) to the note that I want
> to send out to speedof.me this weekend. I will also ask that they include
> min/max latency measurements to their test, and an option to send for > 10
> seconds to minimize any effect of PowerBoost…
>
>         I think they do already, at least for the download bandwidth; they
> start with 128Kb and keep doubling the file size until a file takes longer
> than 8 seconds to transfer, they only claim to report the numbers from that
> last transferred file, so worst case with a stable link and a bandwidth >
> 16kbps ;), it has taken at least 12 seconds (4 plus 8) of measuring before
> the end of the plot, so the bandwidth of at least the last half of the
> download plot should be representative even assuming power boost. Caveat, I
> assume that power boost will not be reset by the transient lack of data
> transfer between the differently sized files (but since it should involve
> the same IPs and port# why should power boost reset itself?).
>
> Best Regards
>         Sebastian
>
>
>
> >>
> >> Best regards,
> >>
> >> Rich
> >>
> >>
> >>
> >> On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com> wrote:
> >>
> >>> Rich
> >>>
> >>> You may want to check how accurate they are to start.
> >>>
> >>> I just ran a “speed test” on my line (which I have complete control
> and visibility over the various network elements) and it reports an average
> “speed” (in the up direction) that is in excess of the capacity of the
> line, it reports the maximum rate at nearly twice the best possible rate of
> the ADSL connection.
> >>>
> >>> Doesn’t matter how pretty it is, if its not accurate it is of no use.
> This is rather ironic as the web site claims it is the “smartest and most
> accurate”!
> >>>
> >>> Neil
> >>>
> >>> <speedof_me_14-07-25.png>
> >>>
> >>> PS pretty clear to me what mistake they’ve made in the measurement
> process - its to do with incorrect inference and hence missing the
> buffering effects.
> >>>
> >>> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com> wrote:
> >>>
> >>>> Doc Searls (
> http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/)
> mentioned in passing that he uses a new speed test website. I checked it
> out, and it was very cool…
> >>>>
> >>>> www.speedof.me is an all-HTML5 website that seems to make accurate
> measurements of the up and download speeds of your internet connection.
> It’s also very attractive, and the real-time plots of the speed show
> interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
> >>>>
> >>>> Now if we could get them to a) allow longer/bigger tests to
> circumvent PowerBoost, and b) include a latency measurement so people could
> point out their bufferbloated equipment.
> >>>>
> >>>> I'm going to send them a note. Anything else I should add?
> >>>>
> >>>> Rich
> >>>> _______________________________________________
> >>>> Bloat mailing list
> >>>> Bloat@lists.bufferbloat.net
> >>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>
> >>
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 13991 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 14:17       ` Sebastian Moeller
  2014-07-25 14:25         ` Martin Geddes
@ 2014-07-25 14:27         ` Neil Davies
  2014-07-25 16:02           ` Sebastian Moeller
  2014-07-25 21:20           ` David Lang
  2014-07-25 15:05         ` David P. Reed
  2 siblings, 2 replies; 24+ messages in thread
From: Neil Davies @ 2014-07-25 14:27 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: cerowrt-devel, bloat

Sebastian

On 25 Jul 2014, at 15:17, Sebastian Moeller <moeller0@gmx.de> wrote:

> 	But how do you propose to measure the (bottleneck) link capacity then? It turns out for current CPE and CMTS/DSLAM equipment one typically can not relay on good QoE out of the box, since typically these devices do not use their (largish) buffers wisely. Instead the current remedy is to take back control over the bottleneck link by shaping the actually sent traffic to stay below the hardware link capacity thereby avoiding feeling the consequences of the over-buffering. But to do this is is quite helpful to get an educated guess what the bottleneck links capacity actually is. And for that purpose a speediest seems useful.


I totally agree that what you are trying to do is to take control "back" for the upstream delay and loss (which is the network level activity that directly influences QoE). Observationally the "constraining link" is the point at which the delay and loss start to grow as the the offered load is increased (there are interesting interactions with the scheduling in the CMTS/3GPP node B - but they are tractable) if we don't have direct access to the constraint (which in the CPE, for ADSL you have) we track that "quality attenuation" inflection point. Saturating the path is a bit of a sledgehammer (and has nasty cost/scaling implications).

I see, as I was replying, Martin has sent you some links to the background.

Cheers

Neil

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
  2014-07-25 14:17       ` Sebastian Moeller
  2014-07-25 14:25         ` Martin Geddes
  2014-07-25 14:27         ` [Bloat] " Neil Davies
@ 2014-07-25 15:05         ` David P. Reed
  2 siblings, 0 replies; 24+ messages in thread
From: David P. Reed @ 2014-07-25 15:05 UTC (permalink / raw)
  To: Sebastian Moeller, Neil Davies; +Cc: cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 8454 bytes --]

It's important to note that modern browsers have direct access to TCP Up and Down connections using Web sockets from javascript threads. It is quite feasible to drive such connections at near wire speeds in both directions... I've done it in my own experiments. A knowledgeable network testing expert should be able to create a javascript library that can be used by any gui...so beauty and measurement quality can be improved by experts in the relevant fields.

My experiments are a bit dated but if someone wants advice on how please ask. I'm flat out on my day job for the next month.

On Jul 25, 2014, Sebastian Moeller <moeller0@gmx.de> wrote:
>Hi Neil,
>
>
>On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com> wrote:
>
>> Rich
>>
>> I have a deep worry over this style of single point measurement - and
>hence speed - as an appropriate measure.
>
>	But how do you propose to measure the (bottleneck) link capacity then?
>It turns out for current CPE and CMTS/DSLAM equipment one typically can
>not relay on good QoE out of the box, since typically these devices do
>not use their (largish) buffers wisely. Instead the current remedy is
>to take back control over the bottleneck link by shaping the actually
>sent traffic to stay below the hardware link capacity thereby avoiding
>feeling the consequences of the over-buffering. But to do this is is
>quite helpful to get an educated guess what the bottleneck links
>capacity actually is. And for that purpose a speediest seems useful.
>
>
>> We know, and have evidence, that throughput/utilisation is not a good
>proxy for the network delivering suitable quality of experience. We
>work with organisation (Telco’s, large system integrators etc) where we
>spend a lot of time having to “undo” the consequences of “maximising
>speed”. Just like there is more to life than work, there is more to QoE
>than speed.
>>
>> For more specific comments see inline
>>
>> On 25 Jul 2014, at 13:09, Rich Brown <richb.hanover@gmail.com> wrote:
>>
>>> Neil,
>>>
>>> Thanks for the note and the observations. My thoughts:
>>>
>>> 1) I note that speedof.me does seem to overstate the speed results.
>At my home, it reports 5.98mbps down, and 638kbps up, while
>betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers
>similar to the betterspeedtest.net script.)
>>>
>>> 2) I think we're in agreement about the peak upload rate that you
>point out is too high. Their measurement code runs in the browser. It
>seems likely that the browser pumps out a few big packets before
>getting flow control information, thus giving the impression that they
>can send at a higher rate. This comports with the obvious decay that
>ramps toward the long-term rate.
>>
>> I think that its simpler than that, it is measuring the rate at which
>it can push packets out the interface - its real time rate is precisely
>that - it can not be the rate being reported by the far end, it can
>never exceed the limiting link. The long term average (if it is like
>other speed testers we’ve had to look into) is being measured at the
>TCP/IP SDU level by measuring the difference in time between the first
>and last timestamps of data stream and dividing that into the total
>data sent. Their “over-estimate” is because there are packets buffered
>in the CPE that have left the machine but not arrived at the far end.
>
>	Testing from an openwrt router located at a high-symmetric-bandwidth
>location shows that speedof.me does not scale higher than ~ 130 Mbps
>server to client and ~15Mbps client to server (on the same connection I
>can get 130Mbps S2C and ~80Mbps C2S, so the asymmetry in the speedof.me
>results is not caused by my local environment).
>	@Rich and Dave, this probably means that for the upper end of fiber
>and cable and VDSL connections speed of.me is not going to be a
>reliable speed measure… Side note www.sppedtest.net shows ~100Mbps S2C
>and ~100Mbps C2S, so might be better suited to high-upload links...
>
>>
>>>
>>> 3) But that long-term speed should be at or below the theoretical
>long-term rate, not above it.
>>
>> Agreed, but in this case knowing the sync rate already defines that
>maximum.
>
>	I fully agree, but for ADSL the sync rate also contains a lot of
>encapsulation, so the maximum achievable TCP rate is at best ~90% of
>link rate. Note for cerowrt’s SQM system the link rate is exactly the
>right number to start out with at that system can take the
>encapsulation into account. But even then it is somewhat unintuitive to
>deduce the expected good-put from the link rate.
>
>>
>>>
>>> Two experiments for you to try:
>>>
>>> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt,
>in /usr/lib/CeroWrtScripts, or get it from github:
>https://github.com/richb-hanover/CeroWrtScripts )
>>>
>>> b) What does www.speedtest.net show?
>>>
>>> I will add your question (about the inaccuracy) to the note that I
>want to send out to speedof.me this weekend. I will also ask that they
>include min/max latency measurements to their test, and an option to
>send for > 10 seconds to minimize any effect of PowerBoost…
>
>	I think they do already, at least for the download bandwidth; they
>start with 128Kb and keep doubling the file size until a file takes
>longer than 8 seconds to transfer, they only claim to report the
>numbers from that last transferred file, so worst case with a stable
>link and a bandwidth > 16kbps ;), it has taken at least 12 seconds (4
>plus 8) of measuring before the end of the plot, so the bandwidth of at
>least the last half of the download plot should be representative even
>assuming power boost. Caveat, I assume that power boost will not be
>reset by the transient lack of data transfer between the differently
>sized files (but since it should involve the same IPs and port# why
>should power boost reset itself?).
>
>Best Regards
>	Sebastian
>
>
>
>>>
>>> Best regards,
>>>
>>> Rich
>>>
>>>
>>>
>>> On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com>
>wrote:
>>>
>>>> Rich
>>>>
>>>> You may want to check how accurate they are to start.
>>>>
>>>> I just ran a “speed test” on my line (which I have complete control
>and visibility over the various network elements) and it reports an
>average “speed” (in the up direction) that is in excess of the capacity
>of the line, it reports the maximum rate at nearly twice the best
>possible rate of the ADSL connection.
>>>>
>>>> Doesn’t matter how pretty it is, if its not accurate it is of no
>use. This is rather ironic as the web site claims it is the “smartest
>and most accurate”!
>>>>
>>>> Neil
>>>>
>>>> <speedof_me_14-07-25.png>
>>>>
>>>> PS pretty clear to me what mistake they’ve made in the measurement
>process - its to do with incorrect inference and hence missing the
>buffering effects.
>>>>
>>>> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com>
>wrote:
>>>>
>>>>> Doc Searls
>(http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/)
>mentioned in passing that he uses a new speed test website. I checked
>it out, and it was very cool…
>>>>>
>>>>> www.speedof.me is an all-HTML5 website that seems to make accurate
>measurements of the up and download speeds of your internet connection.
>It’s also very attractive, and the real-time plots of the speed show
>interesting info. (screen shot at:
>http://richb-hanover.com/speedof-me/)
>>>>>
>>>>> Now if we could get them to a) allow longer/bigger tests to
>circumvent PowerBoost, and b) include a latency measurement so people
>could point out their bufferbloated equipment.
>>>>>
>>>>> I'm going to send them a note. Anything else I should add?
>>>>>
>>>>> Rich
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>_______________________________________________
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.

[-- Attachment #2: Type: text/html, Size: 12350 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 12:24     ` Neil Davies
  2014-07-25 14:17       ` Sebastian Moeller
@ 2014-07-25 15:46       ` Rich Brown
  1 sibling, 0 replies; 24+ messages in thread
From: Rich Brown @ 2014-07-25 15:46 UTC (permalink / raw)
  To: Neil Davies; +Cc: cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 2834 bytes --]

Hi Neil,

> I have a deep worry over this style of single point measurement - and hence speed - as an appropriate measure. We know, and have evidence, that throughput/utilisation is not a good proxy for the network delivering suitable quality of experience. We work with organisation (Telco’s, large system integrators etc) where we spend a lot of time having to “undo” the consequences of “maximising speed”. Just like there is more to life than work, there is more to QoE than speed.

I completely agree with this: those of us who have spent the time to ponder the physics of the problem have come to understand it in its full glory (and complexity). We know that a single number ain't the answer.
 
But one of my other goals is to increase the awareness of the bufferbloat problem. People *do* use these speed test services, despite their inaccuracy. If those sites could include some form of measurement about the latency (and its change) under load, it makes it easier to describe the problem to others.

The long-term solution is, of course, to get router vendors to realize there's a problem and then respond to market pressures to fix it. I gained a lot of insight from http://apenwarr.ca/log/?m=201407#11 - it has a description of the difficult economic justification for selling a "good router". But where that author was trying to start a company, we're in a different position. 

Good latency info in a popular speed test website elevates its importance to the general public. It moves you out of the "nutcase" category ("What's this bufferbloat stuff this guy's talking about?"), into a concerned customer who's offering a useful observation. And it gives credibility when you bug providers. Examples:

- A lot of people reflexively check speedtest.net when they check into a hotel, and then post/tweet the results. If the results include min/max latency, then they can begin to comment to the hotel (where they may even have been charged for the service) when things aren't good. 
- I frequently ride a bus to Boston that offers free wifi. I already know they're badly bloated (>20 seconds(!)). With a tool like this, it's easier to begin a conversation with the operations people that lets them put pressure on their own router vendor. 
- Home and commercial users can use these values to tell their ISPs that there's a problem, and the expense of fielding those support calls provides some incentive to address the problem.
- Heaven forbid one vendor/provider snatch up the idea and tout it as a competitive advantage... :-)

We've shown that there is a straightforward fix to the problem. With increased awareness from their customers, I hold out hope that we can begin to change the world.

</utopian rant>

Best,

Rich

PS Thanks to Sebastian for the comments, and Martin for those links.

[-- Attachment #1.2: Type: text/html, Size: 3781 bytes --]

[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 496 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 14:25         ` Martin Geddes
@ 2014-07-25 15:58           ` Sebastian Moeller
       [not found]             ` <CAAAY2agBsPWhG9ANXHS6zAxjFgaWuuMAUPAFT9Npgv=SgVN1=g@mail.gmail.com>
  0 siblings, 1 reply; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-25 15:58 UTC (permalink / raw)
  To: Martin Geddes; +Cc: cerowrt-devel, bloat

Hi Martin,

thanks for the pointers,


On Jul 25, 2014, at 16:25 , Martin Geddes <mail@martingeddes.com> wrote:

> You may find the following useful background reading on the state of the art in network measurement, and a primer on ΔQ (which is the property we wish to measure).
> 
> First, start with this presentation: Network performance optimisation using high-fidelity measures
> Then read this one to decompose ΔQ into G, S and V: Fundamentals of network performance engineering
> Then read this one to get a bit more sense on what ΔQ is about: Introduction to ΔQ and Network Performance Science (extracts)
> 
> Then read these essays:
> 
> Foundation of Network Science
> How to do network performance chemistry
> How to X-ray a telecoms network
> There is no quality in averages: IPX case study

	All of this makes intuitively sense, but it is a bit light on how deltaQ is to be computed ;).
	As far as I understand it also has not much bearing on my home network; the only one under my control. Now, following the buffer bloat discussion for some years, I have internalized the idea that bandwidth alone does not suffice to describe the quality of my network connection. I think that the latency increase under load (for unrelated flows) is the best of all the bad single number measures of network dynamics/quality. I should be related to what I understood deltaQ to depend on (as packet loss for non real time flows will cause an increase in latency).  I think that continuous measurements make a to n of sense for ISPs, backbone-operators, mobile carriers … but at home, basically, I operate as my own network quality monitor ;) (that is I try to pin point and debug (transient) anomalies).

> 
> Martin
> 
> For fresh thinking about telecoms sign up for my free newsletter or visit the Geddes Think Tank.
> LinkedIn Twitter Mobile: +44 7957 499219 Skype: mgeddes 
> Martin Geddes Consulting Ltd, Incorporated in Scotland, number SC275827 VAT Number: 859 5634 72 Registered office: 17-19 East London Street, Edinburgh, EH7 4BN
> 
> 
> 
> On 25 July 2014 15:17, Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Neil,
> 
> 
> On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com> wrote:
> 
> > Rich
> >
> > I have a deep worry over this style of single point measurement - and hence speed - as an appropriate measure.
> 
>         But how do you propose to measure the (bottleneck) link capacity then? It turns out for current CPE and CMTS/DSLAM equipment one typically can not relay on good QoE out of the box, since typically these devices do not use their (largish) buffers wisely. Instead the current remedy is to take back control over the bottleneck link by shaping the actually sent traffic to stay below the hardware link capacity thereby avoiding feeling the consequences of the over-buffering. But to do this is is quite helpful to get an educated guess what the bottleneck links capacity actually is. And for that purpose a speediest seems useful.
> 
> 
> > We know, and have evidence, that throughput/utilisation is not a good proxy for the network delivering suitable quality of experience. We work with organisation (Telco’s, large system integrators etc) where we spend a lot of time having to “undo” the consequences of “maximising speed”. Just like there is more to life than work, there is more to QoE than speed.
> >
> > For more specific comments see inline
> >
> > On 25 Jul 2014, at 13:09, Rich Brown <richb.hanover@gmail.com> wrote:
> >
> >> Neil,
> >>
> >> Thanks for the note and the observations. My thoughts:
> >>
> >> 1) I note that speedof.me does seem to overstate the speed results. At my home, it reports 5.98mbps down, and 638kbps up, while betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers similar to the betterspeedtest.net script.)
> >>
> >> 2) I think we're in agreement about the peak upload rate that you point out is too high. Their measurement code runs in the browser. It seems likely that the browser pumps out a few big packets before getting flow control information, thus giving the impression that they can send at a higher rate. This comports with the obvious decay that ramps toward the long-term rate.
> >
> > I think that its simpler than that, it is measuring the rate at which it can push packets out the interface - its real time rate is precisely that - it can not be the rate being reported by the far end, it can never exceed the limiting link. The long term average (if it is like other speed testers we’ve had to look into) is being measured at the TCP/IP SDU level by measuring the difference in time between the first and last timestamps of data stream and dividing that into the total data sent. Their “over-estimate” is because there are packets buffered in the CPE that have left the machine but not arrived at the far end.
> 
>         Testing from an openwrt router located at a high-symmetric-bandwidth location shows that speedof.me does not scale higher than ~ 130 Mbps server to client and ~15Mbps client to server (on the same connection I can get 130Mbps S2C and ~80Mbps C2S, so the asymmetry in the speedof.me results is not caused by my local environment).
>         @Rich and Dave, this probably means that for the upper end of fiber and cable and VDSL connections speed of.me is not going to be a reliable speed measure… Side note www.sppedtest.net shows ~100Mbps S2C and ~100Mbps C2S, so might be better suited to high-upload links...
> 
> >
> >>
> >> 3) But that long-term speed should be at or below the theoretical long-term rate, not above it.
> >
> > Agreed, but in this case knowing the sync rate already defines that maximum.
> 
>         I fully agree, but for ADSL the sync rate also contains a lot of encapsulation, so the maximum achievable TCP rate is at best ~90% of link rate. Note for cerowrt’s SQM system the link rate is exactly the right number to start out with at that system can take the encapsulation into account. But even then it is somewhat unintuitive to deduce the expected good-put from the link rate.
> 
> >
> >>
> >> Two experiments for you to try:
> >>
> >> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt, in /usr/lib/CeroWrtScripts, or get it from github: https://github.com/richb-hanover/CeroWrtScripts )
> >>
> >> b) What does www.speedtest.net show?
> >>
> >> I will add your question (about the inaccuracy) to the note that I want to send out to speedof.me this weekend. I will also ask that they include min/max latency measurements to their test, and an option to send for > 10 seconds to minimize any effect of PowerBoost…
> 
>         I think they do already, at least for the download bandwidth; they start with 128Kb and keep doubling the file size until a file takes longer than 8 seconds to transfer, they only claim to report the numbers from that last transferred file, so worst case with a stable link and a bandwidth > 16kbps ;), it has taken at least 12 seconds (4 plus 8) of measuring before the end of the plot, so the bandwidth of at least the last half of the download plot should be representative even assuming power boost. Caveat, I assume that power boost will not be reset by the transient lack of data transfer between the differently sized files (but since it should involve the same IPs and port# why should power boost reset itself?).
> 
> Best Regards
>         Sebastian
> 
> 
> 
> >>
> >> Best regards,
> >>
> >> Rich
> >>
> >>
> >>
> >> On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com> wrote:
> >>
> >>> Rich
> >>>
> >>> You may want to check how accurate they are to start.
> >>>
> >>> I just ran a “speed test” on my line (which I have complete control and visibility over the various network elements) and it reports an average “speed” (in the up direction) that is in excess of the capacity of the line, it reports the maximum rate at nearly twice the best possible rate of the ADSL connection.
> >>>
> >>> Doesn’t matter how pretty it is, if its not accurate it is of no use. This is rather ironic as the web site claims it is the “smartest and most accurate”!
> >>>
> >>> Neil
> >>>
> >>> <speedof_me_14-07-25.png>
> >>>
> >>> PS pretty clear to me what mistake they’ve made in the measurement process - its to do with incorrect inference and hence missing the buffering effects.
> >>>
> >>> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com> wrote:
> >>>
> >>>> Doc Searls (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/) mentioned in passing that he uses a new speed test website. I checked it out, and it was very cool…
> >>>>
> >>>> www.speedof.me is an all-HTML5 website that seems to make accurate measurements of the up and download speeds of your internet connection. It’s also very attractive, and the real-time plots of the speed show interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
> >>>>
> >>>> Now if we could get them to a) allow longer/bigger tests to circumvent PowerBoost, and b) include a latency measurement so people could point out their bufferbloated equipment.
> >>>>
> >>>> I'm going to send them a note. Anything else I should add?
> >>>>
> >>>> Rich
> >>>> _______________________________________________
> >>>> Bloat mailing list
> >>>> Bloat@lists.bufferbloat.net
> >>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>
> >>
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 14:27         ` [Bloat] " Neil Davies
@ 2014-07-25 16:02           ` Sebastian Moeller
  2014-07-25 21:20           ` David Lang
  1 sibling, 0 replies; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-25 16:02 UTC (permalink / raw)
  To: Neil Davies; +Cc: cerowrt-devel, bloat

Hi Neil,


On Jul 25, 2014, at 16:27 , Neil Davies <neil.davies@pnsol.com> wrote:

> Sebastian
> 
> On 25 Jul 2014, at 15:17, Sebastian Moeller <moeller0@gmx.de> wrote:
> 
>> 	But how do you propose to measure the (bottleneck) link capacity then? It turns out for current CPE and CMTS/DSLAM equipment one typically can not relay on good QoE out of the box, since typically these devices do not use their (largish) buffers wisely. Instead the current remedy is to take back control over the bottleneck link by shaping the actually sent traffic to stay below the hardware link capacity thereby avoiding feeling the consequences of the over-buffering. But to do this is is quite helpful to get an educated guess what the bottleneck links capacity actually is. And for that purpose a speediest seems useful.
> 
> 
> I totally agree that what you are trying to do is to take control "back" for the upstream delay and loss (which is the network level activity that directly influences QoE). Observationally the "constraining link" is the point at which the delay and loss start to grow as the the offered load is increased (there are interesting interactions with the scheduling in the CMTS/3GPP node B - but they are tractable) if we don't have direct access to the constraint (which in the CPE, for ADSL you have) we track that "quality attenuation" inflection point. Saturating the path is a bit of a sledgehammer (and has nasty cost/scaling implications).

	What else can I do to make sure that my network still works satisfactory in the “worst case” than to test and tune it for the worst case? Also, coming from a biology background, I like systems that operate well in the capacity limited state ;)

> 
> I see, as I was replying, Martin has sent you some links to the background.

	Interesting read, do you have a pointer on how to calculate deltaQ though?

best regards
	Sebastian

> 
> Cheers
> 
> Neil


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
       [not found]               ` <C1EA7389-68A4-42FE-A0BA-80E8B137145F@gmx.de>
@ 2014-07-25 17:14                 ` Neil Davies
  2014-07-25 17:17                   ` Sebastian Moeller
  0 siblings, 1 reply; 24+ messages in thread
From: Neil Davies @ 2014-07-25 17:14 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 13494 bytes --]

Try this thesis - Lucian used this for work at CERN, the description is in there.... (see one of the appendicies) Analysis and predictive modeling of the performance of the ATLAS TDAQ network


On 25 Jul 2014, at 18:12, Sebastian Moeller <moeller0@gmx.de> wrote:

> Hello Martin,
> 
> thanks a lot.
> 
> On Jul 25, 2014, at 18:32 , Martin Geddes <mail@martingeddes.com> wrote:
> 
>> So what is ΔQ and how do you "compute" it (to the extent it is a "computed" thing)?
>> 
>> Starting point: the only observable effect of a network is to lose and delay data -- i.e. to "attenuate quality" by adding the toxic effects of time to distributed computations. ΔQ is a morphism that relates the "quality attenuation" that the network imposes to the application performance, and describes the trading spaces at all intermediate layers of abstraction. It is shown in the attached graphic.
>> 
>> Critically, it frames quality as something that can only be lost ("attenuated"), both by the network and the application. Additionally, it is stochastic, and works with random variables and distributions.
>> 
>> At its most concrete level, it is the individual impairment encountered by every packet when the network in operation. But we don't want to have to track every packet - 1:1 scale maps are pretty useless. So we need to abstract that in order to create a model that has value.
>> 
>> Next abstraction: an improper random variable. This unifies loss and delay into a single stochastic object.
>> Next abstraction: received transport, which is a CDF where we are interested in the properties of the "tail".
>> 
>> Next abstraction, that joins network performance and application QoE (as relates to performance): relate the CDF to the application through a Quality Transport Agreement. This "stochastic contract" is both necessary and sufficient to deliver the application outcome.
>> 
>> Next concretisation towards QoE: offered load of demand, as a CDF.
>> Next concretisation towards QoE: breach hazard metric, which abstracts the application performance. Indicates the likelihood of the QTA contract being broken, and how badly.
>> Final concretisation: the individual application performance encountered by every user. Again, a 1:1 map that isn't very helpful.
>> 
>> So as you can see, it's about as far away from a single point average metric as you can possibly get. A far richer model is required in order to achieve robust performance engineering.
>> 
>> It is "computed" using multi-point measurements to capture the distribution. The G/S/V charts you see are based on processing that data to account for various issues, including clock skew.
>> 
>> I hope that helps. We need to document more of this in public, which is an ongoing process. 
> 
> 	You lost me, I think what I should have asked for is a real example with numbers and the formulas ;) I guess that is deep in “secret sauce” territory. Alas, if that should be true it also means that deltaQ is not going to help me understand my network any better …
> 
> Best Regards
> 	Sebastian
> 
> 
>> 
>> Martin
>> 
>> On 25 July 2014 16:58, Sebastian Moeller <moeller0@gmx.de> wrote:
>> Hi Martin,
>> 
>> thanks for the pointers,
>> 
>> 
>> On Jul 25, 2014, at 16:25 , Martin Geddes <mail@martingeddes.com> wrote:
>> 
>> > You may find the following useful background reading on the state of the art in network measurement, and a primer on ΔQ (which is the property we wish to measure).
>> >
>> > First, start with this presentation: Network performance optimisation using high-fidelity measures
>> > Then read this one to decompose ΔQ into G, S and V: Fundamentals of network performance engineering
>> > Then read this one to get a bit more sense on what ΔQ is about: Introduction to ΔQ and Network Performance Science (extracts)
>> >
>> > Then read these essays:
>> >
>> > Foundation of Network Science
>> > How to do network performance chemistry
>> > How to X-ray a telecoms network
>> > There is no quality in averages: IPX case study
>> 
>>         All of this makes intuitively sense, but it is a bit light on how deltaQ is to be computed ;).
>>         As far as I understand it also has not much bearing on my home network; the only one under my control. Now, following the buffer bloat discussion for some years, I have internalized the idea that bandwidth alone does not suffice to describe the quality of my network connection. I think that the latency increase under load (for unrelated flows) is the best of all the bad single number measures of network dynamics/quality. I should be related to what I understood deltaQ to depend on (as packet loss for non real time flows will cause an increase in latency).  I think that continuous measurements make a to n of sense for ISPs, backbone-operators, mobile carriers … but at home, basically, I operate as my own network quality monitor ;) (that is I try to pin point and debug (transient) anomalies).
>> 
>> >
>> > Martin
>> >
>> > For fresh thinking about telecoms sign up for my free newsletter or visit the Geddes Think Tank.
>> > LinkedIn Twitter Mobile: +44 7957 499219 Skype: mgeddes
>> > Martin Geddes Consulting Ltd, Incorporated in Scotland, number SC275827 VAT Number: 859 5634 72 Registered office: 17-19 East London Street, Edinburgh, EH7 4BN
>> >
>> >
>> >
>> > On 25 July 2014 15:17, Sebastian Moeller <moeller0@gmx.de> wrote:
>> > Hi Neil,
>> >
>> >
>> > On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com> wrote:
>> >
>> > > Rich
>> > >
>> > > I have a deep worry over this style of single point measurement - and hence speed - as an appropriate measure.
>> >
>> >         But how do you propose to measure the (bottleneck) link capacity then? It turns out for current CPE and CMTS/DSLAM equipment one typically can not relay on good QoE out of the box, since typically these devices do not use their (largish) buffers wisely. Instead the current remedy is to take back control over the bottleneck link by shaping the actually sent traffic to stay below the hardware link capacity thereby avoiding feeling the consequences of the over-buffering. But to do this is is quite helpful to get an educated guess what the bottleneck links capacity actually is. And for that purpose a speediest seems useful.
>> >
>> >
>> > > We know, and have evidence, that throughput/utilisation is not a good proxy for the network delivering suitable quality of experience. We work with organisation (Telco’s, large system integrators etc) where we spend a lot of time having to “undo” the consequences of “maximising speed”. Just like there is more to life than work, there is more to QoE than speed.
>> > >
>> > > For more specific comments see inline
>> > >
>> > > On 25 Jul 2014, at 13:09, Rich Brown <richb.hanover@gmail.com> wrote:
>> > >
>> > >> Neil,
>> > >>
>> > >> Thanks for the note and the observations. My thoughts:
>> > >>
>> > >> 1) I note that speedof.me does seem to overstate the speed results. At my home, it reports 5.98mbps down, and 638kbps up, while betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers similar to the betterspeedtest.net script.)
>> > >>
>> > >> 2) I think we're in agreement about the peak upload rate that you point out is too high. Their measurement code runs in the browser. It seems likely that the browser pumps out a few big packets before getting flow control information, thus giving the impression that they can send at a higher rate. This comports with the obvious decay that ramps toward the long-term rate.
>> > >
>> > > I think that its simpler than that, it is measuring the rate at which it can push packets out the interface - its real time rate is precisely that - it can not be the rate being reported by the far end, it can never exceed the limiting link. The long term average (if it is like other speed testers we’ve had to look into) is being measured at the TCP/IP SDU level by measuring the difference in time between the first and last timestamps of data stream and dividing that into the total data sent. Their “over-estimate” is because there are packets buffered in the CPE that have left the machine but not arrived at the far end.
>> >
>> >         Testing from an openwrt router located at a high-symmetric-bandwidth location shows that speedof.me does not scale higher than ~ 130 Mbps server to client and ~15Mbps client to server (on the same connection I can get 130Mbps S2C and ~80Mbps C2S, so the asymmetry in the speedof.me results is not caused by my local environment).
>> >         @Rich and Dave, this probably means that for the upper end of fiber and cable and VDSL connections speed of.me is not going to be a reliable speed measure… Side note www.sppedtest.net shows ~100Mbps S2C and ~100Mbps C2S, so might be better suited to high-upload links...
>> >
>> > >
>> > >>
>> > >> 3) But that long-term speed should be at or below the theoretical long-term rate, not above it.
>> > >
>> > > Agreed, but in this case knowing the sync rate already defines that maximum.
>> >
>> >         I fully agree, but for ADSL the sync rate also contains a lot of encapsulation, so the maximum achievable TCP rate is at best ~90% of link rate. Note for cerowrt’s SQM system the link rate is exactly the right number to start out with at that system can take the encapsulation into account. But even then it is somewhat unintuitive to deduce the expected good-put from the link rate.
>> >
>> > >
>> > >>
>> > >> Two experiments for you to try:
>> > >>
>> > >> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt, in /usr/lib/CeroWrtScripts, or get it from github: https://github.com/richb-hanover/CeroWrtScripts )
>> > >>
>> > >> b) What does www.speedtest.net show?
>> > >>
>> > >> I will add your question (about the inaccuracy) to the note that I want to send out to speedof.me this weekend. I will also ask that they include min/max latency measurements to their test, and an option to send for > 10 seconds to minimize any effect of PowerBoost…
>> >
>> >         I think they do already, at least for the download bandwidth; they start with 128Kb and keep doubling the file size until a file takes longer than 8 seconds to transfer, they only claim to report the numbers from that last transferred file, so worst case with a stable link and a bandwidth > 16kbps ;), it has taken at least 12 seconds (4 plus 8) of measuring before the end of the plot, so the bandwidth of at least the last half of the download plot should be representative even assuming power boost. Caveat, I assume that power boost will not be reset by the transient lack of data transfer between the differently sized files (but since it should involve the same IPs and port# why should power boost reset itself?).
>> >
>> > Best Regards
>> >         Sebastian
>> >
>> >
>> >
>> > >>
>> > >> Best regards,
>> > >>
>> > >> Rich
>> > >>
>> > >>
>> > >>
>> > >> On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com> wrote:
>> > >>
>> > >>> Rich
>> > >>>
>> > >>> You may want to check how accurate they are to start.
>> > >>>
>> > >>> I just ran a “speed test” on my line (which I have complete control and visibility over the various network elements) and it reports an average “speed” (in the up direction) that is in excess of the capacity of the line, it reports the maximum rate at nearly twice the best possible rate of the ADSL connection.
>> > >>>
>> > >>> Doesn’t matter how pretty it is, if its not accurate it is of no use. This is rather ironic as the web site claims it is the “smartest and most accurate”!
>> > >>>
>> > >>> Neil
>> > >>>
>> > >>> <speedof_me_14-07-25.png>
>> > >>>
>> > >>> PS pretty clear to me what mistake they’ve made in the measurement process - its to do with incorrect inference and hence missing the buffering effects.
>> > >>>
>> > >>> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com> wrote:
>> > >>>
>> > >>>> Doc Searls (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/) mentioned in passing that he uses a new speed test website. I checked it out, and it was very cool…
>> > >>>>
>> > >>>> www.speedof.me is an all-HTML5 website that seems to make accurate measurements of the up and download speeds of your internet connection. It’s also very attractive, and the real-time plots of the speed show interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
>> > >>>>
>> > >>>> Now if we could get them to a) allow longer/bigger tests to circumvent PowerBoost, and b) include a latency measurement so people could point out their bufferbloated equipment.
>> > >>>>
>> > >>>> I'm going to send them a note. Anything else I should add?
>> > >>>>
>> > >>>> Rich
>> > >>>> _______________________________________________
>> > >>>> Bloat mailing list
>> > >>>> Bloat@lists.bufferbloat.net
>> > >>>> https://lists.bufferbloat.net/listinfo/bloat
>> > >>>
>> > >>
>> > >
>> > > _______________________________________________
>> > > Bloat mailing list
>> > > Bloat@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/bloat
>> >
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>> >
>> 
>> 
>> <ΔQ morphism.png>
> 


[-- Attachment #2: Type: text/html, Size: 16770 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 17:14                 ` Neil Davies
@ 2014-07-25 17:17                   ` Sebastian Moeller
  0 siblings, 0 replies; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-25 17:17 UTC (permalink / raw)
  To: Neil Davies; +Cc: cerowrt-devel, bloat

Hi Neil,


On Jul 25, 2014, at 19:14 , Neil Davies <neil.davies@pnsol.com> wrote:

> Try this thesis - Lucian used this for work at CERN, the description is in there.... (see one of the appendicies) Analysis and predictive modeling of the performance of the ATLAS TDAQ network

	Sweet, thanks a lot.

best regards
	sebastian

> 
> 
> On 25 Jul 2014, at 18:12, Sebastian Moeller <moeller0@gmx.de> wrote:
> 
>> Hello Martin,
>> 
>> thanks a lot.
>> 
>> On Jul 25, 2014, at 18:32 , Martin Geddes <mail@martingeddes.com> wrote:
>> 
>>> So what is ΔQ and how do you "compute" it (to the extent it is a "computed" thing)?
>>> 
>>> Starting point: the only observable effect of a network is to lose and delay data -- i.e. to "attenuate quality" by adding the toxic effects of time to distributed computations. ΔQ is a morphism that relates the "quality attenuation" that the network imposes to the application performance, and describes the trading spaces at all intermediate layers of abstraction. It is shown in the attached graphic.
>>> 
>>> Critically, it frames quality as something that can only be lost ("attenuated"), both by the network and the application. Additionally, it is stochastic, and works with random variables and distributions.
>>> 
>>> At its most concrete level, it is the individual impairment encountered by every packet when the network in operation. But we don't want to have to track every packet - 1:1 scale maps are pretty useless. So we need to abstract that in order to create a model that has value.
>>> 
>>> Next abstraction: an improper random variable. This unifies loss and delay into a single stochastic object.
>>> Next abstraction: received transport, which is a CDF where we are interested in the properties of the "tail".
>>> 
>>> Next abstraction, that joins network performance and application QoE (as relates to performance): relate the CDF to the application through a Quality Transport Agreement. This "stochastic contract" is both necessary and sufficient to deliver the application outcome.
>>> 
>>> Next concretisation towards QoE: offered load of demand, as a CDF.
>>> Next concretisation towards QoE: breach hazard metric, which abstracts the application performance. Indicates the likelihood of the QTA contract being broken, and how badly.
>>> Final concretisation: the individual application performance encountered by every user. Again, a 1:1 map that isn't very helpful.
>>> 
>>> So as you can see, it's about as far away from a single point average metric as you can possibly get. A far richer model is required in order to achieve robust performance engineering.
>>> 
>>> It is "computed" using multi-point measurements to capture the distribution. The G/S/V charts you see are based on processing that data to account for various issues, including clock skew.
>>> 
>>> I hope that helps. We need to document more of this in public, which is an ongoing process. 
>> 
>> 	You lost me, I think what I should have asked for is a real example with numbers and the formulas ;) I guess that is deep in “secret sauce” territory. Alas, if that should be true it also means that deltaQ is not going to help me understand my network any better …
>> 
>> Best Regards
>> 	Sebastian
>> 
>> 
>>> 
>>> Martin
>>> 
>>> On 25 July 2014 16:58, Sebastian Moeller <moeller0@gmx.de> wrote:
>>> Hi Martin,
>>> 
>>> thanks for the pointers,
>>> 
>>> 
>>> On Jul 25, 2014, at 16:25 , Martin Geddes <mail@martingeddes.com> wrote:
>>> 
>>> > You may find the following useful background reading on the state of the art in network measurement, and a primer on ΔQ (which is the property we wish to measure).
>>> >
>>> > First, start with this presentation: Network performance optimisation using high-fidelity measures
>>> > Then read this one to decompose ΔQ into G, S and V: Fundamentals of network performance engineering
>>> > Then read this one to get a bit more sense on what ΔQ is about: Introduction to ΔQ and Network Performance Science (extracts)
>>> >
>>> > Then read these essays:
>>> >
>>> > Foundation of Network Science
>>> > How to do network performance chemistry
>>> > How to X-ray a telecoms network
>>> > There is no quality in averages: IPX case study
>>> 
>>>         All of this makes intuitively sense, but it is a bit light on how deltaQ is to be computed ;).
>>>         As far as I understand it also has not much bearing on my home network; the only one under my control. Now, following the buffer bloat discussion for some years, I have internalized the idea that bandwidth alone does not suffice to describe the quality of my network connection. I think that the latency increase under load (for unrelated flows) is the best of all the bad single number measures of network dynamics/quality. I should be related to what I understood deltaQ to depend on (as packet loss for non real time flows will cause an increase in latency).  I think that continuous measurements make a to n of sense for ISPs, backbone-operators, mobile carriers … but at home, basically, I operate as my own network quality monitor ;) (that is I try to pin point and debug (transient) anomalies).
>>> 
>>> >
>>> > Martin
>>> >
>>> > For fresh thinking about telecoms sign up for my free newsletter or visit the Geddes Think Tank.
>>> > LinkedIn Twitter Mobile: +44 7957 499219 Skype: mgeddes
>>> > Martin Geddes Consulting Ltd, Incorporated in Scotland, number SC275827 VAT Number: 859 5634 72 Registered office: 17-19 East London Street, Edinburgh, EH7 4BN
>>> >
>>> >
>>> >
>>> > On 25 July 2014 15:17, Sebastian Moeller <moeller0@gmx.de> wrote:
>>> > Hi Neil,
>>> >
>>> >
>>> > On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com> wrote:
>>> >
>>> > > Rich
>>> > >
>>> > > I have a deep worry over this style of single point measurement - and hence speed - as an appropriate measure.
>>> >
>>> >         But how do you propose to measure the (bottleneck) link capacity then? It turns out for current CPE and CMTS/DSLAM equipment one typically can not relay on good QoE out of the box, since typically these devices do not use their (largish) buffers wisely. Instead the current remedy is to take back control over the bottleneck link by shaping the actually sent traffic to stay below the hardware link capacity thereby avoiding feeling the consequences of the over-buffering. But to do this is is quite helpful to get an educated guess what the bottleneck links capacity actually is. And for that purpose a speediest seems useful.
>>> >
>>> >
>>> > > We know, and have evidence, that throughput/utilisation is not a good proxy for the network delivering suitable quality of experience. We work with organisation (Telco’s, large system integrators etc) where we spend a lot of time having to “undo” the consequences of “maximising speed”. Just like there is more to life than work, there is more to QoE than speed.
>>> > >
>>> > > For more specific comments see inline
>>> > >
>>> > > On 25 Jul 2014, at 13:09, Rich Brown <richb.hanover@gmail.com> wrote:
>>> > >
>>> > >> Neil,
>>> > >>
>>> > >> Thanks for the note and the observations. My thoughts:
>>> > >>
>>> > >> 1) I note that speedof.me does seem to overstate the speed results. At my home, it reports 5.98mbps down, and 638kbps up, while betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers similar to the betterspeedtest.net script.)
>>> > >>
>>> > >> 2) I think we're in agreement about the peak upload rate that you point out is too high. Their measurement code runs in the browser. It seems likely that the browser pumps out a few big packets before getting flow control information, thus giving the impression that they can send at a higher rate. This comports with the obvious decay that ramps toward the long-term rate.
>>> > >
>>> > > I think that its simpler than that, it is measuring the rate at which it can push packets out the interface - its real time rate is precisely that - it can not be the rate being reported by the far end, it can never exceed the limiting link. The long term average (if it is like other speed testers we’ve had to look into) is being measured at the TCP/IP SDU level by measuring the difference in time between the first and last timestamps of data stream and dividing that into the total data sent. Their “over-estimate” is because there are packets buffered in the CPE that have left the machine but not arrived at the far end.
>>> >
>>> >         Testing from an openwrt router located at a high-symmetric-bandwidth location shows that speedof.me does not scale higher than ~ 130 Mbps server to client and ~15Mbps client to server (on the same connection I can get 130Mbps S2C and ~80Mbps C2S, so the asymmetry in the speedof.me results is not caused by my local environment).
>>> >         @Rich and Dave, this probably means that for the upper end of fiber and cable and VDSL connections speed of.me is not going to be a reliable speed measure… Side note www.sppedtest.net shows ~100Mbps S2C and ~100Mbps C2S, so might be better suited to high-upload links...
>>> >
>>> > >
>>> > >>
>>> > >> 3) But that long-term speed should be at or below the theoretical long-term rate, not above it.
>>> > >
>>> > > Agreed, but in this case knowing the sync rate already defines that maximum.
>>> >
>>> >         I fully agree, but for ADSL the sync rate also contains a lot of encapsulation, so the maximum achievable TCP rate is at best ~90% of link rate. Note for cerowrt’s SQM system the link rate is exactly the right number to start out with at that system can take the encapsulation into account. But even then it is somewhat unintuitive to deduce the expected good-put from the link rate.
>>> >
>>> > >
>>> > >>
>>> > >> Two experiments for you to try:
>>> > >>
>>> > >> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt, in /usr/lib/CeroWrtScripts, or get it from github: https://github.com/richb-hanover/CeroWrtScripts )
>>> > >>
>>> > >> b) What does www.speedtest.net show?
>>> > >>
>>> > >> I will add your question (about the inaccuracy) to the note that I want to send out to speedof.me this weekend. I will also ask that they include min/max latency measurements to their test, and an option to send for > 10 seconds to minimize any effect of PowerBoost…
>>> >
>>> >         I think they do already, at least for the download bandwidth; they start with 128Kb and keep doubling the file size until a file takes longer than 8 seconds to transfer, they only claim to report the numbers from that last transferred file, so worst case with a stable link and a bandwidth > 16kbps ;), it has taken at least 12 seconds (4 plus 8) of measuring before the end of the plot, so the bandwidth of at least the last half of the download plot should be representative even assuming power boost. Caveat, I assume that power boost will not be reset by the transient lack of data transfer between the differently sized files (but since it should involve the same IPs and port# why should power boost reset itself?).
>>> >
>>> > Best Regards
>>> >         Sebastian
>>> >
>>> >
>>> >
>>> > >>
>>> > >> Best regards,
>>> > >>
>>> > >> Rich
>>> > >>
>>> > >>
>>> > >>
>>> > >> On Jul 25, 2014, at 5:10 AM, Neil Davies <neil.davies@pnsol.com> wrote:
>>> > >>
>>> > >>> Rich
>>> > >>>
>>> > >>> You may want to check how accurate they are to start.
>>> > >>>
>>> > >>> I just ran a “speed test” on my line (which I have complete control and visibility over the various network elements) and it reports an average “speed” (in the up direction) that is in excess of the capacity of the line, it reports the maximum rate at nearly twice the best possible rate of the ADSL connection.
>>> > >>>
>>> > >>> Doesn’t matter how pretty it is, if its not accurate it is of no use. This is rather ironic as the web site claims it is the “smartest and most accurate”!
>>> > >>>
>>> > >>> Neil
>>> > >>>
>>> > >>> <speedof_me_14-07-25.png>
>>> > >>>
>>> > >>> PS pretty clear to me what mistake they’ve made in the measurement process - its to do with incorrect inference and hence missing the buffering effects.
>>> > >>>
>>> > >>> On 20 Jul 2014, at 14:19, Rich Brown <richb.hanover@gmail.com> wrote:
>>> > >>>
>>> > >>>> Doc Searls (http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/) mentioned in passing that he uses a new speed test website. I checked it out, and it was very cool…
>>> > >>>>
>>> > >>>> www.speedof.me is an all-HTML5 website that seems to make accurate measurements of the up and download speeds of your internet connection. It’s also very attractive, and the real-time plots of the speed show interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
>>> > >>>>
>>> > >>>> Now if we could get them to a) allow longer/bigger tests to circumvent PowerBoost, and b) include a latency measurement so people could point out their bufferbloated equipment.
>>> > >>>>
>>> > >>>> I'm going to send them a note. Anything else I should add?
>>> > >>>>
>>> > >>>> Rich
>>> > >>>> _______________________________________________
>>> > >>>> Bloat mailing list
>>> > >>>> Bloat@lists.bufferbloat.net
>>> > >>>> https://lists.bufferbloat.net/listinfo/bloat
>>> > >>>
>>> > >>
>>> > >
>>> > > _______________________________________________
>>> > > Bloat mailing list
>>> > > Bloat@lists.bufferbloat.net
>>> > > https://lists.bufferbloat.net/listinfo/bloat
>>> >
>>> > _______________________________________________
>>> > Bloat mailing list
>>> > Bloat@lists.bufferbloat.net
>>> > https://lists.bufferbloat.net/listinfo/bloat
>>> >
>>> 
>>> 
>>> <ΔQ morphism.png>
>> 
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] Check out www.speedof.me - no Flash
  2014-07-25 14:27         ` [Bloat] " Neil Davies
  2014-07-25 16:02           ` Sebastian Moeller
@ 2014-07-25 21:20           ` David Lang
       [not found]             ` <25037.1406327367@turing-police.cc.vt.edu>
  1 sibling, 1 reply; 24+ messages in thread
From: David Lang @ 2014-07-25 21:20 UTC (permalink / raw)
  To: Neil Davies; +Cc: cerowrt-devel, bloat

On Fri, 25 Jul 2014, Neil Davies wrote:

> Sebastian
>
> On 25 Jul 2014, at 15:17, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> 	But how do you propose to measure the (bottleneck) link capacity then? 
>> It turns out for current CPE and CMTS/DSLAM equipment one typically can not 
>> relay on good QoE out of the box, since typically these devices do not use 
>> their (largish) buffers wisely. Instead the current remedy is to take back 
>> control over the bottleneck link by shaping the actually sent traffic to stay 
>> below the hardware link capacity thereby avoiding feeling the consequences of 
>> the over-buffering. But to do this is is quite helpful to get an educated 
>> guess what the bottleneck links capacity actually is. And for that purpose a 
>> speediest seems useful.
>
>
> I totally agree that what you are trying to do is to take control "back" for 
> the upstream delay and loss (which is the network level activity that directly 
> influences QoE). Observationally the "constraining link" is the point at which 
> the delay and loss start to grow as the the offered load is increased (there 
> are interesting interactions with the scheduling in the CMTS/3GPP node B - but 
> they are tractable) if we don't have direct access to the constraint (which in 
> the CPE, for ADSL you have) we track that "quality attenuation" inflection 
> point. Saturating the path is a bit of a sledgehammer (and has nasty 
> cost/scaling implications).

The thing is that there is little effect on latency until the congestion starts, 
so we can only measure the problem when there is congestion.

Saturating the link is a bit of a sledgehammer, but there really isn't any other 
way to get to the worst case situation.

In terms of scaling, have the server detect that all the requests have combined 
to saturate it's link, and have it tell the clients that it's overloaded, wait a 
random amount of time and retry (or try another location)

cost of bandwidth for this is just something to get someone to pay for (ideally 
someone with tons of bandwidth already who won't notice this sort of test, even 
if there are a few going on at once.)

David Lang

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
       [not found]                 ` <1406326625.225312181@apps.rackspace.com>
@ 2014-07-25 23:26                   ` David Lang
  2014-07-26 13:02                     ` Sebastian Moeller
  0 siblings, 1 reply; 24+ messages in thread
From: David Lang @ 2014-07-25 23:26 UTC (permalink / raw)
  To: dpreed; +Cc: cerowrt-devel, bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 19634 bytes --]

But I think that what we are seeing from teh results of the bufferbloat work is 
that a properly configured network doesn't degrade badly as it gets busy.

Individual services will degrade as they need more bandwidth than is available, 
but that sort of degredation is easy for the user to understand.

The current status-quo is where good throughput at 80% utilization may be 80Mb, 
at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% 
utilization it pulses between 10Mb and 80Mb averaging around 20Mb and latency 
goes from 10ms to multiple seconds over this range.

With BQL and fw_codel, 80% utilization would still be 80Mb, 90% utilization 
would be 89Mb, 95% utilization would be 93Mb with latency only going to 20ms

so there is a real problem to solve in the current status-quo, and the question 
is if there is a way to quantify the problem and test for it in ways that are 
repeatable, meaningful and understandable.

This is a place to avoid letting perfect be the enemy of good enough.

If you ask even relatively technical people about the quality of a network 
connection, they will talk to you about bandwidth and latency.

But if you talk to a networking expert, they don't even mention that, they talk 
about signal strength, waveform distortion, bit error rates, error correction 
mechanisms, signal regeneration, and probably many other things that I don't 
know enough to even mention :-)


Everyone is already measuring peak bandwidth today, and that is always going to 
be an important factor, so it will stay around.

So we need to show the degredation of the network, and I think that either 
ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us 
meaningful numbers that people can understand and talk about, while still being 
meaningful in the real world.

which of the two is more useful is something that we would need to get a bunch 
of people with different speed lines to report and see which is affected less by 
line differences and distance to target.

David Lang

On Fri, 25 Jul 2014, dpreed@reed.com wrote:

> I think what is being discussed is "how to measure the quality of one 
> endpoint's experience of the entire Internet over all time or over a specific 
> interval of time".
> 
> Yet the systems that are built on top of the Internet transport do not have 
> any kind of uniform dependence on the underlying transport behavior in terms 
> of their quality.  Even something like VoIP's quality as experienced by two 
> humans talking over it has a dependency on the Internet's behavior, but one 
> that is hardly simple.
> 
> As an extreme, if one endpoint experiences a direct DDoS attack or is 
> indirectly affected by one somewhere in the path, the quality of the 
> experience might be dramatically reduced.
> 
> So any attempt to define a delta-Q that has meaning in terms of user 
> experience appears pointless and even silly - the endpoint experience is 
> adequate under a very wide variety of conditions, but degrades terribly under 
> certain kinds of conditions.
> 
> As a different point, let's assume that the last-mile is 80% utilized, but the 
> latency variation in that utilization is not larger than 50 msec.  This is a 
> feasible-to-imagine operating point, but it requires a certain degreee of 
> tight control that may be very hard to achieve over thousands of independent 
> application services through that point, so its feasibility is contingent on 
> lots of factors. Then if the 20% capacity is far larger than 64 kb/sec we know 
> that toll-quality audio can be produced with a small endpoint "jitter buffer". 
> There's no "delta-Q" there at all - quality is great.
> 
> So the point is: a single number or even a single "morphism" (whatever that 
> is) to a specific algebraic domain element (a mapping to a semi-lattice with 
> non-Abelian operators?) does not allow one to define a "measure" of an 
> endpoint of the Internet that can be used to compute "quality" of all 
> applications.
> 
> Or in purely non-abstract terms: if there were a delta-Q it would be useless 
> for most network applications, but might be useful for a single network 
> application.
> 
> So I submit that delta-Q is a *metaphor* and not a very useful one at that. 
> It's probably as useful as providing a "funkiness" measure for an Internat 
> access point.  We can certainly talk about and make claims about the relative 
> "funkiness" of different connections and different providers.  We might even 
> claim that cable providers make funkier network providers than cellular 
> providers.
> 
> But to what end?
>
>
> On Friday, July 25, 2014 5:13pm, "David Lang" <david@lang.hm> said:
>
>
>
>> On Fri, 25 Jul 2014, Martin Geddes wrote:
>> 
>> > So what is ΔQ and how do you "compute" it (to the extent it is a
>> "computed"
>> > thing)?
>> 
>> don't try to reduce it to a single number, we have two numbers that seem to
>> matter
>> 
>> 1. throughput (each direction)
>> 
>> 2. latency under load
>> 
>> Currently the speed test sites report throughput in each direction and ping time
>> while not under load
>> 
>> If they could just add a ping time under load measurement, then we could talk
>> meaningfully about either the delta or ratio of the ping times as the
>> "bufferbloat factor"
>> 
>> no, it wouldn't account for absolutly every nuance, but it would come pretty
>> close.
>> 
>> If a connection has good throughput and a low bufferbloat factor, it should be
>> good for any type of use.
>> 
>> If it has good throughput, but a horrid bufferbloat factor, then you need to
>> artifically limit your traffic to stay clear of saturating the bandwith
>> (sacraficing throughput)
>> 
>> David Lang
>> 
>> > Starting point: the only observable effect of a network is to lose and
>> > delay data -- i.e. to "attenuate quality" by adding the toxic effects of
>> > time to distributed computations. ΔQ is a *morphism* that relates the
>> > "quality attenuation" that the network imposes to the application
>> > performance, and describes the trading spaces at all intermediate layers of
>> > abstraction. It is shown in the attached graphic.
>> >
>> > Critically, it frames quality as something that can only be lost
>> > ("attenuated"), both by the network and the application. Additionally, it
>> > is stochastic, and works with random variables and distributions.
>> >
>> > At its most concrete level, it is the individual impairment encountered by
>> > every packet when the network in operation. But we don't want to have to
>> > track every packet - 1:1 scale maps are pretty useless. So we need to
>> > abstract that in order to create a model that has value.
>> >
>> > Next abstraction: an improper random variable. This unifies loss and delay
>> > into a single stochastic object.
>> > Next abstraction: received transport, which is a CDF where we are
>> > interested in the properties of the "tail".
>> >
>> > Next abstraction, that joins network performance and application QoE (as
>> > relates to performance): relate the CDF to the application through a
>> > Quality Transport Agreement. This "stochastic contract" is both necessary
>> > and sufficient to deliver the application outcome.
>> >
>> > Next concretisation towards QoE: offered load of demand, as a CDF.
>> > Next concretisation towards QoE: breach hazard metric, which abstracts the
>> > application performance. Indicates the likelihood of the QTA contract being
>> > broken, and how badly.
>> > Final concretisation: the individual application performance encountered by
>> > every user. Again, a 1:1 map that isn't very helpful.
>> >
>> > So as you can see, it's about as far away from a single point average
>> > metric as you can possibly get. A far richer model is required in order to
>> > achieve robust performance engineering.
>> >
>> > It is "computed" using multi-point measurements to capture the
>> > distribution. The G/S/V charts you see are based on processing that data to
>> > account for various issues, including clock skew.
>> >
>> > I hope that helps. We need to document more of this in public, which is an
>> > ongoing process.
>> >
>> > Martin
>> >
>> > On 25 July 2014 16:58, Sebastian Moeller <moeller0@gmx.de> wrote:
>> >
>> >> Hi Martin,
>> >>
>> >> thanks for the pointers,
>> >>
>> >>
>> >> On Jul 25, 2014, at 16:25 , Martin Geddes <mail@martingeddes.com>
>> wrote:
>> >>
>> >>> You may find the following useful background reading on the state of
>> the
>> >> art in network measurement, and a primer on ΔQ (which is the
>> property we
>> >> wish to measure).
>> >>>
>> >>> First, start with this presentation: Network performance
>> optimisation
>> >> using high-fidelity measures
>> >>> Then read this one to decompose ΔQ into G, S and V:
>> Fundamentals of
>> >> network performance engineering
>> >>> Then read this one to get a bit more sense on what ΔQ is
>> about:
>> >> Introduction to ΔQ and Network Performance Science (extracts)
>> >>>
>> >>> Then read these essays:
>> >>>
>> >>> Foundation of Network Science
>> >>> How to do network performance chemistry
>> >>> How to X-ray a telecoms network
>> >>> There is no quality in averages: IPX case study
>> >>
>> >> All of this makes intuitively sense, but it is a bit light on
>> how
>> >> deltaQ is to be computed ;).
>> >> As far as I understand it also has not much bearing on my home
>> >> network; the only one under my control. Now, following the buffer bloat
>> >> discussion for some years, I have internalized the idea that bandwidth
>> >> alone does not suffice to describe the quality of my network connection.
>> I
>> >> think that the latency increase under load (for unrelated flows) is the
>> >> best of all the bad single number measures of network dynamics/quality.
>> I
>> >> should be related to what I understood deltaQ to depend on (as packet
>> loss
>> >> for non real time flows will cause an increase in latency). I think
>> that
>> >> continuous measurements make a to n of sense for ISPs,
>> backbone-operators,
>> >> mobile carriers … but at home, basically, I operate as my own
>> network
>> >> quality monitor ;) (that is I try to pin point and debug (transient)
>> >> anomalies).
>> >>
>> >>>
>> >>> Martin
>> >>>
>> >>> For fresh thinking about telecoms sign up for my free newsletter or
>> >> visit the Geddes Think Tank.
>> >>> LinkedIn Twitter Mobile: +44 7957 499219 Skype: mgeddes
>> >>> Martin Geddes Consulting Ltd, Incorporated in Scotland, number
>> SC275827
>> >> VAT Number: 859 5634 72 Registered office: 17-19 East London Street,
>> >> Edinburgh, EH7 4BN
>> >>>
>> >>>
>> >>>
>> >>> On 25 July 2014 15:17, Sebastian Moeller <moeller0@gmx.de>
>> wrote:
>> >>> Hi Neil,
>> >>>
>> >>>
>> >>> On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com>
>> wrote:
>> >>>
>> >>>> Rich
>> >>>>
>> >>>> I have a deep worry over this style of single point measurement -
>> and
>> >> hence speed - as an appropriate measure.
>> >>>
>> >>> But how do you propose to measure the (bottleneck) link
>> capacity
>> >> then? It turns out for current CPE and CMTS/DSLAM equipment one
>> typically
>> >> can not relay on good QoE out of the box, since typically these devices
>> do
>> >> not use their (largish) buffers wisely. Instead the current remedy is to
>> >> take back control over the bottleneck link by shaping the actually sent
>> >> traffic to stay below the hardware link capacity thereby avoiding
>> feeling
>> >> the consequences of the over-buffering. But to do this is is quite
>> helpful
>> >> to get an educated guess what the bottleneck links capacity actually is.
>> >> And for that purpose a speediest seems useful.
>> >>>
>> >>>
>> >>>> We know, and have evidence, that throughput/utilisation is not a
>> good
>> >> proxy for the network delivering suitable quality of experience. We work
>> >> with organisation (Telco’s, large system integrators etc) where we
>> spend a
>> >> lot of time having to “undo” the consequences of
>> “maximising speed”. Just
>> >> like there is more to life than work, there is more to QoE than speed.
>> >>>>
>> >>>> For more specific comments see inline
>> >>>>
>> >>>> On 25 Jul 2014, at 13:09, Rich Brown
>> <richb.hanover@gmail.com> wrote:
>> >>>>
>> >>>>> Neil,
>> >>>>>
>> >>>>> Thanks for the note and the observations. My thoughts:
>> >>>>>
>> >>>>> 1) I note that speedof.me does seem to overstate the speed
>> results.
>> >> At my home, it reports 5.98mbps down, and 638kbps up, while
>> >> betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers
>> >> similar to the betterspeedtest.net script.)
>> >>>>>
>> >>>>> 2) I think we're in agreement about the peak upload rate that
>> you
>> >> point out is too high. Their measurement code runs in the browser. It
>> seems
>> >> likely that the browser pumps out a few big packets before getting flow
>> >> control information, thus giving the impression that they can send at a
>> >> higher rate. This comports with the obvious decay that ramps toward the
>> >> long-term rate.
>> >>>>
>> >>>> I think that its simpler than that, it is measuring the rate at
>> which
>> >> it can push packets out the interface - its real time rate is precisely
>> >> that - it can not be the rate being reported by the far end, it can
>> never
>> >> exceed the limiting link. The long term average (if it is like other
>> speed
>> >> testers we’ve had to look into) is being measured at the TCP/IP SDU
>> level
>> >> by measuring the difference in time between the first and last
>> timestamps
>> >> of data stream and dividing that into the total data sent. Their
>> >> “over-estimate” is because there are packets buffered in the
>> CPE that have
>> >> left the machine but not arrived at the far end.
>> >>>
>> >>> Testing from an openwrt router located at a
>> >> high-symmetric-bandwidth location shows that speedof.me does not scale
>> >> higher than ~ 130 Mbps server to client and ~15Mbps client to server (on
>> >> the same connection I can get 130Mbps S2C and ~80Mbps C2S, so the
>> asymmetry
>> >> in the speedof.me results is not caused by my local environment).
>> >>> @Rich and Dave, this probably means that for the upper end
>> of
>> >> fiber and cable and VDSL connections speed of.me is not going to be a
>> >> reliable speed measure… Side note www.sppedtest.net shows ~100Mbps
>> S2C
>> >> and ~100Mbps C2S, so might be better suited to high-upload links...
>> >>>
>> >>>>
>> >>>>>
>> >>>>> 3) But that long-term speed should be at or below the
>> theoretical
>> >> long-term rate, not above it.
>> >>>>
>> >>>> Agreed, but in this case knowing the sync rate already defines
>> that
>> >> maximum.
>> >>>
>> >>> I fully agree, but for ADSL the sync rate also contains a lot
>> of
>> >> encapsulation, so the maximum achievable TCP rate is at best ~90% of
>> link
>> >> rate. Note for cerowrt’s SQM system the link rate is exactly the
>> right
>> >> number to start out with at that system can take the encapsulation into
>> >> account. But even then it is somewhat unintuitive to deduce the expected
>> >> good-put from the link rate.
>> >>>
>> >>>>
>> >>>>>
>> >>>>> Two experiments for you to try:
>> >>>>>
>> >>>>> a) What does betterspeedtest.sh show? (It's in the latest
>> CeroWrt, in
>> >> /usr/lib/CeroWrtScripts, or get it from github:
>> >> https://github.com/richb-hanover/CeroWrtScripts )
>> >>>>>
>> >>>>> b) What does www.speedtest.net show?
>> >>>>>
>> >>>>> I will add your question (about the inaccuracy) to the note
>> that I
>> >> want to send out to speedof.me this weekend. I will also ask that they
>> >> include min/max latency measurements to their test, and an option to
>> send
>> >> for > 10 seconds to minimize any effect of PowerBoost…
>> >>>
>> >>> I think they do already, at least for the download
>> bandwidth;
>> >> they start with 128Kb and keep doubling the file size until a file takes
>> >> longer than 8 seconds to transfer, they only claim to report the numbers
>> >> from that last transferred file, so worst case with a stable link and a
>> >> bandwidth > 16kbps ;), it has taken at least 12 seconds (4 plus 8) of
>> >> measuring before the end of the plot, so the bandwidth of at least the
>> last
>> >> half of the download plot should be representative even assuming power
>> >> boost. Caveat, I assume that power boost will not be reset by the
>> transient
>> >> lack of data transfer between the differently sized files (but since it
>> >> should involve the same IPs and port# why should power boost reset
>> itself?).
>> >>>
>> >>> Best Regards
>> >>> Sebastian
>> >>>
>> >>>
>> >>>
>> >>>>>
>> >>>>> Best regards,
>> >>>>>
>> >>>>> Rich
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> On Jul 25, 2014, at 5:10 AM, Neil Davies
>> <neil.davies@pnsol.com>
>> >> wrote:
>> >>>>>
>> >>>>>> Rich
>> >>>>>>
>> >>>>>> You may want to check how accurate they are to start.
>> >>>>>>
>> >>>>>> I just ran a “speed test” on my line (which I
>> have complete control
>> >> and visibility over the various network elements) and it reports an
>> average
>> >> “speed” (in the up direction) that is in excess of the
>> capacity of the
>> >> line, it reports the maximum rate at nearly twice the best possible rate
>> of
>> >> the ADSL connection.
>> >>>>>>
>> >>>>>> Doesn’t matter how pretty it is, if its not
>> accurate it is of no
>> >> use. This is rather ironic as the web site claims it is the
>> “smartest and
>> >> most accurate”!
>> >>>>>>
>> >>>>>> Neil
>> >>>>>>
>> >>>>>> <speedof_me_14-07-25.png>
>> >>>>>>
>> >>>>>> PS pretty clear to me what mistake they’ve made in
>> the measurement
>> >> process - its to do with incorrect inference and hence missing the
>> >> buffering effects.
>> >>>>>>
>> >>>>>> On 20 Jul 2014, at 14:19, Rich Brown
>> <richb.hanover@gmail.com>
>> >> wrote:
>> >>>>>>
>> >>>>>>> Doc Searls (
>> >>
>> http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/)
>> >> mentioned in passing that he uses a new speed test website. I checked it
>> >> out, and it was very cool…
>> >>>>>>>
>> >>>>>>> www.speedof.me is an all-HTML5 website that seems to
>> make accurate
>> >> measurements of the up and download speeds of your internet connection.
>> >> It’s also very attractive, and the real-time plots of the speed
>> show
>> >> interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
>> >>>>>>>
>> >>>>>>> Now if we could get them to a) allow longer/bigger
>> tests to
>> >> circumvent PowerBoost, and b) include a latency measurement so people
>> could
>> >> point out their bufferbloated equipment.
>> >>>>>>>
>> >>>>>>> I'm going to send them a note. Anything else I should
>> add?
>> >>>>>>>
>> >>>>>>> Rich
>> >>>>>>> _______________________________________________
>> >>>>>>> Bloat mailing list
>> >>>>>>> Bloat@lists.bufferbloat.net
>> >>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>> _______________________________________________
>> >>>> Bloat mailing list
>> >>>> Bloat@lists.bufferbloat.net
>> >>>> https://lists.bufferbloat.net/listinfo/bloat
>> >>>
>> >>> _______________________________________________
>> >>> Bloat mailing list
>> >>> Bloat@lists.bufferbloat.net
>> >>> https://lists.bufferbloat.net/listinfo/bloat
>> >>>
>> >>
>> >>
>> >_______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>> _______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
       [not found]             ` <25037.1406327367@turing-police.cc.vt.edu>
@ 2014-07-25 23:30               ` David Lang
  2014-07-26 12:53               ` Sebastian Moeller
  1 sibling, 0 replies; 24+ messages in thread
From: David Lang @ 2014-07-25 23:30 UTC (permalink / raw)
  To: Valdis.Kletnieks; +Cc: cerowrt-devel, bloat

On Fri, 25 Jul 2014, Valdis.Kletnieks@vt.edu wrote:

> On Fri, 25 Jul 2014 14:20:53 -0700, David Lang said:
>
>> cost of bandwidth for this is just something to get someone to pay for (ideally
>> someone with tons of bandwidth already who won't notice this sort of test, even
>> if there are a few going on at once.)
>
> Ask U of Wisconsin how that worked out for them when Netgear shipped some
> new boxes....
>
> http://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse#NETGEAR_and_the_University_of_Wisconsin.E2.80.93Madison
>
> It's one thing to come up with a solution that works for the 300 (total wild guess) people
> on this list.  But to be useful for field deployable, it really needs to be able
> to handle a "every Comcast customer in a major metro area", because we *do*
> want this stuff to work well when this makes it into the CPE that Comcast
> gives every new customer, right? :)

the key thing here is to stagger the work.

Don't have every device do the test immediatly at startup. As you noted, that 
will collapse as they all start up at once.

Have them operate from their saved stats and test at (largish) random offsets.

ideally, have this test be against an ANYCAST address, so that the ISP can run a 
copy internally without having to reconfigure the clients.

David Lang

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
       [not found]             ` <25037.1406327367@turing-police.cc.vt.edu>
  2014-07-25 23:30               ` [Bloat] [Cerowrt-devel] " David Lang
@ 2014-07-26 12:53               ` Sebastian Moeller
  1 sibling, 0 replies; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-26 12:53 UTC (permalink / raw)
  To: Valdis.Kletnieks; +Cc: cerowrt-devel, bloat

Hi Valdis,


On Jul 26, 2014, at 00:29 , Valdis.Kletnieks@vt.edu wrote:

> On Fri, 25 Jul 2014 14:20:53 -0700, David Lang said:
> 
>> cost of bandwidth for this is just something to get someone to pay for (ideally
>> someone with tons of bandwidth already who won't notice this sort of test, even
>> if there are a few going on at once.)
> 
> Ask U of Wisconsin how that worked out for them when Netgear shipped some
> new boxes....
> 
> http://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse#NETGEAR_and_the_University_of_Wisconsin.E2.80.93Madison
> 
> It's one thing to come up with a solution that works for the 300 (total wild guess) people
> on this list.  But to be useful for field deployable, it really needs to be able
> to handle a "every Comcast customer in a major metro area", because we *do*
> want this stuff to work well when this makes it into the CPE that Comcast
> gives every new customer, right? :)

	Hence it would be really sweet if the link speed test could be perfumed against the CMTS/DSLAM as these should be powerful enough to handle load from the number of subscribed “homes”. All we need is a fairly cheap way to measure one-way delays to and from the home router… (I think I am just going to try how far supported and how useful ICMP timestamp requests are, I also wonder why these are considered a security risk and seem to be missing in ICMPv6)

Best Regards
	sebastian

> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
  2014-07-25 23:26                   ` [Bloat] [Cerowrt-devel] " David Lang
@ 2014-07-26 13:02                     ` Sebastian Moeller
  2014-07-26 20:53                       ` David Lang
  0 siblings, 1 reply; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-26 13:02 UTC (permalink / raw)
  To: David Lang; +Cc: dpreed, cerowrt-devel, bloat

Hi David,


On Jul 26, 2014, at 01:26 , David Lang <david@lang.hm> wrote:

> But I think that what we are seeing from teh results of the bufferbloat work is that a properly configured network doesn't degrade badly as it gets busy.
> 
> Individual services will degrade as they need more bandwidth than is available, but that sort of degredation is easy for the user to understand.
> 
> The current status-quo is where good throughput at 80% utilization may be 80Mb, at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% utilization it pulses between 10Mb and 80Mb averaging around 20Mb and latency goes from 10ms to multiple seconds over this range.
> 
> With BQL and fw_codel, 80% utilization would still be 80Mb, 90% utilization would be 89Mb, 95% utilization would be 93Mb with latency only going to 20ms
> 
> so there is a real problem to solve in the current status-quo, and the question is if there is a way to quantify the problem and test for it in ways that are repeatable, meaningful and understandable.
> 
> This is a place to avoid letting perfect be the enemy of good enough.
> 
> If you ask even relatively technical people about the quality of a network connection, they will talk to you about bandwidth and latency.
> 
> But if you talk to a networking expert, they don't even mention that, they talk about signal strength, waveform distortion, bit error rates, error correction mechanisms, signal regeneration, and probably many other things that I don't know enough to even mention :-)
> 
> 
> Everyone is already measuring peak bandwidth today, and that is always going to be an important factor, so it will stay around.
> 
> So we need to show the degredation of the network, and I think that either ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us meaningful numbers that people can understand and talk about, while still being meaningful in the real world.

	Maybe we should follow Neil and Martin’s lead and consider either ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole thing quality estimator or factor (as negative quality or a factor < 0 intuitively shows a degradation). Also my bet is on the difference not on the ratio, why should people with bad latency to begin with (satellite?) be more tolerant to further degradation? I would assume that on a high latency link if at all the “budget” for further degradation might be smaller than on a low latency link (reasoning: there might be a fixed latency budget for acceptable latency for voip).

> 
> which of the two is more useful is something that we would need to get a bunch of people with different speed lines to report and see which is affected less by line differences and distance to target.

	Or make sure we always measure against the closest target (which with satellite might still be far away)?

Best Regards
	sebastian


> 
> David Lang
> 
> On Fri, 25 Jul 2014, dpreed@reed.com wrote:
> 
>> I think what is being discussed is "how to measure the quality of one endpoint's experience of the entire Internet over all time or over a specific interval of time".
>> Yet the systems that are built on top of the Internet transport do not have any kind of uniform dependence on the underlying transport behavior in terms of their quality.  Even something like VoIP's quality as experienced by two humans talking over it has a dependency on the Internet's behavior, but one that is hardly simple.
>> As an extreme, if one endpoint experiences a direct DDoS attack or is indirectly affected by one somewhere in the path, the quality of the experience might be dramatically reduced.
>> So any attempt to define a delta-Q that has meaning in terms of user experience appears pointless and even silly - the endpoint experience is adequate under a very wide variety of conditions, but degrades terribly under certain kinds of conditions.
>> As a different point, let's assume that the last-mile is 80% utilized, but the latency variation in that utilization is not larger than 50 msec.  This is a feasible-to-imagine operating point, but it requires a certain degreee of tight control that may be very hard to achieve over thousands of independent application services through that point, so its feasibility is contingent on lots of factors. Then if the 20% capacity is far larger than 64 kb/sec we know that toll-quality audio can be produced with a small endpoint "jitter buffer". There's no "delta-Q" there at all - quality is great.
>> So the point is: a single number or even a single "morphism" (whatever that is) to a specific algebraic domain element (a mapping to a semi-lattice with non-Abelian operators?) does not allow one to define a "measure" of an endpoint of the Internet that can be used to compute "quality" of all applications.
>> Or in purely non-abstract terms: if there were a delta-Q it would be useless for most network applications, but might be useful for a single network application.
>> So I submit that delta-Q is a *metaphor* and not a very useful one at that. It's probably as useful as providing a "funkiness" measure for an Internat access point.  We can certainly talk about and make claims about the relative "funkiness" of different connections and different providers.  We might even claim that cable providers make funkier network providers than cellular providers.
>> But to what end?
>> 
>> 
>> On Friday, July 25, 2014 5:13pm, "David Lang" <david@lang.hm> said:
>> 
>> 
>> 
>>> On Fri, 25 Jul 2014, Martin Geddes wrote:
>>> > So what is ΔQ and how do you "compute" it (to the extent it is a
>>> "computed"
>>> > thing)?
>>> don't try to reduce it to a single number, we have two numbers that seem to
>>> matter
>>> 1. throughput (each direction)
>>> 2. latency under load
>>> Currently the speed test sites report throughput in each direction and ping time
>>> while not under load
>>> If they could just add a ping time under load measurement, then we could talk
>>> meaningfully about either the delta or ratio of the ping times as the
>>> "bufferbloat factor"
>>> no, it wouldn't account for absolutly every nuance, but it would come pretty
>>> close.
>>> If a connection has good throughput and a low bufferbloat factor, it should be
>>> good for any type of use.
>>> If it has good throughput, but a horrid bufferbloat factor, then you need to
>>> artifically limit your traffic to stay clear of saturating the bandwith
>>> (sacraficing throughput)
>>> David Lang
>>> > Starting point: the only observable effect of a network is to lose and
>>> > delay data -- i.e. to "attenuate quality" by adding the toxic effects of
>>> > time to distributed computations. ΔQ is a *morphism* that relates the
>>> > "quality attenuation" that the network imposes to the application
>>> > performance, and describes the trading spaces at all intermediate layers of
>>> > abstraction. It is shown in the attached graphic.
>>> >
>>> > Critically, it frames quality as something that can only be lost
>>> > ("attenuated"), both by the network and the application. Additionally, it
>>> > is stochastic, and works with random variables and distributions.
>>> >
>>> > At its most concrete level, it is the individual impairment encountered by
>>> > every packet when the network in operation. But we don't want to have to
>>> > track every packet - 1:1 scale maps are pretty useless. So we need to
>>> > abstract that in order to create a model that has value.
>>> >
>>> > Next abstraction: an improper random variable. This unifies loss and delay
>>> > into a single stochastic object.
>>> > Next abstraction: received transport, which is a CDF where we are
>>> > interested in the properties of the "tail".
>>> >
>>> > Next abstraction, that joins network performance and application QoE (as
>>> > relates to performance): relate the CDF to the application through a
>>> > Quality Transport Agreement. This "stochastic contract" is both necessary
>>> > and sufficient to deliver the application outcome.
>>> >
>>> > Next concretisation towards QoE: offered load of demand, as a CDF.
>>> > Next concretisation towards QoE: breach hazard metric, which abstracts the
>>> > application performance. Indicates the likelihood of the QTA contract being
>>> > broken, and how badly.
>>> > Final concretisation: the individual application performance encountered by
>>> > every user. Again, a 1:1 map that isn't very helpful.
>>> >
>>> > So as you can see, it's about as far away from a single point average
>>> > metric as you can possibly get. A far richer model is required in order to
>>> > achieve robust performance engineering.
>>> >
>>> > It is "computed" using multi-point measurements to capture the
>>> > distribution. The G/S/V charts you see are based on processing that data to
>>> > account for various issues, including clock skew.
>>> >
>>> > I hope that helps. We need to document more of this in public, which is an
>>> > ongoing process.
>>> >
>>> > Martin
>>> >
>>> > On 25 July 2014 16:58, Sebastian Moeller <moeller0@gmx.de> wrote:
>>> >
>>> >> Hi Martin,
>>> >>
>>> >> thanks for the pointers,
>>> >>
>>> >>
>>> >> On Jul 25, 2014, at 16:25 , Martin Geddes <mail@martingeddes.com>
>>> wrote:
>>> >>
>>> >>> You may find the following useful background reading on the state of
>>> the
>>> >> art in network measurement, and a primer on ΔQ (which is the
>>> property we
>>> >> wish to measure).
>>> >>>
>>> >>> First, start with this presentation: Network performance
>>> optimisation
>>> >> using high-fidelity measures
>>> >>> Then read this one to decompose ΔQ into G, S and V:
>>> Fundamentals of
>>> >> network performance engineering
>>> >>> Then read this one to get a bit more sense on what ΔQ is
>>> about:
>>> >> Introduction to ΔQ and Network Performance Science (extracts)
>>> >>>
>>> >>> Then read these essays:
>>> >>>
>>> >>> Foundation of Network Science
>>> >>> How to do network performance chemistry
>>> >>> How to X-ray a telecoms network
>>> >>> There is no quality in averages: IPX case study
>>> >>
>>> >> All of this makes intuitively sense, but it is a bit light on
>>> how
>>> >> deltaQ is to be computed ;).
>>> >> As far as I understand it also has not much bearing on my home
>>> >> network; the only one under my control. Now, following the buffer bloat
>>> >> discussion for some years, I have internalized the idea that bandwidth
>>> >> alone does not suffice to describe the quality of my network connection.
>>> I
>>> >> think that the latency increase under load (for unrelated flows) is the
>>> >> best of all the bad single number measures of network dynamics/quality.
>>> I
>>> >> should be related to what I understood deltaQ to depend on (as packet
>>> loss
>>> >> for non real time flows will cause an increase in latency). I think
>>> that
>>> >> continuous measurements make a to n of sense for ISPs,
>>> backbone-operators,
>>> >> mobile carriers … but at home, basically, I operate as my own
>>> network
>>> >> quality monitor ;) (that is I try to pin point and debug (transient)
>>> >> anomalies).
>>> >>
>>> >>>
>>> >>> Martin
>>> >>>
>>> >>> For fresh thinking about telecoms sign up for my free newsletter or
>>> >> visit the Geddes Think Tank.
>>> >>> LinkedIn Twitter Mobile: +44 7957 499219 Skype: mgeddes
>>> >>> Martin Geddes Consulting Ltd, Incorporated in Scotland, number
>>> SC275827
>>> >> VAT Number: 859 5634 72 Registered office: 17-19 East London Street,
>>> >> Edinburgh, EH7 4BN
>>> >>>
>>> >>>
>>> >>>
>>> >>> On 25 July 2014 15:17, Sebastian Moeller <moeller0@gmx.de>
>>> wrote:
>>> >>> Hi Neil,
>>> >>>
>>> >>>
>>> >>> On Jul 25, 2014, at 14:24 , Neil Davies <Neil.Davies@pnsol.com>
>>> wrote:
>>> >>>
>>> >>>> Rich
>>> >>>>
>>> >>>> I have a deep worry over this style of single point measurement -
>>> and
>>> >> hence speed - as an appropriate measure.
>>> >>>
>>> >>> But how do you propose to measure the (bottleneck) link
>>> capacity
>>> >> then? It turns out for current CPE and CMTS/DSLAM equipment one
>>> typically
>>> >> can not relay on good QoE out of the box, since typically these devices
>>> do
>>> >> not use their (largish) buffers wisely. Instead the current remedy is to
>>> >> take back control over the bottleneck link by shaping the actually sent
>>> >> traffic to stay below the hardware link capacity thereby avoiding
>>> feeling
>>> >> the consequences of the over-buffering. But to do this is is quite
>>> helpful
>>> >> to get an educated guess what the bottleneck links capacity actually is.
>>> >> And for that purpose a speediest seems useful.
>>> >>>
>>> >>>
>>> >>>> We know, and have evidence, that throughput/utilisation is not a
>>> good
>>> >> proxy for the network delivering suitable quality of experience. We work
>>> >> with organisation (Telco’s, large system integrators etc) where we
>>> spend a
>>> >> lot of time having to “undo” the consequences of
>>> “maximising speed”. Just
>>> >> like there is more to life than work, there is more to QoE than speed.
>>> >>>>
>>> >>>> For more specific comments see inline
>>> >>>>
>>> >>>> On 25 Jul 2014, at 13:09, Rich Brown
>>> <richb.hanover@gmail.com> wrote:
>>> >>>>
>>> >>>>> Neil,
>>> >>>>>
>>> >>>>> Thanks for the note and the observations. My thoughts:
>>> >>>>>
>>> >>>>> 1) I note that speedof.me does seem to overstate the speed
>>> results.
>>> >> At my home, it reports 5.98mbps down, and 638kbps up, while
>>> >> betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers
>>> >> similar to the betterspeedtest.net script.)
>>> >>>>>
>>> >>>>> 2) I think we're in agreement about the peak upload rate that
>>> you
>>> >> point out is too high. Their measurement code runs in the browser. It
>>> seems
>>> >> likely that the browser pumps out a few big packets before getting flow
>>> >> control information, thus giving the impression that they can send at a
>>> >> higher rate. This comports with the obvious decay that ramps toward the
>>> >> long-term rate.
>>> >>>>
>>> >>>> I think that its simpler than that, it is measuring the rate at
>>> which
>>> >> it can push packets out the interface - its real time rate is precisely
>>> >> that - it can not be the rate being reported by the far end, it can
>>> never
>>> >> exceed the limiting link. The long term average (if it is like other
>>> speed
>>> >> testers we’ve had to look into) is being measured at the TCP/IP SDU
>>> level
>>> >> by measuring the difference in time between the first and last
>>> timestamps
>>> >> of data stream and dividing that into the total data sent. Their
>>> >> “over-estimate” is because there are packets buffered in the
>>> CPE that have
>>> >> left the machine but not arrived at the far end.
>>> >>>
>>> >>> Testing from an openwrt router located at a
>>> >> high-symmetric-bandwidth location shows that speedof.me does not scale
>>> >> higher than ~ 130 Mbps server to client and ~15Mbps client to server (on
>>> >> the same connection I can get 130Mbps S2C and ~80Mbps C2S, so the
>>> asymmetry
>>> >> in the speedof.me results is not caused by my local environment).
>>> >>> @Rich and Dave, this probably means that for the upper end
>>> of
>>> >> fiber and cable and VDSL connections speed of.me is not going to be a
>>> >> reliable speed measure… Side note www.sppedtest.net shows ~100Mbps
>>> S2C
>>> >> and ~100Mbps C2S, so might be better suited to high-upload links...
>>> >>>
>>> >>>>
>>> >>>>>
>>> >>>>> 3) But that long-term speed should be at or below the
>>> theoretical
>>> >> long-term rate, not above it.
>>> >>>>
>>> >>>> Agreed, but in this case knowing the sync rate already defines
>>> that
>>> >> maximum.
>>> >>>
>>> >>> I fully agree, but for ADSL the sync rate also contains a lot
>>> of
>>> >> encapsulation, so the maximum achievable TCP rate is at best ~90% of
>>> link
>>> >> rate. Note for cerowrt’s SQM system the link rate is exactly the
>>> right
>>> >> number to start out with at that system can take the encapsulation into
>>> >> account. But even then it is somewhat unintuitive to deduce the expected
>>> >> good-put from the link rate.
>>> >>>
>>> >>>>
>>> >>>>>
>>> >>>>> Two experiments for you to try:
>>> >>>>>
>>> >>>>> a) What does betterspeedtest.sh show? (It's in the latest
>>> CeroWrt, in
>>> >> /usr/lib/CeroWrtScripts, or get it from github:
>>> >> https://github.com/richb-hanover/CeroWrtScripts )
>>> >>>>>
>>> >>>>> b) What does www.speedtest.net show?
>>> >>>>>
>>> >>>>> I will add your question (about the inaccuracy) to the note
>>> that I
>>> >> want to send out to speedof.me this weekend. I will also ask that they
>>> >> include min/max latency measurements to their test, and an option to
>>> send
>>> >> for > 10 seconds to minimize any effect of PowerBoost…
>>> >>>
>>> >>> I think they do already, at least for the download
>>> bandwidth;
>>> >> they start with 128Kb and keep doubling the file size until a file takes
>>> >> longer than 8 seconds to transfer, they only claim to report the numbers
>>> >> from that last transferred file, so worst case with a stable link and a
>>> >> bandwidth > 16kbps ;), it has taken at least 12 seconds (4 plus 8) of
>>> >> measuring before the end of the plot, so the bandwidth of at least the
>>> last
>>> >> half of the download plot should be representative even assuming power
>>> >> boost. Caveat, I assume that power boost will not be reset by the
>>> transient
>>> >> lack of data transfer between the differently sized files (but since it
>>> >> should involve the same IPs and port# why should power boost reset
>>> itself?).
>>> >>>
>>> >>> Best Regards
>>> >>> Sebastian
>>> >>>
>>> >>>
>>> >>>
>>> >>>>>
>>> >>>>> Best regards,
>>> >>>>>
>>> >>>>> Rich
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>> On Jul 25, 2014, at 5:10 AM, Neil Davies
>>> <neil.davies@pnsol.com>
>>> >> wrote:
>>> >>>>>
>>> >>>>>> Rich
>>> >>>>>>
>>> >>>>>> You may want to check how accurate they are to start.
>>> >>>>>>
>>> >>>>>> I just ran a “speed test” on my line (which I
>>> have complete control
>>> >> and visibility over the various network elements) and it reports an
>>> average
>>> >> “speed” (in the up direction) that is in excess of the
>>> capacity of the
>>> >> line, it reports the maximum rate at nearly twice the best possible rate
>>> of
>>> >> the ADSL connection.
>>> >>>>>>
>>> >>>>>> Doesn’t matter how pretty it is, if its not
>>> accurate it is of no
>>> >> use. This is rather ironic as the web site claims it is the
>>> “smartest and
>>> >> most accurate”!
>>> >>>>>>
>>> >>>>>> Neil
>>> >>>>>>
>>> >>>>>> <speedof_me_14-07-25.png>
>>> >>>>>>
>>> >>>>>> PS pretty clear to me what mistake they’ve made in
>>> the measurement
>>> >> process - its to do with incorrect inference and hence missing the
>>> >> buffering effects.
>>> >>>>>>
>>> >>>>>> On 20 Jul 2014, at 14:19, Rich Brown
>>> <richb.hanover@gmail.com>
>>> >> wrote:
>>> >>>>>>
>>> >>>>>>> Doc Searls (
>>> >>
>>> http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/)
>>> >> mentioned in passing that he uses a new speed test website. I checked it
>>> >> out, and it was very cool…
>>> >>>>>>>
>>> >>>>>>> www.speedof.me is an all-HTML5 website that seems to
>>> make accurate
>>> >> measurements of the up and download speeds of your internet connection.
>>> >> It’s also very attractive, and the real-time plots of the speed
>>> show
>>> >> interesting info. (screen shot at: http://richb-hanover.com/speedof-me/)
>>> >>>>>>>
>>> >>>>>>> Now if we could get them to a) allow longer/bigger
>>> tests to
>>> >> circumvent PowerBoost, and b) include a latency measurement so people
>>> could
>>> >> point out their bufferbloated equipment.
>>> >>>>>>>
>>> >>>>>>> I'm going to send them a note. Anything else I should
>>> add?
>>> >>>>>>>
>>> >>>>>>> Rich
>>> >>>>>>> _______________________________________________
>>> >>>>>>> Bloat mailing list
>>> >>>>>>> Bloat@lists.bufferbloat.net
>>> >>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>> >>>>>>
>>> >>>>>
>>> >>>>
>>> >>>> _______________________________________________
>>> >>>> Bloat mailing list
>>> >>>> Bloat@lists.bufferbloat.net
>>> >>>> https://lists.bufferbloat.net/listinfo/bloat
>>> >>>
>>> >>> _______________________________________________
>>> >>> Bloat mailing list
>>> >>> Bloat@lists.bufferbloat.net
>>> >>> https://lists.bufferbloat.net/listinfo/bloat
>>> >>>
>>> >>
>>> >>
>>> >_______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>> _______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
  2014-07-26 13:02                     ` Sebastian Moeller
@ 2014-07-26 20:53                       ` David Lang
  2014-07-26 22:00                         ` Sebastian Moeller
  0 siblings, 1 reply; 24+ messages in thread
From: David Lang @ 2014-07-26 20:53 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: dpreed, cerowrt-devel, bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 4720 bytes --]

On Sat, 26 Jul 2014, Sebastian Moeller wrote:

> On Jul 26, 2014, at 01:26 , David Lang <david@lang.hm> wrote:
>
>> But I think that what we are seeing from teh results of the bufferbloat work 
>> is that a properly configured network doesn't degrade badly as it gets busy.
>>
>> Individual services will degrade as they need more bandwidth than is 
>> available, but that sort of degredation is easy for the user to understand.
>>
>> The current status-quo is where good throughput at 80% utilization may be 
>> 80Mb, at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and 
>> at 100% utilization it pulses between 10Mb and 80Mb averaging around 20Mb and 
>> latency goes from 10ms to multiple seconds over this range.
>>
>> With BQL and fw_codel, 80% utilization would still be 80Mb, 90% utilization 
>> would be 89Mb, 95% utilization would be 93Mb with latency only going to 20ms
>>
>> so there is a real problem to solve in the current status-quo, and the 
>> question is if there is a way to quantify the problem and test for it in ways 
>> that are repeatable, meaningful and understandable.
>>
>> This is a place to avoid letting perfect be the enemy of good enough.
>>
>> If you ask even relatively technical people about the quality of a network 
>> connection, they will talk to you about bandwidth and latency.
>>
>> But if you talk to a networking expert, they don't even mention that, they 
>> talk about signal strength, waveform distortion, bit error rates, error 
>> correction mechanisms, signal regeneration, and probably many other things 
>> that I don't know enough to even mention :-)
>>
>>
>> Everyone is already measuring peak bandwidth today, and that is always going 
>> to be an important factor, so it will stay around.
>>
>> So we need to show the degredation of the network, and I think that either 
>> ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us 
>> meaningful numbers that people can understand and talk about, while still 
>> being meaningful in the real world.
>
> 	Maybe we should follow Neil and Martin’s lead and consider either 
> ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole 
> thing quality estimator or factor (as negative quality or a factor < 0 
> intuitively shows a degradation).

That's debatable, if we call this a bufferbloat factor, the higher the number 
the more bloat you suffer.

there's also the fact that the numeric differences if you do small/large vs 
small/larger aren't impressive while large/small vs larger/small look 
substantially different. This is a psychology question.

> Also my bet is on the difference not on the ratio, why should people with bad 
> latency to begin with (satellite?) be more tolerant to further degradation? I 
> would assume that on a high latency link if at all the “budget” for further 
> degradation might be smaller than on a low latency link (reasoning: there 
> might be a fixed latency budget for acceptable latency for voip).

we'd need to check. The problem with difference is that it's far more affected 
by the bandwidth of the connection than a ratio is. If your measurement packets 
end up behind one extra data packet, your absolute number will grow based on the 
transmission time required for that data packet.

so I'm leaning towards the ratio making more sense when comparing vastly 
different types of lines.

As for th elatency budget idea, I don't buy that, if it was the case then we 
would have no problems until latency exceeding the magic value and then the 
service would fail entirely. What we have in practice is that buffering covers 
up a lot of latency, as long as the jitter isn't bad. You may have a lag between 
what you say and when someone on the other end interrupts you without much 
trouble (as long as echo cancellation takes it into account)

>> which of the two is more useful is something that we would need to get a 
>> bunch of people with different speed lines to report and see which is 
>> affected less by line differences and distance to target.
>
> 	Or make sure we always measure against the closest target (which with 
> satellite might still be far away)?

It's desirable to test against the closest target to reduce the impact on the 
Internet overall, but ideally the quality measurement would not depend on how 
far away the target is.

If you live in Silicon Valley, you are very close to a lot of good targets, if 
you live in outer mongolia (or on a farm in the midwestern US) you are a long 
way from any target, but we don't want the measurement to change a lot, because 
the problem is probably in the first couple of hops (absent a Verizon/Level3 
type peering problem :-)

David Lang

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
  2014-07-26 20:53                       ` David Lang
@ 2014-07-26 22:00                         ` Sebastian Moeller
  2014-07-26 22:30                           ` David Lang
  0 siblings, 1 reply; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-26 22:00 UTC (permalink / raw)
  To: David Lang; +Cc: dpreed, cerowrt-devel, bloat

Hi David,


On Jul 26, 2014, at 22:53 , David Lang <david@lang.hm> wrote:

> On Sat, 26 Jul 2014, Sebastian Moeller wrote:
> 
>> On Jul 26, 2014, at 01:26 , David Lang <david@lang.hm> wrote:
>> 
>>> But I think that what we are seeing from teh results of the bufferbloat work is that a properly configured network doesn't degrade badly as it gets busy.
>>> 
>>> Individual services will degrade as they need more bandwidth than is available, but that sort of degredation is easy for the user to understand.
>>> 
>>> The current status-quo is where good throughput at 80% utilization may be 80Mb, at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% utilization it pulses between 10Mb and 80Mb averaging around 20Mb and latency goes from 10ms to multiple seconds over this range.
>>> 
>>> With BQL and fw_codel, 80% utilization would still be 80Mb, 90% utilization would be 89Mb, 95% utilization would be 93Mb with latency only going to 20ms
>>> 
>>> so there is a real problem to solve in the current status-quo, and the question is if there is a way to quantify the problem and test for it in ways that are repeatable, meaningful and understandable.
>>> 
>>> This is a place to avoid letting perfect be the enemy of good enough.
>>> 
>>> If you ask even relatively technical people about the quality of a network connection, they will talk to you about bandwidth and latency.
>>> 
>>> But if you talk to a networking expert, they don't even mention that, they talk about signal strength, waveform distortion, bit error rates, error correction mechanisms, signal regeneration, and probably many other things that I don't know enough to even mention :-)
>>> 
>>> 
>>> Everyone is already measuring peak bandwidth today, and that is always going to be an important factor, so it will stay around.
>>> 
>>> So we need to show the degredation of the network, and I think that either ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us meaningful numbers that people can understand and talk about, while still being meaningful in the real world.
>> 
>> 	Maybe we should follow Neil and Martin’s lead and consider either ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole thing quality estimator or factor (as negative quality or a factor < 0 intuitively shows a degradation).
> 
> That's debatable, if we call this a bufferbloat factor, the higher the number the more bloat you suffer.
> 
> there's also the fact that the numeric differences if you do small/large vs small/larger aren't impressive while large/small vs larger/small look substantially different. This is a psychology question.

	I am not in this for marketing ;) so I am not out for impressive numbers ;)

> 
>> Also my bet is on the difference not on the ratio, why should people with bad latency to begin with (satellite?) be more tolerant to further degradation? I would assume that on a high latency link if at all the “budget” for further degradation might be smaller than on a low latency link (reasoning: there might be a fixed latency budget for acceptable latency for voip).
> 
> we'd need to check. The problem with difference is that it's far more affected by the bandwidth of the connection than a ratio is. If your measurement packets end up behind one extra data packet, your absolute number will grow based on the transmission time required for that data packet.
> 
> so I'm leaning towards the ratio making more sense when comparing vastly different types of lines.

	But for a satellite link with hight 1st hop RTT the buffer bloat factor is always going to look minuscule…. (I still think the difference is better)

> 
> As for th elatency budget idea, I don't buy that, if it was the case then we would have no problems until latency exceeding the magic value and then the service would fail entirely.

	No rather think of it that with increases latency pain increases, not a threshold but a gradual change from good over acceptable into painful...


> What we have in practice is that buffering covers up a lot of latency, as long as the jitter isn't bad. You may have a lag between what you say and when someone on the other end interrupts you without much trouble (as long as echo cancellation takes it into account)

	Remember transcontinental long distance calls? If the delay gets too long communication suffers especially in real time applications like voip.

> 
>>> which of the two is more useful is something that we would need to get a bunch of people with different speed lines to report and see which is affected less by line differences and distance to target.
>> 
>> 	Or make sure we always measure against the closest target (which with satellite might still be far away)?
> 
> It's desirable to test against the closest target to reduce the impact on the Internet overall, but ideally the quality measurement would not depend on how far away the target is.

	No the “quality” will be most affected by the bottleneck link, but the more hops we accumulate the more variance we pick up and the more measurements we need to reach an acceptable confidence in our data...

Best Regards	
	Sebastian

> 
> If you live in Silicon Valley, you are very close to a lot of good targets, if you live in outer mongolia (or on a farm in the midwestern US) you are a long way from any target, but we don't want the measurement to change a lot, because the problem is probably in the first couple of hops (absent a Verizon/Level3 type peering problem :-)
> 
> David Lang


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
  2014-07-26 22:00                         ` Sebastian Moeller
@ 2014-07-26 22:30                           ` David Lang
  2014-07-26 23:14                             ` Sebastian Moeller
  0 siblings, 1 reply; 24+ messages in thread
From: David Lang @ 2014-07-26 22:30 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: dpreed, cerowrt-devel, bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 6059 bytes --]

On Sun, 27 Jul 2014, Sebastian Moeller wrote:

> On Jul 26, 2014, at 22:53 , David Lang <david@lang.hm> wrote:
>
>> On Sat, 26 Jul 2014, Sebastian Moeller wrote:
>>
>>> On Jul 26, 2014, at 01:26 , David Lang <david@lang.hm> wrote:
>>>
>>>> But I think that what we are seeing from teh results of the bufferbloat work is that a properly configured network doesn't degrade badly as it gets busy.
>>>>
>>>> Individual services will degrade as they need more bandwidth than is available, but that sort of degredation is easy for the user to understand.
>>>>
>>>> The current status-quo is where good throughput at 80% utilization may be 80Mb, at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% utilization it pulses between 10Mb and 80Mb averaging around 20Mb and latency goes from 10ms to multiple seconds over this range.
>>>>
>>>> With BQL and fw_codel, 80% utilization would still be 80Mb, 90% utilization would be 89Mb, 95% utilization would be 93Mb with latency only going to 20ms
>>>>
>>>> so there is a real problem to solve in the current status-quo, and the question is if there is a way to quantify the problem and test for it in ways that are repeatable, meaningful and understandable.
>>>>
>>>> This is a place to avoid letting perfect be the enemy of good enough.
>>>>
>>>> If you ask even relatively technical people about the quality of a network connection, they will talk to you about bandwidth and latency.
>>>>
>>>> But if you talk to a networking expert, they don't even mention that, they talk about signal strength, waveform distortion, bit error rates, error correction mechanisms, signal regeneration, and probably many other things that I don't know enough to even mention :-)
>>>>
>>>>
>>>> Everyone is already measuring peak bandwidth today, and that is always going to be an important factor, so it will stay around.
>>>>
>>>> So we need to show the degredation of the network, and I think that either ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us meaningful numbers that people can understand and talk about, while still being meaningful in the real world.
>>>
>>> 	Maybe we should follow Neil and Martin’s lead and consider either ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole thing quality estimator or factor (as negative quality or a factor < 0 intuitively shows a degradation).
>>
>> That's debatable, if we call this a bufferbloat factor, the higher the number the more bloat you suffer.
>>
>> there's also the fact that the numeric differences if you do small/large vs small/larger aren't impressive while large/small vs larger/small look substantially different. This is a psychology question.
>
> 	I am not in this for marketing ;) so I am not out for impressive numbers ;)

well, part of the problem we have is exactly marketing, so we do need to take 
that into account.

This is one of the things that has come up in multiple forums after the EFF 
announcement, people saying that they've heard of bufferbloat but don't have any 
way of measuring it or comparing notes.

getting a marketing number here would be a huge help.

>>> Also my bet is on the difference not on the ratio, why should people with bad latency to begin with (satellite?) be more tolerant to further degradation? I would assume that on a high latency link if at all the “budget” for further degradation might be smaller than on a low latency link (reasoning: there might be a fixed latency budget for acceptable latency for voip).
>>
>> we'd need to check. The problem with difference is that it's far more affected by the bandwidth of the connection than a ratio is. If your measurement packets end up behind one extra data packet, your absolute number will grow based on the transmission time required for that data packet.
>>
>> so I'm leaning towards the ratio making more sense when comparing vastly different types of lines.
>
> 	But for a satellite link with hight 1st hop RTT the buffer bloat factor 
> is always going to look minuscule…. (I still think the difference is better)
>
>>
>> As for th elatency budget idea, I don't buy that, if it was the case then we would have no problems until latency exceeding the magic value and then the service would fail entirely.
>
> 	No rather think of it that with increases latency pain increases, not a 
> threshold but a gradual change from good over acceptable into painful...
>
>
>> What we have in practice is that buffering covers up a lot of latency, as long as the jitter isn't bad. You may have a lag between what you say and when someone on the other end interrupts you without much trouble (as long as echo cancellation takes it into account)
>
> 	Remember transcontinental long distance calls? If the delay gets too 
> long communication suffers especially in real time applications like voip.

how much of that was due to echo cancellation issues compared to the raw 
latency? the speed of light across the country hasn't changed. and I'd actually 
bet that the signalling speed of a direct analog connection across the country 
was actually faster than the current mic to speaker signalling speed

    analog -> digital -> many routers -> digital -> analog

but the echo cancellation is so much more sophisticated that we don't notice 
the delay as much

>>
>>>> which of the two is more useful is something that we would need to get a bunch of people with different speed lines to report and see which is affected less by line differences and distance to target.
>>>
>>> 	Or make sure we always measure against the closest target (which with satellite might still be far away)?
>>
>> It's desirable to test against the closest target to reduce the impact on the Internet overall, but ideally the quality measurement would not depend on how far away the target is.
>
> 	No the “quality” will be most affected by the bottleneck link, but the 
> more hops we accumulate the more variance we pick up and the more measurements 
> we need to reach an acceptable confidence in our data...

true

David Lang

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Bloat] [Cerowrt-devel]  Check out www.speedof.me - no Flash
  2014-07-26 22:30                           ` David Lang
@ 2014-07-26 23:14                             ` Sebastian Moeller
  0 siblings, 0 replies; 24+ messages in thread
From: Sebastian Moeller @ 2014-07-26 23:14 UTC (permalink / raw)
  To: David Lang; +Cc: dpreed, cerowrt-devel, bloat

Hi David,

On Jul 27, 2014, at 00:30 , David Lang <david@lang.hm> wrote:

> On Sun, 27 Jul 2014, Sebastian Moeller wrote:
> 
>> On Jul 26, 2014, at 22:53 , David Lang <david@lang.hm> wrote:
>> 
>>> On Sat, 26 Jul 2014, Sebastian Moeller wrote:
>>> 
>>>> On Jul 26, 2014, at 01:26 , David Lang <david@lang.hm> wrote:
>>>> 
>>>>> But I think that what we are seeing from teh results of the bufferbloat work is that a properly configured network doesn't degrade badly as it gets busy.
>>>>> 
>>>>> Individual services will degrade as they need more bandwidth than is available, but that sort of degredation is easy for the user to understand.
>>>>> 
>>>>> The current status-quo is where good throughput at 80% utilization may be 80Mb, at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% utilization it pulses between 10Mb and 80Mb averaging around 20Mb and latency goes from 10ms to multiple seconds over this range.
>>>>> 
>>>>> With BQL and fw_codel, 80% utilization would still be 80Mb, 90% utilization would be 89Mb, 95% utilization would be 93Mb with latency only going to 20ms
>>>>> 
>>>>> so there is a real problem to solve in the current status-quo, and the question is if there is a way to quantify the problem and test for it in ways that are repeatable, meaningful and understandable.
>>>>> 
>>>>> This is a place to avoid letting perfect be the enemy of good enough.
>>>>> 
>>>>> If you ask even relatively technical people about the quality of a network connection, they will talk to you about bandwidth and latency.
>>>>> 
>>>>> But if you talk to a networking expert, they don't even mention that, they talk about signal strength, waveform distortion, bit error rates, error correction mechanisms, signal regeneration, and probably many other things that I don't know enough to even mention :-)
>>>>> 
>>>>> 
>>>>> Everyone is already measuring peak bandwidth today, and that is always going to be an important factor, so it will stay around.
>>>>> 
>>>>> So we need to show the degredation of the network, and I think that either ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us meaningful numbers that people can understand and talk about, while still being meaningful in the real world.
>>>> 
>>>> 	Maybe we should follow Neil and Martin’s lead and consider either ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole thing quality estimator or factor (as negative quality or a factor < 0 intuitively shows a degradation).
>>> 
>>> That's debatable, if we call this a bufferbloat factor, the higher the number the more bloat you suffer.
>>> 
>>> there's also the fact that the numeric differences if you do small/large vs small/larger aren't impressive while large/small vs larger/small look substantially different. This is a psychology question.
>> 
>> 	I am not in this for marketing ;) so I am not out for impressive numbers ;)
> 
> well, part of the problem we have is exactly marketing, so we do need to take that into account.
> 
> This is one of the things that has come up in multiple forums after the EFF announcement, people saying that they've heard of bufferbloat but don't have any way of measuring it or comparing notes.
> 
> getting a marketing number here would be a huge help.

	Good point.

> 
>>>> Also my bet is on the difference not on the ratio, why should people with bad latency to begin with (satellite?) be more tolerant to further degradation? I would assume that on a high latency link if at all the “budget” for further degradation might be smaller than on a low latency link (reasoning: there might be a fixed latency budget for acceptable latency for voip).
>>> 
>>> we'd need to check. The problem with difference is that it's far more affected by the bandwidth of the connection than a ratio is. If your measurement packets end up behind one extra data packet, your absolute number will grow based on the transmission time required for that data packet.
>>> 
>>> so I'm leaning towards the ratio making more sense when comparing vastly different types of lines.
>> 
>> 	But for a satellite link with hight 1st hop RTT the buffer bloat factor is always going to look minuscule…. (I still think the difference is better)
>> 
>>> 
>>> As for th elatency budget idea, I don't buy that, if it was the case then we would have no problems until latency exceeding the magic value and then the service would fail entirely.
>> 
>> 	No rather think of it that with increases latency pain increases, not a threshold but a gradual change from good over acceptable into painful...
>> 
>> 
>>> What we have in practice is that buffering covers up a lot of latency, as long as the jitter isn't bad. You may have a lag between what you say and when someone on the other end interrupts you without much trouble (as long as echo cancellation takes it into account)
>> 
>> 	Remember transcontinental long distance calls? If the delay gets too long communication suffers especially in real time applications like voip.
> 
> how much of that was due to echo cancellation issues compared to the raw latency? the speed of light across the country hasn't changed. and I'd actually bet that the signalling speed of a direct analog connection across the country was actually faster than the current mic to speaker signalling speed
> 
>   analog -> digital -> many routers -> digital -> analog
> 
> but the echo cancellation is so much more sophisticated that we don't notice the delay as much

	It is more the fact that if you stop speaking in the US it will take say 150 ms to reach my ear and even if I respond immediately it will take the same time to come back to you. 300ms is an awkward long pause and you might have already started your next sentence interfering with my signal reaching your speaker… Now I am not sure about the exact numbers but the longer the transmission delay the more awkward the conversation...

> 
>>> 
>>>>> which of the two is more useful is something that we would need to get a bunch of people with different speed lines to report and see which is affected less by line differences and distance to target.
>>>> 
>>>> 	Or make sure we always measure against the closest target (which with satellite might still be far away)?
>>> 
>>> It's desirable to test against the closest target to reduce the impact on the Internet overall, but ideally the quality measurement would not depend on how far away the target is.
>> 
>> 	No the “quality” will be most affected by the bottleneck link, but the more hops we accumulate the more variance we pick up and the more measurements we need to reach an acceptable confidence in our data...
> 
> true
> 
> David Lang


^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2014-07-26 23:15 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-07-20 13:19 [Bloat] Check out www.speedof.me - no Flash Rich Brown
2014-07-20 13:27 ` [Bloat] [Cerowrt-devel] " David P. Reed
2014-07-20 18:41 ` [Bloat] SIMET: nationwide bw/latency/jitter test effort in Brazil Henrique de Moraes Holschuh
2014-07-23  5:36 ` [Bloat] Check out www.speedof.me - no Flash Alex Elsayed
     [not found] ` <03292B76-5273-4912-BB18-90E95C16A9F5@pnsol.com>
2014-07-25 12:09   ` Rich Brown
2014-07-25 12:24     ` Neil Davies
2014-07-25 14:17       ` Sebastian Moeller
2014-07-25 14:25         ` Martin Geddes
2014-07-25 15:58           ` Sebastian Moeller
     [not found]             ` <CAAAY2agBsPWhG9ANXHS6zAxjFgaWuuMAUPAFT9Npgv=SgVN1=g@mail.gmail.com>
     [not found]               ` <C1EA7389-68A4-42FE-A0BA-80E8B137145F@gmx.de>
2014-07-25 17:14                 ` Neil Davies
2014-07-25 17:17                   ` Sebastian Moeller
     [not found]               ` <alpine.DEB.2.02.1407251409120.21739@nftneq.ynat.uz>
     [not found]                 ` <1406326625.225312181@apps.rackspace.com>
2014-07-25 23:26                   ` [Bloat] [Cerowrt-devel] " David Lang
2014-07-26 13:02                     ` Sebastian Moeller
2014-07-26 20:53                       ` David Lang
2014-07-26 22:00                         ` Sebastian Moeller
2014-07-26 22:30                           ` David Lang
2014-07-26 23:14                             ` Sebastian Moeller
2014-07-25 14:27         ` [Bloat] " Neil Davies
2014-07-25 16:02           ` Sebastian Moeller
2014-07-25 21:20           ` David Lang
     [not found]             ` <25037.1406327367@turing-police.cc.vt.edu>
2014-07-25 23:30               ` [Bloat] [Cerowrt-devel] " David Lang
2014-07-26 12:53               ` Sebastian Moeller
2014-07-25 15:05         ` David P. Reed
2014-07-25 15:46       ` [Bloat] " Rich Brown

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox