* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 18:22 [Bloat] Updated Bufferbloat Test Sina Khanifar
@ 2021-02-24 22:10 ` Dave Taht
2021-02-24 23:39 ` Simon Barber
2021-02-25 5:56 ` Sina Khanifar
2021-02-24 22:15 ` Kenneth Porter
` (6 subsequent siblings)
7 siblings, 2 replies; 41+ messages in thread
From: Dave Taht @ 2021-02-24 22:10 UTC (permalink / raw)
To: Sina Khanifar; +Cc: bloat, sam
So I've taken a tiny amount of time to run a few tests. For starters,
thank you very much for your dedication and the time you put into
creating such a usable website and FAQ.
I have several issues, though I really haven't had time to delve deep
into the packet captures. (Others, please try taking them, and put
them somewhere?)
0) "Average" jitter is a meaningless number. For a videoconferencing
application, what matters most is max jitter: the app will choose to
ride the top edge of that, rather than follow it. I'd prefer using a
98th-percentile number, rather than a 75th-percentile number, to
reflect where the typical delay in a videoconference might end up.
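For illustration, the gap between average and high-percentile jitter
can be sketched as below (the RTT samples are hypothetical, and
deviation-from-minimum is one of several reasonable jitter
definitions):

```python
import statistics

def jitter_stats(rtts_ms):
    """Summarize jitter from a list of RTT samples (ms).

    Jitter here is each sample's deviation from the minimum (baseline)
    RTT; definitions vary, this is one illustrative choice.
    """
    base = min(rtts_ms)
    deviations = sorted(r - base for r in rtts_ms)

    def pct(p):
        # Nearest-rank percentile over the sorted deviations.
        idx = min(len(deviations) - 1, int(p / 100 * len(deviations)))
        return deviations[idx]

    return {
        "avg": statistics.mean(deviations),
        "p75": pct(75),
        "p98": pct(98),
        "max": deviations[-1],
    }

# A bursty distribution: mostly quiet, with occasional large spikes.
samples = [20] * 90 + [20 + s for s in (30, 40, 50, 60, 80, 100, 120, 150, 180, 250)]
stats = jitter_stats(samples)
# The average (and even the 75th percentile) hides the spikes a
# videoconference app actually has to buffer for.
print(stats)
```

On this distribution the average comes out around 10 ms while the 98th
percentile is an order of magnitude larger, which is the gap being
argued about here.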
1) The worst-case scenario for bloat affecting a user's experience is
a simultaneous upload and download, and I'd rather you tested that
than test each direction separately. You also get a more realistic
figure for the actual achievable bandwidth under contention, and can
expose problems like strict-priority queuing in one direction or the
other locking out further flows.
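A sketch of that measurement structure: a latency probe samples
continuously while upload and download saturators run at the same
time. The saturators and the probe's "round trip" are stubs here,
standing in for real CDN transfers:

```python
import threading
import time

def latency_probe(stop, samples, interval=0.05):
    # Periodically time a small round trip while the load runs.
    # The sleep is a stand-in for the RTT; the real test would time
    # a tiny HTTP request to the CDN instead.
    while not stop.is_set():
        t0 = time.monotonic()
        time.sleep(0.01)  # stub round trip, ~10 ms
        samples.append((time.monotonic() - t0) * 1000)
        time.sleep(interval)

def run_bidirectional_test(duration=1.0):
    stop = threading.Event()
    samples = []
    threads = [
        threading.Thread(target=latency_probe, args=(stop, samples)),
        # Upload and download saturators run *concurrently*, so the
        # probe sees the queue under worst-case bidirectional load.
        threading.Thread(target=lambda: time.sleep(duration)),  # stub upload
        threading.Thread(target=lambda: time.sleep(duration)),  # stub download
    ]
    for t in threads:
        t.start()
    time.sleep(duration)
    stop.set()
    for t in threads:
        t.join()
    return samples

samples = run_bidirectional_test()
print(f"{len(samples)} latency samples under bidirectional load")
```

The point of the structure is that the probe's samples are taken while
both directions are loaded, which is where strict-priority lockouts
would show up.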
2) I get absurdly great results from it with or without SQM, on a
reasonably modern cable modem (buffercontrol and PIE, and a CMTS
doing the right things).
This points to any of a number of problems (features!). It's
certainly my hope that all the CDN makers at this point have installed
bufferbloat mitigations. Testing a CDN's TCP IS a great idea, but as a
bufferbloat test, maybe not so much.
The packet capture of the TCP flows DOES show about 60 ms of
jitter... but no loss. Your test shows:
https://www.waveform.com/tools/bufferbloat?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
And it is very jittery in its estimates at the beginning of the test.
I really should be overjoyed to learn that a CDN is doing more of the
right things, but in terms of a test... and Linux also has a ton of
mitigations on the client side.
3) As a side note, ECN actually is negotiated on the upload, if it's
enabled on your system. Are you tracking any ECN statistics at this
point (ecnseen)? It is not negotiated on the download (which is fine
by me).
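For what it's worth, on Linux ecnseen shows up in the per-flow TCP
info printed by `ss -ti`; a rough sketch of pulling it out follows.
The sample output below is abridged and made up, and counting flows by
congestion-control name is a heuristic for illustration only:

```python
import subprocess

def parse_ecnseen(ss_output):
    """Return (flows with ecnseen, rough total flow count) from ss -ti text."""
    # 'ecnseen' is printed for flows on which ECT-marked packets were
    # actually received; counting 'cubic'/'bbr' tokens is a crude way
    # to count flows for this sketch.
    total = ss_output.count("cubic") + ss_output.count("bbr")
    return ss_output.count("ecnseen"), total

def tcp_flows_with_ecnseen():
    """Query the live system (Linux only, requires iproute2's ss)."""
    out = subprocess.run(["ss", "-ti"], capture_output=True, text=True).stdout
    return parse_ecnseen(out)

# Illustrative, abridged stand-in for real `ss -ti` output:
sample = """
ESTAB 0 0 192.0.2.1:443 198.51.100.7:51834
     cubic wscale:7,7 rto:204 rtt:3.5/1.75 ecnseen send 1.2Gbps
ESTAB 0 0 192.0.2.1:443 198.51.100.8:51900
     cubic wscale:7,7 rto:208 rtt:4.1/2.0 send 0.9Gbps
"""
print(parse_ecnseen(sample))  # -> (1, 2): one of two flows saw ECN marks
```

Something along these lines, run during the test, would answer the
ecnseen question without any changes to the web tool itself.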
I am, regrettably, at this precise moment unable to test a native
cable modem at the same speed as an SQM box; I hope to get further on
this tomorrow.
Again, GREAT work so far, and I do think a test tool for all these
CDNs (heck, one that tested all of them at the same time) is very,
very useful.
On Wed, Feb 24, 2021 at 10:22 AM Sina Khanifar <sina@waveform.com> wrote:
>
> Hi all,
>
> A couple of months ago my co-founder Sam posted an early beta of the
> Bufferbloat test that we’ve been working on, and Dave also linked to
> it a couple of weeks ago.
>
> Thank you all so much for your feedback - we almost entirely
> redesigned the tool and the UI based on the comments we received.
> We’re almost ready to launch the tool officially today at this URL,
> but wanted to show it to the list in case anyone finds any last bugs
> that we might have overlooked:
>
> https://www.waveform.com/tools/bufferbloat
>
> If you find a bug, please share the "Share Your Results" link with us
> along with what happened. We capture some debugging information on the
> backend, and having a share link allows us to diagnose any issues.
>
> This is really more of a passion project than anything else for us –
> we don’t anticipate we’ll try to commercialize it or anything like
> that. We're very thankful for all the work the folks on this list have
> done to identify and fix bufferbloat, and hope this is a useful
> contribution. I’ve personally been very frustrated by bufferbloat on a
> range of devices, and decided it might be helpful to build another
> bufferbloat test when the DSLReports test was down at some point last
> year.
>
> Our goals with this project were:
> * To build a second solid bufferbloat test in case DSLReports goes down again.
> * Build a test where bufferbloat is front and center as the primary
> purpose of the test, rather than just a feature.
> * Try to explain bufferbloat and its effect on a user's connection
> as clearly as possible for a lay audience.
>
> A few notes:
> * On the backend, we’re using Cloudflare’s CDN to perform the actual
> download and upload speed test. I know John Graham-Cumming has posted
> to this list in the past; if he or anyone from Cloudflare sees this,
> we’d love some help. Our Cloudflare Workers are being
> bandwidth-throttled due to having a non-enterprise grade account.
> We’ve worked around this in a kludgy way, but we’d love to get it
> resolved.
> * We have lots of ideas for improvements, e.g. simultaneous
> upload/downloads, trying different file size chunks, time-series
> latency graphs, using WebRTC to test UDP traffic etc, but in the
> interest of getting things launched we're sticking with the current
> featureset.
> * There are a lot of browser-specific workarounds that we had to
> implement, and latency itself is measured in different ways on
> Safari/Webkit vs Chromium/Firefox due to limitations of the
> PerformanceTiming APIs. You may notice that latency is different on
> different browsers, however the actual bufferbloat (relative increase
> in latency) should be pretty consistent.
>
> In terms of some of the changes we made based on the feedback we
> received on this list:
>
> Based on Toke’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> * We changed the way the speed tests run to show an instantaneous
> speed as the test is being run.
> * We moved the bufferbloat grade into the main results box.
> * We tried really hard to get as close to saturating gigabit
> connections as possible. We redesigned completely the way we chunk
> files, added a “warming up” period, and spent quite a bit of time
> optimizing our code to minimize CPU usage, as we found that was often
> the limiting factor to our speed test results.
> * We changed the shield grades altogether and went through a few
> different iterations of how to show the effect of bufferbloat on
> connectivity, and ended up with a “table view” to try to show the
> effect that bufferbloat specifically is having on the connection
> (compared to when the connection is unloaded).
> * We now link from the results table view to the FAQ where the
> conditions for each type of connection are explained.
> * We also changed the way we measure latency and now use the faster
> of either Google’s CDN or Cloudflare at any given location. We’re also
> using the WebTiming APIs to get a more accurate latency number, though
> this does not work on some mobile browsers (e.g. iOS Safari) and as a
> result we show a higher latency on mobile devices. Since our test is
> less a test of absolute latency and more a test of relative latency
> with and without load, we felt this was workable.
> * Our jitter is now an average (was previously RMS).
> * The “before you start” text was rewritten and moved above the start button.
> * We now spell out upload and download instead of having arrows.
> * We hugely reduced the number of cross-site scripts. I was a bit
> embarrassed by this if I’m honest - I spent a long time building web
> tools for the EFF, where we almost never allowed any cross-site
> scripts.
> * Our site is hosted on Shopify, and adding any features via
> their app store ends up adding a whole lot of gunk. But we uninstalled
> some apps, rewrote our template, and ended up removing a whole lot of
> the gunk. There’s still plenty of room for improvement, but it should
> be a lot better than before.
>
> Based on Dave Collier-Brown’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> * We replaced the “unloaded” and “loaded” language with “unloaded”
> and then “download active” and “upload active.” In the grade box we
> indicate that, for example, “Your latency increased moderately under
> load.”
> * We tried to generally make it easier for non-techie folks to
> understand by emphasizing the grade and adding the table showing how
> bufferbloat affects some commonly-used services.
> * We didn’t really change the candle charts too much - they’re
> mostly just to give a basic visual - we focused more on the actual
> meat of the results above that.
>
> Based on Sebastian Moeller’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> * We considered doing a bidirectional saturating load, but decided
> to skip implementing it for now. It’s definitely something we’d
> like to experiment with more in the future.
> * We added a “warming up” period as well as a “draining” period to
> help fill and empty the buffer. We haven’t added the option for an
> extended test, but have this on our list of backlog changes to make in
> the future.
>
> Based on Y’s feedback (link):
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> * We actually ended up removing the grades, but we explained our
> criteria for the new table in the FAQ.
>
> Based on Greg White's feedback (shared privately):
> * We added an FAQ answer explaining jitter and how we measure it.
>
> We’d love for you all to play with the new version of the tool and
> send over any feedback you might have. We’re going to be in a feature
> freeze before launch but we'd love to get any bugs sorted out. We'll
> likely put this project aside after we iron out a last round of bugs
> and launch, and turn back to working on projects that help us pay the
> bills, but we definitely hope to revisit and improve the tool over
> time.
>
> Best,
>
> Sina, Arshan, and Sam.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman
dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 22:10 ` Dave Taht
@ 2021-02-24 23:39 ` Simon Barber
2021-02-25 5:56 ` Sina Khanifar
1 sibling, 0 replies; 41+ messages in thread
From: Simon Barber @ 2021-02-24 23:39 UTC (permalink / raw)
To: Dave Taht, Sina Khanifar; +Cc: sam, bloat
Agreed - average jitter is useless; it needs to be peak/maximum.
Disallowing a very small number of exception points is OK.
Simon
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 22:10 ` Dave Taht
2021-02-24 23:39 ` Simon Barber
@ 2021-02-25 5:56 ` Sina Khanifar
2021-02-25 7:15 ` Simon Barber
` (2 more replies)
1 sibling, 3 replies; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 5:56 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat, sam
Thanks for the feedback, Dave!
> 0) "average" jitter is a meaningless number. In the case of a videoconferencing application, what matters most is max jitter, where the app will choose to ride the top edge of that, rather than follow it. I'd prefer using a 98% number, rather than 75% number, to weight where the typical delay in a videoconfernce might end up.
Both DSLReports and Ookla's desktop app report jitter as an average
rather than as a max number, so I'm a little hesitant to go against
the norm - users might find it a bit surprising to see much larger
jitter numbers reported. We're also not taking a whole ton of latency
tests in each phase, so the 98% will often end up being the max
number.
With regards to the videoconferencing, we actually ran some real-world
tests of Zoom with various levels of bufferbloat/jitter/latency, and
calibrated our "real-world results" table on that basis. We used
average jitter in those tests ... I think if we used 98% or even 95%
the allowable number would be quite high.
> 1) The worst case scenario of bloat affecting a users experience is during a simultaneous up and download, and I'd rather you did that rather than test them separately. Also you get a more realistic figure for the actual achievable bandwidth under contention and can expose problems like strict priority queuing in one direction or another locking out further flows.
We did consider this based on another user's feedback, but didn't
implement it. Perhaps we can do this next time we revisit, though!
> This points to any of number of problems (features!) It's certainly my hope that all the cdn makers at this point have installed bufferbloat mitigations. Testing a cdn's tcp IS a great idea, but as a bufferbloated test, maybe not so much.
We chose to use a CDN because it seemed like the only feasible way to
saturate gigabit links at least somewhat consistently for a meaningful
part of the globe, without setting up a whole lot of servers at quite
high cost.
But we weren't aware that bufferbloat could be mitigated from the
CDN's end. This is a bit surprising to me, as our test results
indicate that
bufferbloat is regularly an issue even though we're using a CDN for
the speed and latency tests. For example, these are the results on my
own connection here (Cox, in Southern California), showing meaningful
bufferbloat:
https://www.waveform.com/tools/bufferbloat?test-id=ece467bd-e07a-45ea-9db6-e64d8da2c1d2
I get even larger bufferbloat effects when running the test on a 4G LTE network:
https://www.waveform.com/tools/bufferbloat?test-id=e99ae561-88e0-4e1e-bafd-90fe1de298ac
If the CDN were mitigating bufferbloat, surely I wouldn't see results
like these?
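In sketch form, the comparison those results rest on (latency under
load relative to unloaded latency) is roughly the following; the
thresholds and wording here are placeholders for illustration, not the
tool's actual FAQ criteria:

```python
import statistics

def latency_increase(unloaded_ms, loaded_ms):
    """Median loaded latency minus median unloaded latency (ms)."""
    return statistics.median(loaded_ms) - statistics.median(unloaded_ms)

def describe(increase_ms):
    # Placeholder thresholds, for illustration only; the real test's
    # criteria are documented in its FAQ.
    if increase_ms < 5:
        return "little to no bufferbloat"
    if increase_ms < 50:
        return "moderate latency increase under load"
    return "severe bufferbloat"

unloaded = [12, 13, 12, 14, 13]
loaded = [95, 110, 102, 99, 120]  # e.g. during a saturating download
delta = latency_increase(unloaded, loaded)
print(f"+{delta:.0f} ms under load: {describe(delta)}")
```

Because the grade is built on this relative increase, it should show
bufferbloat whenever the bottleneck queue is on the user's own link,
regardless of how well-behaved the CDN's sending side is.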
> 3) Are you tracking any ECN statistics at this point (ecnseen)?
We are not, no. I'd definitely be curious to see if we can add this in
the future, though!
Best,
On Wed, Feb 24, 2021 at 2:10 PM Dave Taht <dave.taht@gmail.com> wrote:
>
> So I've taken a tiny amount of time to run a few tests. For starters,
> thank you very much
> for your dedication and time into creating such a usable website, and faq.
>
> I have several issues though I really haven't had time to delve deep
> into the packet captures. (others, please try taking em, and put them
> somewhere?)
>
> 0) "average" jitter is a meaningless number. In the case of a
> videoconferencing application,
> what matters most is max jitter, where the app will choose to ride the
> top edge of that, rather than follow it. I'd prefer using a 98%
> number, rather than 75% number, to weight where the typical delay in a
> videoconfernce might end up.
>
> 1) The worst case scenario of bloat affecting a users experience is
> during a simultaneous up and download, and I'd rather you did that
> rather than test them separately. Also you get
> a more realistic figure for the actual achievable bandwidth under
> contention and can expose problems like strict priority queuing in one
> direction or another locking out further flows.
>
> 2) I get absurdly great results from it with or without sqm on on a
> reasonably modern cablemodem (buffercontrol and pie and a cmts doing
> the right things)
>
> This points to any of number of problems (features!) It's certainly my
> hope that all the cdn makers at this point have installed bufferbloat
> mitigations. Testing a cdn's tcp IS a great idea, but as a
> bufferbloated test, maybe not so much.
>
> The packet capture of the tcp flows DOES show about 60ms jitter... but
> no loss. Your test shows:
>
> https://www.waveform.com/tools/bufferbloat?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>
> And is very jittery in the beginning of the test on its estimates. I
> really should be overjoyed at knowing a cdn is doing more of the right
> things, but in terms of a test... and linux also has got a ton of
> mitigations on the client side.
>
> 3) As a side note, ecn actually is negotiated on the upload, if it's
> enabled on your system.
> Are you tracking an ecn statistics at this point (ecnseen)? It is not
> negotiated on the download (which is fine by me).
>
> I regrettable at this precise moment am unable to test a native
> cablemodem at the same speed as a sqm box, hope to get further on this
> tomorrow.
>
> Again, GREAT work so far, and I do think a test tool for all these
> cdns - heck, one that tested all of them at the same time, is very,
> very useful.
>
> On Wed, Feb 24, 2021 at 10:22 AM Sina Khanifar <sina@waveform.com> wrote:
> >
> > Hi all,
> >
> > A couple of months ago my co-founder Sam posted an early beta of the
> > Bufferbloat test that we’ve been working on, and Dave also linked to
> > it a couple of weeks ago.
> >
> > Thank you all so much for your feedback - we almost entirely
> > redesigned the tool and the UI based on the comments we received.
> > We’re almost ready to launch the tool officially today at this URL,
> > but wanted to show it to the list in case anyone finds any last bugs
> > that we might have overlooked:
> >
> > https://www.waveform.com/tools/bufferbloat
> >
> > If you find a bug, please share the "Share Your Results" link with us
> > along with what happened. We capture some debugging information on the
> > backend, and having a share link allows us to diagnose any issues.
> >
> > This is really more of a passion project than anything else for us –
> > we don’t anticipate we’ll try to commercialize it or anything like
> > that. We're very thankful for all the work the folks on this list have
> > done to identify and fix bufferbloat, and hope this is a useful
> > contribution. I’ve personally been very frustrated by bufferbloat on a
> > range of devices, and decided it might be helpful to build another
> > bufferbloat test when the DSLReports test was down at some point last
> > year.
> >
> > Our goals with this project were:
> > * To build a second solid bufferbloat test in case DSLReports goes down again.
> > * Build a test where bufferbloat is front and center as the primary
> > purpose of the test, rather than just a feature.
> > * Try to explain bufferbloat and its effect on a user's connection
> > as clearly as possible for a lay audience.
> >
> > A few notes:
> > * On the backend, we’re using Cloudflare’s CDN to perform the actual
> > download and upload speed test. I know John Graham-Cunning has posted
> > to this list in the past; if he or anyone from Cloudflare sees this,
> > we’d love some help. Our Cloudflare Workers are being
> > bandwidth-throttled due to having a non-enterprise grade account.
> > We’ve worked around this in a kludgy way, but we’d love to get it
> > resolved.
> > * We have lots of ideas for improvements, e.g. simultaneous
> > upload/downloads, trying different file size chunks, time-series
> > latency graphs, using WebRTC to test UDP traffic etc, but in the
> > interest of getting things launched we're sticking with the current
> > featureset.
> > * There are a lot of browser-specific workarounds that we had to
> > implement, and latency itself is measured in different ways on
> > Safari/Webkit vs Chromium/Firefox due to limitations of the
> > PerformanceTiming APIs. You may notice that latency is different on
> > different browsers, however the actual bufferbloat (relative increase
> > in latency) should be pretty consistent.
> >
> > In terms of some of the changes we made based on the feedback we
> > receive on this list:
> >
> > Based on Toke’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> > * We changed the way the speed tests run to show an instantaneous
> > speed as the test is being run.
> > * We moved the bufferbloat grade into the main results box.
> > * We tried really hard to get as close to saturating gigabit
> > connections as possible. We redesigned completely the way we chunk
> > files, added a “warming up” period, and spent quite a bit optimizing
> > our code to minimize CPU usage, as we found that was often the
> > limiting factor to our speed test results.
> > * We changed the shield grades altogether and went through a few
> > different iterations of how to show the effect of bufferbloat on
> > connectivity, and ended up with a “table view” to try to show the
> > effect that bufferbloat specifically is having on the connection
> > (compared to when the connection is unloaded).
> > * We now link from the results table view to the FAQ where the
> > conditions for each type of connection are explained.
> > * We also changed the way we measure latency and now use the faster
> > of either Google’s CDN or Cloudflare at any given location. We’re also
> > using the WebTiming APIs to get a more accurate latency number, though
> > this does not work on some mobile browsers (e.g. iOS Safari) and as a
> > result we show a higher latency on mobile devices. Since our test is
> > less a test of absolute latency and more a test of relative latency
> > with and without load, we felt this was workable.
> > * Our jitter is now an average (was previously RMS).
> > * The “before you start” text was rewritten and moved above the start button.
> > * We now spell out upload and download instead of having arrows.
> > * We hugely reduced the number of cross-site scripts. I was a bit
> > embarrassed by this if I’m honest - I spent a long time building web
> > tools for the EFF, where we almost never allowed any cross-site
> > scripts.
> > * Our site is hosted on Shopify, and adding any features via
> > their app store ends up adding a whole lot of gunk. But we uninstalled
> > some apps, rewrote our template, and ended up removing a whole lot of
> > the gunk. There’s still plenty of room for improvement, but it should
> > be a lot better than before.
> >
> > Based on Dave Collier-Brown’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> > * We replaced the “unloaded” and “loaded” language with “unloaded”
> > and then “download active” and “upload active.” In the grade box we
> > indicate that, for example, “Your latency increased moderately under
> > load.”
> > * We tried to generally make it easier for non-techie folks to
> > understand by emphasizing the grade and adding the table showing how
> > bufferbloat affects some commonly-used services.
> > * We didn’t really change the candle charts too much - they’re
> > mostly just to give a basic visual - we focused more on the actual
> > meat of the results above that.
> >
> > Based on Sebastian Moeller’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> > * We considered doing a bidirectional saturating load, but decided
> > to skip implementing it for now. It’s definitely something we’d
> > like to experiment with more in the future.
> > * We added a “warming up” period as well as a “draining” period to
> > help fill and empty the buffer. We haven’t added the option for an
> > extended test, but have this on our list of backlog changes to make in
> > the future.
> >
> > Based on Y’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> > * We actually ended up removing the grades, but we explained our
> > criteria for the new table in the FAQ.
> >
> > Based on Greg White's feedback (shared privately):
> > * We added an FAQ answer explaining jitter and how we measure it.
> >
> > We’d love for you all to play with the new version of the tool and
> > send over any feedback you might have. We’re going to be in a feature
> > freeze before launch but we'd love to get any bugs sorted out. We'll
> > likely put this project aside after we iron out a last round of bugs
> > and launch, and turn back to working on projects that help us pay the
> > bills, but we definitely hope to revisit and improve the tool over
> > time.
> >
> > Best,
> >
> > Sina, Arshan, and Sam.
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
>
> dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 5:56 ` Sina Khanifar
@ 2021-02-25 7:15 ` Simon Barber
2021-02-25 7:32 ` Sina Khanifar
2021-02-25 7:20 ` Simon Barber
2021-02-25 10:51 ` Sebastian Moeller
2 siblings, 1 reply; 41+ messages in thread
From: Simon Barber @ 2021-02-25 7:15 UTC (permalink / raw)
To: Sina Khanifar, Dave Taht; +Cc: sam, bloat
[-- Attachment #1: Type: text/plain, Size: 14402 bytes --]
On February 24, 2021 9:57:13 PM Sina Khanifar <sina@waveform.com> wrote:
> Thanks for the feedback, Dave!
>
>> 0) "average" jitter is a meaningless number. In the case of a
>> videoconferencing application, what matters most is max jitter, where the
>> app will choose to ride the top edge of that, rather than follow it. I'd
>> prefer using a 98% number, rather than 75% number, to weight where the
>> typical delay in a videoconference might end up.
>
> Both DSLReports and Ookla's desktop app report jitter as an average
> rather than as a max number, so I'm a little hesitant to go against
> the norm - users might find it a bit surprising to see much larger
> jitter numbers reported. We're also not taking a whole ton of latency
> tests in each phase, so the 98% will often end up being the max
> number.
>
> With regards to the videoconferencing, we actually ran some real-world
> tests of Zoom with various levels of bufferbloat/jitter/latency, and
> calibrated our "real-world results" table on that basis. We used
> average jitter in those tests ... I think if we used 98% or even 95%
> the allowable number would be quite high.
Video and audio cannot be played out until the packets have arrived, so
late packets are effectively dropped, or the playback buffer must expand to
accommodate the most late packets. If the playback buffer expands to
accommodate the most late packets, then the whole conversation is delayed
by that amount. More than a fraction of a percent of dropped packets
results in a very poor video or audio experience, which is why average
jitter is irrelevant and peak or maximum latency is the correct measure to
use.
Yes, humans can tolerate quite a bit of delay. The conversation is
significantly less fluid though.
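The effect is easy to see numerically. A minimal sketch (Python, with hypothetical delay samples, not the test's actual measurement code) comparing average jitter to the lateness a playout buffer actually has to cover:

```python
# Hypothetical one-way delay samples in ms for ten media packets.
delays_ms = [40, 42, 41, 45, 43, 44, 120, 41, 42, 43]

baseline = min(delays_ms)                   # best-case network delay
jitter = [d - baseline for d in delays_ms]  # per-packet lateness vs. baseline

avg_jitter = sum(jitter) / len(jitter)
max_jitter = max(jitter)

# A playout buffer sized to the *average* jitter drops every packet that
# arrives later than that; sizing it to the max delays the whole call.
dropped = sum(j > avg_jitter for j in jitter)
print(f"avg jitter: {avg_jitter:.1f} ms")   # 10.1 ms -- looks harmless
print(f"max jitter: {max_jitter} ms")       # 80 ms -- what the buffer must cover
print(f"late packets with an average-sized buffer: {dropped}/{len(jitter)}")
```

One 120 ms straggler barely moves the average but dictates the buffer size, which is the point about peak versus average.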
Simon
>
>
>> 1) The worst case scenario of bloat affecting a user's experience is during
>> a simultaneous up and download, and I'd rather you did that rather than
>> test them separately. Also you get a more realistic figure for the actual
>> achievable bandwidth under contention and can expose problems like strict
>> priority queuing in one direction or another locking out further flows.
>
> We did consider this based on another user's feedback, but didn't
> implement it. Perhaps we can do this next time we revisit, though!
>
>> This points to any number of problems (features!) It's certainly my hope
>> that all the cdn makers at this point have installed bufferbloat
>> mitigations. Testing a cdn's tcp IS a great idea, but as a bufferbloated
>> test, maybe not so much.
>
> We chose to use a CDN because it seemed like the only feasible way to
> saturate gigabit links at least somewhat consistently for a meaningful
> part of the globe, without setting up a whole lot of servers at quite
> high cost.
>
> But we weren't aware that bufferbloat could be abated from the CDN's
> end. This is a bit surprising to me, as our test results indicate that
> bufferbloat is regularly an issue even though we're using a CDN for
> the speed and latency tests. For example, these are the results on my
> own connection here (Cox, in Southern California), showing meaningful
> bufferbloat:
>
> https://www.waveform.com/tools/bufferbloat?test-id=ece467bd-e07a-45ea-9db6-e64d8da2c1d2
>
> I get even larger bufferbloat effects when running the test on a 4G LTE
> network:
>
> https://www.waveform.com/tools/bufferbloat?test-id=e99ae561-88e0-4e1e-bafd-90fe1de298ac
>
> If the CDN was abating bufferbloat, surely I wouldn't see results like these?
>
>> 3) Are you tracking any ecn statistics at this point (ecnseen)?
>
> We are not, no. I'd definitely be curious to see if we can add this in
> the future, though!
>
> Best,
>
> On Wed, Feb 24, 2021 at 2:10 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> So I've taken a tiny amount of time to run a few tests. For starters,
>> thank you very much
>> for your dedication and time into creating such a usable website, and faq.
>>
>> I have several issues though I really haven't had time to delve deep
>> into the packet captures. (others, please try taking em, and put them
>> somewhere?)
>>
>> 0) "average" jitter is a meaningless number. In the case of a
>> videoconferencing application,
>> what matters most is max jitter, where the app will choose to ride the
>> top edge of that, rather than follow it. I'd prefer using a 98%
>> number, rather than 75% number, to weight where the typical delay in a
>> videoconference might end up.
>>
>> 1) The worst case scenario of bloat affecting a user's experience is
>> during a simultaneous up and download, and I'd rather you did that
>> rather than test them separately. Also you get
>> a more realistic figure for the actual achievable bandwidth under
>> contention and can expose problems like strict priority queuing in one
>> direction or another locking out further flows.
>>
>> 2) I get absurdly great results from it with or without sqm on, on a
>> reasonably modern cablemodem (buffercontrol and pie and a cmts doing
>> the right things)
>>
>> This points to any number of problems (features!) It's certainly my
>> hope that all the cdn makers at this point have installed bufferbloat
>> mitigations. Testing a cdn's tcp IS a great idea, but as a
>> bufferbloated test, maybe not so much.
>>
>> The packet capture of the tcp flows DOES show about 60ms jitter... but
>> no loss. Your test shows:
>>
>> https://www.waveform.com/tools/bufferbloat?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>>
>> And is very jittery in the beginning of the test on its estimates. I
>> really should be overjoyed at knowing a cdn is doing more of the right
>> things, but in terms of a test... and linux also has got a ton of
>> mitigations on the client side.
>>
>> 3) As a side note, ecn actually is negotiated on the upload, if it's
>> enabled on your system.
>> Are you tracking any ecn statistics at this point (ecnseen)? It is not
>> negotiated on the download (which is fine by me).
>>
>> I regrettably at this precise moment am unable to test a native
>> cablemodem at the same speed as a sqm box, hope to get further on this
>> tomorrow.
>>
>> Again, GREAT work so far, and I do think a test tool for all these
>> cdns - heck, one that tested all of them at the same time, is very,
>> very useful.
>>
>> On Wed, Feb 24, 2021 at 10:22 AM Sina Khanifar <sina@waveform.com> wrote:
>>>
>>> Hi all,
>>>
>>> A couple of months ago my co-founder Sam posted an early beta of the
>>> Bufferbloat test that we’ve been working on, and Dave also linked to
>>> it a couple of weeks ago.
>>>
>>> Thank you all so much for your feedback - we almost entirely
>>> redesigned the tool and the UI based on the comments we received.
>>> We’re almost ready to launch the tool officially today at this URL,
>>> but wanted to show it to the list in case anyone finds any last bugs
>>> that we might have overlooked:
>>>
>>> https://www.waveform.com/tools/bufferbloat
>>>
>>> If you find a bug, please share the "Share Your Results" link with us
>>> along with what happened. We capture some debugging information on the
>>> backend, and having a share link allows us to diagnose any issues.
>>>
>>> This is really more of a passion project than anything else for us –
>>> we don’t anticipate we’ll try to commercialize it or anything like
>>> that. We're very thankful for all the work the folks on this list have
>>> done to identify and fix bufferbloat, and hope this is a useful
>>> contribution. I’ve personally been very frustrated by bufferbloat on a
>>> range of devices, and decided it might be helpful to build another
>>> bufferbloat test when the DSLReports test was down at some point last
>>> year.
>>>
>>> Our goals with this project were:
>>> * To build a second solid bufferbloat test in case DSLReports goes down again.
>>> * Build a test where bufferbloat is front and center as the primary
>>> purpose of the test, rather than just a feature.
>>> * Try to explain bufferbloat and its effect on a user's connection
>>> as clearly as possible for a lay audience.
>>>
>>> A few notes:
>>> * On the backend, we’re using Cloudflare’s CDN to perform the actual
>>> download and upload speed test. I know John Graham-Cumming has posted
>>> to this list in the past; if he or anyone from Cloudflare sees this,
>>> we’d love some help. Our Cloudflare Workers are being
>>> bandwidth-throttled due to having a non-enterprise grade account.
>>> We’ve worked around this in a kludgy way, but we’d love to get it
>>> resolved.
>>> * We have lots of ideas for improvements, e.g. simultaneous
>>> upload/downloads, trying different file size chunks, time-series
>>> latency graphs, using WebRTC to test UDP traffic etc, but in the
>>> interest of getting things launched we're sticking with the current
>>> featureset.
>>> * There are a lot of browser-specific workarounds that we had to
>>> implement, and latency itself is measured in different ways on
>>> Safari/Webkit vs Chromium/Firefox due to limitations of the
>>> PerformanceTiming APIs. You may notice that latency is different on
>>> different browsers, however the actual bufferbloat (relative increase
>>> in latency) should be pretty consistent.
>>>
>>> In terms of some of the changes we made based on the feedback we
>>> received on this list:
>>>
>>> Based on Toke’s feedback:
>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
>>> * We changed the way the speed tests run to show an instantaneous
>>> speed as the test is being run.
>>> * We moved the bufferbloat grade into the main results box.
>>> * We tried really hard to get as close to saturating gigabit
>>> connections as possible. We redesigned completely the way we chunk
>>> files, added a “warming up” period, and spent quite a bit of time optimizing
>>> our code to minimize CPU usage, as we found that was often the
>>> limiting factor to our speed test results.
>>> * We changed the shield grades altogether and went through a few
>>> different iterations of how to show the effect of bufferbloat on
>>> connectivity, and ended up with a “table view” to try to show the
>>> effect that bufferbloat specifically is having on the connection
>>> (compared to when the connection is unloaded).
>>> * We now link from the results table view to the FAQ where the
>>> conditions for each type of connection are explained.
>>> * We also changed the way we measure latency and now use the faster
>>> of either Google’s CDN or Cloudflare at any given location. We’re also
>>> using the WebTiming APIs to get a more accurate latency number, though
>>> this does not work on some mobile browsers (e.g. iOS Safari) and as a
>>> result we show a higher latency on mobile devices. Since our test is
>>> less a test of absolute latency and more a test of relative latency
>>> with and without load, we felt this was workable.
>>> * Our jitter is now an average (was previously RMS).
>>> * The “before you start” text was rewritten and moved above the start button.
>>> * We now spell out upload and download instead of having arrows.
>>> * We hugely reduced the number of cross-site scripts. I was a bit
>>> embarrassed by this if I’m honest - I spent a long time building web
>>> tools for the EFF, where we almost never allowed any cross-site
>>> scripts.
>>> * Our site is hosted on Shopify, and adding any features via
>>> their app store ends up adding a whole lot of gunk. But we uninstalled
>>> some apps, rewrote our template, and ended up removing a whole lot of
>>> the gunk. There’s still plenty of room for improvement, but it should
>>> be a lot better than before.
>>>
>>> Based on Dave Collier-Brown’s feedback:
>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
>>> * We replaced the “unloaded” and “loaded” language with “unloaded”
>>> and then “download active” and “upload active.” In the grade box we
>>> indicate that, for example, “Your latency increased moderately under
>>> load.”
>>> * We tried to generally make it easier for non-techie folks to
>>> understand by emphasizing the grade and adding the table showing how
>>> bufferbloat affects some commonly-used services.
>>> * We didn’t really change the candle charts too much - they’re
>>> mostly just to give a basic visual - we focused more on the actual
>>> meat of the results above that.
>>>
>>> Based on Sebastian Moeller’s feedback:
>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
>>> * We considered doing a bidirectional saturating load, but decided
>>> to skip implementing it for now. It’s definitely something we’d
>>> like to experiment with more in the future.
>>> * We added a “warming up” period as well as a “draining” period to
>>> help fill and empty the buffer. We haven’t added the option for an
>>> extended test, but have this on our list of backlog changes to make in
>>> the future.
>>>
>>> Based on Y’s feedback:
>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
>>> * We actually ended up removing the grades, but we explained our
>>> criteria for the new table in the FAQ.
>>>
>>> Based on Greg White's feedback (shared privately):
>>> * We added an FAQ answer explaining jitter and how we measure it.
>>>
>>> We’d love for you all to play with the new version of the tool and
>>> send over any feedback you might have. We’re going to be in a feature
>>> freeze before launch but we'd love to get any bugs sorted out. We'll
>>> likely put this project aside after we iron out a last round of bugs
>>> and launch, and turn back to working on projects that help us pay the
>>> bills, but we definitely hope to revisit and improve the tool over
>>> time.
>>>
>>> Best,
>>>
>>> Sina, Arshan, and Sam.
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> "For a successful technology, reality must take precedence over public
>> relations, for Mother Nature cannot be fooled" - Richard Feynman
>>
>> dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
[-- Attachment #2: Type: text/html, Size: 21591 bytes --]
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 7:15 ` Simon Barber
@ 2021-02-25 7:32 ` Sina Khanifar
2021-02-25 13:38 ` Simon Barber
2021-02-25 13:46 ` Simon Barber
0 siblings, 2 replies; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 7:32 UTC (permalink / raw)
To: simon; +Cc: Dave Taht, sam, bloat
Thanks for the explanation.
Right now, our criteria for phone/audio are "latency < 300 ms" and
"jitter < 40 ms".
It seems like something along the lines of "95th percentile latency <
300 ms" might be advisable in place of the two existing criteria?
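A sketch of what that single criterion could look like (hypothetical sample data; the nearest-rank `percentile` helper is an assumption for illustration, not the tool's actual code):

```python
# Sketch of a percentile-based pass/fail, assuming latencies_ms holds the
# loaded-latency samples; 300 ms mirrors the existing phone/audio threshold.
def percentile(samples, pct):
    """Nearest-rank percentile (no numpy, to keep the sketch self-contained)."""
    ordered = sorted(samples)
    k = round(pct / 100 * len(ordered)) - 1
    return ordered[max(0, min(len(ordered) - 1, k))]

def audio_ok(latencies_ms, threshold_ms=300, pct=95):
    return percentile(latencies_ms, pct) < threshold_ms

samples = [120, 125, 130, 132, 135, 138, 140, 145, 150, 280]
print(percentile(samples, 95))  # 280
print(audio_ok(samples))        # True -- a single outlier doesn't fail the call
```

With few latency samples per phase, the 95th percentile and the max will often coincide, which is the caveat raised earlier in the thread.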
Sina.
On Wed, Feb 24, 2021 at 11:15 PM Simon Barber <simon@superduper.net> wrote:
>
>
>
> On February 24, 2021 9:57:13 PM Sina Khanifar <sina@waveform.com> wrote:
>
>> Thanks for the feedback, Dave!
>>
>>> 0) "average" jitter is a meaningless number. In the case of a videoconferencing application, what matters most is max jitter, where the app will choose to ride the top edge of that, rather than follow it. I'd prefer using a 98% number, rather than 75% number, to weight where the typical delay in a videoconference might end up.
>>
>>
>> Both DSLReports and Ookla's desktop app report jitter as an average
>> rather than as a max number, so I'm a little hesitant to go against
>> the norm - users might find it a bit surprising to see much larger
>> jitter numbers reported. We're also not taking a whole ton of latency
>> tests in each phase, so the 98% will often end up being the max
>> number.
>>
>> With regards to the videoconferencing, we actually ran some real-world
>> tests of Zoom with various levels of bufferbloat/jitter/latency, and
>> calibrated our "real-world results" table on that basis. We used
>> average jitter in those tests ... I think if we used 98% or even 95%
>> the allowable number would be quite high.
>
>
> Video and audio cannot be played out until the packets have arrived, so late packets are effectively dropped, or the playback buffer must expand to accommodate the most late packets. If the playback buffer expands to accommodate the most late packets, then the whole conversation is delayed by that amount. More than a fraction of a percent of dropped packets results in a very poor video or audio experience, which is why average jitter is irrelevant and peak or maximum latency is the correct measure to use.
>
> Yes, humans can tolerate quite a bit of delay. The conversation is significantly less fluid though.
>
> Simon
> [...]
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 7:32 ` Sina Khanifar
@ 2021-02-25 13:38 ` Simon Barber
2021-02-25 13:43 ` Dave Taht
2021-02-25 13:49 ` Mikael Abrahamsson
2021-02-25 13:46 ` Simon Barber
1 sibling, 2 replies; 41+ messages in thread
From: Simon Barber @ 2021-02-25 13:38 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Dave Taht, sam, bloat
The ITU says voice should be <150 ms; however, in the real world people are a
lot more tolerant. A GSM -> GSM phone call is ~350 ms, and very few people
complain about that. That said, the quality of the conversation is affected,
and staying under 150 ms is better for a fast, free-flowing conversation.
Most people won't have a problem at 600 ms but will have a problem at
1000 ms. That is for a two-party voice call. A large group presentation over
video can tolerate more, but may have issues with talk-over when
switching from presenter to questioner, for example.
Simon
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 13:38 ` Simon Barber
@ 2021-02-25 13:43 ` Dave Taht
2021-02-25 13:49 ` Mikael Abrahamsson
1 sibling, 0 replies; 41+ messages in thread
From: Dave Taht @ 2021-02-25 13:43 UTC (permalink / raw)
To: Simon Barber; +Cc: Sina Khanifar, sam, bloat
On Thu, Feb 25, 2021 at 5:38 AM Simon Barber <simon@superduper.net> wrote:
>
> The ITU say voice should be <150mS, however in the real world people are a lot more tolerant.
Because they have to be. A call with 5ms latency is amazing.
>A GSM -> GSM phone call is ~350mS, and very few people complain about that. That said the quality of the conversation is affected, and staying under 150mS is better for a fast free flowing conversation. Most people won't have a problem at 600mS and will have a problem at 1000mS. That is for a 2 party voice call. A large group presentation over video can tolerate more, but may have issues with talking over when switching from presenter to questioner for example.
>
> Simon
--
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman
dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 13:38 ` Simon Barber
2021-02-25 13:43 ` Dave Taht
@ 2021-02-25 13:49 ` Mikael Abrahamsson
2021-02-25 13:53 ` Simon Barber
1 sibling, 1 reply; 41+ messages in thread
From: Mikael Abrahamsson @ 2021-02-25 13:49 UTC (permalink / raw)
To: Simon Barber; +Cc: Sina Khanifar, sam, bloat
On Thu, 25 Feb 2021, Simon Barber wrote:
> The ITU say voice should be <150mS, however in the real world people are
> a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few
> people complain about that. That said the quality of the conversation is
> affected, and staying under 150mS is better for a fast free flowing
> conversation. Most people won't have a problem at 600mS and will have a
> problem at 1000mS. That is for a 2 party voice call. A large group
> presentation over video can tolerate more, but may have issues with
> talking over when switching from presenter to questioner for example.
I worked at a phone company 10+ years ago. We had some equipment that
was internally ATM based, where each "hop" added 7ms. This, in combination
with IP-based telephony at the end points that added 40ms one-way per
end-point (PDV buffer), caused people to complain when RTT started creeping
up to 300-400ms. This was for PSTN calls.
Yes, people might have more tolerance with mobile phone calls because they
have lower expectations when out and about, but my experience is that
people will definitely notice 300-400ms RTT but they might not get upset
enough to open a support ticket until 600ms or more.
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 13:49 ` Mikael Abrahamsson
@ 2021-02-25 13:53 ` Simon Barber
2021-02-25 19:47 ` Sina Khanifar
0 siblings, 1 reply; 41+ messages in thread
From: Simon Barber @ 2021-02-25 13:53 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Sina Khanifar, sam, bloat
So perhaps this can feed into the rating system: total latency < 50 ms is an
A, < 150 ms is a B, < 600 ms is a C, or something like that.
Simon
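For illustration, the threshold scheme proposed above could be sketched as a simple lookup. This is a hypothetical mapping, not the test's actual grading code; the final "D" bucket below is my own assumption.

```python
# Hypothetical sketch of the proposed rating scheme: map total
# round-trip latency (in ms) to a letter grade. The A/B/C thresholds
# come from the message above; the "D" fallback is an assumption.
def latency_grade(total_latency_ms: float) -> str:
    if total_latency_ms < 50:
        return "A"
    if total_latency_ms < 150:
        return "B"
    if total_latency_ms < 600:
        return "C"
    return "D"

print(latency_grade(42), latency_grade(350), latency_grade(900))  # A C D
```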
On February 25, 2021 5:49:26 AM Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 25 Feb 2021, Simon Barber wrote:
>
>> The ITU say voice should be <150mS, however in the real world people are
>> a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few
>> people complain about that. That said the quality of the conversation is
>> affected, and staying under 150mS is better for a fast free flowing
>> conversation. Most people won't have a problem at 600mS and will have a
>> problem at 1000mS. That is for a 2 party voice call. A large group
>> presentation over video can tolerate more, but may have issues with
>> talking over when switching from presenter to questioner for example.
>
> I worked at a phone company 10+ years ago. We had some equipment that
> internally was ATM based and each "hop" added 7ms. This in combination
> with IP based telephony at the end points that added 40ms one-way per
> end-point (PDV buffer) caused people to complain when RTT started creeping
> up to 300-400ms. This was for PSTN calls.
>
> Yes, people might have more tolerance with mobile phone calls because they
> have lower expectations when out and about, but my experience is that
> people will definitely notice 300-400ms RTT but they might not get upset
> enough to open a support ticket until 600ms or more.
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 13:53 ` Simon Barber
@ 2021-02-25 19:47 ` Sina Khanifar
2021-02-25 19:56 ` Simon Barber
0 siblings, 1 reply; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 19:47 UTC (permalink / raw)
To: simon; +Cc: Mikael Abrahamsson, sam, bloat
> So perhaps this can feed into the rating system, total latency < 50mS is an A, < 150mS is a B, 600mS is a C or something like that.
The "grade" we give is purely a measure of bufferbloat. If you start
with a latency of 500 ms on your connection, it wouldn't be fair for
us to give you an F grade even if there is no increase in latency due
to bufferbloat.
This is why we added the "Real-World Impact" table below the grade -
in many cases people may start with a connection that is already
problematic for video conferencing, VoIP, and gaming.
I think we're going to change the conditions on that table to have
high 95%ile latency trigger the degraded performance shield warnings.
In the future it might be neat for us to move to grades on the table
as well.
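The distinction drawn here, grading the increase in latency under load rather than the absolute latency, might be sketched like this. This is an illustrative outline, not Waveform's actual implementation, and the sample numbers are invented.

```python
# Illustrative sketch: separate a connection's fixed latency from the
# extra latency added under load (the bufferbloat component).
def bufferbloat_increase_ms(unloaded_rtts_ms, loaded_rtts_ms):
    baseline = min(unloaded_rtts_ms)   # idle latency floor
    under_load = max(loaded_rtts_ms)   # worst latency while saturated
    return under_load - baseline

# A 500 ms high-base-latency link with no bloat vs a 20 ms link with
# heavy bloat: the first should not be graded F for bufferbloat.
print(bufferbloat_increase_ms([500, 505, 502], [505, 510, 507]))  # 10
print(bufferbloat_increase_ms([20, 22, 21], [350, 420, 390]))     # 400
```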
On Thu, Feb 25, 2021 at 5:53 AM Simon Barber <simon@superduper.net> wrote:
>
> So perhaps this can feed into the rating system, total latency < 50mS is an A, < 150mS is a B, 600mS is a C or something like that.
>
> Simon
>
> On February 25, 2021 5:49:26 AM Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>
>> On Thu, 25 Feb 2021, Simon Barber wrote:
>>
>>> The ITU say voice should be <150mS, however in the real world people are
>>> a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few
>>> people complain about that. That said the quality of the conversation is
>>> affected, and staying under 150mS is better for a fast free flowing
>>> conversation. Most people won't have a problem at 600mS and will have a
>>> problem at 1000mS. That is for a 2 party voice call. A large group
>>> presentation over video can tolerate more, but may have issues with
>>> talking over when switching from presenter to questioner for example.
>>
>>
>> I worked at a phone company 10+ years ago. We had some equipment that
>> internally was ATM based and each "hop" added 7ms. This in combination
>> with IP based telephony at the end points that added 40ms one-way per
>> end-point (PDV buffer) caused people to complain when RTT started creeping
>> up to 300-400ms. This was for PSTN calls.
>>
>> Yes, people might have more tolerance with mobile phone calls because they
>> have lower expectations when out and about, but my experience is that
>> people will definitely notice 300-400ms RTT but they might not get upset
>> enough to open a support ticket until 600ms or more.
>>
>> --
>> Mikael Abrahamsson email: swmike@swm.pp.se
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 19:47 ` Sina Khanifar
@ 2021-02-25 19:56 ` Simon Barber
2021-02-25 20:10 ` Sina Khanifar
0 siblings, 1 reply; 41+ messages in thread
From: Simon Barber @ 2021-02-25 19:56 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Mikael Abrahamsson, sam, bloat
Hi Sina,
That sounds great, and I understand the desire to separate the fixed component of latency from the bufferbloat / variable part. Messaging that in a way that accurately conveys the end-user impact, and the impact due to unmitigated buffers, while staying easy to understand is tricky.
Since you are measuring bufferbloat - how much latency *can* be caused by excessive buffering - expressing the jitter number as a 95th percentile would be appropriate, as that’s closely related to how large the excessive buffer is. Average jitter is more a reflection of the gaps competing TCP streams leave due to congestion control; those gaps can temporarily lower buffer occupancy and pull the average jitter number down.
Really appreciate this work, and the interface and ‘latency first’ nature of this test. It’s a great contribution, and will hopefully help drive ISPs to reduce their bloat, helping everyone.
Simon
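As a sketch of the measure suggested above, the 95th percentile can be computed with the nearest-rank method over collected RTT samples. The sample values below are invented for illustration.

```python
import math

# Nearest-rank percentile of latency samples: a sketch of the 95th
# percentile measure discussed above, which tracks the size of an
# excessive buffer far better than an average does.
def percentile_ms(samples_ms, pct):
    ordered = sorted(samples_ms)
    rank = math.ceil(pct / 100 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

rtts = [20, 22, 21, 25, 23, 24, 26, 22, 180, 210]  # ms; two bloated spikes
print(percentile_ms(rtts, 50))  # 23 -- a median-like view hides the spikes
print(percentile_ms(rtts, 95))  # 210 -- the delay a jitter buffer must absorb
```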
> On Feb 25, 2021, at 11:47 AM, Sina Khanifar <sina@waveform.com> wrote:
>
>> So perhaps this can feed into the rating system, total latency < 50mS is an A, < 150mS is a B, 600mS is a C or something like that.
>
> The "grade" we give is purely a measure of bufferbloat. If you start
> with a latency of 500 ms on your connection, it wouldn't be fair for
> us to give you an F grade even if there is no increase in latency due
> to bufferbloat.
>
> This is why we added the "Real-World Impact" table below the grade -
> in many cases people may start with a connection that is already
> problematic for video conferencing, VoIP, and gaming.
>
> I think we're going to change the conditions on that table to have
> high 95%ile latency trigger the degraded performance shield warnings.
> In the future it might be neat for us to move to grades on the table
> as well.
>
>
> On Thu, Feb 25, 2021 at 5:53 AM Simon Barber <simon@superduper.net> wrote:
>>
>> So perhaps this can feed into the rating system, total latency < 50mS is an A, < 150mS is a B, 600mS is a C or something like that.
>>
>> Simon
>>
>> On February 25, 2021 5:49:26 AM Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>>
>>> On Thu, 25 Feb 2021, Simon Barber wrote:
>>>
>>>> The ITU say voice should be <150mS, however in the real world people are
>>>> a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few
>>>> people complain about that. That said the quality of the conversation is
>>>> affected, and staying under 150mS is better for a fast free flowing
>>>> conversation. Most people won't have a problem at 600mS and will have a
>>>> problem at 1000mS. That is for a 2 party voice call. A large group
>>>> presentation over video can tolerate more, but may have issues with
>>>> talking over when switching from presenter to questioner for example.
>>>
>>>
>>> I worked at a phone company 10+ years ago. We had some equipment that
>>> internally was ATM based and each "hop" added 7ms. This in combination
>>> with IP based telephony at the end points that added 40ms one-way per
>>> end-point (PDV buffer) caused people to complain when RTT started creeping
>>> up to 300-400ms. This was for PSTN calls.
>>>
>>> Yes, people might have more tolerance with mobile phone calls because they
>>> have lower expectations when out and about, but my experience is that
>>> people will definitely notice 300-400ms RTT but they might not get upset
>>> enough to open a support ticket until 600ms or more.
>>>
>>> --
>>> Mikael Abrahamsson email: swmike@swm.pp.se
>>
>>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 19:56 ` Simon Barber
@ 2021-02-25 20:10 ` Sina Khanifar
2021-02-25 20:52 ` Simon Barber
2021-02-25 20:53 ` Simon Barber
0 siblings, 2 replies; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 20:10 UTC (permalink / raw)
To: simon; +Cc: Mikael Abrahamsson, sam, bloat
Thanks for the kind words, Simon!
> Since you are measuring buffer bloat - how much latency *can* be caused by the excessive buffering, expressing the jitter number in terms of 95%ile would be appropriate - as that’s closely related to how large the excessive buffer is. The average jitter is more related to how the competing TCP streams have some gaps due to congestion control and these gaps can temporarily lower the buffer occupancy and result in a lower average jitter number.
I'm thinking that we might even remove jitter altogether from the UI,
and instead just show 95%ile latency. 95%ile latency and 95%ile jitter
should be equivalent, but 95%ile latency is really the more meaningful
measure for real-time communications, it feels like?
On Thu, Feb 25, 2021 at 11:57 AM Simon Barber <simon@superduper.net> wrote:
>
> Hi Sina,
>
> That sounds great, and I understand the desire to separate the fixed component of latency and the buffer bloat / variable part. Messaging that in a way that accurately conveys the end user impact and the impact due to unmitigated buffers while being easy to understand is tricky.
>
> Since you are measuring buffer bloat - how much latency *can* be caused by the excessive buffering, expressing the jitter number in terms of 95%ile would be appropriate - as that’s closely related to how large the excessive buffer is. The average jitter is more related to how the competing TCP streams have some gaps due to congestion control and these gaps can temporarily lower the buffer occupancy and result in a lower average jitter number.
>
> Really appreciate this work, and the interface and ‘latency first’ nature of this test. It’s a great contribution, and will hopefully help drive ISPs to reducing their bloat, helping everyone.
>
> Simon
>
>
> > On Feb 25, 2021, at 11:47 AM, Sina Khanifar <sina@waveform.com> wrote:
> >
> >> So perhaps this can feed into the rating system, total latency < 50mS is an A, < 150mS is a B, 600mS is a C or something like that.
> >
> > The "grade" we give is purely a measure of bufferbloat. If you start
> > with a latency of 500 ms on your connection, it wouldn't be fair for
> > us to give you an F grade even if there is no increase in latency due
> > to bufferbloat.
> >
> > This is why we added the "Real-World Impact" table below the grade -
> > in many cases people may start with a connection that is already
> > problematic for video conferencing, VoIP, and gaming.
> >
> > I think we're going to change the conditions on that table to have
> > high 95%ile latency trigger the degraded performance shield warnings.
> > In the future it might be neat for us to move to grades on the table
> > as well.
> >
> >
> > On Thu, Feb 25, 2021 at 5:53 AM Simon Barber <simon@superduper.net> wrote:
> >>
> >> So perhaps this can feed into the rating system, total latency < 50mS is an A, < 150mS is a B, 600mS is a C or something like that.
> >>
> >> Simon
> >>
> >> On February 25, 2021 5:49:26 AM Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> >>
> >>> On Thu, 25 Feb 2021, Simon Barber wrote:
> >>>
> >>>> The ITU say voice should be <150mS, however in the real world people are
> >>>> a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few
> >>>> people complain about that. That said the quality of the conversation is
> >>>> affected, and staying under 150mS is better for a fast free flowing
> >>>> conversation. Most people won't have a problem at 600mS and will have a
> >>>> problem at 1000mS. That is for a 2 party voice call. A large group
> >>>> presentation over video can tolerate more, but may have issues with
> >>>> talking over when switching from presenter to questioner for example.
> >>>
> >>>
> >>> I worked at a phone company 10+ years ago. We had some equipment that
> >>> internally was ATM based and each "hop" added 7ms. This in combination
> >>> with IP based telephony at the end points that added 40ms one-way per
> >>> end-point (PDV buffer) caused people to complain when RTT started creeping
> >>> up to 300-400ms. This was for PSTN calls.
> >>>
> >>> Yes, people might have more tolerance with mobile phone calls because they
> >>> have lower expectations when out and about, but my experience is that
> >>> people will definitely notice 300-400ms RTT but they might not get upset
> >>> enough to open a support ticket until 600ms or more.
> >>>
> >>> --
> >>> Mikael Abrahamsson email: swmike@swm.pp.se
> >>
> >>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 20:10 ` Sina Khanifar
@ 2021-02-25 20:52 ` Simon Barber
2021-02-25 20:53 ` Simon Barber
1 sibling, 0 replies; 41+ messages in thread
From: Simon Barber @ 2021-02-25 20:52 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Mikael Abrahamsson, sam, bloat
Yes, if you want to know how well real-time communication is going to work,
something close to the peak or 95%ile of total latency is the relevant measure.
Showing less data and keeping it simple is ideal, so users don't get confused.
Simon
On February 25, 2021 12:11:02 PM Sina Khanifar <sina@waveform.com> wrote:
> Thanks for the kind words, Simon!
>
>> Since you are measuring buffer bloat - how much latency *can* be caused by
>> the excessive buffering, expressing the jitter number in terms of 95%ile
>> would be appropriate - as that’s closely related to how large the excessive
>> buffer is. The average jitter is more related to how the competing TCP
>> streams have some gaps due to congestion control and these gaps can
>> temporarily lower the buffer occupancy and result in a lower average jitter
>> number.
>
> I'm thinking that we might even remove jitter altogether from the UI,
> and instead just show 95%ile latency. 95%ile latency and 95%ile jitter
> should be equivalent, but 95% latency is really the more meaningful
> measure for real-time communications, it feels like?
>
> On Thu, Feb 25, 2021 at 11:57 AM Simon Barber <simon@superduper.net> wrote:
>>
>> Hi Sina,
>>
>> That sounds great, and I understand the desire to separate the fixed
>> component of latency and the buffer bloat / variable part. Messaging that
>> in a way that accurately conveys the end user impact and the impact due to
>> unmitigated buffers while being easy to understand is tricky.
>>
>> Since you are measuring buffer bloat - how much latency *can* be caused by
>> the excessive buffering, expressing the jitter number in terms of 95%ile
>> would be appropriate - as that’s closely related to how large the excessive
>> buffer is. The average jitter is more related to how the competing TCP
>> streams have some gaps due to congestion control and these gaps can
>> temporarily lower the buffer occupancy and result in a lower average jitter
>> number.
>>
>> Really appreciate this work, and the interface and ‘latency first’ nature
>> of this test. It’s a great contribution, and will hopefully help drive ISPs
>> to reducing their bloat, helping everyone.
>>
>> Simon
>>
>>
>> > On Feb 25, 2021, at 11:47 AM, Sina Khanifar <sina@waveform.com> wrote:
>> >
>> >> So perhaps this can feed into the rating system, total latency < 50mS is
>> an A, < 150mS is a B, 600mS is a C or something like that.
>> >
>> > The "grade" we give is purely a measure of bufferbloat. If you start
>> > with a latency of 500 ms on your connection, it wouldn't be fair for
>> > us to give you an F grade even if there is no increase in latency due
>> > to bufferbloat.
>> >
>> > This is why we added the "Real-World Impact" table below the grade -
>> > in many cases people may start with a connection that is already
>> > problematic for video conferencing, VoIP, and gaming.
>> >
>> > I think we're going to change the conditions on that table to have
>> > high 95%ile latency trigger the degraded performance shield warnings.
>> > In the future it might be neat for us to move to grades on the table
>> > as well.
>> >
>> >
>> > On Thu, Feb 25, 2021 at 5:53 AM Simon Barber <simon@superduper.net> wrote:
>> >>
>> >> So perhaps this can feed into the rating system, total latency < 50mS is
>> an A, < 150mS is a B, 600mS is a C or something like that.
>> >>
>> >> Simon
>> >>
>> >> On February 25, 2021 5:49:26 AM Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>> >>
>> >>> On Thu, 25 Feb 2021, Simon Barber wrote:
>> >>>
>> >>>> The ITU say voice should be <150mS, however in the real world people are
>> >>>> a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few
>> >>>> people complain about that. That said the quality of the conversation is
>> >>>> affected, and staying under 150mS is better for a fast free flowing
>> >>>> conversation. Most people won't have a problem at 600mS and will have a
>> >>>> problem at 1000mS. That is for a 2 party voice call. A large group
>> >>>> presentation over video can tolerate more, but may have issues with
>> >>>> talking over when switching from presenter to questioner for example.
>> >>>
>> >>>
>> >>> I worked at a phone company 10+ years ago. We had some equipment that
>> >>> internally was ATM based and each "hop" added 7ms. This in combination
>> >>> with IP based telephony at the end points that added 40ms one-way per
>> >>> end-point (PDV buffer) caused people to complain when RTT started creeping
>> >>> up to 300-400ms. This was for PSTN calls.
>> >>>
>> >>> Yes, people might have more tolerance with mobile phone calls because they
>> >>> have lower expectations when out and about, but my experience is that
>> >>> people will definitely notice 300-400ms RTT but they might not get upset
>> >>> enough to open a support ticket until 600ms or more.
>> >>>
>> >>> --
>> >>> Mikael Abrahamsson email: swmike@swm.pp.se
>> >>
>> >>
>>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 20:10 ` Sina Khanifar
2021-02-25 20:52 ` Simon Barber
@ 2021-02-25 20:53 ` Simon Barber
1 sibling, 0 replies; 41+ messages in thread
From: Simon Barber @ 2021-02-25 20:53 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Mikael Abrahamsson, sam, bloat
Having more detail available but not shown by default on the main page
might keep the geeks happy and make diagnosis easier.
Simon
On February 25, 2021 12:11:02 PM Sina Khanifar <sina@waveform.com> wrote:
> Thanks for the kind words, Simon!
>
>> Since you are measuring buffer bloat - how much latency *can* be caused by
>> the excessive buffering, expressing the jitter number in terms of 95%ile
>> would be appropriate - as that’s closely related to how large the excessive
>> buffer is. The average jitter is more related to how the competing TCP
>> streams have some gaps due to congestion control and these gaps can
>> temporarily lower the buffer occupancy and result in a lower average jitter
>> number.
>
> I'm thinking that we might even remove jitter altogether from the UI,
> and instead just show 95%ile latency. 95%ile latency and 95%ile jitter
> should be equivalent, but 95% latency is really the more meaningful
> measure for real-time communications, it feels like?
>
> On Thu, Feb 25, 2021 at 11:57 AM Simon Barber <simon@superduper.net> wrote:
>>
>> Hi Sina,
>>
>> That sounds great, and I understand the desire to separate the fixed
>> component of latency and the buffer bloat / variable part. Messaging that
>> in a way that accurately conveys the end user impact and the impact due to
>> unmitigated buffers while being easy to understand is tricky.
>>
>> Since you are measuring buffer bloat - how much latency *can* be caused by
>> the excessive buffering, expressing the jitter number in terms of 95%ile
>> would be appropriate - as that’s closely related to how large the excessive
>> buffer is. The average jitter is more related to how the competing TCP
>> streams have some gaps due to congestion control and these gaps can
>> temporarily lower the buffer occupancy and result in a lower average jitter
>> number.
>>
>> Really appreciate this work, and the interface and ‘latency first’ nature
>> of this test. It’s a great contribution, and will hopefully help drive ISPs
>> to reducing their bloat, helping everyone.
>>
>> Simon
>>
>>
>> > On Feb 25, 2021, at 11:47 AM, Sina Khanifar <sina@waveform.com> wrote:
>> >
>> >> So perhaps this can feed into the rating system, total latency < 50mS is
>> an A, < 150mS is a B, 600mS is a C or something like that.
>> >
>> > The "grade" we give is purely a measure of bufferbloat. If you start
>> > with a latency of 500 ms on your connection, it wouldn't be fair for
>> > us to give you an F grade even if there is no increase in latency due
>> > to bufferbloat.
>> >
>> > This is why we added the "Real-World Impact" table below the grade -
>> > in many cases people may start with a connection that is already
>> > problematic for video conferencing, VoIP, and gaming.
>> >
>> > I think we're going to change the conditions on that table to have
>> > high 95%ile latency trigger the degraded performance shield warnings.
>> > In the future it might be neat for us to move to grades on the table
>> > as well.
>> >
>> >
>> > On Thu, Feb 25, 2021 at 5:53 AM Simon Barber <simon@superduper.net> wrote:
>> >>
>> >> So perhaps this can feed into the rating system, total latency < 50mS is
>> an A, < 150mS is a B, 600mS is a C or something like that.
>> >>
>> >> Simon
>> >>
>> >> On February 25, 2021 5:49:26 AM Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>> >>
>> >>> On Thu, 25 Feb 2021, Simon Barber wrote:
>> >>>
>> >>>> The ITU say voice should be <150mS, however in the real world people are
>> >>>> a lot more tolerant. A GSM -> GSM phone call is ~350mS, and very few
>> >>>> people complain about that. That said the quality of the conversation is
>> >>>> affected, and staying under 150mS is better for a fast free flowing
>> >>>> conversation. Most people won't have a problem at 600mS and will have a
>> >>>> problem at 1000mS. That is for a 2 party voice call. A large group
>> >>>> presentation over video can tolerate more, but may have issues with
>> >>>> talking over when switching from presenter to questioner for example.
>> >>>
>> >>>
>> >>> I worked at a phone company 10+ years ago. We had some equipment that
>> >>> internally was ATM based and each "hop" added 7ms. This in combination
>> >>> with IP based telephony at the end points that added 40ms one-way per
>> >>> end-point (PDV buffer) caused people to complain when RTT started creeping
>> >>> up to 300-400ms. This was for PSTN calls.
>> >>>
>> >>> Yes, people might have more tolerance with mobile phone calls because they
>> >>> have lower expectations when out and about, but my experience is that
>> >>> people will definitely notice 300-400ms RTT but they might not get upset
>> >>> enough to open a support ticket until 600ms or more.
>> >>>
>> >>> --
>> >>> Mikael Abrahamsson email: swmike@swm.pp.se
>> >>
>> >>
>>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 7:32 ` Sina Khanifar
2021-02-25 13:38 ` Simon Barber
@ 2021-02-25 13:46 ` Simon Barber
1 sibling, 0 replies; 41+ messages in thread
From: Simon Barber @ 2021-02-25 13:46 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Dave Taht, sam, bloat
And what counts is round-trip, end-to-end total latency. This is fixed
latency plus jitter (variation above the fixed part) - i.e. peak total latency.
Peak total, or a high percentile (95th/98th), will be a much closer
approximation to the performance of a real-world jitter buffer in a VoIP
system than average jitter. The higher the percentile, the better.
2% drops distributed evenly over the call is one dropout every second (at 50
packets per second, typical for 20 ms audio frames). Most users would notice
that, and most jitter buffers would expand to avoid that high a level of
loss. A quarter-second burst loss every 10 seconds is lower impact but about
the same percentage, so you see the loss pattern matters. Every jitter buffer
is designed slightly differently, so this measurement is an approximation.
But the key is that peak, or close to peak, latency is the right measure.
Simon
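The point above about loss patterns and peak latency can be illustrated with a toy model. Assumptions: a fixed playout deadline and invented delay samples; real jitter buffers adapt dynamically, so this is only a sketch.

```python
# Toy model: with a fixed jitter buffer, any packet arriving after the
# playout deadline is effectively lost. At 50 packets/s (20 ms audio
# frames), a 2% late rate is one dropout per second, as described above.
def late_packet_rate(delays_ms, buffer_ms, base_delay_ms=20):
    deadline = base_delay_ms + buffer_ms
    late = sum(1 for d in delays_ms if d > deadline)
    return late / len(delays_ms)

delays = [20] * 98 + [120, 250]  # mostly on time, two bloated spikes

# Buffer sized near the average: the spikes become audible dropouts.
print(late_packet_rate(delays, buffer_ms=50))   # 0.02
# Buffer sized near the peak: no loss, at the cost of added delay.
print(late_packet_rate(delays, buffer_ms=250))  # 0.0
```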
On February 24, 2021 11:33:07 PM Sina Khanifar <sina@waveform.com> wrote:
> Thanks for the explanation.
>
> Right now, our criteria for phone/audio is "latency < 300 ms" and
> "jitter < 40 ms".
>
> It seems like something along the lines of "95th percentile latency <
> 300 ms" might be advisable in place of the two existing criteria?
>
>
> Sina.
>
> On Wed, Feb 24, 2021 at 11:15 PM Simon Barber <simon@superduper.net> wrote:
>>
>>
>>
>> On February 24, 2021 9:57:13 PM Sina Khanifar <sina@waveform.com> wrote:
>>
>>> Thanks for the feedback, Dave!
>>>
>>>> 0) "average" jitter is a meaningless number. In the case of a
>>>> videoconferencing application, what matters most is max jitter, where the
>>>> app will choose to ride the top edge of that, rather than follow it. I'd
>>>> prefer using a 98% number, rather than 75% number, to weight where the
>>>> typical delay in a videoconfernce might end up.
>>>
>>>
>>> Both DSLReports and Ookla's desktop app report jitter as an average
>>> rather than as a max number, so I'm a little hesitant to go against
>>> the norm - users might find it a bit surprising to see much larger
>>> jitter numbers reported. We're also not taking a whole ton of latency
>>> tests in each phase, so the 98% will often end up being the max
>>> number.
>>>
>>> With regards to the videoconferencing, we actually ran some real-world
>>> tests of Zoom with various levels of bufferbloat/jitter/latency, and
>>> calibrated our "real-world results" table on that basis. We used
>>> average jitter in those tests ... I think if we used 98% or even 95%
>>> the allowable number would be quite high.
>>
>>
>> Video and audio cannot be played out until the packets have arrived, so
>> late packets are effectively dropped, or the playback buffer must expand to
>> accommodate the most late packets. If the playback buffer expands to
>> accommodate the most late packets then the result is that the whole
>> conversation is delayed by that amount. More than a fraction of a percent
>> of dropped packets results in a very poor video or audio experience, which
>> is why average jitter is irrelevant and peak or maximum latency is the
>> correct measure to use.
>>
>> Yes, humans can tolerate quite a bit of delay. The conversation is
>> significantly less fluid though.
>>
>> Simon
>>
>>
>>
>>
>>
>>>
>>>> 1) The worst case scenario of bloat affecting a user's experience is during
>>>> a simultaneous up and download, and I'd rather you did that rather than
>>>> test them separately. Also you get a more realistic figure for the actual
>>>> achievable bandwidth under contention and can expose problems like strict
>>>> priority queuing in one direction or another locking out further flows.
>>>
>>>
>>> We did consider this based on another user's feedback, but didn't
>>> implement it. Perhaps we can do this next time we revisit, though!
>>>
>>>> This points to any number of problems (features!) It's certainly my hope
>>>> that all the cdn makers at this point have installed bufferbloat
>>>> mitigations. Testing a cdn's tcp IS a great idea, but as a bufferbloated
>>>> test, maybe not so much.
>>>
>>>
>>> We chose to use a CDN because it seemed like the only feasible way to
>>> saturate gigabit links at least somewhat consistently for a meaningful
>>> part of the globe, without setting up a whole lot of servers at quite
>>> high cost.
>>>
>>> But we weren't aware that bufferbloat could be abated from the CDN's
>>> end. This is a bit surprising to me, as our test results indicate that
>>> bufferbloat is regularly an issue even though we're using a CDN for
>>> the speed and latency tests. For example, these are the results on my
>>> own connection here (Cox, in Southern California), showing meaningful
>>> bufferbloat:
>>>
>>> https://www.waveform.com/tools/bufferbloat?test-id=ece467bd-e07a-45ea-9db6-e64d8da2c1d2
>>>
>>> I get even larger bufferbloat effects when running the test on a 4G LTE
>>> network:
>>>
>>> https://www.waveform.com/tools/bufferbloat?test-id=e99ae561-88e0-4e1e-bafd-90fe1de298ac
>>>
>>> If the CDN was abating bufferbloat, surely I wouldn't see results like these?
>>>
>>>> 3) Are you tracking any ecn statistics at this point (ecnseen)?
>>>
>>>
>>> We are not, no. I'd definitely be curious to see if we can add this in
>>> the future, though!
>>>
>>> Best,
>>>
>>> On Wed, Feb 24, 2021 at 2:10 PM Dave Taht <dave.taht@gmail.com> wrote:
>>>>
>>>>
>>>> So I've taken a tiny amount of time to run a few tests. For starters,
>>>> thank you very much for the dedication and time you put into creating
>>>> such a usable website and FAQ.
>>>>
>>>> I have several issues, though I really haven't had time to delve deep
>>>> into the packet captures. (Others, please try taking them and putting
>>>> them somewhere?)
>>>>
>>>> 0) "average" jitter is a meaningless number. In the case of a
>>>> videoconferencing application,
>>>> what matters most is max jitter, where the app will choose to ride the
>>>> top edge of that, rather than follow it. I'd prefer using a 98%
>>>> number, rather than 75% number, to weight where the typical delay in a
>>>> videoconference might end up.
>>>>
>>>> 1) The worst case scenario of bloat affecting a user's experience is
>>>> during a simultaneous up and download, and I'd rather you did that
>>>> rather than test them separately. Also you get
>>>> a more realistic figure for the actual achievable bandwidth under
>>>> contention and can expose problems like strict priority queuing in one
>>>> direction or another locking out further flows.
>>>>
>>>> 2) I get absurdly great results from it with or without sqm on, on a
>>>> reasonably modern cablemodem (buffercontrol and pie and a cmts doing
>>>> the right things)
>>>>
>>>> This points to any number of problems (features!) It's certainly my
>>>> hope that all the cdn makers at this point have installed bufferbloat
>>>> mitigations. Testing a cdn's tcp IS a great idea, but as a
>>>> bufferbloated test, maybe not so much.
>>>>
>>>> The packet capture of the tcp flows DOES show about 60ms jitter... but
>>>> no loss. Your test shows:
>>>>
>>>> https://www.waveform.com/tools/bufferbloat?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>>>>
>>>> And is very jittery in the beginning of the test on its estimates. I
>>>> really should be overjoyed at knowing a cdn is doing more of the right
>>>> things, but in terms of a test... and linux also has got a ton of
>>>> mitigations on the client side.
>>>>
>>>> 3) As a side note, ecn actually is negotiated on the upload, if it's
>>>> enabled on your system.
>>>> Are you tracking any ecn statistics at this point (ecnseen)? It is not
>>>> negotiated on the download (which is fine by me).
>>>>
>>>> I regrettably at this precise moment am unable to test a native
>>>> cablemodem at the same speed as a sqm box, hope to get further on this
>>>> tomorrow.
>>>>
>>>> Again, GREAT work so far, and I do think a test tool for all these
>>>> cdns - heck, one that tested all of them at the same time, is very,
>>>> very useful.
>>>>
>>>> On Wed, Feb 24, 2021 at 10:22 AM Sina Khanifar <sina@waveform.com> wrote:
>>>>>
>>>>>
>>>>> Hi all,
>>>>>
>>>>> A couple of months ago my co-founder Sam posted an early beta of the
>>>>> Bufferbloat test that we’ve been working on, and Dave also linked to
>>>>> it a couple of weeks ago.
>>>>>
>>>>> Thank you all so much for your feedback - we almost entirely
>>>>> redesigned the tool and the UI based on the comments we received.
>>>>> We’re almost ready to launch the tool officially today at this URL,
>>>>> but wanted to show it to the list in case anyone finds any last bugs
>>>>> that we might have overlooked:
>>>>>
>>>>> https://www.waveform.com/tools/bufferbloat
>>>>>
>>>>> If you find a bug, please share the "Share Your Results" link with us
>>>>> along with what happened. We capture some debugging information on the
>>>>> backend, and having a share link allows us to diagnose any issues.
>>>>>
>>>>> This is really more of a passion project than anything else for us –
>>>>> we don’t anticipate we’ll try to commercialize it or anything like
>>>>> that. We're very thankful for all the work the folks on this list have
>>>>> done to identify and fix bufferbloat, and hope this is a useful
>>>>> contribution. I’ve personally been very frustrated by bufferbloat on a
>>>>> range of devices, and decided it might be helpful to build another
>>>>> bufferbloat test when the DSLReports test was down at some point last
>>>>> year.
>>>>>
>>>>> Our goals with this project were:
>>>>> * To build a second solid bufferbloat test in case DSLReports goes down again.
>>>>> * Build a test where bufferbloat is front and center as the primary
>>>>> purpose of the test, rather than just a feature.
>>>>> * Try to explain bufferbloat and its effect on a user's connection
>>>>> as clearly as possible for a lay audience.
>>>>>
>>>>> A few notes:
>>>>> * On the backend, we’re using Cloudflare’s CDN to perform the actual
>>>>> download and upload speed test. I know John Graham-Cumming has posted
>>>>> to this list in the past; if he or anyone from Cloudflare sees this,
>>>>> we’d love some help. Our Cloudflare Workers are being
>>>>> bandwidth-throttled due to having a non-enterprise grade account.
>>>>> We’ve worked around this in a kludgy way, but we’d love to get it
>>>>> resolved.
>>>>> * We have lots of ideas for improvements, e.g. simultaneous
>>>>> upload/downloads, trying different file size chunks, time-series
>>>>> latency graphs, using WebRTC to test UDP traffic etc, but in the
>>>>> interest of getting things launched we're sticking with the current
>>>>> featureset.
>>>>> * There are a lot of browser-specific workarounds that we had to
>>>>> implement, and latency itself is measured in different ways on
>>>>> Safari/Webkit vs Chromium/Firefox due to limitations of the
>>>>> PerformanceTiming APIs. You may notice that latency is different on
>>>>> different browsers, however the actual bufferbloat (relative increase
>>>>> in latency) should be pretty consistent.
>>>>>
>>>>> In terms of some of the changes we made based on the feedback we
>>>>> received on this list:
>>>>>
>>>>> Based on Toke’s feedback:
>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
>>>>> * We changed the way the speed tests run to show an instantaneous
>>>>> speed as the test is being run.
>>>>> * We moved the bufferbloat grade into the main results box.
>>>>> * We tried really hard to get as close to saturating gigabit
>>>>> connections as possible. We redesigned completely the way we chunk
>>>>> files, added a “warming up” period, and spent quite a bit of time optimizing
>>>>> our code to minimize CPU usage, as we found that was often the
>>>>> limiting factor to our speed test results.
>>>>> * We changed the shield grades altogether and went through a few
>>>>> different iterations of how to show the effect of bufferbloat on
>>>>> connectivity, and ended up with a “table view” to try to show the
>>>>> effect that bufferbloat specifically is having on the connection
>>>>> (compared to when the connection is unloaded).
>>>>> * We now link from the results table view to the FAQ where the
>>>>> conditions for each type of connection are explained.
>>>>> * We also changed the way we measure latency and now use the faster
>>>>> of either Google’s CDN or Cloudflare at any given location. We’re also
>>>>> using the WebTiming APIs to get a more accurate latency number, though
>>>>> this does not work on some mobile browsers (e.g. iOS Safari) and as a
>>>>> result we show a higher latency on mobile devices. Since our test is
>>>>> less a test of absolute latency and more a test of relative latency
>>>>> with and without load, we felt this was workable.
>>>>> * Our jitter is now an average (was previously RMS).
>>>>> * The “before you start” text was rewritten and moved above the start button.
>>>>> * We now spell out upload and download instead of having arrows.
>>>>> * We hugely reduced the number of cross-site scripts. I was a bit
>>>>> embarrassed by this if I’m honest - I spent a long time building web
>>>>> tools for the EFF, where we almost never allowed any cross-site
>>>>> scripts. Our site is hosted on Shopify, and adding any features via
>>>>> their app store ends up adding a whole lot of gunk. But we uninstalled
>>>>> some apps, rewrote our template, and ended up removing a whole lot of
>>>>> the gunk. There’s still plenty of room for improvement, but it should
>>>>> be a lot better than before.
>>>>>
>>>>> Based on Dave Collier-Brown’s feedback:
>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
>>>>> * We replaced the “unloaded” and “loaded” language with “unloaded”
>>>>> and then “download active” and “upload active.” In the grade box we
>>>>> indicate that, for example, “Your latency increased moderately under
>>>>> load.”
>>>>> * We tried to generally make it easier for non-techie folks to
>>>>> understand by emphasizing the grade and adding the table showing how
>>>>> bufferbloat affects some commonly-used services.
>>>>> * We didn’t really change the candle charts too much - they’re
>>>>> mostly just to give a basic visual - we focused more on the actual
>>>>> meat of the results above that.
>>>>>
>>>>> Based on Sebastian Moeller’s feedback:
>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
>>>>> * We considered doing a bidirectional saturating load, but decided
>>>>> to skip implementing it for now. It’s definitely something we’d
>>>>> like to experiment with more in the future.
>>>>> * We added a “warming up” period as well as a “draining” period to
>>>>> help fill and empty the buffer. We haven’t added the option for an
>>>>> extended test, but have this on our list of backlog changes to make in
>>>>> the future.
>>>>>
>>>>> Based on Y’s feedback (link):
>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
>>>>> * We actually ended up removing the grades, but we explained our
>>>>> criteria for the new table in the FAQ.
>>>>>
>>>>> Based on Greg White's feedback (shared privately):
>>>>> * We added an FAQ answer explaining jitter and how we measure it.
>>>>>
>>>>> We’d love for you all to play with the new version of the tool and
>>>>> send over any feedback you might have. We’re going to be in a feature
>>>>> freeze before launch but we'd love to get any bugs sorted out. We'll
>>>>> likely put this project aside after we iron out a last round of bugs
>>>>> and launch, and turn back to working on projects that help us pay the
>>>>> bills, but we definitely hope to revisit and improve the tool over
>>>>> time.
>>>>>
>>>>> Best,
>>>>>
>>>>> Sina, Arshan, and Sam.
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> "For a successful technology, reality must take precedence over public
>>>> relations, for Mother Nature cannot be fooled" - Richard Feynman
>>>>
>>>> dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
>>>
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
[-- Attachment #2: Type: text/html, Size: 23944 bytes --]
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 5:56 ` Sina Khanifar
2021-02-25 7:15 ` Simon Barber
@ 2021-02-25 7:20 ` Simon Barber
2021-02-25 10:51 ` Sebastian Moeller
2 siblings, 0 replies; 41+ messages in thread
From: Simon Barber @ 2021-02-25 7:20 UTC (permalink / raw)
To: Sina Khanifar, Dave Taht; +Cc: sam, bloat
[-- Attachment #1: Type: text/plain, Size: 14115 bytes --]
For web browsing, average jitter is a legitimate measure, but for interactive
media (two-way VoIP or video conferencing) peak or maximum is the only valid
measurement.
Simon
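Simon's point can be made concrete with a quick sketch (synthetic numbers, purely illustrative, not measurements from the tool): on a link whose RTT is mostly steady but occasionally spikes under load, the average jitter looks benign, while the percentile/max figure — the one a playout buffer actually has to absorb — is an order of magnitude larger.

```python
import random
import statistics

random.seed(1)

# Synthetic RTT trace (ms): a steady ~40 ms baseline with a handful of
# 200 ms bufferbloat spikes. Values are illustrative, not measured.
rtts = [40 + random.uniform(0, 5) for _ in range(95)] + [200.0] * 5
random.shuffle(rtts)

baseline = min(rtts)
# Treat "jitter" as delay above the baseline latency.
deviations = sorted(r - baseline for r in rtts)

def percentile(sorted_vals, q):
    """Nearest-rank percentile on an already-sorted list."""
    idx = min(len(sorted_vals) - 1, int(q / 100 * len(sorted_vals)))
    return sorted_vals[idx]

avg_jitter = statistics.mean(deviations)
p95_jitter = percentile(deviations, 95)
max_jitter = deviations[-1]

print(f"average jitter: {avg_jitter:.1f} ms")  # looks harmless
print(f"p95 jitter:     {p95_jitter:.1f} ms")  # what a playout buffer must cover
print(f"max jitter:     {max_jitter:.1f} ms")
```

With 5% of samples spiking, the average lands around 10 ms while the p95/max deviation is near 160 ms — a playout buffer sized from the average would drop every spiked packet.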
On February 24, 2021 9:57:13 PM Sina Khanifar <sina@waveform.com> wrote:
> Thanks for the feedback, Dave!
>
>> 0) "average" jitter is a meaningless number. In the case of a
>> videoconferencing application, what matters most is max jitter, where the
>> app will choose to ride the top edge of that, rather than follow it. I'd
>> prefer using a 98% number, rather than 75% number, to weight where the
>> typical delay in a videoconference might end up.
>
> Both DSLReports and Ookla's desktop app report jitter as an average
> rather than as a max number, so I'm a little hesitant to go against
> the norm - users might find it a bit surprising to see much larger
> jitter numbers reported. We're also not taking a whole ton of latency
> tests in each phase, so the 98% will often end up being the max
> number.
>
> With regards to the videoconferencing, we actually ran some real-world
> tests of Zoom with various levels of bufferbloat/jitter/latency, and
> calibrated our "real-world results" table on that basis. We used
> average jitter in those tests ... I think if we used 98% or even 95%
> the allowable number would be quite high.
>
>> 1) The worst case scenario of bloat affecting a user's experience is during
>> a simultaneous up and download, and I'd rather you did that rather than
>> test them separately. Also you get a more realistic figure for the actual
>> achievable bandwidth under contention and can expose problems like strict
>> priority queuing in one direction or another locking out further flows.
>
> We did consider this based on another user's feedback, but didn't
> implement it. Perhaps we can do this next time we revisit, though!
>
>> This points to any number of problems (features!) It's certainly my hope
>> that all the cdn makers at this point have installed bufferbloat
>> mitigations. Testing a cdn's tcp IS a great idea, but as a bufferbloated
>> test, maybe not so much.
>
> We chose to use a CDN because it seemed like the only feasible way to
> saturate gigabit links at least somewhat consistently for a meaningful
> part of the globe, without setting up a whole lot of servers at quite
> high cost.
>
> But we weren't aware that bufferbloat could be abated from the CDN's
> end. This is a bit surprising to me, as our test results indicate that
> bufferbloat is regularly an issue even though we're using a CDN for
> the speed and latency tests. For example, these are the results on my
> own connection here (Cox, in Southern California), showing meaningful
> bufferbloat:
>
> https://www.waveform.com/tools/bufferbloat?test-id=ece467bd-e07a-45ea-9db6-e64d8da2c1d2
>
> I get even larger bufferbloat effects when running the test on a 4G LTE
> network:
>
> https://www.waveform.com/tools/bufferbloat?test-id=e99ae561-88e0-4e1e-bafd-90fe1de298ac
>
> If the CDN was abating bufferbloat, surely I wouldn't see results like these?
>
>> 3) Are you tracking any ecn statistics at this point (ecnseen)?
>
> We are not, no. I'd definitely be curious to see if we can add this in
> the future, though!
>
> Best,
>
> On Wed, Feb 24, 2021 at 2:10 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> So I've taken a tiny amount of time to run a few tests. For starters,
>> thank you very much for the dedication and time you put into creating
>> such a usable website and FAQ.
>>
>> I have several issues, though I really haven't had time to delve deep
>> into the packet captures. (Others, please try taking them and putting
>> them somewhere?)
>>
>> 0) "average" jitter is a meaningless number. In the case of a
>> videoconferencing application,
>> what matters most is max jitter, where the app will choose to ride the
>> top edge of that, rather than follow it. I'd prefer using a 98%
>> number, rather than 75% number, to weight where the typical delay in a
>> videoconference might end up.
>>
>> 1) The worst case scenario of bloat affecting a user's experience is
>> during a simultaneous up and download, and I'd rather you did that
>> rather than test them separately. Also you get
>> a more realistic figure for the actual achievable bandwidth under
>> contention and can expose problems like strict priority queuing in one
>> direction or another locking out further flows.
>>
>> 2) I get absurdly great results from it with or without sqm on, on a
>> reasonably modern cablemodem (buffercontrol and pie and a cmts doing
>> the right things)
>>
>> This points to any number of problems (features!) It's certainly my
>> hope that all the cdn makers at this point have installed bufferbloat
>> mitigations. Testing a cdn's tcp IS a great idea, but as a
>> bufferbloated test, maybe not so much.
>>
>> The packet capture of the tcp flows DOES show about 60ms jitter... but
>> no loss. Your test shows:
>>
>> https://www.waveform.com/tools/bufferbloat?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>>
>> And is very jittery in the beginning of the test on its estimates. I
>> really should be overjoyed at knowing a cdn is doing more of the right
>> things, but in terms of a test... and linux also has got a ton of
>> mitigations on the client side.
>>
>> 3) As a side note, ecn actually is negotiated on the upload, if it's
>> enabled on your system.
>> Are you tracking any ecn statistics at this point (ecnseen)? It is not
>> negotiated on the download (which is fine by me).
>>
>> I regrettably at this precise moment am unable to test a native
>> cablemodem at the same speed as a sqm box, hope to get further on this
>> tomorrow.
>>
>> Again, GREAT work so far, and I do think a test tool for all these
>> cdns - heck, one that tested all of them at the same time, is very,
>> very useful.
>>
>> On Wed, Feb 24, 2021 at 10:22 AM Sina Khanifar <sina@waveform.com> wrote:
>> >
>> > Hi all,
>> >
>> > A couple of months ago my co-founder Sam posted an early beta of the
>> > Bufferbloat test that we’ve been working on, and Dave also linked to
>> > it a couple of weeks ago.
>> >
>> > Thank you all so much for your feedback - we almost entirely
>> > redesigned the tool and the UI based on the comments we received.
>> > We’re almost ready to launch the tool officially today at this URL,
>> > but wanted to show it to the list in case anyone finds any last bugs
>> > that we might have overlooked:
>> >
>> > https://www.waveform.com/tools/bufferbloat
>> >
>> > If you find a bug, please share the "Share Your Results" link with us
>> > along with what happened. We capture some debugging information on the
>> > backend, and having a share link allows us to diagnose any issues.
>> >
>> > This is really more of a passion project than anything else for us –
>> > we don’t anticipate we’ll try to commercialize it or anything like
>> > that. We're very thankful for all the work the folks on this list have
>> > done to identify and fix bufferbloat, and hope this is a useful
>> > contribution. I’ve personally been very frustrated by bufferbloat on a
>> > range of devices, and decided it might be helpful to build another
>> > bufferbloat test when the DSLReports test was down at some point last
>> > year.
>> >
>> > Our goals with this project were:
>> > * To build a second solid bufferbloat test in case DSLReports goes down
>> again.
>> > * Build a test where bufferbloat is front and center as the primary
>> > purpose of the test, rather than just a feature.
>> > * Try to explain bufferbloat and its effect on a user's connection
>> > as clearly as possible for a lay audience.
>> >
>> > A few notes:
>> > * On the backend, we’re using Cloudflare’s CDN to perform the actual
>> > download and upload speed test. I know John Graham-Cumming has posted
>> > to this list in the past; if he or anyone from Cloudflare sees this,
>> > we’d love some help. Our Cloudflare Workers are being
>> > bandwidth-throttled due to having a non-enterprise grade account.
>> > We’ve worked around this in a kludgy way, but we’d love to get it
>> > resolved.
>> > * We have lots of ideas for improvements, e.g. simultaneous
>> > upload/downloads, trying different file size chunks, time-series
>> > latency graphs, using WebRTC to test UDP traffic etc, but in the
>> > interest of getting things launched we're sticking with the current
>> > featureset.
>> > * There are a lot of browser-specific workarounds that we had to
>> > implement, and latency itself is measured in different ways on
>> > Safari/Webkit vs Chromium/Firefox due to limitations of the
>> > PerformanceTiming APIs. You may notice that latency is different on
>> > different browsers, however the actual bufferbloat (relative increase
>> > in latency) should be pretty consistent.
>> >
>> > In terms of some of the changes we made based on the feedback we
>> > received on this list:
>> >
>> > Based on Toke’s feedback:
>> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
>> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
>> > * We changed the way the speed tests run to show an instantaneous
>> > speed as the test is being run.
>> > * We moved the bufferbloat grade into the main results box.
>> > * We tried really hard to get as close to saturating gigabit
>> > connections as possible. We redesigned completely the way we chunk
>> > files, added a “warming up” period, and spent quite a bit of time optimizing
>> > our code to minimize CPU usage, as we found that was often the
>> > limiting factor to our speed test results.
>> > * We changed the shield grades altogether and went through a few
>> > different iterations of how to show the effect of bufferbloat on
>> > connectivity, and ended up with a “table view” to try to show the
>> > effect that bufferbloat specifically is having on the connection
>> > (compared to when the connection is unloaded).
>> > * We now link from the results table view to the FAQ where the
>> > conditions for each type of connection are explained.
>> > * We also changed the way we measure latency and now use the faster
>> > of either Google’s CDN or Cloudflare at any given location. We’re also
>> > using the WebTiming APIs to get a more accurate latency number, though
>> > this does not work on some mobile browsers (e.g. iOS Safari) and as a
>> > result we show a higher latency on mobile devices. Since our test is
>> > less a test of absolute latency and more a test of relative latency
>> > with and without load, we felt this was workable.
>> > * Our jitter is now an average (was previously RMS).
>> > * The “before you start” text was rewritten and moved above the start
>> button.
>> > * We now spell out upload and download instead of having arrows.
>> > * We hugely reduced the number of cross-site scripts. I was a bit
>> > embarrassed by this if I’m honest - I spent a long time building web
>> > tools for the EFF, where we almost never allowed any cross-site
>> > scripts. Our site is hosted on Shopify, and adding any features via
>> > their app store ends up adding a whole lot of gunk. But we uninstalled
>> > some apps, rewrote our template, and ended up removing a whole lot of
>> > the gunk. There’s still plenty of room for improvement, but it should
>> > be a lot better than before.
>> >
>> > Based on Dave Collier-Brown’s feedback:
>> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
>> > * We replaced the “unloaded” and “loaded” language with “unloaded”
>> > and then “download active” and “upload active.” In the grade box we
>> > indicate that, for example, “Your latency increased moderately under
>> > load.”
>> > * We tried to generally make it easier for non-techie folks to
>> > understand by emphasizing the grade and adding the table showing how
>> > bufferbloat affects some commonly-used services.
>> > * We didn’t really change the candle charts too much - they’re
>> > mostly just to give a basic visual - we focused more on the actual
>> > meat of the results above that.
>> >
>> > Based on Sebastian Moeller’s feedback:
>> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
>> > * We considered doing a bidirectional saturating load, but decided
>> > to skip implementing it for now. It’s definitely something we’d
>> > like to experiment with more in the future.
>> > * We added a “warming up” period as well as a “draining” period to
>> > help fill and empty the buffer. We haven’t added the option for an
>> > extended test, but have this on our list of backlog changes to make in
>> > the future.
>> >
>> > Based on Y’s feedback (link):
>> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
>> > * We actually ended up removing the grades, but we explained our
>> > criteria for the new table in the FAQ.
>> >
>> > Based on Greg White's feedback (shared privately):
>> > * We added an FAQ answer explaining jitter and how we measure it.
>> >
>> > We’d love for you all to play with the new version of the tool and
>> > send over any feedback you might have. We’re going to be in a feature
>> > freeze before launch but we'd love to get any bugs sorted out. We'll
>> > likely put this project aside after we iron out a last round of bugs
>> > and launch, and turn back to working on projects that help us pay the
>> > bills, but we definitely hope to revisit and improve the tool over
>> > time.
>> >
>> > Best,
>> >
>> > Sina, Arshan, and Sam.
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> "For a successful technology, reality must take precedence over public
>> relations, for Mother Nature cannot be fooled" - Richard Feynman
>>
>> dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
[-- Attachment #2: Type: text/html, Size: 20673 bytes --]
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 5:56 ` Sina Khanifar
2021-02-25 7:15 ` Simon Barber
2021-02-25 7:20 ` Simon Barber
@ 2021-02-25 10:51 ` Sebastian Moeller
2021-02-25 20:01 ` Sina Khanifar
2 siblings, 1 reply; 41+ messages in thread
From: Sebastian Moeller @ 2021-02-25 10:51 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Dave Täht, sam, bloat
Hi Sina,
> On Feb 25, 2021, at 06:56, Sina Khanifar <sina@waveform.com> wrote:
>
> Thanks for the feedback, Dave!
>
>> 0) "average" jitter is a meaningless number. In the case of a videoconferencing application, what matters most is max jitter, where the app will choose to ride the top edge of that, rather than follow it. I'd prefer using a 98% number, rather than 75% number, to weight where the typical delay in a videoconference might end up.
>
> Both DSLReports and Ookla's desktop app report jitter as an average
> rather than as a max number, so I'm a little hesitant to go against
> the norm - users might find it a bit surprising to see much larger
> jitter numbers reported. We're also not taking a whole ton of latency
> tests in each phase, so the 98% will often end up being the max
> number.
> [...]
[SM] Maybe the solution would be to increase the frequency of the RTT measures and increase the quantile somewhat, maybe 90 or 95?
Best Regards
Sebastian
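A back-of-envelope check supports this: with a nearest-rank estimator, a high quantile only becomes distinguishable from the maximum once enough probes land above it. A small sketch (integer percentages, to sidestep float edge cases):

```python
import math

def min_samples(q_percent):
    """Smallest n for which the nearest-rank q-th percentile can sit
    below the maximum, i.e. n * (1 - q/100) >= 1."""
    return math.ceil(100 / (100 - q_percent))

for q in (75, 90, 95, 98):
    print(f"p{q}: need at least {min_samples(q)} samples")
```

So the earlier worry that "the 98% will often end up being the max number" goes away once a test phase takes 50+ probes; at 20 probes per phase, p95 is the practical ceiling.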
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 10:51 ` Sebastian Moeller
@ 2021-02-25 20:01 ` Sina Khanifar
2021-02-25 20:14 ` Sebastian Moeller
0 siblings, 1 reply; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 20:01 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Dave Täht, sam, bloat
> [SM] Maybe the solution would be to increase the frequency of the RTT measures and increase the quantile somewhat, maybe 90 or 95?
I think we scaled back the frequency of our RTT measurements to avoid
CPU issues, but I think we can increase them a little and then use
95th percentile latency with a cutoff of 400ms or so as the check vs
warning condition for videoconferencing and VoIP.
We could also maybe use the 95th percentile cutoff for gaming? I'm not
sure what the limits/cutoff should be there, though. Would love some
suggestions.
On Thu, Feb 25, 2021 at 2:51 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Sina,
>
>
> > On Feb 25, 2021, at 06:56, Sina Khanifar <sina@waveform.com> wrote:
> >
> > Thanks for the feedback, Dave!
> >
> >> 0) "average" jitter is a meaningless number. In the case of a videoconferencing application, what matters most is max jitter, where the app will choose to ride the top edge of that, rather than follow it. I'd prefer using a 98% number, rather than 75% number, to weight where the typical delay in a videoconference might end up.
> >
> > Both DSLReports and Ookla's desktop app report jitter as an average
> > rather than as a max number, so I'm a little hesitant to go against
> > the norm - users might find it a bit surprising to see much larger
> > jitter numbers reported. We're also not taking a whole ton of latency
> > tests in each phase, so the 98% will often end up being the max
> > number.
> > [...]
>
> [SM] Maybe the solution would be to increase the frequency of the RTT measures and increase the quantile somewhat, maybe 90 or 95?
>
> Best Regards
> Sebastian
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 20:01 ` Sina Khanifar
@ 2021-02-25 20:14 ` Sebastian Moeller
2021-02-26 1:06 ` Daniel Lakeland
0 siblings, 1 reply; 41+ messages in thread
From: Sebastian Moeller @ 2021-02-25 20:14 UTC (permalink / raw)
To: Sina Khanifar; +Cc: Dave Täht, sam, bloat, contact
Hi Sina,
let me try to invite Daniel Lakeland (cc'd) into this discussion. He is doing tremendous work in the OpenWrt forum, single-handedly helping gamers get the most out of their connections. I think he might have some opinions and data on latency requirements for modern gaming.
@Daniel, this discussion is about a new and really nice speedtest (which I have already been plugging in the OpenWrt forum, as you probably recall) that has a strong focus on latency increases under load. We are currently discussing what latency-increase limits to use to rate a connection for on-line gaming.
Now, as a non-gamer, I would assume that gaming has at least similarly strict latency requirements as VoIP, as in most games even a slight delay at the wrong time can direct;y translate into a "game-over". But as I said, I stopped reflex gaming pretty much when I realized how badly I was doing in doom/quake.
Best Regards
Sebastian
> On Feb 25, 2021, at 21:01, Sina Khanifar <sina@waveform.com> wrote:
>
>> [SM] Maybe the solution would be to increase the frequency of the RTT measures and increase the quantile somewhat, maybe 90 or 95?
>
> I think we scaled back the frequency of our RTT measurements to avoid
> CPU issues, but I think we can increase them a little and then use
> 95th percentile latency with a cutoff of 400ms or so as the check vs
> warning condition for videoconferencing and VoIP.
That would be great; one of the nicest features of the dslreports test is its high-resolution bufferbloat measurement. But since you mention CPU issues: I have the impression that the high-resolution probes on dslreports occasionally show spikes that may be more related to the browser and end host's CPU than to the router; that said, the bulk of the probes seem to be dominated by the router.
>
> We could also maybe use the 95th percentile cutoff for gaming? I'm not
> sure what the limits/cutoff should be there, though. Would love some
> suggestions.
I hope Daniel will chime in; other than that, purely based on gut feeling, I think 95% makes sense for reaction-gated applications.
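The check-vs-warning rule being discussed (95th-percentile latency against a ~400 ms cutoff) could be sketched like this. This is a minimal illustration in plain Python; the function names, the nearest-rank percentile method, and the default thresholds are my assumptions, not anything from the actual waveform.com implementation.

```python
# Hypothetical sketch of the proposed rule: rate a connection for
# videoconferencing/VoIP by its 95th-percentile RTT under load.

def percentile(samples, q):
    """Nearest-rank q-th percentile of a list of RTTs in ms."""
    ordered = sorted(samples)
    # Nearest-rank index for the q-th percentile (clamped to valid range).
    k = max(0, min(len(ordered) - 1, round(q / 100 * len(ordered)) - 1))
    return ordered[k]

def rate_for_voip(rtt_ms, cutoff_ms=400.0, q=95):
    """'pass' if the q-th percentile RTT stays under the cutoff, else 'warn'."""
    return "pass" if percentile(rtt_ms, q) < cutoff_ms else "warn"
```

With this shape, raising the measurement frequency simply gives `rtt_ms` more samples, which makes the high quantile estimate less noisy.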
Best Regards
Sebastian
>
> On Thu, Feb 25, 2021 at 2:51 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Sina,
>>
>>
>>> On Feb 25, 2021, at 06:56, Sina Khanifar <sina@waveform.com> wrote:
>>>
>>> Thanks for the feedback, Dave!
>>>
>>>> 0) "average" jitter is a meaningless number. In the case of a videoconferencing application, what matters most is max jitter, where the app will choose to ride the top edge of that, rather than follow it. I'd prefer using a 98% number, rather than 75% number, to weight where the typical delay in a videoconfernce might end up.
>>>
>>> Both DSLReports and Ookla's desktop app report jitter as an average
>>> rather than as a max number, so I'm a little hesitant to go against
>>> the norm - users might find it a bit surprising to see much larger
>>> jitter numbers reported. We're also not taking a whole ton of latency
>>> tests in each phase, so the 98% will often end up being the max
>>> number.
>>> [...]
>>
>> [SM] Maybe the solution would be to increase the frequency of the RTT measures and increase the quantile somewhat, maybe 90 or 95?
>>
>> Best Regards
>> Sebastian
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 20:14 ` Sebastian Moeller
@ 2021-02-26 1:06 ` Daniel Lakeland
2021-02-26 8:20 ` Sina Khanifar
0 siblings, 1 reply; 41+ messages in thread
From: Daniel Lakeland @ 2021-02-26 1:06 UTC (permalink / raw)
To: Sebastian Moeller, Sina Khanifar; +Cc: Dave Täht, sam, bloat
[-- Attachment #1: Type: text/plain, Size: 4422 bytes --]
On 2/25/21 12:14 PM, Sebastian Moeller wrote:
> Hi Sina,
>
> let me try to invite Daniel Lakeland (cc'd) into this discussion. He is doing tremendous work in the OpenWrt forum, single-handedly helping gamers get the most out of their connections. I think he might have some opinions and data on latency requirements for modern gaming.
> @Daniel, this discussion is about a new and really nice speedtest (which I have already plugged in the OpenWrt forum, as you probably recall) that has a strong focus on latency increases under load. We are currently discussing what latency-increase limits to use to rate a connection for on-line gaming.
>
> Now, as a non-gamer, I would assume that gaming has at least similarly strict latency requirements as VoIP, as in most games even a slight delay at the wrong time can directly translate into a "game over". But as I said, I stopped reflex gaming pretty much when I realized how badly I was doing in Doom/Quake.
>
> Best Regards
> Sebastian
>
Thanks Sebastian,
I have used the waveform.com test myself, and found that it didn't do a
good job of measuring latency: it reported rather high outlying tails
that were not realistic for the actual traffic of interest.
Here's a test I ran while doing a ping flood; the ping output is in the
attached txt file:
https://www.waveform.com/tools/bufferbloat?test-id=e2fb822d-458b-43c6-984a-92694333ae92
Now this is with QFQ on my desktop, and a custom HFSC shaper on my WAN
router. This is somewhat more relaxed than I used to run things (I used
to HFSC-shape my desktop too, but now I just reschedule with QFQ). Pings
get highest priority along with interactive traffic in the QFQ system,
and they get low-latency but not realtime treatment at the WAN boundary.
Basically, you can see the ping time never went above 44 ms, whereas the
waveform test had outliers up to 228/231 ms.
Almost all latency-sensitive traffic will be UDP. Specifically, the voice
in VoIP and the control packets in games are all UDP. However, it
seems the waveform test measures HTTP connection open/close, and
I suspect something about that is causing the extreme outliers. From the
test description:
"We are using HTTP requests instead of WebSockets for our latency and
speed test. This has both advantages and disadvantages in terms of
measuring bufferbloat. We hope to improve this test in the future to
measure latency both via WebSockets, HTTP requests, and WebRTC Data
Channels."
I think the ideal test would use a WebRTC connection over UDP.
A lot of the games I've seen packet captures from have a fixed clock
tick in the vicinity of 60-65 Hz, with a UDP packet sent every tick. Even
1 packet lost per second would generally feel not that good to a player,
and it doesn't even need to be lost, just delayed by a large fraction of
the inter-packet time, 1/64 s = 15.6 ms. High-performance games will use
closer to 120 Hz.
So to guarantee good network performance for a game, you really want to
ensure less than, say, 10 ms of added latency at, say, the 99.5th
percentile (that's 1 packet delayed by 67% or more of the between-packet
time every 200 packets or so, or every ~3 seconds at 64 Hz).
To measure this would really require WebRTC sending ~300-byte packets
at, say, 64 Hz to a reflector that would send them back, and then
comparing the send time to the receive time.
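The send-at-a-fixed-tick / echo / compare-timestamps measurement could be sketched roughly as below. This is only an illustrative loopback sketch in plain Python (UDP sockets instead of WebRTC, localhost instead of a real reflector); every name and parameter is my assumption, not part of the proposed test.

```python
# Hypothetical sketch: send ~300-byte UDP packets at 64 Hz to a
# reflector that echoes them back, and derive per-packet RTT from
# the embedded send timestamp. Loopback only; not real WebRTC.
import socket
import struct
import threading
import time

def reflector(sock):
    # Echo every datagram back to its sender until told to stop.
    while True:
        data, addr = sock.recvfrom(2048)
        if data == b"stop":
            return
        sock.sendto(data, addr)

def measure(n=64, hz=64, payload_len=300):
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))  # reflector on an ephemeral loopback port
    threading.Thread(target=reflector, args=(rx,), daemon=True).start()

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.settimeout(1.0)
    dest = rx.getsockname()
    rtts = []
    for seq in range(n):
        # Sequence number + send timestamp, padded to ~300 bytes.
        pkt = struct.pack("!Id", seq, time.monotonic()).ljust(payload_len, b"\0")
        tx.sendto(pkt, dest)
        echo, _ = tx.recvfrom(2048)
        _, sent = struct.unpack_from("!Id", echo)
        rtts.append((time.monotonic() - sent) * 1000.0)  # RTT in ms
        time.sleep(1.0 / hz)  # one packet per tick
    tx.sendto(b"stop", dest)
    return rtts
```

A real version would run the reflector on a remote server and look at the tail of the `rtts` distribution rather than its mean.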
Rather than picking a strict percentile, I'd recommend trying to give a
"probability of no noticeable lag in a 1 second interval" or something
like that.
Supposing your test runs for, say, 10 seconds, you'll have 640 RTT
samples. Resample 64 of them randomly, say, 100 times, and calculate how
many of these resamples stay under 1/64 s = 15.6 ms of added latency.
Then express something like "97% chance of being lag-free each second",
or "45% chance of being lag-free each second", or whatever.
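The resampling idea above could be sketched like this: draw 64 samples (one "second" worth at 64 Hz) from the measured latency increases, repeat, and report the fraction of resamples in which no packet exceeds the ~15.6 ms threshold. A minimal sketch in plain Python; the function name, fixed seed, and defaults are illustrative assumptions.

```python
# Hypothetical bootstrap estimate of "probability of no noticeable
# lag in a 1-second interval" from per-packet latency increases (ms).
import random

def lag_free_probability(increase_ms, tick_hz=64, threshold_ms=1000.0 / 64,
                         resamples=100, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducible output
    ok = 0
    for _ in range(resamples):
        # One simulated second: tick_hz packets drawn with replacement.
        second = [rng.choice(increase_ms) for _ in range(tick_hz)]
        if max(second) < threshold_ms:  # every packet arrived "on time"
            ok += 1
    return ok / resamples
```

With 640 real samples this gives exactly the kind of "97% chance of lag-free each second" figure described above.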
This is a quite stringent requirement. But you know what? The message
boards are FLOODED with people unhappy with their gaming performance, so
I think it's realistic. Honestly, tell a technically minded gamer "Hey,
5% of your packets will be delayed more than 400ms" and they'd say THAT'S
HORRIBLE, not "oh good".
If a gamer can keep their round-trip time below 10 ms of latency increase
at the 99.5th-percentile level, they'll probably be feeling good. If they
get above 20 ms of increase for more than 1% of the time, they'll be
really irritated (this is more or less equivalent to 1% packet drop).
[-- Attachment #2: waveformpings.txt --]
[-- Type: text/plain, Size: 43082 bytes --]
PING waveform.com (23.227.38.65) 56(84) bytes of data.
64 bytes from myshopify.com (23.227.38.65): icmp_seq=1 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=2 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=3 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=4 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=5 ttl=53 time=16.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=6 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=7 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=8 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=9 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=10 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=11 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=12 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=13 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=14 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=15 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=16 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=17 ttl=53 time=17.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=18 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=19 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=20 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=21 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=22 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=23 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=24 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=25 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=26 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=27 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=28 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=29 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=30 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=31 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=32 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=33 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=34 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=35 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=36 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=37 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=38 ttl=53 time=14.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=39 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=40 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=41 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=42 ttl=53 time=19.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=43 ttl=53 time=16.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=44 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=45 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=46 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=47 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=48 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=49 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=50 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=51 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=52 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=53 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=54 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=55 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=56 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=57 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=58 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=59 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=60 ttl=53 time=19.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=61 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=62 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=63 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=64 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=65 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=66 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=67 ttl=53 time=16.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=68 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=69 ttl=53 time=22.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=70 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=71 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=72 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=74 ttl=53 time=24.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=75 ttl=53 time=19.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=76 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=77 ttl=53 time=17.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=78 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=79 ttl=53 time=28.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=82 ttl=53 time=28.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=83 ttl=53 time=21.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=84 ttl=53 time=27.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=85 ttl=53 time=22.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=86 ttl=53 time=29.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=87 ttl=53 time=21.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=88 ttl=53 time=16.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=89 ttl=53 time=27.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=90 ttl=53 time=23.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=91 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=93 ttl=53 time=24.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=94 ttl=53 time=21.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=95 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=96 ttl=53 time=16.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=97 ttl=53 time=17.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=98 ttl=53 time=23.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=99 ttl=53 time=23.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=100 ttl=53 time=16.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=101 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=102 ttl=53 time=20.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=103 ttl=53 time=28.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=104 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=105 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=106 ttl=53 time=17.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=107 ttl=53 time=17.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=108 ttl=53 time=32.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=110 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=111 ttl=53 time=21.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=112 ttl=53 time=24.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=113 ttl=53 time=27.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=114 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=115 ttl=53 time=18.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=116 ttl=53 time=27.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=117 ttl=53 time=25.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=118 ttl=53 time=17.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=119 ttl=53 time=27.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=120 ttl=53 time=22.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=121 ttl=53 time=21.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=122 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=123 ttl=53 time=19.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=124 ttl=53 time=27.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=125 ttl=53 time=34.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=126 ttl=53 time=19.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=127 ttl=53 time=26.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=128 ttl=53 time=30.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=129 ttl=53 time=20.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=130 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=132 ttl=53 time=22.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=133 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=134 ttl=53 time=27.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=135 ttl=53 time=27.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=136 ttl=53 time=20.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=137 ttl=53 time=27.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=138 ttl=53 time=29.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=139 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=140 ttl=53 time=17.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=141 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=142 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=143 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=144 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=145 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=146 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=147 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=148 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=149 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=150 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=151 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=152 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=153 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=154 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=155 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=156 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=157 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=158 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=159 ttl=53 time=29.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=160 ttl=53 time=21.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=161 ttl=53 time=22.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=162 ttl=53 time=19.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=164 ttl=53 time=28.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=165 ttl=53 time=31.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=166 ttl=53 time=18.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=167 ttl=53 time=21.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=169 ttl=53 time=21.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=170 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=171 ttl=53 time=19.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=172 ttl=53 time=23.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=173 ttl=53 time=22.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=174 ttl=53 time=24.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=175 ttl=53 time=19.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=176 ttl=53 time=27.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=177 ttl=53 time=19.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=178 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=179 ttl=53 time=21.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=180 ttl=53 time=25.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=181 ttl=53 time=26.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=182 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=183 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=184 ttl=53 time=30.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=185 ttl=53 time=27.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=186 ttl=53 time=23.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=187 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=189 ttl=53 time=29.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=190 ttl=53 time=26.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=191 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=192 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=193 ttl=53 time=23.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=194 ttl=53 time=22.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=195 ttl=53 time=19.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=196 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=197 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=198 ttl=53 time=17.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=199 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=200 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=201 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=202 ttl=53 time=28.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=203 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=204 ttl=53 time=40.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=205 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=206 ttl=53 time=44.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=207 ttl=53 time=22.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=208 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=209 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=210 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=211 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=212 ttl=53 time=34.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=213 ttl=53 time=16.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=214 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=215 ttl=53 time=16.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=216 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=217 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=218 ttl=53 time=19.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=219 ttl=53 time=17.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=220 ttl=53 time=17.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=221 ttl=53 time=22.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=222 ttl=53 time=19.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=223 ttl=53 time=25.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=224 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=225 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=226 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=227 ttl=53 time=17.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=228 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=229 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=230 ttl=53 time=16.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=231 ttl=53 time=17.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=232 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=233 ttl=53 time=16.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=234 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=235 ttl=53 time=17.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=236 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=237 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=238 ttl=53 time=16.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=239 ttl=53 time=17.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=240 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=241 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=242 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=243 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=244 ttl=53 time=24.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=245 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=246 ttl=53 time=16.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=247 ttl=53 time=17.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=248 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=249 ttl=53 time=17.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=250 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=251 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=252 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=253 ttl=53 time=16.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=254 ttl=53 time=19.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=255 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=256 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=257 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=258 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=259 ttl=53 time=23.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=260 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=261 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=262 ttl=53 time=20.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=263 ttl=53 time=18.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=264 ttl=53 time=22.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=265 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=266 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=267 ttl=53 time=20.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=268 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=269 ttl=53 time=20.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=270 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=271 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=272 ttl=53 time=22.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=273 ttl=53 time=24.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=274 ttl=53 time=17.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=275 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=276 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=277 ttl=53 time=17.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=278 ttl=53 time=18.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=279 ttl=53 time=19.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=280 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=281 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=282 ttl=53 time=19.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=283 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=284 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=285 ttl=53 time=16.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=286 ttl=53 time=17.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=287 ttl=53 time=16.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=288 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=289 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=290 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=291 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=292 ttl=53 time=17.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=293 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=294 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=295 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=296 ttl=53 time=19.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=297 ttl=53 time=18.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=298 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=299 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=300 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=301 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=302 ttl=53 time=25.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=303 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=304 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=305 ttl=53 time=16.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=306 ttl=53 time=16.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=307 ttl=53 time=17.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=308 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=309 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=310 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=311 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=312 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=313 ttl=53 time=17.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=314 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=315 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=316 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=317 ttl=53 time=24.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=318 ttl=53 time=16.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=319 ttl=53 time=16.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=320 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=321 ttl=53 time=16.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=322 ttl=53 time=17.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=323 ttl=53 time=16.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=324 ttl=53 time=16.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=325 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=326 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=327 ttl=53 time=23.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=328 ttl=53 time=16.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=329 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=330 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=331 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=332 ttl=53 time=17.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=333 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=334 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=335 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=336 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=337 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=338 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=339 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=340 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=341 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=342 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=343 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=344 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=345 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=346 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=347 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=348 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=349 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=350 ttl=53 time=16.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=351 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=352 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=353 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=354 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=355 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=356 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=357 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=358 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=359 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=360 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=361 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=362 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=363 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=364 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=365 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=366 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=367 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=368 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=369 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=370 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=371 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=372 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=373 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=374 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=375 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=376 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=377 ttl=53 time=17.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=378 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=379 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=380 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=381 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=382 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=383 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=384 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=385 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=386 ttl=53 time=18.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=387 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=388 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=389 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=390 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=391 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=392 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=393 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=394 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=395 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=396 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=397 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=398 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=399 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=400 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=401 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=402 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=403 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=404 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=405 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=406 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=407 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=408 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=409 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=410 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=411 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=412 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=413 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=414 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=415 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=416 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=417 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=418 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=419 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=420 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=421 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=422 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=423 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=424 ttl=53 time=17.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=425 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=426 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=427 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=428 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=429 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=430 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=431 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=432 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=433 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=434 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=435 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=436 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=437 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=438 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=439 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=440 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=441 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=442 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=443 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=444 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=445 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=446 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=447 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=448 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=449 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=450 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=451 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=452 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=453 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=454 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=455 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=456 ttl=53 time=16.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=457 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=458 ttl=53 time=16.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=459 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=460 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=461 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=462 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=463 ttl=53 time=15.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=464 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=465 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=466 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=467 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=468 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=469 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=470 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=471 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=472 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=473 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=474 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=475 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=476 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=477 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=478 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=479 ttl=53 time=16.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=480 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=481 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=482 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=483 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=484 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=485 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=486 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=487 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=488 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=489 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=490 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=491 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=492 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=493 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=494 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=495 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=496 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=497 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=498 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=499 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=500 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=501 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=502 ttl=53 time=21.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=503 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=504 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=505 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=506 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=507 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=508 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=509 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=510 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=511 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=512 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=513 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=514 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=515 ttl=53 time=16.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=516 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=517 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=518 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=519 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=520 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=521 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=522 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=523 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=524 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=525 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=526 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=527 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=528 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=529 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=530 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=531 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=532 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=533 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=534 ttl=53 time=15.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=535 ttl=53 time=14.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=536 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=537 ttl=53 time=14.9 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=538 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=539 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=540 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=541 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=542 ttl=53 time=16.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=543 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=544 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=545 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=546 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=547 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=548 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=549 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=550 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=551 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=552 ttl=53 time=15.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=553 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=554 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=555 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=556 ttl=53 time=15.0 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=557 ttl=53 time=15.4 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=558 ttl=53 time=15.3 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=559 ttl=53 time=14.7 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=560 ttl=53 time=15.1 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=561 ttl=53 time=14.8 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=562 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=563 ttl=53 time=15.2 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=564 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=565 ttl=53 time=15.5 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=566 ttl=53 time=15.6 ms
64 bytes from myshopify.com (23.227.38.65): icmp_seq=567 ttl=53 time=15.0 ms
--- waveform.com ping statistics ---
567 packets transmitted, 558 received, 1.5873% packet loss, time 114141ms
rtt min/avg/max/mdev = 14.545/17.098/44.054/3.849 ms
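[Editor's note: the summary line above reports min/avg/max/mdev, but as argued elsewhere in the thread, tail percentiles are far more informative for interactive traffic than the average. A minimal sketch of extracting RTTs and tail percentiles from a ping log like the one above (helper names are illustrative, not part of any tool discussed here):

```python
import re
import statistics

def parse_ping_rtts(log_text):
    """Extract RTT samples (in ms) from ping output lines."""
    return [float(m.group(1))
            for m in re.finditer(r"time=([\d.]+) ms", log_text)]

def latency_summary(rtts):
    """Summarize a ping run: min/avg/max plus tail percentiles.

    For videoconferencing or gaming the tail (p98, p99.5) matters far
    more than the average, since the application ends up riding the
    top edge of the delay distribution.
    """
    ordered = sorted(rtts)
    def pct(p):
        # nearest-rank percentile
        k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
        return ordered[k]
    return {
        "min": ordered[0],
        "avg": statistics.fmean(ordered),
        "max": ordered[-1],
        "p75": pct(75),
        "p98": pct(98),
        "p99.5": pct(99.5),
    }

sample = (
    "64 bytes from myshopify.com (23.227.38.65): icmp_seq=1 ttl=53 time=15.2 ms\n"
    "64 bytes from myshopify.com (23.227.38.65): icmp_seq=2 ttl=53 time=16.0 ms\n"
    "64 bytes from myshopify.com (23.227.38.65): icmp_seq=3 ttl=53 time=44.1 ms\n"
)
print(latency_summary(parse_ping_rtts(sample)))
```

On the full 558-sample run above, p98 would sit well below the 44 ms max while the average hides the spikes entirely.]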
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-26 1:06 ` Daniel Lakeland
@ 2021-02-26 8:20 ` Sina Khanifar
2021-02-26 17:25 ` Michael Richardson
0 siblings, 1 reply; 41+ messages in thread
From: Sina Khanifar @ 2021-02-26 8:20 UTC (permalink / raw)
To: contact; +Cc: Sebastian Moeller, Dave Täht, sam, bloat
Hi Daniel, and thanks for chiming in!
> I have used the waveform.com test myself, and found that it didn't do a good job of measuring latency in terms of having rather high outlying tails, which were not realistic for the actual traffic of interest.
I think the reason this is happening may be CPU throttling, an
inherent limitation of browser-based tests, rather than anything else.
I am on a cable modem with about 900 Mbps down and 35 Mbps up. If I
use Chrome's developer tools to limit my connection to 700 Mbps down
and 25 Mbps up, I end up with no bufferbloat as the tool can no longer
saturate my connection. I don't see those unusual latency spikes.
This CPU-throttling issue is one of the biggest problems with
browser-based tests. We spent a lot of time trying to minimize CPU
usage, but this was the best we could manage (and it's quite a bit
better than earlier versions of the test).
I think that given this limitation of browser-based tests, maybe the
users you have in mind are not our target audience. Our goal is to
make an easy-to-use bufferbloat test "for the rest of us." For
technically-minded gamers, running flent will likely give much better
results. I'd love for it to be otherwise, so we may try to tinker with
things to see if we can get CPU usage down and simultaneously deal
with the UDP question below.
> I think the ideal test would use a WebRTC connection over UDP.
I definitely agree that UDP traffic is the most latency-sensitive. But
bufferbloat also affects TCP traffic.
Our thinking went something like this: QoS features on some routers
prioritize UDP traffic. But bufferbloat affects all traffic. If we
only tested UDP, QoS prioritization of UDP would mean that some
routers would show no bufferbloat in our test, but nevertheless
exhibit increased latency on TCP traffic. By testing TCP, we could
ensure that both TCP and UDP traffic weren't affected by bufferbloat.
That being said: the ideal case would be to test both TCP and UDP
traffic, and then combine the results from both to rate bufferbloat
and grade the "real-world impact" section. I believe Cloudflare is
adding support for WebSockets and UDP traffic to the workers, which
should mean that we can add a UDP test when that happens.
We'll revisit this in the future, but given the CPU throttling issue
discussed above, I'm not sure it's going to get our test to exactly
where you'd like it to be.
> Supposing your test runs for say 10 seconds, you'll have 640 RTT samples. Resampling 64 of them randomly say 100 times and calculate how many of these resamples have less than 1/64 = 15ms of increased latency. Then express something like 97% chance of lag free each second, or 45% chance of lag free each second or whatever.
I'm not quite sure I understand this part of your suggested protocol.
What do you mean by resampling? Why is the resampling necessary?
Would like to grok this in case in the future we're able to deal with
the other issues and can implement an updated set of criteria for
tech-savvy gamers.
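[Editor's note: my reading of the proposal is a simple bootstrap. Each resample of 64 RTT samples simulates one second of 64 Hz game traffic; a simulated second counts as "lag free" if no packet in it exceeds the latency-increase threshold, and the fraction of lag-free resamples is the reported probability. A sketch of that interpretation (my code, not Daniel's):

```python
import random

def lag_free_probability(rtt_increases_ms, threshold_ms=15.6,
                         packets_per_second=64, n_resamples=100, seed=0):
    """Bootstrap estimate of the 'probability of a lag-free second'.

    Each resample draws packets_per_second latency-increase samples
    with replacement, simulating one second of game traffic at 64 Hz.
    A simulated second is 'lag free' if every packet in it stayed
    under the threshold; the result is the fraction of lag-free
    seconds across all resamples.
    """
    rng = random.Random(seed)
    lag_free = 0
    for _ in range(n_resamples):
        second = rng.choices(rtt_increases_ms, k=packets_per_second)
        if max(second) < threshold_ms:
            lag_free += 1
    return lag_free / n_resamples

# Example: a mostly quiet link with occasional 40 ms spikes.
samples = [2.0] * 630 + [40.0] * 10
print(f"{lag_free_probability(samples):.0%} chance of a lag-free second")
```

The resampling matters because a raw percentile tells you about individual packets, while a gamer experiences whole seconds: one bad packet out of 64 ruins the second, so the per-second probability degrades much faster than the per-packet one.]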
Best,
Sina.
On Thu, Feb 25, 2021 at 5:06 PM Daniel Lakeland
<contact@lakelandappliedsciences.com> wrote:
>
> On 2/25/21 12:14 PM, Sebastian Moeller wrote:
> > Hi Sina,
> >
> > let me try to invite Daniel Lakeland (cc'd) into this discussion. He is doing tremendous work in the OpenWrt forum to single handedly help gamers getting the most out of their connections. I think he might have some opinion and data on latency requirements for modern gaming.
> > @Daniel, this discussion is about a new and really nice speedtest (that I already plugging in the OpenWrt forum, as you probably recall) that has a strong focus on latency under load increases. We are currently discussion what latency increase limits to use to rate a connection for on-line gaming
> >
> > Now, as a non-gamer, I would assume that gaming has latency requirements at least as strict as VoIP's, as in most games even a slight delay at the wrong time can directly translate into a "game-over". But as I said, I stopped reflex gaming pretty much when I realized how badly I was doing in doom/quake.
> >
> > Best Regards
> > Sebastian
> >
> Thanks Sebastian,
>
> I have used the waveform.com test myself, and found that it didn't do a
> good job of measuring latency in terms of having rather high outlying
> tails, which were not realistic for the actual traffic of interest.
>
> Here's a test. I ran this test while doing a ping flood, with attached
> txt file
>
> https://www.waveform.com/tools/bufferbloat?test-id=e2fb822d-458b-43c6-984a-92694333ae92
>
>
> Now this is with a QFQ on my desktop, and a custom HFSC shaper on my WAN
> router. This is somewhat more relaxed than I used to run things (I used
> to HFSC shape my desktop too, but now I just reschedule with QFQ). Pings
> get highest priority along with interactive traffic in the QFQ system,
> and they get low-latency but not realtime treatment at the WAN boundary.
> Basically you can see this never went above 44ms ping time, whereas the
> waveform test had outliers up to 228/231 ms
>
> Almost all latency sensitive traffic will be UDP. Specifically the voice
> in VOIP, and the control packets in games, those are all UDP. However it
> seems like the waveform test measures http connection open/close, and
> I'm thinking something about that is causing extreme outliers. From the
> test description:
>
> "We are using HTTP requests instead of WebSockets for our latency and
> speed test. This has both advantages and disadvantages in terms of
> measuring bufferbloat. We hope to improve this test in the future to
> measure latency both via WebSockets, HTTP requests, and WebRTC Data
> Channels."
>
> I think the ideal test would use a WebRTC connection over UDP.
>
> A lot of the games I've seen packet captures from have a fixed clock
> tick in the vicinity of 60-65Hz with a UDP packet sent every tick. Even
> 1 packet lost per second would generally feel not that good to a player,
> and it doesn't even need to be lost, just delayed by a large fraction of
> 1/64 = 15.6ms... High performance games will use closer to 120Hz.
>
> So to guarantee good network performance for a game, you really want to
> ensure less than say 10ms of increasing latency at the say 99.5%tile
> (that's 1 packet delayed 67% or more of the between-packet time every
> 200 packets or so, or every ~3 seconds at 64Hz)
>
> To measure this would really require WebRTC sending ~ 300 byte packets
> at say 64Hz to a reflector that would send them back, and then compare
> the send time to the receive time.
>
> Rather than picking a strict percentile, I'd recommend trying to give a
> "probability of no noticeable lag in a 1 second interval" or something
> like that.
>
> Supposing your test runs for say 10 seconds, you'll have 640 RTT
> samples. Resampling 64 of them randomly say 100 times and calculate how
> many of these resamples have less than 1/64 = 15ms of increased latency.
> Then express something like 97% chance of lag free each second, or 45%
> chance of lag free each second or whatever.
>
> This is a quite stringent requirement. But you know what? The message
> boards are FLOODED with people unhappy with their gaming performance, so
> I think it's realistic. Honestly to tell a technically minded gamer "Hey
> 5% of your packets will be delayed more than 400ms" they'd say THAT'S
> HORRIBLE not "oh good".
>
> If a gamer can keep their round trip time below 10ms of latency increase
> at the 99.5%tile level they'll probably be feeling good. If they get
> above 20ms of increase for more than 1% of the time, they'll be really
> irritated (this is more or less equivalent to 1% packet drop)
>
>
* Re: [Bloat] Updated Bufferbloat Test
2021-02-26 8:20 ` Sina Khanifar
@ 2021-02-26 17:25 ` Michael Richardson
0 siblings, 0 replies; 41+ messages in thread
From: Michael Richardson @ 2021-02-26 17:25 UTC (permalink / raw)
To: Sina Khanifar; +Cc: bloat
Sina Khanifar <sina@waveform.com> wrote:
> I think that given this limitation of browser-based tests, maybe the
> users you have in mind are not our target audience. Our goal is to make
> an easy-to-use bufferbloat test "for the rest of us." For
> technically-minded gamers, running flent will likely give much better
> results. I'd love for it to be otherwise, so we may try to tinker with
> things to see if we can get CPU usage down and simultaneously deal with
> the UDP question below.
I think that your goals are good.
In many cases around here, the higher speed connections are also more bloated,
and the challenge is convincing a home owner that they will get better
interactive performance on a VDSL2 than a FTTH, if they upgrade their router.
(This is particularly true if their wifi is slower than their downlink, and
also bloated)
It's about quality over quantity.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | IoT architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 18:22 [Bloat] Updated Bufferbloat Test Sina Khanifar
2021-02-24 22:10 ` Dave Taht
@ 2021-02-24 22:15 ` Kenneth Porter
2021-02-25 5:29 ` Kenneth Porter
2021-02-25 1:16 ` David Collier-Brown
` (5 subsequent siblings)
7 siblings, 1 reply; 41+ messages in thread
From: Kenneth Porter @ 2021-02-24 22:15 UTC (permalink / raw)
To: bloat
My results:
<https://www.waveform.com/tools/bufferbloat?test-id=18fcb496-5764-4b13-8dcc-afbd1e75d3a8>
My LAN is 100 Mbps. I'm using an OpenWRT-based router with Cake. My ISP is
Xfinity.
I'll try to remember to run this from my office tonight when nobody's
around. We've got a 50 Mbps fiber connection with AT&T and I'm using
fq_codel on a CentOS 7 system for the SQM. That should be interesting.
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 22:15 ` Kenneth Porter
@ 2021-02-25 5:29 ` Kenneth Porter
2021-02-25 5:35 ` Dave Taht
0 siblings, 1 reply; 41+ messages in thread
From: Kenneth Porter @ 2021-02-25 5:29 UTC (permalink / raw)
To: bloat
On 2/24/2021 2:15 PM, Kenneth Porter wrote:
>
> I'll try to remember to run this from my office tonight when nobody's
> around. We've got a 50 Mbps fiber connection with AT&T and I'm using
> fq_codel on a CentOS 7 system for the SQM. That should be interesting.
https://www.waveform.com/tools/bufferbloat?test-id=494dbe95-5302-4e1c-84cd-fbb4c8871ea2
This is after restarting the sqm-scripts. I got initial bad results and
looked at the debug log and found that the iptables commands were
failing to get a lock during system boot to set up the mangle table. I
think it's competing with firewalld and fail2ban and losing. I don't see
any lock errors from the restart and now the test shows good bloat results.
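[Editor's note: the boot-time race Kenneth describes is a known xtables-lock contention pattern. Modern iptables (>= 1.4.20) has a `-w`/`--wait` flag that blocks until the lock is free instead of failing. A hedged sketch of the kind of workaround; the drop-in path and unit names are illustrative, not taken from sqm-scripts:

```shell
# Have iptables wait (up to 5 s) for the xtables lock instead of
# failing when firewalld/fail2ban hold it during boot:
iptables -w 5 -t mangle -L >/dev/null

# Alternatively, order the SQM service after the firewall services
# via a systemd drop-in (path illustrative):
#   /etc/systemd/system/sqm.service.d/after-firewall.conf
#   [Unit]
#   After=firewalld.service fail2ban.service
```

Either approach avoids the silent mangle-table setup failure without needing a manual restart after boot.]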
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 5:29 ` Kenneth Porter
@ 2021-02-25 5:35 ` Dave Taht
2021-02-25 6:24 ` Kenneth Porter
0 siblings, 1 reply; 41+ messages in thread
From: Dave Taht @ 2021-02-25 5:35 UTC (permalink / raw)
To: Kenneth Porter; +Cc: bloat
and with the sqm-scripts off?
On Wed, Feb 24, 2021 at 9:29 PM Kenneth Porter <shiva@sewingwitch.com> wrote:
>
> On 2/24/2021 2:15 PM, Kenneth Porter wrote:
> >
> > I'll try to remember to run this from my office tonight when nobody's
> > around. We've got a 50 Mbps fiber connection with AT&T and I'm using
> > fq_codel on a CentOS 7 system for the SQM. That should be interesting.
>
> https://www.waveform.com/tools/bufferbloat?test-id=494dbe95-5302-4e1c-84cd-fbb4c8871ea2
>
> This is after restarting the sqm-scripts. I got initial bad results and
> looked at the debug log and found that the iptables commands were
> failing to get a lock during system boot to set up the mangle table. I
> think it's competing with firewalld and fail2ban and losing. I don't see
> any lock errors from the restart and now the test shows good bloat results.
>
--
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman
dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 18:22 [Bloat] Updated Bufferbloat Test Sina Khanifar
2021-02-24 22:10 ` Dave Taht
2021-02-24 22:15 ` Kenneth Porter
@ 2021-02-25 1:16 ` David Collier-Brown
2021-02-25 6:21 ` Sina Khanifar
2021-02-25 4:48 ` Marco Belmonte
` (4 subsequent siblings)
7 siblings, 1 reply; 41+ messages in thread
From: David Collier-Brown @ 2021-02-25 1:16 UTC (permalink / raw)
To: bloat
I like it, but the next thing I wonder is "how good am I versus
what's normal?"
Let's say cable modems run at 42 Giga-somethings per second and I have a
cable modem. If I get 40 Gsps, or 95%, is that good or bad?
Doing that as a little horizontal graph might be a good approach, so
you can see if you land in the range around 42 up, 2 down, better or
much worse...
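One way to sketch that little horizontal graph in plain text (the "typical" range bounds here are placeholders, not survey data):

```python
def speed_bar(measured, typical_low, typical_high, width=40, scale_max=None):
    """Render a one-line horizontal bar placing a measured speed
    inside a 'typical' band, roughly as suggested above.
    All numbers share one unit (e.g. Mbps)."""
    scale_max = scale_max or typical_high * 1.2
    pos = min(width - 1, int(measured / scale_max * width))
    lo = int(typical_low / scale_max * width)
    hi = min(width - 1, int(typical_high / scale_max * width))
    bar = ["-"] * width
    for i in range(lo, hi + 1):
        bar[i] = "="          # the "normal" range
    bar[pos] = "#"            # your measurement
    pct = 100.0 * measured / typical_high
    return "[%s] %.0f%% of typical max" % ("".join(bar), pct)
```

Called as `speed_bar(40, 20, 42)`, the `#` marker lands inside the `=` band, making "good or bad?" visible at a glance.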
--dave
On 2021-02-24 1:22 p.m., Sina Khanifar wrote:
> Hi all,
>
> A couple of months ago my co-founder Sam posted an early beta of the
> Bufferbloat test that we’ve been working on, and Dave also linked to
> it a couple of weeks ago.
>
> Thank you all so much for your feedback - we almost entirely
> redesigned the tool and the UI based on the comments we received.
> We’re almost ready to launch the tool officially today at this URL,
> but wanted to show it to the list in case anyone finds any last bugs
> that we might have overlooked:
>
> https://www.waveform.com/tools/bufferbloat
>
> If you find a bug, please share the "Share Your Results" link with us
> along with what happened. We capture some debugging information on the
> backend, and having a share link allows us to diagnose any issues.
>
> This is really more of a passion project than anything else for us –
> we don’t anticipate we’ll try to commercialize it or anything like
> that. We're very thankful for all the work the folks on this list have
> done to identify and fix bufferbloat, and hope this is a useful
> contribution. I’ve personally been very frustrated by bufferbloat on a
> range of devices, and decided it might be helpful to build another
> bufferbloat test when the DSLReports test was down at some point last
> year.
>
> Our goals with this project were:
> * To build a second solid bufferbloat test in case DSLReports goes
> down again.
> * Build a test where bufferbloat is front and center as the primary
> purpose of the test, rather than just a feature.
> * Try to explain bufferbloat and its effect on a user's connection
> as clearly as possible for a lay audience.
>
> A few notes:
> * On the backend, we’re using Cloudflare’s CDN to perform the actual
> download and upload speed test. I know John Graham-Cumming has posted
> to this list in the past; if he or anyone from Cloudflare sees this,
> we’d love some help. Our Cloudflare Workers are being
> bandwidth-throttled due to having a non-enterprise grade account.
> We’ve worked around this in a kludgy way, but we’d love to get it
> resolved.
> * We have lots of ideas for improvements, e.g. simultaneous
> upload/downloads, trying different file size chunks, time-series
> latency graphs, using WebRTC to test UDP traffic etc, but in the
> interest of getting things launched we're sticking with the current
> featureset.
> * There are a lot of browser-specific workarounds that we had to
> implement, and latency itself is measured in different ways on
> Safari/Webkit vs Chromium/Firefox due to limitations of the
> PerformanceTiming APIs. You may notice that latency is different on
> different browsers, however the actual bufferbloat (relative increase
> in latency) should be pretty consistent.
>
> In terms of some of the changes we made based on the feedback we
> received on this list:
>
> Based on Toke’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> * We changed the way the speed tests run to show an instantaneous
> speed as the test is being run.
> * We moved the bufferbloat grade into the main results box.
> * We tried really hard to get as close to saturating gigabit
> connections as possible. We completely redesigned the way we chunk
> files, added a “warming up” period, and spent quite a bit of time optimizing
> our code to minimize CPU usage, as we found that was often the
> limiting factor to our speed test results.
> * We changed the shield grades altogether and went through a few
> different iterations of how to show the effect of bufferbloat on
> connectivity, and ended up with a “table view” to try to show the
> effect that bufferbloat specifically is having on the connection
> (compared to when the connection is unloaded).
> * We now link from the results table view to the FAQ where the
> conditions for each type of connection are explained.
> * We also changed the way we measure latency and now use the faster
> of either Google’s CDN or Cloudflare at any given location. We’re also
> using the WebTiming APIs to get a more accurate latency number, though
> this does not work on some mobile browsers (e.g. iOS Safari) and as a
> result we show a higher latency on mobile devices. Since our test is
> less a test of absolute latency and more a test of relative latency
> with and without load, we felt this was workable.
> * Our jitter is now an average (was previously RMS).
> * The “before you start” text was rewritten and moved above the start
> button.
> * We now spell out upload and download instead of having arrows.
> * We hugely reduced the number of cross-site scripts. I was a bit
> embarrassed by this if I’m honest - I spent a long time building web
> tools for the EFF, where we almost never allowed any cross-site
> scripts.
> * Our site is hosted on Shopify, and adding any features via
> their app store ends up adding a whole lot of gunk. But we uninstalled
> some apps, rewrote our template, and ended up removing a whole lot of
> the gunk. There’s still plenty of room for improvement, but it should
> be a lot better than before.
>
> Based on Dave Collier-Brown’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> * We replaced the “unloaded” and “loaded” language with “unloaded”
> and then “download active” and “upload active.” In the grade box we
> indicate that, for example, “Your latency increased moderately under
> load.”
> * We tried to generally make it easier for non-techie folks to
> understand by emphasizing the grade and adding the table showing how
> bufferbloat affects some commonly-used services.
> * We didn’t really change the candle charts too much - they’re
> mostly just to give a basic visual - we focused more on the actual
> meat of the results above that.
>
> Based on Sebastian Moeller’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> * We considered doing a bidirectional saturating load, but decided
> to skip implementing it for now.
> * It’s definitely something we’d
> like to experiment with more in the future.
> * We added a “warming up” period as well as a “draining” period to
> help fill and empty the buffer. We haven’t added the option for an
> extended test, but have this on our list of backlog changes to make in
> the future.
>
> Based on Y’s feedback (link):
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> * We actually ended up removing the grades, but we explained our
> criteria for the new table in the FAQ.
>
> Based on Greg White's feedback (shared privately):
> * We added an FAQ answer explaining jitter and how we measure it.
>
> We’d love for you all to play with the new version of the tool and
> send over any feedback you might have. We’re going to be in a feature
> freeze before launch but we'd love to get any bugs sorted out. We'll
> likely put this project aside after we iron out a last round of bugs
> and launch, and turn back to working on projects that help us pay the
> bills, but we definitely hope to revisit and improve the tool over
> time.
>
> Best,
>
> Sina, Arshan, and Sam.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb@spamcop.net | -- Mark Twain
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 1:16 ` David Collier-Brown
@ 2021-02-25 6:21 ` Sina Khanifar
0 siblings, 0 replies; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 6:21 UTC (permalink / raw)
To: davecb, dave.collier-brown; +Cc: bloat
Hmm, I'm not quite sure what you mean here - not all cable modems are
equal, surely? Some cable modems will run at 42 Gsps but others will
run at 0.1 Gsps. I'm not sure a comparison to a global average would
be helpful?
I think our "grading" system is meant to give at least some indication
of "how good am I versus what's normal?" - if you're getting anything
less than an A, there's clearly room for improvement.
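As a rough illustration of such a grading scheme, the latency increase under load could be mapped to a letter like this; the thresholds below are invented for illustration and are not the cutoffs the Waveform test actually uses:

```python
def bufferbloat_grade(unloaded_ms, loaded_ms):
    """Map latency increase under load to a letter grade.
    Thresholds (ms) are hypothetical placeholders."""
    increase = max(0.0, loaded_ms - unloaded_ms)
    for grade, limit in (("A+", 5), ("A", 30), ("B", 60),
                         ("C", 200), ("D", 400)):
        if increase < limit:
            return grade
    return "F"
```

The key design point is that the grade depends only on the *relative* increase, so it stays comparable across fast and slow links.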
On Wed, Feb 24, 2021 at 5:16 PM David Collier-Brown <davecb.42@gmail.com> wrote:
>
> I like it, but the next thing I wonder about is "how good am I versus
> what's normal?"
>
> Let's say cable modems run at 42 Giga-somethings per second and I have a
> cable modem. If I get 40 Gsps, or 95%, is that good or bad?
>
> Doing that as a little horizontal graph might be a good approach, so
> you can see if you land in the range around 42 up, 2 down, better or
> much worse...
>
> --dave
>
>
>
> On 2021-02-24 1:22 p.m., Sina Khanifar wrote:
> > Hi all,
> >
> > A couple of months ago my co-founder Sam posted an early beta of the
> > Bufferbloat test that we’ve been working on, and Dave also linked to
> > it a couple of weeks ago.
> >
> > Thank you all so much for your feedback - we almost entirely
> > redesigned the tool and the UI based on the comments we received.
> > We’re almost ready to launch the tool officially today at this URL,
> > but wanted to show it to the list in case anyone finds any last bugs
> > that we might have overlooked:
> >
> > https://www.waveform.com/tools/bufferbloat
> >
> > If you find a bug, please share the "Share Your Results" link with us
> > along with what happened. We capture some debugging information on the
> > backend, and having a share link allows us to diagnose any issues.
> >
> > This is really more of a passion project than anything else for us –
> > we don’t anticipate we’ll try to commercialize it or anything like
> > that. We're very thankful for all the work the folks on this list have
> > done to identify and fix bufferbloat, and hope this is a useful
> > contribution. I’ve personally been very frustrated by bufferbloat on a
> > range of devices, and decided it might be helpful to build another
> > bufferbloat test when the DSLReports test was down at some point last
> > year.
> >
> > Our goals with this project were:
> > * To build a second solid bufferbloat test in case DSLReports goes
> > down again.
> > * Build a test where bufferbloat is front and center as the primary
> > purpose of the test, rather than just a feature.
> > * Try to explain bufferbloat and its effect on a user's connection
> > as clearly as possible for a lay audience.
> >
> > A few notes:
> > * On the backend, we’re using Cloudflare’s CDN to perform the actual
> > download and upload speed test. I know John Graham-Cumming has posted
> > to this list in the past; if he or anyone from Cloudflare sees this,
> > we’d love some help. Our Cloudflare Workers are being
> > bandwidth-throttled due to having a non-enterprise grade account.
> > We’ve worked around this in a kludgy way, but we’d love to get it
> > resolved.
> > * We have lots of ideas for improvements, e.g. simultaneous
> > upload/downloads, trying different file size chunks, time-series
> > latency graphs, using WebRTC to test UDP traffic etc, but in the
> > interest of getting things launched we're sticking with the current
> > featureset.
> > * There are a lot of browser-specific workarounds that we had to
> > implement, and latency itself is measured in different ways on
> > Safari/Webkit vs Chromium/Firefox due to limitations of the
> > PerformanceTiming APIs. You may notice that latency is different on
> > different browsers, however the actual bufferbloat (relative increase
> > in latency) should be pretty consistent.
> >
> > In terms of some of the changes we made based on the feedback we
> > received on this list:
> >
> > Based on Toke’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> > * We changed the way the speed tests run to show an instantaneous
> > speed as the test is being run.
> > * We moved the bufferbloat grade into the main results box.
> > * We tried really hard to get as close to saturating gigabit
> > connections as possible. We completely redesigned the way we chunk
> > files, added a “warming up” period, and spent quite a bit of time optimizing
> > our code to minimize CPU usage, as we found that was often the
> > limiting factor to our speed test results.
> > * We changed the shield grades altogether and went through a few
> > different iterations of how to show the effect of bufferbloat on
> > connectivity, and ended up with a “table view” to try to show the
> > effect that bufferbloat specifically is having on the connection
> > (compared to when the connection is unloaded).
> > * We now link from the results table view to the FAQ where the
> > conditions for each type of connection are explained.
> > * We also changed the way we measure latency and now use the faster
> > of either Google’s CDN or Cloudflare at any given location. We’re also
> > using the WebTiming APIs to get a more accurate latency number, though
> > this does not work on some mobile browsers (e.g. iOS Safari) and as a
> > result we show a higher latency on mobile devices. Since our test is
> > less a test of absolute latency and more a test of relative latency
> > with and without load, we felt this was workable.
> > * Our jitter is now an average (was previously RMS).
> > * The “before you start” text was rewritten and moved above the start
> > button.
> > * We now spell out upload and download instead of having arrows.
> > * We hugely reduced the number of cross-site scripts. I was a bit
> > embarrassed by this if I’m honest - I spent a long time building web
> > tools for the EFF, where we almost never allowed any cross-site
> > scripts.
> > * Our site is hosted on Shopify, and adding any features via
> > their app store ends up adding a whole lot of gunk. But we uninstalled
> > some apps, rewrote our template, and ended up removing a whole lot of
> > the gunk. There’s still plenty of room for improvement, but it should
> > be a lot better than before.
> >
> > Based on Dave Collier-Brown’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> > * We replaced the “unloaded” and “loaded” language with “unloaded”
> > and then “download active” and “upload active.” In the grade box we
> > indicate that, for example, “Your latency increased moderately under
> > load.”
> > * We tried to generally make it easier for non-techie folks to
> > understand by emphasizing the grade and adding the table showing how
> > bufferbloat affects some commonly-used services.
> > * We didn’t really change the candle charts too much - they’re
> > mostly just to give a basic visual - we focused more on the actual
> > meat of the results above that.
> >
> > Based on Sebastian Moeller’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> > * We considered doing a bidirectional saturating load, but decided
> > to skip implementing it for now.
> > * It’s definitely something we’d
> > like to experiment with more in the future.
> > * We added a “warming up” period as well as a “draining” period to
> > help fill and empty the buffer. We haven’t added the option for an
> > extended test, but have this on our list of backlog changes to make in
> > the future.
> >
> > Based on Y’s feedback (link):
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> > * We actually ended up removing the grades, but we explained our
> > criteria for the new table in the FAQ.
> >
> > Based on Greg White's feedback (shared privately):
> > * We added an FAQ answer explaining jitter and how we measure it.
> >
> > We’d love for you all to play with the new version of the tool and
> > send over any feedback you might have. We’re going to be in a feature
> > freeze before launch but we'd love to get any bugs sorted out. We'll
> > likely put this project aside after we iron out a last round of bugs
> > and launch, and turn back to working on projects that help us pay the
> > bills, but we definitely hope to revisit and improve the tool over
> > time.
> >
> > Best,
> >
> > Sina, Arshan, and Sam.
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> --
> David Collier-Brown, | Always do right. This will gratify
> System Programmer and Author | some people and astonish the rest
> davecb@spamcop.net | -- Mark Twain
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 18:22 [Bloat] Updated Bufferbloat Test Sina Khanifar
` (2 preceding siblings ...)
2021-02-25 1:16 ` David Collier-Brown
@ 2021-02-25 4:48 ` Marco Belmonte
[not found] ` <1D68EF40C3A8A269831408C8@172.27.17.193>
` (3 subsequent siblings)
7 siblings, 0 replies; 41+ messages in thread
From: Marco Belmonte @ 2021-02-25 4:48 UTC (permalink / raw)
To: bloat
Wow, this is awesome and what a great contribution! Thanks so much!
On 2/24/2021 10:22 AM, Sina Khanifar wrote:
> Hi all,
>
> A couple of months ago my co-founder Sam posted an early beta of the
> Bufferbloat test that we’ve been working on, and Dave also linked to
> it a couple of weeks ago.
>
> Thank you all so much for your feedback - we almost entirely
> redesigned the tool and the UI based on the comments we received.
> We’re almost ready to launch the tool officially today at this URL,
> but wanted to show it to the list in case anyone finds any last bugs
> that we might have overlooked:
>
> https://www.waveform.com/tools/bufferbloat
>
> If you find a bug, please share the "Share Your Results" link with us
> along with what happened. We capture some debugging information on the
> backend, and having a share link allows us to diagnose any issues.
>
> This is really more of a passion project than anything else for us –
> we don’t anticipate we’ll try to commercialize it or anything like
> that. We're very thankful for all the work the folks on this list have
> done to identify and fix bufferbloat, and hope this is a useful
> contribution. I’ve personally been very frustrated by bufferbloat on a
> range of devices, and decided it might be helpful to build another
> bufferbloat test when the DSLReports test was down at some point last
> year.
>
> Our goals with this project were:
> * To build a second solid bufferbloat test in case DSLReports goes down again.
> * Build a test where bufferbloat is front and center as the primary
> purpose of the test, rather than just a feature.
> * Try to explain bufferbloat and its effect on a user's connection
> as clearly as possible for a lay audience.
>
> A few notes:
> * On the backend, we’re using Cloudflare’s CDN to perform the actual
> download and upload speed test. I know John Graham-Cumming has posted
> to this list in the past; if he or anyone from Cloudflare sees this,
> we’d love some help. Our Cloudflare Workers are being
> bandwidth-throttled due to having a non-enterprise grade account.
> We’ve worked around this in a kludgy way, but we’d love to get it
> resolved.
> * We have lots of ideas for improvements, e.g. simultaneous
> upload/downloads, trying different file size chunks, time-series
> latency graphs, using WebRTC to test UDP traffic etc, but in the
> interest of getting things launched we're sticking with the current
> featureset.
> * There are a lot of browser-specific workarounds that we had to
> implement, and latency itself is measured in different ways on
> Safari/Webkit vs Chromium/Firefox due to limitations of the
> PerformanceTiming APIs. You may notice that latency is different on
> different browsers, however the actual bufferbloat (relative increase
> in latency) should be pretty consistent.
>
> In terms of some of the changes we made based on the feedback we
> received on this list:
>
> Based on Toke’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> * We changed the way the speed tests run to show an instantaneous
> speed as the test is being run.
> * We moved the bufferbloat grade into the main results box.
> * We tried really hard to get as close to saturating gigabit
> connections as possible. We completely redesigned the way we chunk
> files, added a “warming up” period, and spent quite a bit of time optimizing
> our code to minimize CPU usage, as we found that was often the
> limiting factor to our speed test results.
> * We changed the shield grades altogether and went through a few
> different iterations of how to show the effect of bufferbloat on
> connectivity, and ended up with a “table view” to try to show the
> effect that bufferbloat specifically is having on the connection
> (compared to when the connection is unloaded).
> * We now link from the results table view to the FAQ where the
> conditions for each type of connection are explained.
> * We also changed the way we measure latency and now use the faster
> of either Google’s CDN or Cloudflare at any given location. We’re also
> using the WebTiming APIs to get a more accurate latency number, though
> this does not work on some mobile browsers (e.g. iOS Safari) and as a
> result we show a higher latency on mobile devices. Since our test is
> less a test of absolute latency and more a test of relative latency
> with and without load, we felt this was workable.
> * Our jitter is now an average (was previously RMS).
> * The “before you start” text was rewritten and moved above the start button.
> * We now spell out upload and download instead of having arrows.
> * We hugely reduced the number of cross-site scripts. I was a bit
> embarrassed by this if I’m honest - I spent a long time building web
> tools for the EFF, where we almost never allowed any cross-site
> scripts.
> * Our site is hosted on Shopify, and adding any features via
> their app store ends up adding a whole lot of gunk. But we uninstalled
> some apps, rewrote our template, and ended up removing a whole lot of
> the gunk. There’s still plenty of room for improvement, but it should
> be a lot better than before.
>
> Based on Dave Collier-Brown’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> * We replaced the “unloaded” and “loaded” language with “unloaded”
> and then “download active” and “upload active.” In the grade box we
> indicate that, for example, “Your latency increased moderately under
> load.”
> * We tried to generally make it easier for non-techie folks to
> understand by emphasizing the grade and adding the table showing how
> bufferbloat affects some commonly-used services.
> * We didn’t really change the candle charts too much - they’re
> mostly just to give a basic visual - we focused more on the actual
> meat of the results above that.
>
> Based on Sebastian Moeller’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> * We considered doing a bidirectional saturating load, but decided
> to skip implementing it for now.
> * It’s definitely something we’d
> like to experiment with more in the future.
> * We added a “warming up” period as well as a “draining” period to
> help fill and empty the buffer. We haven’t added the option for an
> extended test, but have this on our list of backlog changes to make in
> the future.
>
> Based on Y’s feedback (link):
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> * We actually ended up removing the grades, but we explained our
> criteria for the new table in the FAQ.
>
> Based on Greg White's feedback (shared privately):
> * We added an FAQ answer explaining jitter and how we measure it.
>
> We’d love for you all to play with the new version of the tool and
> send over any feedback you might have. We’re going to be in a feature
> freeze before launch but we'd love to get any bugs sorted out. We'll
> likely put this project aside after we iron out a last round of bugs
> and launch, and turn back to working on projects that help us pay the
> bills, but we definitely hope to revisit and improve the tool over
> time.
>
> Best,
>
> Sina, Arshan, and Sam.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 41+ messages in thread
[parent not found: <1D68EF40C3A8A269831408C8@172.27.17.193>]
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 18:22 [Bloat] Updated Bufferbloat Test Sina Khanifar
` (4 preceding siblings ...)
[not found] ` <1D68EF40C3A8A269831408C8@172.27.17.193>
@ 2021-02-25 9:47 ` Mikael Abrahamsson
2021-02-25 10:49 ` Sebastian Moeller
2021-02-25 12:27 ` Toke Høiland-Jørgensen
7 siblings, 0 replies; 41+ messages in thread
From: Mikael Abrahamsson @ 2021-02-25 9:47 UTC (permalink / raw)
To: Sina Khanifar; +Cc: bloat, sam
On Wed, 24 Feb 2021, Sina Khanifar wrote:
> https://www.waveform.com/tools/bufferbloat
I just wanted to confirm that the tool seems to accurately measure
even higher speeds.
This is my 1000/1000 debloated with FQ_CODEL to 900/900:
https://www.waveform.com/tools/bufferbloat?test-id=1ad173ce-9b9f-483c-842c-ea5cc08c2ff6
This is with SQM removed:
https://www.waveform.com/tools/bufferbloat?test-id=67168eb7-f7e2-44eb-9720-0dd52c725e8c
My ISP has told me that they have a 10ms FIFO in my downstream direction,
and OpenWrt defaulting to fq_codel in the upstream direction, and this
seems to be accurately reflected in what the tool shows.
Also, I think my APU2 can't really keep up with 900/900, because when I
set SQM to 500/500 I get very tightly controlled PDV:
https://www.waveform.com/tools/bufferbloat?test-id=58626d8c-2eea-43f9-9904-b1ec43f28235
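For readers curious what "tightly controlled PDV" means numerically: packet delay variation can be summarized from a series of RTT samples roughly as below. The percentile choice is an assumption of this sketch, not anything the tool reports.

```python
def pdv(rtt_samples_ms, pct=98):
    """Packet delay variation: spread of delay above the minimum.
    Using a high percentile rather than the max keeps the number
    robust to a single outlier; 98 is an arbitrary choice."""
    s = sorted(rtt_samples_ms)
    base = s[0]                                   # best-case delay
    idx = min(len(s) - 1, int(round(pct / 100.0 * (len(s) - 1))))
    return s[idx] - base
```

A shaped link with a short, well-managed queue will show a PDV of a few milliseconds; a bloated FIFO can push it into the hundreds.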
Tool looks good, I like it! Thanks!
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 18:22 [Bloat] Updated Bufferbloat Test Sina Khanifar
` (5 preceding siblings ...)
2021-02-25 9:47 ` Mikael Abrahamsson
@ 2021-02-25 10:49 ` Sebastian Moeller
2021-02-25 20:41 ` Sina Khanifar
2021-02-25 12:27 ` Toke Høiland-Jørgensen
7 siblings, 1 reply; 41+ messages in thread
From: Sebastian Moeller @ 2021-02-25 10:49 UTC (permalink / raw)
To: Sina Khanifar; +Cc: bloat, sam
Hi Sina,
great work! I took the liberty of advertising this test for some weeks
already, because even in its still-evolving state it was/is already
producing interesting, actionable results. Thanks for fixing the latency
numbers for (desktop) Safari. More below.
> On Feb 24, 2021, at 19:22, Sina Khanifar <sina@waveform.com> wrote:
>
> Hi all,
>
> A couple of months ago my co-founder Sam posted an early beta of the
> Bufferbloat test that we’ve been working on, and Dave also linked to
> it a couple of weeks ago.
>
> Thank you all so much for your feedback - we almost entirely
> redesigned the tool and the UI based on the comments we received.
> We’re almost ready to launch the tool officially today at this URL,
> but wanted to show it to the list in case anyone finds any last bugs
> that we might have overlooked:
>
> https://www.waveform.com/tools/bufferbloat
>
> If you find a bug, please share the "Share Your Results" link with us
> along with what happened. We capture some debugging information on the
> backend, and having a share link allows us to diagnose any issues.
[SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
>
> This is really more of a passion project than anything else for us –
> we don’t anticipate we’ll try to commercialize it or anything like
> that. We're very thankful for all the work the folks on this list have
> done to identify and fix bufferbloat, and hope this is a useful
> contribution. I’ve personally been very frustrated by bufferbloat on a
> range of devices, and decided it might be helpful to build another
> bufferbloat test when the DSLReports test was down at some point last
> year.
>
> Our goals with this project were:
> * To build a second solid bufferbloat test in case DSLReports goes down again.
> * Build a test where bufferbloat is front and center as the primary
> purpose of the test, rather than just a feature.
> * Try to explain bufferbloat and its effect on a user's connection
> as clearly as possible for a lay audience.
>
> A few notes:
> * On the backend, we’re using Cloudflare’s CDN to perform the actual
> download and upload speed test. I know John Graham-Cumming has posted
> to this list in the past; if he or anyone from Cloudflare sees this,
> we’d love some help. Our Cloudflare Workers are being
> bandwidth-throttled due to having a non-enterprise grade account.
> We’ve worked around this in a kludgy way, but we’d love to get it
> resolved.
[SM] I think this was a decent decision, as your test seems to have
fewer issues filling even 1 Gbps links than most others.
> * We have lots of ideas for improvements, e.g. simultaneous
> upload/downloads, trying different file size chunks, time-series
> latency graphs, using WebRTC to test UDP traffic etc, but in the
> interest of getting things launched we're sticking with the current
> featureset.
[SM] Reasonable trade-off, and hopefully potential for pleasant surprises in the future ;)
> * There are a lot of browser-specific workarounds that we had to
> implement, and latency itself is measured in different ways on
> Safari/Webkit vs Chromium/Firefox due to limitations of the
> PerformanceTiming APIs. You may notice that latency is different on
> different browsers, however the actual bufferbloat (relative increase
> in latency) should be pretty consistent.
>
> In terms of some of the changes we made based on the feedback we
> received on this list:
>
> Based on Toke’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> * We changed the way the speed tests run to show an instantaneous
> speed as the test is being run.
[SM] Great, if only so it feels comparable to "other" speedtests.
> * We moved the bufferbloat grade into the main results box.
[SM] +1; that helps set the mood ;)
> * We tried really hard to get as close to saturating gigabit
> connections as possible. We completely redesigned the way we chunk
> files, added a “warming up” period, and spent quite a bit of time optimizing
> our code to minimize CPU usage, as we found that was often the
> limiting factor to our speed test results.
> * We changed the shield grades altogether and went through a few
> different iterations of how to show the effect of bufferbloat on
> connectivity, and ended up with a “table view” to try to show the
> effect that bufferbloat specifically is having on the connection
> (compared to when the connection is unloaded).
> * We now link from the results table view to the FAQ where the
> conditions for each type of connection are explained.
> * We also changed the way we measure latency and now use the faster
> of either Google’s CDN or Cloudflare at any given location. We’re also
> using the WebTiming APIs to get a more accurate latency number, though
> this does not work on some mobile browsers (e.g. iOS Safari) and as a
> result we show a higher latency on mobile devices. Since our test is
> less a test of absolute latency and more a test of relative latency
> with and without load, we felt this was workable.
> * Our jitter is now an average (was previously RMS).
> * The “before you start” text was rewritten and moved above the start button.
> * We now spell out upload and download instead of having arrows.
> * We hugely reduced the number of cross-site scripts. I was a bit
> embarrassed by this if I’m honest - I spent a long time building web
> tools for the EFF, where we almost never allowed any cross-site
> scripts.
> * Our site is hosted on Shopify, and adding any features via their
> app store ends up adding a whole lot of gunk. But we uninstalled some
> apps, rewrote our template, and ended up removing a whole lot of the
> gunk. There’s still plenty of room for improvement, but it should be
> a lot better than before.
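The relative-latency design described in the list above can be made concrete with a small sketch (a hypothetical illustration with invented numbers, not the tool's actual code):

```python
def latency_increase(unloaded_ms, loaded_ms):
    """The quantity a bufferbloat grade keys on: extra latency under
    load. A constant, browser-specific offset in the absolute numbers
    cancels out of the difference."""
    return loaded_ms - unloaded_ms

# Same connection measured on two browsers whose timing APIs add
# different fixed offsets (values invented):
chrome = latency_increase(25.0, 250.0)   # 225.0 ms of bloat
safari = latency_increase(40.0, 265.0)   # 225.0 ms of bloat
print(chrome == safari)  # True
```

This is why differing absolute latency readings across browsers matter little for the grade itself.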
>
> Based on Dave Collier-Brown’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> * We replaced the “unloaded” and “loaded” language with “unloaded”
> and then “download active” and “upload active.” In the grade box we
> indicate that, for example, “Your latency increased moderately under
> load.”
> * We tried to generally make it easier for non-techie folks to
> understand by emphasizing the grade and adding the table showing how
> bufferbloat affects some commonly-used services.
> * We didn’t really change the candle charts too much - they’re
> mostly just to give a basic visual - we focused more on the actual
> meat of the results above that.
>
> Based on Sebastian Moeller’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> * We considered doing a bidirectional saturating load, but decided
> to skip implementing it for now. It’s definitely something we’d like
> to experiment with more in the future.
> * We added a “warming up” period as well as a “draining” period to
> help fill and empty the buffer. We haven’t added the option for an
> extended test, but have this on our list of backlog changes to make in
> the future.
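The warm-up/drain idea amounts to trimming the ramp at both ends of the transfer so only the steady-state (buffer-full) window is graded. A minimal sketch, with the function name and sample values invented for illustration:

```python
def steady_state_samples(samples, warmup=2, drain=2):
    """Drop the first `warmup` and last `drain` samples so that only
    the fully-loaded steady state contributes to the measurement."""
    if warmup + drain >= len(samples):
        return []  # test too short to contain a steady-state window
    return samples[warmup:len(samples) - drain]

# Throughput ramps up, holds, then drains as transfers finish
# (Mbps, invented):
throughput = [120, 450, 910, 930, 925, 915, 600, 150]
print(steady_state_samples(throughput))  # [910, 930, 925, 915]
```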
>
> Based on Y’s feedback (link):
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> * We actually ended up removing the grades, but we explained our
> criteria for the new table in the FAQ.
>
> Based on Greg White's feedback (shared privately):
> * We added an FAQ answer explaining jitter and how we measure it.
[SM] "There are a number of different waus of measuring and defining jitter. For the purpose of this test, we calculate jitter by taking the average of the deviations from the mean latency."
Small typo "waus" instead of "ways".
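The quoted definition is the mean absolute deviation from the mean latency; contrasted with the previous RMS definition it looks roughly like this (sample RTTs invented, not the tool's actual code):

```python
def jitter_avg_deviation(latencies_ms):
    """Current definition: average absolute deviation from the mean."""
    mean = sum(latencies_ms) / len(latencies_ms)
    return sum(abs(x - mean) for x in latencies_ms) / len(latencies_ms)

def jitter_rms(latencies_ms):
    """Previous definition: root-mean-square deviation from the mean."""
    mean = sum(latencies_ms) / len(latencies_ms)
    return (sum((x - mean) ** 2 for x in latencies_ms)
            / len(latencies_ms)) ** 0.5

samples = [20.0, 24.0, 22.0, 30.0]    # mean is 24.0 ms
print(jitter_avg_deviation(samples))  # 3.0
print(jitter_rms(samples))            # ~3.742 (RMS weights outliers more)
```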
Best Regards
Sebastian
>
> We’d love for you all to play with the new version of the tool and
> send over any feedback you might have. We’re going to be in a feature
> freeze before launch but we'd love to get any bugs sorted out. We'll
> likely put this project aside after we iron out a last round of bugs
> and launch, and turn back to working on projects that help us pay the
> bills, but we definitely hope to revisit and improve the tool over
> time.
>
> Best,
>
> Sina, Arshan, and Sam.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 10:49 ` Sebastian Moeller
@ 2021-02-25 20:41 ` Sina Khanifar
2021-02-25 20:50 ` Sina Khanifar
0 siblings, 1 reply; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 20:41 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat, sam
Hi Sebastian!
> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
We actually collect all this data, it's just a little bit hidden. If
you take the test-id from the end of the URL and put it at the end of
a URL like this:
https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
You'll get a whole bunch of extra info, including the user agent, a Unix
timestamp, and a bunch of other fun stuff :). We'll consider surfacing
this more at some point in the future though!
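In other words, the debug URL is just the share link's test-id grafted onto a different endpoint. Something like the following (the helper is hypothetical; the URL pattern is copied from the messages above):

```python
from urllib.parse import urlparse, parse_qs

DEBUG_BASE = "https://bufferbloat.waveform.workers.dev/test-results"

def debug_url(share_url: str) -> str:
    """Build the debug-data URL from a 'Share Your Results' link."""
    test_id = parse_qs(urlparse(share_url).query)["test-id"][0]
    return f"{DEBUG_BASE}?test-id={test_id}"

share = ("https://www.waveform.com/tools/bufferbloat"
         "?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9")
print(debug_url(share))
# https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
```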
> Small typo "waus" instead of "ways".
Thanks for catching this! A fix is in the works :).
On Thu, Feb 25, 2021 at 2:49 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Sina,
>
> great work! I took the liberty of advertising this test already for some weeks, because even in its still-evolving state it was/is already producing interesting actionable results. Thanks for fixing the latency numbers for (desktop) Safari. More below.
>
>
> > On Feb 24, 2021, at 19:22, Sina Khanifar <sina@waveform.com> wrote:
> >
> > Hi all,
> >
> > A couple of months ago my co-founder Sam posted an early beta of the
> > Bufferbloat test that we’ve been working on, and Dave also linked to
> > it a couple of weeks ago.
> >
> > Thank you all so much for your feedback - we almost entirely
> > redesigned the tool and the UI based on the comments we received.
> > We’re almost ready to launch the tool officially today at this URL,
> > but wanted to show it to the list in case anyone finds any last bugs
> > that we might have overlooked:
> >
> > https://www.waveform.com/tools/bufferbloat
> >
> > If you find a bug, please share the "Share Your Results" link with us
> > along with what happened. We capture some debugging information on the
> > backend, and having a share link allows us to diagnose any issues.
>
> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
>
>
> >
> > This is really more of a passion project than anything else for us –
> > we don’t anticipate we’ll try to commercialize it or anything like
> > that. We're very thankful for all the work the folks on this list have
> > done to identify and fix bufferbloat, and hope this is a useful
> > contribution. I’ve personally been very frustrated by bufferbloat on a
> > range of devices, and decided it might be helpful to build another
> > bufferbloat test when the DSLReports test was down at some point last
> > year.
> >
> > Our goals with this project were:
> > * To build a second solid bufferbloat test in case DSLReports goes down again.
> > * Build a test where bufferbloat is front and center as the primary
> > purpose of the test, rather than just a feature.
> > * Try to explain bufferbloat and its effect on a user's connection
> > as clearly as possible for a lay audience.
> >
> > A few notes:
> > * On the backend, we’re using Cloudflare’s CDN to perform the actual
> > download and upload speed test. I know John Graham-Cumming has posted
> > to this list in the past; if he or anyone from Cloudflare sees this,
> > we’d love some help. Our Cloudflare Workers are being
> > bandwidth-throttled due to having a non-enterprise grade account.
> > We’ve worked around this in a kludgy way, but we’d love to get it
> > resolved.
>
> [SM] I think this was a decent decision, as it seems your test has fewer issues filling even 1 Gbps links than most others.
>
>
> > * We have lots of ideas for improvements, e.g. simultaneous
> > upload/downloads, trying different file size chunks, time-series
> > latency graphs, using WebRTC to test UDP traffic etc, but in the
> > interest of getting things launched we're sticking with the current
> > featureset.
>
> [SM] Reasonable trade-off, and hopefully potential for pleasant surprises in the future ;)
>
> > * There are a lot of browser-specific workarounds that we had to
> > implement, and latency itself is measured in different ways on
> > Safari/Webkit vs Chromium/Firefox due to limitations of the
> > PerformanceTiming APIs. You may notice that latency is different on
> > different browsers; however, the actual bufferbloat (relative increase
> > in latency) should be pretty consistent.
> >
> > In terms of some of the changes we made based on the feedback we
> > received on this list:
> >
> > Based on Toke’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> > * We changed the way the speed tests run to show an instantaneous
> > speed as the test is being run.
>
> [SM] Great, if only so it feels comparable to "other" speedtests.
>
>
> > * We moved the bufferbloat grade into the main results box.
>
> [SM] +1; that helps set the mood ;)
>
> > * We tried really hard to get as close to saturating gigabit
> > connections as possible. We completely redesigned the way we chunk
> > files, added a “warming up” period, and spent quite a bit of time
> > optimizing our code to minimize CPU usage, as we found that was often
> > the limiting factor in our speed test results.
> > * We changed the shield grades altogether and went through a few
> > different iterations of how to show the effect of bufferbloat on
> > connectivity, and ended up with a “table view” to try to show the
> > effect that bufferbloat specifically is having on the connection
> > (compared to when the connection is unloaded).
> > * We now link from the results table view to the FAQ where the
> > conditions for each type of connection are explained.
> > * We also changed the way we measure latency and now use the faster
> > of either Google’s CDN or Cloudflare at any given location. We’re also
> > using the WebTiming APIs to get a more accurate latency number, though
> > this does not work on some mobile browsers (e.g. iOS Safari) and as a
> > result we show a higher latency on mobile devices. Since our test is
> > less a test of absolute latency and more a test of relative latency
> > with and without load, we felt this was workable.
> > * Our jitter is now an average (was previously RMS).
> > * The “before you start” text was rewritten and moved above the start button.
> > * We now spell out upload and download instead of having arrows.
> > * We hugely reduced the number of cross-site scripts. I was a bit
> > embarrassed by this if I’m honest - I spent a long time building web
> > tools for the EFF, where we almost never allowed any cross-site
> > scripts.
> > * Our site is hosted on Shopify, and adding any features via their
> > app store ends up adding a whole lot of gunk. But we uninstalled some
> > apps, rewrote our template, and ended up removing a whole lot of the
> > gunk. There’s still plenty of room for improvement, but it should be
> > a lot better than before.
> >
> > Based on Dave Collier-Brown’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> > * We replaced the “unloaded” and “loaded” language with “unloaded”
> > and then “download active” and “upload active.” In the grade box we
> > indicate that, for example, “Your latency increased moderately under
> > load.”
> > * We tried to generally make it easier for non-techie folks to
> > understand by emphasizing the grade and adding the table showing how
> > bufferbloat affects some commonly-used services.
> > * We didn’t really change the candle charts too much - they’re
> > mostly just to give a basic visual - we focused more on the actual
> > meat of the results above that.
> >
> > Based on Sebastian Moeller’s feedback:
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> > * We considered doing a bidirectional saturating load, but decided
> > to skip implementing it for now. It’s definitely something we’d like
> > to experiment with more in the future.
> > * We added a “warming up” period as well as a “draining” period to
> > help fill and empty the buffer. We haven’t added the option for an
> > extended test, but have this on our list of backlog changes to make in
> > the future.
> >
> > Based on Y’s feedback (link):
> > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> > * We actually ended up removing the grades, but we explained our
> > criteria for the new table in the FAQ.
> >
> > Based on Greg White's feedback (shared privately):
> > * We added an FAQ answer explaining jitter and how we measure it.
>
> [SM] "There are a number of different waus of measuring and defining jitter. For the purpose of this test, we calculate jitter by taking the average of the deviations from the mean latency."
>
> Small typo "waus" instead of "ways".
>
> Best Regards
> Sebastian
>
>
> >
> > We’d love for you all to play with the new version of the tool and
> > send over any feedback you might have. We’re going to be in a feature
> > freeze before launch but we'd love to get any bugs sorted out. We'll
> > likely put this project aside after we iron out a last round of bugs
> > and launch, and turn back to working on projects that help us pay the
> > bills, but we definitely hope to revisit and improve the tool over
> > time.
> >
> > Best,
> >
> > Sina, Arshan, and Sam.
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 20:41 ` Sina Khanifar
@ 2021-02-25 20:50 ` Sina Khanifar
2021-02-25 21:15 ` Sebastian Moeller
0 siblings, 1 reply; 41+ messages in thread
From: Sina Khanifar @ 2021-02-25 20:50 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat, sam
> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
One quick edit, I just changed the route to these, the debug data is
now available at:
https://bufferbloat.waveform.com/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
On Thu, Feb 25, 2021 at 12:41 PM Sina Khanifar <sina@waveform.com> wrote:
>
> Hi Sebastian!
>
> > [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
>
> We actually collect all this data, it's just a little bit hidden. If
> you take the test-id from the end of the URL and put it at the end of
> a URL like this:
>
> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>
> You'll get a whole bunch of extra info, including the user agent, a Unix
> timestamp, and a bunch of other fun stuff :). We'll consider surfacing
> this more at some point in the future though!
>
> > Small typo "waus" instead of "ways".
>
> Thanks for catching this! A fix is in the works :).
>
> On Thu, Feb 25, 2021 at 2:49 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> >
> > Hi Sina,
> >
> > great work! I took the liberty of advertising this test already for some weeks, because even in its still-evolving state it was/is already producing interesting actionable results. Thanks for fixing the latency numbers for (desktop) Safari. More below.
> >
> >
> > > On Feb 24, 2021, at 19:22, Sina Khanifar <sina@waveform.com> wrote:
> > >
> > > Hi all,
> > >
> > > A couple of months ago my co-founder Sam posted an early beta of the
> > > Bufferbloat test that we’ve been working on, and Dave also linked to
> > > it a couple of weeks ago.
> > >
> > > Thank you all so much for your feedback - we almost entirely
> > > redesigned the tool and the UI based on the comments we received.
> > > We’re almost ready to launch the tool officially today at this URL,
> > > but wanted to show it to the list in case anyone finds any last bugs
> > > that we might have overlooked:
> > >
> > > https://www.waveform.com/tools/bufferbloat
> > >
> > > If you find a bug, please share the "Share Your Results" link with us
> > > along with what happened. We capture some debugging information on the
> > > backend, and having a share link allows us to diagnose any issues.
> >
> > [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
> >
> >
> > >
> > > This is really more of a passion project than anything else for us –
> > > we don’t anticipate we’ll try to commercialize it or anything like
> > > that. We're very thankful for all the work the folks on this list have
> > > done to identify and fix bufferbloat, and hope this is a useful
> > > contribution. I’ve personally been very frustrated by bufferbloat on a
> > > range of devices, and decided it might be helpful to build another
> > > bufferbloat test when the DSLReports test was down at some point last
> > > year.
> > >
> > > Our goals with this project were:
> > > * To build a second solid bufferbloat test in case DSLReports goes down again.
> > > * Build a test where bufferbloat is front and center as the primary
> > > purpose of the test, rather than just a feature.
> > > * Try to explain bufferbloat and its effect on a user's connection
> > > as clearly as possible for a lay audience.
> > >
> > > A few notes:
> > > * On the backend, we’re using Cloudflare’s CDN to perform the actual
> > > download and upload speed test. I know John Graham-Cumming has posted
> > > to this list in the past; if he or anyone from Cloudflare sees this,
> > > we’d love some help. Our Cloudflare Workers are being
> > > bandwidth-throttled due to having a non-enterprise grade account.
> > > We’ve worked around this in a kludgy way, but we’d love to get it
> > > resolved.
> >
> > [SM] I think this was a decent decision, as it seems your test has fewer issues filling even 1 Gbps links than most others.
> >
> >
> > > * We have lots of ideas for improvements, e.g. simultaneous
> > > upload/downloads, trying different file size chunks, time-series
> > > latency graphs, using WebRTC to test UDP traffic etc, but in the
> > > interest of getting things launched we're sticking with the current
> > > featureset.
> >
> > [SM] Reasonable trade-off, and hopefully potential for pleasant surprises in the future ;)
> >
> > > * There are a lot of browser-specific workarounds that we had to
> > > implement, and latency itself is measured in different ways on
> > > Safari/Webkit vs Chromium/Firefox due to limitations of the
> > > PerformanceTiming APIs. You may notice that latency is different on
> > > different browsers; however, the actual bufferbloat (relative increase
> > > in latency) should be pretty consistent.
> > >
> > > In terms of some of the changes we made based on the feedback we
> > > received on this list:
> > >
> > > Based on Toke’s feedback:
> > > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> > > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> > > * We changed the way the speed tests run to show an instantaneous
> > > speed as the test is being run.
> >
> > [SM] Great, if only so it feels comparable to "other" speedtests.
> >
> >
> > > * We moved the bufferbloat grade into the main results box.
> >
> > [SM] +1; that helps set the mood ;)
> >
> > > * We tried really hard to get as close to saturating gigabit
> > > connections as possible. We completely redesigned the way we chunk
> > > files, added a “warming up” period, and spent quite a bit of time
> > > optimizing our code to minimize CPU usage, as we found that was often
> > > the limiting factor in our speed test results.
> > > * We changed the shield grades altogether and went through a few
> > > different iterations of how to show the effect of bufferbloat on
> > > connectivity, and ended up with a “table view” to try to show the
> > > effect that bufferbloat specifically is having on the connection
> > > (compared to when the connection is unloaded).
> > > * We now link from the results table view to the FAQ where the
> > > conditions for each type of connection are explained.
> > > * We also changed the way we measure latency and now use the faster
> > > of either Google’s CDN or Cloudflare at any given location. We’re also
> > > using the WebTiming APIs to get a more accurate latency number, though
> > > this does not work on some mobile browsers (e.g. iOS Safari) and as a
> > > result we show a higher latency on mobile devices. Since our test is
> > > less a test of absolute latency and more a test of relative latency
> > > with and without load, we felt this was workable.
> > > * Our jitter is now an average (was previously RMS).
> > > * The “before you start” text was rewritten and moved above the start button.
> > > * We now spell out upload and download instead of having arrows.
> > > * We hugely reduced the number of cross-site scripts. I was a bit
> > > embarrassed by this if I’m honest - I spent a long time building web
> > > tools for the EFF, where we almost never allowed any cross-site
> > > scripts.
> > > * Our site is hosted on Shopify, and adding any features via their
> > > app store ends up adding a whole lot of gunk. But we uninstalled some
> > > apps, rewrote our template, and ended up removing a whole lot of the
> > > gunk. There’s still plenty of room for improvement, but it should be
> > > a lot better than before.
> > >
> > > Based on Dave Collier-Brown’s feedback:
> > > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> > > * We replaced the “unloaded” and “loaded” language with “unloaded”
> > > and then “download active” and “upload active.” In the grade box we
> > > indicate that, for example, “Your latency increased moderately under
> > > load.”
> > > * We tried to generally make it easier for non-techie folks to
> > > understand by emphasizing the grade and adding the table showing how
> > > bufferbloat affects some commonly-used services.
> > > * We didn’t really change the candle charts too much - they’re
> > > mostly just to give a basic visual - we focused more on the actual
> > > meat of the results above that.
> > >
> > > Based on Sebastian Moeller’s feedback:
> > > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> > > * We considered doing a bidirectional saturating load, but decided
> > > to skip implementing it for now. It’s definitely something we’d like
> > > to experiment with more in the future.
> > > * We added a “warming up” period as well as a “draining” period to
> > > help fill and empty the buffer. We haven’t added the option for an
> > > extended test, but have this on our list of backlog changes to make in
> > > the future.
> > >
> > > Based on Y’s feedback (link):
> > > https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> > > * We actually ended up removing the grades, but we explained our
> > > criteria for the new table in the FAQ.
> > >
> > > Based on Greg White's feedback (shared privately):
> > > * We added an FAQ answer explaining jitter and how we measure it.
> >
> > [SM] "There are a number of different waus of measuring and defining jitter. For the purpose of this test, we calculate jitter by taking the average of the deviations from the mean latency."
> >
> > Small typo "waus" instead of "ways".
> >
> > Best Regards
> > Sebastian
> >
> >
> > >
> > > We’d love for you all to play with the new version of the tool and
> > > send over any feedback you might have. We’re going to be in a feature
> > > freeze before launch but we'd love to get any bugs sorted out. We'll
> > > likely put this project aside after we iron out a last round of bugs
> > > and launch, and turn back to working on projects that help us pay the
> > > bills, but we definitely hope to revisit and improve the tool over
> > > time.
> > >
> > > Best,
> > >
> > > Sina, Arshan, and Sam.
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 20:50 ` Sina Khanifar
@ 2021-02-25 21:15 ` Sebastian Moeller
2021-02-26 8:23 ` Sina Khanifar
0 siblings, 1 reply; 41+ messages in thread
From: Sebastian Moeller @ 2021-02-25 21:15 UTC (permalink / raw)
To: Sina Khanifar; +Cc: bloat, sam
Hi Sina,
most excellent! While I concur with Simon that "keeping it simple" is the right approach, would it be an option to embed the details link into the results page?
Best Regards
Sebastian
> On Feb 25, 2021, at 21:50, Sina Khanifar <sina@waveform.com> wrote:
>
>> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>
> One quick edit, I just changed the route to these, the debug data is
> now available at:
>
> https://bufferbloat.waveform.com/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>
>
> On Thu, Feb 25, 2021 at 12:41 PM Sina Khanifar <sina@waveform.com> wrote:
>>
>> Hi Sebastian!
>>
>>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
>>
>> We actually collect all this data, it's just a little bit hidden. If
>> you take the test-id from the end of the URL and put it at the end of
>> a URL like this:
>>
>> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>>
>> You'll get a whole bunch of extra info, including the user agent, a Unix
>> timestamp, and a bunch of other fun stuff :). We'll consider surfacing
>> this more at some point in the future though!
>>
>>> Small typo "waus" instead of "ways".
>>
>> Thanks for catching this! A fix is in the works :).
>>
>> On Thu, Feb 25, 2021 at 2:49 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Sina,
>>>
>>> great work! I took the liberty of advertising this test already for some weeks, because even in its still-evolving state it was/is already producing interesting actionable results. Thanks for fixing the latency numbers for (desktop) Safari. More below.
>>>
>>>
>>>> On Feb 24, 2021, at 19:22, Sina Khanifar <sina@waveform.com> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> A couple of months ago my co-founder Sam posted an early beta of the
>>>> Bufferbloat test that we’ve been working on, and Dave also linked to
>>>> it a couple of weeks ago.
>>>>
>>>> Thank you all so much for your feedback - we almost entirely
>>>> redesigned the tool and the UI based on the comments we received.
>>>> We’re almost ready to launch the tool officially today at this URL,
>>>> but wanted to show it to the list in case anyone finds any last bugs
>>>> that we might have overlooked:
>>>>
>>>> https://www.waveform.com/tools/bufferbloat
>>>>
>>>> If you find a bug, please share the "Share Your Results" link with us
>>>> along with what happened. We capture some debugging information on the
>>>> backend, and having a share link allows us to diagnose any issues.
>>>
>>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
>>>
>>>
>>>>
>>>> This is really more of a passion project than anything else for us –
>>>> we don’t anticipate we’ll try to commercialize it or anything like
>>>> that. We're very thankful for all the work the folks on this list have
>>>> done to identify and fix bufferbloat, and hope this is a useful
>>>> contribution. I’ve personally been very frustrated by bufferbloat on a
>>>> range of devices, and decided it might be helpful to build another
>>>> bufferbloat test when the DSLReports test was down at some point last
>>>> year.
>>>>
>>>> Our goals with this project were:
>>>> * To build a second solid bufferbloat test in case DSLReports goes down again.
>>>> * Build a test where bufferbloat is front and center as the primary
>>>> purpose of the test, rather than just a feature.
>>>> * Try to explain bufferbloat and its effect on a user's connection
>>>> as clearly as possible for a lay audience.
>>>>
>>>> A few notes:
>>>> * On the backend, we’re using Cloudflare’s CDN to perform the actual
>>>> download and upload speed test. I know John Graham-Cumming has posted
>>>> to this list in the past; if he or anyone from Cloudflare sees this,
>>>> we’d love some help. Our Cloudflare Workers are being
>>>> bandwidth-throttled due to having a non-enterprise grade account.
>>>> We’ve worked around this in a kludgy way, but we’d love to get it
>>>> resolved.
>>>
>>> [SM] I think this was a decent decision, as it seems your test has fewer issues filling even 1 Gbps links than most others.
>>>
>>>
>>>> * We have lots of ideas for improvements, e.g. simultaneous
>>>> upload/downloads, trying different file size chunks, time-series
>>>> latency graphs, using WebRTC to test UDP traffic etc, but in the
>>>> interest of getting things launched we're sticking with the current
>>>> featureset.
>>>
>>> [SM] Reasonable trade-off, and hopefully potential for pleasant surprises in the future ;)
>>>
>>>> * There are a lot of browser-specific workarounds that we had to
>>>> implement, and latency itself is measured in different ways on
>>>> Safari/Webkit vs Chromium/Firefox due to limitations of the
>>>> PerformanceTiming APIs. You may notice that latency is different on
>>>> different browsers; however, the actual bufferbloat (relative increase
>>>> in latency) should be pretty consistent.
>>>>
>>>> In terms of some of the changes we made based on the feedback we
>>>> received on this list:
>>>>
>>>> Based on Toke’s feedback:
>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
>>>> * We changed the way the speed tests run to show an instantaneous
>>>> speed as the test is being run.
>>>
>>> [SM] Great, if only so it feels comparable to "other" speedtests.
>>>
>>>
>>>> * We moved the bufferbloat grade into the main results box.
>>>
>>> [SM] +1; that helps set the mood ;)
>>>
>>>> * We tried really hard to get as close to saturating gigabit
>>>> connections as possible. We completely redesigned the way we chunk
>>>> files, added a “warming up” period, and spent quite a bit of time
>>>> optimizing our code to minimize CPU usage, as we found that was often
>>>> the limiting factor in our speed test results.
>>>> * We changed the shield grades altogether and went through a few
>>>> different iterations of how to show the effect of bufferbloat on
>>>> connectivity, and ended up with a “table view” to try to show the
>>>> effect that bufferbloat specifically is having on the connection
>>>> (compared to when the connection is unloaded).
>>>> * We now link from the results table view to the FAQ where the
>>>> conditions for each type of connection are explained.
>>>> * We also changed the way we measure latency and now use the faster
>>>> of either Google’s CDN or Cloudflare at any given location. We’re also
>>>> using the WebTiming APIs to get a more accurate latency number, though
>>>> this does not work on some mobile browsers (e.g. iOS Safari) and as a
>>>> result we show a higher latency on mobile devices. Since our test is
>>>> less a test of absolute latency and more a test of relative latency
>>>> with and without load, we felt this was workable.
>>>> * Our jitter is now an average (was previously RMS).
>>>> * The “before you start” text was rewritten and moved above the start button.
>>>> * We now spell out upload and download instead of having arrows.
>>>> * We hugely reduced the number of cross-site scripts. I was a bit
>>>> embarrassed by this if I’m honest - I spent a long time building web
>>>> tools for the EFF, where we almost never allowed any cross-site
>>>> scripts.
>>>> * Our site is hosted on Shopify, and adding any features via
>>>> their app store ends up adding a whole lot of gunk. But we uninstalled
>>>> some apps, rewrote our template, and ended up removing a whole lot of
>>>> the gunk. There’s still plenty of room for improvement, but it should
>>>> be a lot better than before.
>>>>
>>>> Based on Dave Collier-Brown’s feedback:
>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
>>>> * We replaced the “unloaded” and “loaded” language with “unloaded”
>>>> and then “download active” and “upload active.” In the grade box we
>>>> indicate that, for example, “Your latency increased moderately under
>>>> load.”
>>>> * We tried to generally make it easier for non-techie folks to
>>>> understand by emphasizing the grade and adding the table showing how
>>>> bufferbloat affects some commonly-used services.
>>>> * We didn’t really change the candle charts too much - they’re
>>>> mostly just to give a basic visual - we focused more on the actual
>>>> meat of the results above that.
>>>>
>>>> Based on Sebastian Moeller’s feedback:
>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
>>>> * We considered doing a bidirectional saturating load, but decided
>>>> to skip on implementing it for now.
>>>> * It’s definitely something we’d
>>>> like to experiment with more in the future.
>>>> * We added a “warming up” period as well as a “draining” period to
>>>> help fill and empty the buffer. We haven’t added the option for an
>>>> extended test, but have this on our list of backlog changes to make in
>>>> the future.
>>>>
>>>> Based on Y’s feedback (link):
>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
>>>> * We actually ended up removing the grades, but we explained our
>>>> criteria for the new table in the FAQ.
>>>>
>>>> Based on Greg White's feedback (shared privately):
>>>> * We added an FAQ answer explaining jitter and how we measure it.
>>>
>>> [SM] "There are a number of different waus of measuring and defining jitter. For the purpose of this test, we calculate jitter by taking the average of the deviations from the mean latency."
>>>
>>> Small typo "waus" instead of "ways".
>>>
>>> Best Regards
>>> Sebastian
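[As a concrete illustration of the jitter definition quoted above (the average of the deviations from the mean latency), here is a minimal Python sketch; the function name and sample RTT values are hypothetical, not taken from the actual test:]

```python
# Minimal sketch of the quoted jitter definition: the average of the
# absolute deviations of each latency sample from the mean latency.
# The sample RTTs below are hypothetical, not from the real test.

def jitter_mean_abs_dev(latencies_ms):
    """Average absolute deviation from the mean latency, in ms."""
    mean = sum(latencies_ms) / len(latencies_ms)
    return sum(abs(x - mean) for x in latencies_ms) / len(latencies_ms)

samples = [20.0, 24.0, 18.0, 30.0, 22.0]  # hypothetical RTT samples (ms)
print(round(jitter_mean_abs_dev(samples), 2))  # -> 3.36
```

[Note this is the "average" method the tool now uses; the earlier RMS variant would instead square the deviations, average them, and take the square root.]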
>>>
>>>
>>>>
>>>> We’d love for you all to play with the new version of the tool and
>>>> send over any feedback you might have. We’re going to be in a feature
>>>> freeze before launch but we'd love to get any bugs sorted out. We'll
>>>> likely put this project aside after we iron out a last round of bugs
>>>> and launch, and turn back to working on projects that help us pay the
>>>> bills, but we definitely hope to revisit and improve the tool over
>>>> time.
>>>>
>>>> Best,
>>>>
>>>> Sina, Arshan, and Sam.
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 21:15 ` Sebastian Moeller
@ 2021-02-26 8:23 ` Sina Khanifar
2021-02-26 18:41 ` Sina Khanifar
0 siblings, 1 reply; 41+ messages in thread
From: Sina Khanifar @ 2021-02-26 8:23 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat, sam
> would it be an option to embed the details link into the results page?
> Having more detail available but not shown by default on the main page might keep the geeks happy and make diagnosis easier.
Will give this a bit of thought and see if we can make it happen!
On Thu, Feb 25, 2021 at 1:15 PM Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Sina,
>
> most excellent! While I concur with Simon that "keeping it simple" is the right approach, would it be an option to embed the details link into the results page?
>
> Best Regards
> Sebastian
>
>
>
> > On Feb 25, 2021, at 21:50, Sina Khanifar <sina@waveform.com> wrote:
> >
> >> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
> >
> > One quick edit, I just changed the route to these, the debug data is
> > now available at:
> >
> > https://bufferbloat.waveform.com/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
> >
> >
> > On Thu, Feb 25, 2021 at 12:41 PM Sina Khanifar <sina@waveform.com> wrote:
> >>
> >> Hi Sebastian!
> >>
> >>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
> >>
> >> We actually collect all this data, it's just a little bit hidden. If
> >> you take the test-id from the end of the URL and put it at the end of
> >> a URL like this:
> >>
> >> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
> >>
> >> You'll get a whole bunch of extra info, including useragent, a Unix
> >> timestamp, and a bunch of other fun stuff :). We'll consider surfacing
> >> this more at some point in the future though!
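[For anyone scripting this lookup, the step described above can be sketched as follows; the endpoint base is the one quoted in this message (the thread notes the route later moved to bufferbloat.waveform.com), and the helper name is my own:]

```python
# Sketch of the lookup described above: pull the test-id out of a
# "Share Your Results" link and build the debug-results URL from it.
# The endpoint base is the one quoted in this message; the route was
# later moved to https://bufferbloat.waveform.com/test-results.

from urllib.parse import parse_qs, urlparse

DEBUG_BASE = "https://bufferbloat.waveform.workers.dev/test-results"

def debug_url(share_link):
    """Return the debug-results URL for a given share link."""
    query = parse_qs(urlparse(share_link).query)
    test_id = query["test-id"][0]  # KeyError if the link has no test-id
    return f"{DEBUG_BASE}?test-id={test_id}"

share = ("https://www.waveform.com/tools/bufferbloat"
         "?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9")
print(debug_url(share))
```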
> >>
> >>> Small typo "waus" instead of "ways".
> >>
> >> Thanks for catching this! A fix is in the works :).
> >>
> >> On Thu, Feb 25, 2021 at 2:49 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> >>>
> >>> Hi Sina,
> >>>
> >>> great work! I took the liberty to advertise this test already for some weeks, because even in its still-evolving state it was/is already producing interesting, actionable results. Thanks for fixing the latency numbers for (desktop) Safari. More below.
> >>>
> >>>
> >>>> On Feb 24, 2021, at 19:22, Sina Khanifar <sina@waveform.com> wrote:
> >>>>
> >>>> Hi all,
> >>>>
> >>>> A couple of months ago my co-founder Sam posted an early beta of the
> >>>> Bufferbloat test that we’ve been working on, and Dave also linked to
> >>>> it a couple of weeks ago.
> >>>>
> >>>> Thank you all so much for your feedback - we almost entirely
> >>>> redesigned the tool and the UI based on the comments we received.
> >>>> We’re almost ready to launch the tool officially today at this URL,
> >>>> but wanted to show it to the list in case anyone finds any last bugs
> >>>> that we might have overlooked:
> >>>>
> >>>> https://www.waveform.com/tools/bufferbloat
> >>>>
> >>>> If you find a bug, please share the "Share Your Results" link with us
> >>>> along with what happened. We capture some debugging information on the
> >>>> backend, and having a share link allows us to diagnose any issues.
> >>>
> >>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
> >>>
> >>>
> >>>>
> >>>> This is really more of a passion project than anything else for us –
> >>>> we don’t anticipate we’ll try to commercialize it or anything like
> >>>> that. We're very thankful for all the work the folks on this list have
> >>>> done to identify and fix bufferbloat, and hope this is a useful
> >>>> contribution. I’ve personally been very frustrated by bufferbloat on a
> >>>> range of devices, and decided it might be helpful to build another
> >>>> bufferbloat test when the DSLReports test was down at some point last
> >>>> year.
> >>>>
> >>>> Our goals with this project were:
> >>>> * To build a second solid bufferbloat test in case DSLReports goes down again.
> >>>> * Build a test where bufferbloat is front and center as the primary
> >>>> purpose of the test, rather than just a feature.
> >>>> * Try to explain bufferbloat and its effect on a user's connection
> >>>> as clearly as possible for a lay audience.
> >>>>
> >>>> A few notes:
> >>>> * On the backend, we’re using Cloudflare’s CDN to perform the actual
>>>> download and upload speed test. I know John Graham-Cumming has posted
> >>>> to this list in the past; if he or anyone from Cloudflare sees this,
> >>>> we’d love some help. Our Cloudflare Workers are being
> >>>> bandwidth-throttled due to having a non-enterprise grade account.
> >>>> We’ve worked around this in a kludgy way, but we’d love to get it
> >>>> resolved.
> >>>
> >>> [SM] I think this was a decent decision, as it seems your test has fewer issues filling even 1Gbps links than most others.
> >>>
> >>>
> >>>> * We have lots of ideas for improvements, e.g. simultaneous
> >>>> upload/downloads, trying different file size chunks, time-series
> >>>> latency graphs, using WebRTC to test UDP traffic etc, but in the
> >>>> interest of getting things launched we're sticking with the current
> >>>> featureset.
> >>>
> >>> [SM] Reasonable trade-off, and hopefully potential for pleasant surprises in the future ;)
> >>>
> >>>> * There are a lot of browser-specific workarounds that we had to
> >>>> implement, and latency itself is measured in different ways on
> >>>> Safari/Webkit vs Chromium/Firefox due to limitations of the
> >>>> PerformanceTiming APIs. You may notice that latency is different on
> >>>> different browsers; however, the actual bufferbloat (relative increase
> >>>> in latency) should be pretty consistent.
> >>>>
> >>>> In terms of some of the changes we made based on the feedback we
> >>>> receive on this list:
> >>>>
> >>>> Based on Toke’s feedback:
> >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> >>>> * We changed the way the speed tests run to show an instantaneous
> >>>> speed as the test is being run.
> >>>
> >>> [SM] Great, if only so it feels comparable to "other" speedtests.
> >>>
> >>>
> >>>> * We moved the bufferbloat grade into the main results box.
> >>>
> >>> [SM] +1; that helps set the mood ;)
> >>>
> >>>> * We tried really hard to get as close to saturating gigabit
> >>>> connections as possible. We completely redesigned the way we chunk
> >>>> files, added a “warming up” period, and spent quite a bit of time optimizing
> >>>> our code to minimize CPU usage, as we found that was often the
> >>>> limiting factor to our speed test results.
> >>>> * We changed the shield grades altogether and went through a few
> >>>> different iterations of how to show the effect of bufferbloat on
> >>>> connectivity, and ended up with a “table view” to try to show the
> >>>> effect that bufferbloat specifically is having on the connection
> >>>> (compared to when the connection is unloaded).
> >>>> * We now link from the results table view to the FAQ where the
> >>>> conditions for each type of connection are explained.
> >>>> * We also changed the way we measure latency and now use the faster
> >>>> of either Google’s CDN or Cloudflare at any given location. We’re also
> >>>> using the WebTiming APIs to get a more accurate latency number, though
> >>>> this does not work on some mobile browsers (e.g. iOS Safari) and as a
> >>>> result we show a higher latency on mobile devices. Since our test is
> >>>> less a test of absolute latency and more a test of relative latency
> >>>> with and without load, we felt this was workable.
> >>>> * Our jitter is now an average (was previously RMS).
> >>>> * The “before you start” text was rewritten and moved above the start button.
> >>>> * We now spell out upload and download instead of having arrows.
> >>>> * We hugely reduced the number of cross-site scripts. I was a bit
> >>>> embarrassed by this if I’m honest - I spent a long time building web
> >>>> tools for the EFF, where we almost never allowed any cross-site
> >>>> scripts.
> >>>> * Our site is hosted on Shopify, and adding any features via
> >>>> their app store ends up adding a whole lot of gunk. But we uninstalled
> >>>> some apps, rewrote our template, and ended up removing a whole lot of
> >>>> the gunk. There’s still plenty of room for improvement, but it should
> >>>> be a lot better than before.
> >>>>
> >>>> Based on Dave Collier-Brown’s feedback:
> >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> >>>> * We replaced the “unloaded” and “loaded” language with “unloaded”
> >>>> and then “download active” and “upload active.” In the grade box we
> >>>> indicate that, for example, “Your latency increased moderately under
> >>>> load.”
> >>>> * We tried to generally make it easier for non-techie folks to
> >>>> understand by emphasizing the grade and adding the table showing how
> >>>> bufferbloat affects some commonly-used services.
> >>>> * We didn’t really change the candle charts too much - they’re
> >>>> mostly just to give a basic visual - we focused more on the actual
> >>>> meat of the results above that.
> >>>>
> >>>> Based on Sebastian Moeller’s feedback:
> >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> >>>> * We considered doing a bidirectional saturating load, but decided
> >>>> to skip on implementing it for now.
> >>>> * It’s definitely something we’d
> >>>> like to experiment with more in the future.
> >>>> * We added a “warming up” period as well as a “draining” period to
> >>>> help fill and empty the buffer. We haven’t added the option for an
> >>>> extended test, but have this on our list of backlog changes to make in
> >>>> the future.
> >>>>
> >>>> Based on Y’s feedback (link):
> >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> >>>> * We actually ended up removing the grades, but we explained our
> >>>> criteria for the new table in the FAQ.
> >>>>
> >>>> Based on Greg White's feedback (shared privately):
> >>>> * We added an FAQ answer explaining jitter and how we measure it.
> >>>
> >>> [SM] "There are a number of different waus of measuring and defining jitter. For the purpose of this test, we calculate jitter by taking the average of the deviations from the mean latency."
> >>>
> >>> Small typo "waus" instead of "ways".
> >>>
> >>> Best Regards
> >>> Sebastian
> >>>
> >>>
> >>>>
> >>>> We’d love for you all to play with the new version of the tool and
> >>>> send over any feedback you might have. We’re going to be in a feature
> >>>> freeze before launch but we'd love to get any bugs sorted out. We'll
> >>>> likely put this project aside after we iron out a last round of bugs
> >>>> and launch, and turn back to working on projects that help us pay the
> >>>> bills, but we definitely hope to revisit and improve the tool over
> >>>> time.
> >>>>
> >>>> Best,
> >>>>
> >>>> Sina, Arshan, and Sam.
> >>>> _______________________________________________
> >>>> Bloat mailing list
> >>>> Bloat@lists.bufferbloat.net
> >>>> https://lists.bufferbloat.net/listinfo/bloat
> >>>
>
* Re: [Bloat] Updated Bufferbloat Test
2021-02-26 8:23 ` Sina Khanifar
@ 2021-02-26 18:41 ` Sina Khanifar
2021-02-26 19:58 ` Sebastian Moeller
0 siblings, 1 reply; 41+ messages in thread
From: Sina Khanifar @ 2021-02-26 18:41 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat, sam
Just a quick follow-up question @Sebastian @Simon:
We're thinking of implementing a "download" button that would download
the same data currently shown in the JSON view but instead as a CSV.
Would that work?
Best,
Sina.
On Fri, Feb 26, 2021 at 12:23 AM Sina Khanifar <sina@waveform.com> wrote:
>
> > would it be an option to embed the details link into the results page?
>
> > Having more detail available but not shown by default on the main page might keep the geeks happy and make diagnosis easier.
>
> Will give this a bit of thought and see if we can make it happen!
>
> On Thu, Feb 25, 2021 at 1:15 PM Sebastian Moeller <moeller0@gmx.de> wrote:
> >
> > Hi Sina,
> >
> > most excellent! While I concur with Simon that "keeping it simple" is the right approach, would it be an option to embed the details link into the results page?
> >
> > Best Regards
> > Sebastian
> >
> >
> >
> > > On Feb 25, 2021, at 21:50, Sina Khanifar <sina@waveform.com> wrote:
> > >
> > >> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
> > >
> > > One quick edit, I just changed the route to these, the debug data is
> > > now available at:
> > >
> > > https://bufferbloat.waveform.com/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
> > >
> > >
> > > On Thu, Feb 25, 2021 at 12:41 PM Sina Khanifar <sina@waveform.com> wrote:
> > >>
> > >> Hi Sebastian!
> > >>
> > >>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
> > >>
> > >> We actually collect all this data, it's just a little bit hidden. If
> > >> you take the test-id from the end of the URL and put it at the end of
> > >> a URL like this:
> > >>
> > >> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
> > >>
> > >> You'll get a whole bunch of extra info, including useragent, a Unix
> > >> timestamp, and a bunch of other fun stuff :). We'll consider surfacing
> > >> this more at some point in the future though!
> > >>
> > >>> Small typo "waus" instead of "ways".
> > >>
> > >> Thanks for catching this! A fix is in the works :).
> > >>
> > >> On Thu, Feb 25, 2021 at 2:49 AM Sebastian Moeller <moeller0@gmx.de> wrote:
> > >>>
> > >>> Hi Sina,
> > >>>
> > >>> great work! I took the liberty to advertise this test already for some weeks, because even in its still-evolving state it was/is already producing interesting, actionable results. Thanks for fixing the latency numbers for (desktop) Safari. More below.
> > >>>
> > >>>
> > >>>> On Feb 24, 2021, at 19:22, Sina Khanifar <sina@waveform.com> wrote:
> > >>>>
> > >>>> Hi all,
> > >>>>
> > >>>> A couple of months ago my co-founder Sam posted an early beta of the
> > >>>> Bufferbloat test that we’ve been working on, and Dave also linked to
> > >>>> it a couple of weeks ago.
> > >>>>
> > >>>> Thank you all so much for your feedback - we almost entirely
> > >>>> redesigned the tool and the UI based on the comments we received.
> > >>>> We’re almost ready to launch the tool officially today at this URL,
> > >>>> but wanted to show it to the list in case anyone finds any last bugs
> > >>>> that we might have overlooked:
> > >>>>
> > >>>> https://www.waveform.com/tools/bufferbloat
> > >>>>
> > >>>> If you find a bug, please share the "Share Your Results" link with us
> > >>>> along with what happened. We capture some debugging information on the
> > >>>> backend, and having a share link allows us to diagnose any issues.
> > >>>
> > >>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
> > >>>
> > >>>
> > >>>>
> > >>>> This is really more of a passion project than anything else for us –
> > >>>> we don’t anticipate we’ll try to commercialize it or anything like
> > >>>> that. We're very thankful for all the work the folks on this list have
> > >>>> done to identify and fix bufferbloat, and hope this is a useful
> > >>>> contribution. I’ve personally been very frustrated by bufferbloat on a
> > >>>> range of devices, and decided it might be helpful to build another
> > >>>> bufferbloat test when the DSLReports test was down at some point last
> > >>>> year.
> > >>>>
> > >>>> Our goals with this project were:
> > >>>> * To build a second solid bufferbloat test in case DSLReports goes down again.
> > >>>> * Build a test where bufferbloat is front and center as the primary
> > >>>> purpose of the test, rather than just a feature.
> > >>>> * Try to explain bufferbloat and its effect on a user's connection
> > >>>> as clearly as possible for a lay audience.
> > >>>>
> > >>>> A few notes:
> > >>>> * On the backend, we’re using Cloudflare’s CDN to perform the actual
> >>>> download and upload speed test. I know John Graham-Cumming has posted
> > >>>> to this list in the past; if he or anyone from Cloudflare sees this,
> > >>>> we’d love some help. Our Cloudflare Workers are being
> > >>>> bandwidth-throttled due to having a non-enterprise grade account.
> > >>>> We’ve worked around this in a kludgy way, but we’d love to get it
> > >>>> resolved.
> > >>>
> > >>> [SM] I think this was a decent decision, as it seems your test has fewer issues filling even 1Gbps links than most others.
> > >>>
> > >>>
> > >>>> * We have lots of ideas for improvements, e.g. simultaneous
> > >>>> upload/downloads, trying different file size chunks, time-series
> > >>>> latency graphs, using WebRTC to test UDP traffic etc, but in the
> > >>>> interest of getting things launched we're sticking with the current
> > >>>> featureset.
> > >>>
> > >>> [SM] Reasonable trade-off, and hopefully potential for pleasant surprises in the future ;)
> > >>>
> > >>>> * There are a lot of browser-specific workarounds that we had to
> > >>>> implement, and latency itself is measured in different ways on
> > >>>> Safari/Webkit vs Chromium/Firefox due to limitations of the
> > >>>> PerformanceTiming APIs. You may notice that latency is different on
> > >>>> different browsers; however, the actual bufferbloat (relative increase
> > >>>> in latency) should be pretty consistent.
> > >>>>
> > >>>> In terms of some of the changes we made based on the feedback we
> > >>>> receive on this list:
> > >>>>
> > >>>> Based on Toke’s feedback:
> > >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> > >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
> > >>>> * We changed the way the speed tests run to show an instantaneous
> > >>>> speed as the test is being run.
> > >>>
> > >>> [SM] Great, if only so it feels comparable to "other" speedtests.
> > >>>
> > >>>
> > >>>> * We moved the bufferbloat grade into the main results box.
> > >>>
> > >>> [SM] +1; that helps set the mood ;)
> > >>>
> > >>>> * We tried really hard to get as close to saturating gigabit
> > >>>> connections as possible. We completely redesigned the way we chunk
> > >>>> files, added a “warming up” period, and spent quite a bit of time optimizing
> > >>>> our code to minimize CPU usage, as we found that was often the
> > >>>> limiting factor to our speed test results.
> > >>>> * We changed the shield grades altogether and went through a few
> > >>>> different iterations of how to show the effect of bufferbloat on
> > >>>> connectivity, and ended up with a “table view” to try to show the
> > >>>> effect that bufferbloat specifically is having on the connection
> > >>>> (compared to when the connection is unloaded).
> > >>>> * We now link from the results table view to the FAQ where the
> > >>>> conditions for each type of connection are explained.
> > >>>> * We also changed the way we measure latency and now use the faster
> > >>>> of either Google’s CDN or Cloudflare at any given location. We’re also
> > >>>> using the WebTiming APIs to get a more accurate latency number, though
> > >>>> this does not work on some mobile browsers (e.g. iOS Safari) and as a
> > >>>> result we show a higher latency on mobile devices. Since our test is
> > >>>> less a test of absolute latency and more a test of relative latency
> > >>>> with and without load, we felt this was workable.
> > >>>> * Our jitter is now an average (was previously RMS).
> > >>>> * The “before you start” text was rewritten and moved above the start button.
> > >>>> * We now spell out upload and download instead of having arrows.
> > >>>> * We hugely reduced the number of cross-site scripts. I was a bit
> > >>>> embarrassed by this if I’m honest - I spent a long time building web
> > >>>> tools for the EFF, where we almost never allowed any cross-site
> > >>>> scripts.
> > >>>> * Our site is hosted on Shopify, and adding any features via
> > >>>> their app store ends up adding a whole lot of gunk. But we uninstalled
> > >>>> some apps, rewrote our template, and ended up removing a whole lot of
> > >>>> the gunk. There’s still plenty of room for improvement, but it should
> > >>>> be a lot better than before.
> > >>>>
> > >>>> Based on Dave Collier-Brown’s feedback:
> > >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
> > >>>> * We replaced the “unloaded” and “loaded” language with “unloaded”
> > >>>> and then “download active” and “upload active.” In the grade box we
> > >>>> indicate that, for example, “Your latency increased moderately under
> > >>>> load.”
> > >>>> * We tried to generally make it easier for non-techie folks to
> > >>>> understand by emphasizing the grade and adding the table showing how
> > >>>> bufferbloat affects some commonly-used services.
> > >>>> * We didn’t really change the candle charts too much - they’re
> > >>>> mostly just to give a basic visual - we focused more on the actual
> > >>>> meat of the results above that.
> > >>>>
> > >>>> Based on Sebastian Moeller’s feedback:
> > >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
> > >>>> * We considered doing a bidirectional saturating load, but decided
> > >>>> to skip on implementing it for now.
> > >>>> * It’s definitely something we’d
> > >>>> like to experiment with more in the future.
> > >>>> * We added a “warming up” period as well as a “draining” period to
> > >>>> help fill and empty the buffer. We haven’t added the option for an
> > >>>> extended test, but have this on our list of backlog changes to make in
> > >>>> the future.
> > >>>>
> > >>>> Based on Y’s feedback (link):
> > >>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
> > >>>> * We actually ended up removing the grades, but we explained our
> > >>>> criteria for the new table in the FAQ.
> > >>>>
> > >>>> Based on Greg White's feedback (shared privately):
> > >>>> * We added an FAQ answer explaining jitter and how we measure it.
> > >>>
> > >>> [SM] "There are a number of different waus of measuring and defining jitter. For the purpose of this test, we calculate jitter by taking the average of the deviations from the mean latency."
> > >>>
> > >>> Small typo "waus" instead of "ways".
> > >>>
> > >>> Best Regards
> > >>> Sebastian
> > >>>
> > >>>
> > >>>>
> > >>>> We’d love for you all to play with the new version of the tool and
> > >>>> send over any feedback you might have. We’re going to be in a feature
> > >>>> freeze before launch but we'd love to get any bugs sorted out. We'll
> > >>>> likely put this project aside after we iron out a last round of bugs
> > >>>> and launch, and turn back to working on projects that help us pay the
> > >>>> bills, but we definitely hope to revisit and improve the tool over
> > >>>> time.
> > >>>>
> > >>>> Best,
> > >>>>
> > >>>> Sina, Arshan, and Sam.
> > >>>> _______________________________________________
> > >>>> Bloat mailing list
> > >>>> Bloat@lists.bufferbloat.net
> > >>>> https://lists.bufferbloat.net/listinfo/bloat
> > >>>
> >
* Re: [Bloat] Updated Bufferbloat Test
2021-02-26 18:41 ` Sina Khanifar
@ 2021-02-26 19:58 ` Sebastian Moeller
0 siblings, 0 replies; 41+ messages in thread
From: Sebastian Moeller @ 2021-02-26 19:58 UTC (permalink / raw)
To: Sina Khanifar; +Cc: bloat, sam
Hi Sina,
> On Feb 26, 2021, at 19:41, Sina Khanifar <sina@waveform.com> wrote:
>
> Just a quick follow-up question @Sebastian @Simon:
>
> We're thinking of implementing a "download" button that would download
> the same data currently shown in the JSON view but instead as a CSV.
> Would that work?
Sure, I guess. Why CSV though? The data is not really that table-like, with multiple different record types / key-value pairs. I think that JSON looks like a good match, no?
Best Regards
Sebastian
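[Sebastian's point can be made concrete: CSV wants a single flat row, so a nested debug record (the shape below is hypothetical, not the real payload) has to be flattened into dotted column names first, whereas JSON carries the nesting natively. A small sketch:]

```python
# Sketch of the CSV-vs-JSON trade-off discussed above: a nested debug
# record (hypothetical shape) must be flattened into dotted column
# names before it fits CSV's single flat row.

import csv
import io

debug_record = {  # hypothetical shape, not the real payload
    "testId": "6fc7dd95",
    "userAgent": "Mozilla/5.0",
    "latency": {"unloaded": 12.3, "downloadActive": 48.7},
}

def flatten(record, prefix=""):
    """Flatten nested dicts into one level with dotted keys."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

row = flatten(debug_record)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(row))
writer.writeheader()
writer.writerow(row)
print(buf.getvalue().strip())
```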
>
> Best,
>
> Sina.
>
> On Fri, Feb 26, 2021 at 12:23 AM Sina Khanifar <sina@waveform.com> wrote:
>>
>>> would it be an option to embed the details link into the results page?
>>
>>> Having more detail available but not shown by default on the main page might keep the geeks happy and make diagnosis easier.
>>
>> Will give this a bit of thought and see if we can make it happen!
>>
>> On Thu, Feb 25, 2021 at 1:15 PM Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Sina,
>>>
>>> most excellent! While I concur with Simon that "keeping it simple" is the right approach, would it be an option to embed the details link into the results page?
>>>
>>> Best Regards
>>> Sebastian
>>>
>>>
>>>
>>>> On Feb 25, 2021, at 21:50, Sina Khanifar <sina@waveform.com> wrote:
>>>>
>>>>> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>>>>
>>>> One quick edit, I just changed the route to these, the debug data is
>>>> now available at:
>>>>
>>>> https://bufferbloat.waveform.com/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>>>>
>>>>
>>>> On Thu, Feb 25, 2021 at 12:41 PM Sina Khanifar <sina@waveform.com> wrote:
>>>>>
>>>>> Hi Sebastian!
>>>>>
>>>>>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
>>>>>
>>>>> We actually collect all this data, it's just a little bit hidden. If
>>>>> you take the test-id from the end of the URL and put it at the end of
>>>>> a URL like this:
>>>>>
>>>>> https://bufferbloat.waveform.workers.dev/test-results?test-id=6fc7dd95-8bfa-4b76-b141-ed423b6580a9
>>>>>
>>>>> You'll get a whole bunch of extra info, including useragent, a Unix
>>>>> timestamp, and a bunch of other fun stuff :). We'll consider surfacing
>>>>> this more at some point in the future though!
>>>>>
>>>>>> Small typo "waus" instead of "ways".
>>>>>
>>>>> Thanks for catching this! A fix is in the works :).
>>>>>
>>>>> On Thu, Feb 25, 2021 at 2:49 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>
>>>>>> Hi Sina,
>>>>>>
>>>>>> great work! I took the liberty to advertise this test already for some weeks, because even in its still-evolving state it was/is already producing interesting, actionable results. Thanks for fixing the latency numbers for (desktop) Safari. More below.
>>>>>>
>>>>>>
>>>>>>> On Feb 24, 2021, at 19:22, Sina Khanifar <sina@waveform.com> wrote:
>>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> A couple of months ago my co-founder Sam posted an early beta of the
>>>>>>> Bufferbloat test that we’ve been working on, and Dave also linked to
>>>>>>> it a couple of weeks ago.
>>>>>>>
>>>>>>> Thank you all so much for your feedback - we almost entirely
>>>>>>> redesigned the tool and the UI based on the comments we received.
>>>>>>> We’re almost ready to launch the tool officially today at this URL,
>>>>>>> but wanted to show it to the list in case anyone finds any last bugs
>>>>>>> that we might have overlooked:
>>>>>>>
>>>>>>> https://www.waveform.com/tools/bufferbloat
>>>>>>>
>>>>>>> If you find a bug, please share the "Share Your Results" link with us
>>>>>>> along with what happened. We capture some debugging information on the
>>>>>>> backend, and having a share link allows us to diagnose any issues.
>>>>>>
>>>>>> [SM] not a bug, more of a feature request, could you add information on whether the test ran over IPv6 or IPv4, and which browser/user agent was involved (nothing too deep, just desktop/mobile and firefox/chrome/safari/brave/...) as well as the date and time of the test? All of these can help to interpret the test results.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> This is really more of a passion project than anything else for us –
>>>>>>> we don’t anticipate we’ll try to commercialize it or anything like
>>>>>>> that. We're very thankful for all the work the folks on this list have
>>>>>>> done to identify and fix bufferbloat, and hope this is a useful
>>>>>>> contribution. I’ve personally been very frustrated by bufferbloat on a
>>>>>>> range of devices, and decided it might be helpful to build another
>>>>>>> bufferbloat test when the DSLReports test was down at some point last
>>>>>>> year.
>>>>>>>
>>>>>>> Our goals with this project were:
>>>>>>> * To build a second solid bufferbloat test in case DSLReports goes down again.
>>>>>>> * Build a test where bufferbloat is front and center as the primary
>>>>>>> purpose of the test, rather than just a feature.
>>>>>>> * Try to explain bufferbloat and its effect on a user's connection
>>>>>>> as clearly as possible for a lay audience.
>>>>>>>
>>>>>>> A few notes:
>>>>>>> * On the backend, we’re using Cloudflare’s CDN to perform the actual
>>>>>>> download and upload speed test. I know John Graham-Cunning has posted
>>>>>>> to this list in the past; if he or anyone from Cloudflare sees this,
>>>>>>> we’d love some help. Our Cloudflare Workers are being
>>>>>>> bandwidth-throttled due to having a non-enterprise grade account.
>>>>>>> We’ve worked around this in a kludgy way, but we’d love to get it
>>>>>>> resolved.
>>>>>>
>>>>>> [SM] I think this was a decent decision, as your test seems to have fewer issues filling 1 Gbps links than most others.
>>>>>>
>>>>>>
>>>>>>> * We have lots of ideas for improvements, e.g. simultaneous
>>>>>>> upload/downloads, trying different file size chunks, time-series
>>>>>>> latency graphs, using WebRTC to test UDP traffic etc, but in the
>>>>>>> interest of getting things launched we're sticking with the current
>>>>>>> featureset.
>>>>>>
>>>>>> [SM] Reasonable trade-off, and hopefully potential for pleasant surprises in the future ;)
>>>>>>
>>>>>>> * There are a lot of browser-specific workarounds that we had to
>>>>>>> implement, and latency itself is measured in different ways on
>>>>>>> Safari/Webkit vs Chromium/Firefox due to limitations of the
>>>>>>> PerformanceTiming APIs. You may notice that latency is different on
>>>>>>> different browsers, however the actual bufferbloat (relative increase
>>>>>>> in latency) should be pretty consistent.
>>>>>>>
>>>>>>> In terms of some of the changes we made based on the feedback we
>>>>>>> receive on this list:
>>>>>>>
>>>>>>> Based on Toke’s feedback:
>>>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
>>>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
>>>>>>> * We changed the way the speed tests run to show an instantaneous
>>>>>>> speed as the test is being run.
>>>>>>
>>>>>> [SM] Great, if only so it feels comparable to "other" speedtests.
>>>>>>
>>>>>>
>>>>>>> * We moved the bufferbloat grade into the main results box.
>>>>>>
>>>>>> [SM] +1; that helps set the mood ;)
>>>>>>
>>>>>>> * We tried really hard to get as close to saturating gigabit
>>>>>>> connections as possible. We completely redesigned the way we chunk
>>>>>>> files, added a “warming up” period, and spent quite a bit of time
>>>>>>> optimizing our code to minimize CPU usage, as we found that was often
>>>>>>> the limiting factor in our speed test results.
>>>>>>> * We changed the shield grades altogether and went through a few
>>>>>>> different iterations of how to show the effect of bufferbloat on
>>>>>>> connectivity, and ended up with a “table view” to try to show the
>>>>>>> effect that bufferbloat specifically is having on the connection
>>>>>>> (compared to when the connection is unloaded).
>>>>>>> * We now link from the results table view to the FAQ where the
>>>>>>> conditions for each type of connection are explained.
>>>>>>> * We also changed the way we measure latency and now use the faster
>>>>>>> of either Google’s CDN or Cloudflare at any given location. We’re also
>>>>>>> using the WebTiming APIs to get a more accurate latency number, though
>>>>>>> this does not work on some mobile browsers (e.g. iOS Safari) and as a
>>>>>>> result we show a higher latency on mobile devices. Since our test is
>>>>>>> less a test of absolute latency and more a test of relative latency
>>>>>>> with and without load, we felt this was workable.
>>>>>>> * Our jitter is now an average (was previously RMS).
>>>>>>> * The “before you start” text was rewritten and moved above the start button.
>>>>>>> * We now spell out upload and download instead of having arrows.
>>>>>>> * We hugely reduced the number of cross-site scripts. I was a bit
>>>>>>> embarrassed by this if I’m honest - I spent a long time building web
>>>>>>> tools for the EFF, where we almost never allowed any cross-site
>>>>>>> scripts. * Our site is hosted on Shopify, and adding any features via
>>>>>>> their app store ends up adding a whole lot of gunk. But we uninstalled
>>>>>>> some apps, rewrote our template, and ended up removing a whole lot of
>>>>>>> the gunk. There’s still plenty of room for improvement, but it should
>>>>>>> be a lot better than before.
>>>>>>>
>>>>>>> Based on Dave Collier-Brown’s feedback:
>>>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015966.html
>>>>>>> * We replaced the “unloaded” and “loaded” language with “unloaded”
>>>>>>> and then “download active” and “upload active.” In the grade box we
>>>>>>> indicate that, for example, “Your latency increased moderately under
>>>>>>> load.”
>>>>>>> * We tried to generally make it easier for non-techie folks to
>>>>>>> understand by emphasizing the grade and adding the table showing how
>>>>>>> bufferbloat affects some commonly-used services.
>>>>>>> * We didn’t really change the candle charts too much - they’re
>>>>>>> mostly just to give a basic visual - we focused more on the actual
>>>>>>> meat of the results above that.
>>>>>>>
>>>>>>> Based on Sebastian Moeller’s feedback:
>>>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015963.html
>>>>>>> * We considered doing a bidirectional saturating load, but decided
>>>>>>> to skip implementing it for now.
>>>>>>> * It’s definitely something we’d
>>>>>>> like to experiment with more in the future.
>>>>>>> * We added a “warming up” period as well as a “draining” period to
>>>>>>> help fill and empty the buffer. We haven’t added the option for an
>>>>>>> extended test, but have this on our list of backlog changes to make in
>>>>>>> the future.
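[The warm-up/drain idea above amounts to discarding samples at both ends of the test window before averaging throughput, so TCP slow-start and tear-down don't skew the result. A minimal sketch; the 10%/10% trim fractions and sample values are assumptions, not the tool's actual parameters:]

```python
# Sketch: exclude warm-up and drain periods when averaging throughput
# samples. The trim fractions are illustrative, not the tool's values.
def steady_state_mean(samples, warmup_frac=0.1, drain_frac=0.1):
    """Average throughput over the middle of the test window."""
    n = len(samples)
    start = int(n * warmup_frac)
    end = n - int(n * drain_frac)
    window = samples[start:end]
    return sum(window) / len(window)

mbps = [10, 40, 80, 95, 100, 99, 101, 100, 98, 60]  # ramp-up and tail
print(steady_state_mean(mbps))  # ignores the first and last sample here
```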
>>>>>>>
>>>>>>> Based on Y’s feedback:
>>>>>>> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015962.html
>>>>>>> * We actually ended up removing the grades, but we explained our
>>>>>>> criteria for the new table in the FAQ.
>>>>>>>
>>>>>>> Based on Greg White's feedback (shared privately):
>>>>>>> * We added an FAQ answer explaining jitter and how we measure it.
>>>>>>
>>>>>> [SM] "There are a number of different waus of measuring and defining jitter. For the purpose of this test, we calculate jitter by taking the average of the deviations from the mean latency."
>>>>>>
>>>>>> Small typo "waus" instead of "ways".
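[The quoted definition, and the RMS variant it replaced, can be sketched as follows; the sample latencies are made up:]

```python
# Sketch of the two jitter definitions discussed in the thread:
# the current one (mean absolute deviation from the mean latency)
# and the previous one (RMS deviation). Sample values are illustrative.
import math

def jitter_mad(latencies_ms):
    """Average of the absolute deviations from the mean latency."""
    mean = sum(latencies_ms) / len(latencies_ms)
    return sum(abs(x - mean) for x in latencies_ms) / len(latencies_ms)

def jitter_rms(latencies_ms):
    """Root-mean-square deviation from the mean latency."""
    mean = sum(latencies_ms) / len(latencies_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in latencies_ms) / len(latencies_ms))

samples = [20.0, 22.0, 19.0, 60.0, 21.0]  # one outlier
print(jitter_mad(samples))  # the outlier contributes linearly
print(jitter_rms(samples))  # the outlier is weighted more heavily
```

Note how the RMS figure exceeds the mean-deviation figure whenever there are outliers, which is part of why the choice of definition matters for spiky links.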
>>>>>>
>>>>>> Best Regards
>>>>>> Sebastian
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> We’d love for you all to play with the new version of the tool and
>>>>>>> send over any feedback you might have. We’re going to be in a feature
>>>>>>> freeze before launch but we'd love to get any bugs sorted out. We'll
>>>>>>> likely put this project aside after we iron out a last round of bugs
>>>>>>> and launch, and turn back to working on projects that help us pay the
>>>>>>> bills, but we definitely hope to revisit and improve the tool over
>>>>>>> time.
>>>>>>>
>>>>>>> Best,
>>>>>>>
>>>>>>> Sina, Arshan, and Sam.
>>>>>>> _______________________________________________
>>>>>>> Bloat mailing list
>>>>>>> Bloat@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>>>
>>>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-24 18:22 [Bloat] Updated Bufferbloat Test Sina Khanifar
` (6 preceding siblings ...)
2021-02-25 10:49 ` Sebastian Moeller
@ 2021-02-25 12:27 ` Toke Høiland-Jørgensen
2021-02-25 14:48 ` Toke Høiland-Jørgensen
7 siblings, 1 reply; 41+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-02-25 12:27 UTC (permalink / raw)
To: Sina Khanifar, bloat; +Cc: sam
Sina Khanifar <sina@waveform.com> writes:
> Based on Toke’s feedback:
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015960.html
> https://lists.bufferbloat.net/pipermail/bloat/2020-November/015976.html
Thank you for the update, and especially this very detailed changelog!
I'm impressed! A few points on the specific items below:
> * We changed the way the speed tests run to show an instantaneous
> speed as the test is being run.
Much better, I can actually see what's going on now :)
Maybe an 'abort' button somewhere would be useful? Once you've clicked
start the only way to abort is currently to close the browser tab...
> * We moved the bufferbloat grade into the main results box.
Also very good!
> * We tried really hard to get as close to saturating gigabit
> connections as possible. We completely redesigned the way we chunk
> files, added a “warming up” period, and spent quite a bit of time
> optimizing our code to minimize CPU usage, as we found that was often
> the limiting factor in our speed test results.
Yup, this seems to work better now! I can basically saturate my
connection now; Chromium seems to be a bit better than Firefox in this
respect, but I ended up getting very close on both:
Chromium:
https://www.waveform.com/tools/bufferbloat?test-id=b14731d3-46d7-49ba-8cc7-3641b495e6c7
Firefox:
https://www.waveform.com/tools/bufferbloat?test-id=877f496a-457a-4cc2-8f4c-91e23065c59e
(this is with a ~100Mbps base load on a Gbps connection, so at least the
Chromium result is pretty much link speed).
Interestingly, while my link is not bloated (the above results are
without running any shaping, just FQ-CoDel on the physical link), it did
manage to knock out the BFD exchange with my upstream BGP peers, causing
routes to flap. So it's definitely saturating something! :D
> * We changed the shield grades altogether and went through a few
> different iterations of how to show the effect of bufferbloat on
> connectivity, and ended up with a “table view” to try to show the
> effect that bufferbloat specifically is having on the connection
> (compared to when the connection is unloaded).
I like this, with one caveat: When you have a good score, you end up
with a table that has all checkmarks in both the "normally" and "with
bufferbloat" columns, which is a bit confusing (makes one think "huh, I
can do low-latency gaming with bufferbloat?"). So I think changing the
column headings would be good; if I'm interpreting what you're trying to
convey, the second column should really say "your connection", right?
And maybe "normally" should be "Ideally"?
> * We now link from the results table view to the FAQ where the
> conditions for each type of connection are explained.
This works well. I also like the FAQ in general (the water/oil in the
sink analogy is great!). What did you base the router recommendations
on? I haven't heard about that Asus gaming router before, does that ship
SQM? Also, the first time you mention the open source distributions,
OpenWrt is not a link (but it is the second time around).
> * We also changed the way we measure latency and now use the faster
> of either Google’s CDN or Cloudflare at any given location.
Are you sure this is working? Mine seems to pick the Google fonts CDN.
The Javascript console outputs 'latency_source_selector.js:26 times
(2) [12.81000001472421, 12.80999998562038]', but in the network tab I
see two OPTIONS requests to fonts.gstatic.com, so I suspect those two
requests both go there? My ICMP ping time to Google is ~11ms, and it's
1.8ms to speed.cloudflare.com, so it seems a bit odd that it would pick
the other one... But maybe it's replying faster to HTTP?
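[The "use the faster CDN" step being debugged here presumably boils down to probing each candidate a few times and keeping the one with the lowest time. A sketch of that selection logic; the probe is injected so it can run offline, and the function names and fixed timings are assumptions, not the tool's actual code:]

```python
# Sketch of "use the faster of the two CDNs": probe each candidate,
# keep the one with the lowest median probe time. The real tool
# presumably times OPTIONS/HEAD requests; here the probe is injected
# so the selection logic is testable offline.
from statistics import median

def pick_latency_source(candidates, probe, samples=3):
    """Return the candidate URL whose median probe time is lowest."""
    timings = {url: median(probe(url) for _ in range(samples)) for url in candidates}
    return min(timings, key=timings.get)

# Illustrative offline probe: fixed times instead of real HTTP requests.
fake_times = {"https://fonts.gstatic.com": 12.8, "https://speed.cloudflare.com": 1.8}
best = pick_latency_source(fake_times, probe=lambda url: fake_times[url])
print(best)  # the Cloudflare endpoint, given these made-up timings
```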
> We’re also using the WebTiming APIs to get a more accurate latency
> number, though this does not work on some mobile browsers (e.g. iOS
> Safari) and as a result we show a higher latency on mobile devices.
> Since our test is less a test of absolute latency and more a test of
> relative latency with and without load, we felt this was workable.
That seems reasonable.
> * Our jitter is now an average (was previously RMS).
I'll echo what the others have said about jitter.
> * The “before you start” text was rewritten and moved above the start button.
> * We now spell out upload and download instead of having arrows.
> * We hugely reduced the number of cross-site scripts. I was a bit
> embarrassed by this if I’m honest - I spent a long time building web
> tools for the EFF, where we almost never allowed any cross-site
> scripts.
> * Our site is hosted on Shopify, and adding any features via
> their app store ends up adding a whole lot of gunk. But we uninstalled
> some apps, rewrote our template, and ended up removing a whole lot of
> the gunk. There’s still plenty of room for improvement, but it should
> be a lot better than before.
Thank you for this! It even works without allowing all the shopify
scripts, so that's all good :)
-Toke
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [Bloat] Updated Bufferbloat Test
2021-02-25 12:27 ` Toke Høiland-Jørgensen
@ 2021-02-25 14:48 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 41+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-02-25 14:48 UTC (permalink / raw)
To: Sina Khanifar, bloat; +Cc: sam
Toke Høiland-Jørgensen <toke@toke.dk> writes:
>> * We tried really hard to get as close to saturating gigabit
>> connections as possible. We completely redesigned the way we chunk
>> files, added a “warming up” period, and spent quite a bit of time
>> optimizing our code to minimize CPU usage, as we found that was often
>> the limiting factor in our speed test results.
>
> Yup, this seems to work better now! I can basically saturate my
> connection now; Chromium seems to be a bit better than Firefox in this
> respect, but I ended up getting very close on both:
>
> Chromium:
> https://www.waveform.com/tools/bufferbloat?test-id=b14731d3-46d7-49ba-8cc7-3641b495e6c7
> Firefox:
> https://www.waveform.com/tools/bufferbloat?test-id=877f496a-457a-4cc2-8f4c-91e23065c59e
>
> (this is with a ~100Mbps base load on a Gbps connection, so at least the
> Chromium result is pretty much link speed).
Did another test while replacing the queue on my router with a big FIFO.
Still got an A+ score:
https://www.waveform.com/tools/bufferbloat?test-id=9965c8db-367c-45f1-927c-a94eb8da0e08
However, note the max latency in download; quite a few outliers, yet I
still get a jitter score of only 22.6ms. Also, this time there's a
warning triangle on the "low latency gaming" row of the table, but the
score is still A+. Should it really be possible to get the highest score
while one of the rows has a warning in it?
-Toke
^ permalink raw reply [flat|nested] 41+ messages in thread