* [Make-wifi-fast] dslreports is no longer free
@ 2020-05-01 16:44 Dave Taht
2020-05-01 19:48 ` [Make-wifi-fast] [Cake] " Sebastian Moeller
0 siblings, 1 reply; 26+ messages in thread
From: Dave Taht @ 2020-05-01 16:44 UTC (permalink / raw)
To: bloat, cerowrt-devel, Make-Wifi-fast, Cake List
https://www.reddit.com/r/HomeNetworking/comments/gbd6g0/dsl_reports_speed_test_no_longer_free/
They ran out of bandwidth.
Message to users here:
http://www.dslreports.com/speedtest
--
Make Music, Not War
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] dslreports is no longer free
2020-05-01 16:44 [Make-wifi-fast] dslreports is no longer free Dave Taht
@ 2020-05-01 19:48 ` Sebastian Moeller
2020-05-01 20:09 ` [Bloat] " Sergey Fedorov
` (2 more replies)
0 siblings, 3 replies; 26+ messages in thread
From: Sebastian Moeller @ 2020-05-01 19:48 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat, cerowrt-devel, Make-Wifi-fast, Cake List
Hi Dave,
Well, it was a free service and it lasted a long time. I want to raise a toast to Justin and convey my sincere thanks for years of investing in the "good" of the internet.
Now, the question is which test is going to be the rightful successor?
Short of running netperf/irtt/iperf2/iperf3 on a hosted server, I see lots of potential, but none of the tests are really there yet (grievances in no particular order; see the sketch after the list for the kind of summary I would like to see):
OOKLA: speedtest.net.
Pros: ubiquitous, allows selection of single-flow versus multi-flow test, allows server selection
Cons: only IPv4, only static unloaded RTT measurement, no control over measurement duration
BUFFERBLOAT verdict: incomplete, maybe usable as load generator
NETFLIX: fast.com.
Pros: allows selection of upload testing, supposedly decent back-end, duration configurable
allows unloaded, loaded-download and loaded-upload RTT measurements (but reports single numbers for loaded and unloaded RTT that are not the maxima)
Cons: RTT reported as only two numbers, one for the loaded and one for the unloaded RTT; time course of RTTs missing
BUFFERBLOAT verdict: incomplete, but oh, so close...
NPERF: nperf.com
Pros: allows server selection, RTT measurement and report as a time course, also reports average rates and static RTT/jitter for up- and download
Cons: RTT measurement for the unloaded state only, reported RTT is static only, no control over measurement duration
BUFFERBLOAT verdict: incomplete
THINKBROADBAND: www.thinkbroadband.com/speedtest
Pros: IPv6, reports coarse RTT time courses for all three measurement phases
Cons: only static unloaded RTT report in final results, time courses only visible immediately after testing, no control over measurement duration
BUFFERBLOAT verdict: a bit coarse, might work for users within a reasonable distance of the UK for acute de-bloating sessions (history reporting is bad though)
Honorable mention:
BREITBANDMESSUNG: breitbandmessung.de
Pros: query of contracted internet access speed before measurement, with a scheduler that will only start a test when the backend has sufficient capacity to saturate the user-supplied contracted rates, IPv6 (happy-eyeballs)
Cons: only static unloaded RTT measurement, no control over measurement duration
BUFFERBLOAT verdict: unsuitable, except as load generator, but the bandwidth reservation feature is quite nice.
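
(Aside, as promised above: a minimal sketch, in TypeScript, of the kind of summary I would like such a test to report - the RTT time course reduced to median/p95/max rather than one static number. The sample values are made up purely for illustration.)

// Hypothetical summary of a latency time course: given RTT samples (ms)
// collected before and during a saturating transfer, report the figures a
// bufferbloat-aware test would ideally show instead of one static number.
interface LatencySummary { medianMs: number; p95Ms: number; maxMs: number; }

function quantile(sorted: number[], q: number): number {
  return sorted[Math.min(sorted.length - 1, Math.floor(q * sorted.length))];
}

function summarize(samples: number[]): LatencySummary {
  const s = [...samples].sort((a, b) => a - b);
  return { medianMs: quantile(s, 0.5), p95Ms: quantile(s, 0.95), maxMs: s[s.length - 1] };
}

// Made-up samples, purely for illustration:
const idle = summarize([12, 13, 12, 14, 13]);
const loaded = summarize([45, 80, 120, 95, 240, 60, 180]);
console.log(`idle median ${idle.medianMs} ms, loaded p95 ${loaded.p95Ms} ms,` +
  ` latency increase under load (max) ${loaded.maxMs - idle.medianMs} ms`);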
Best Regards
Sebastian
> On May 1, 2020, at 18:44, Dave Taht <dave.taht@gmail.com> wrote:
>
> https://www.reddit.com/r/HomeNetworking/comments/gbd6g0/dsl_reports_speed_test_no_longer_free/
>
> They ran out of bandwidth.
>
> Message to users here:
>
> http://www.dslreports.com/speedtest
>
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Bloat] [Cake] dslreports is no longer free
2020-05-01 19:48 ` [Make-wifi-fast] [Cake] " Sebastian Moeller
@ 2020-05-01 20:09 ` Sergey Fedorov
2020-05-01 21:11 ` [Make-wifi-fast] " Sebastian Moeller
[not found] ` <mailman.170.1588363787.24343.bloat@lists.bufferbloat.net>
2020-05-27 9:08 ` [Make-wifi-fast] [Bloat] [Cake] " Matthew Ford
2 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2020-05-01 20:09 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Dave Täht, Cake List, Make-Wifi-fast, cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 4738 bytes --]
Great review, Sebastian!
> NETFLIX: fast.com.
> Pros: allows selection of upload testing, supposedly decent
> back-end, duration configurable
> allows unloaded, loaded download and loaded upload RTT
> measurements (but reports sinlge numbers for loaded and unloaded RTT, that
> are not the max)
> Cons: RTT report as two numbers one for the loaded and one for
> unloaded RTT, time-course of RTTs missing
> BUFFERBLOAT verdict: incomplete, but oh, so close...
Just a note that I have a plan to separate the loaded latency into
upload/download. It's not great UX the way it's implemented now.
The timeline view is a bit more nuanced, in the spirit of the simple
UX, but I've been thinking about a good way to show that for super users as
well.
Two latency numbers - that's more user-friendly; we want the general user
to understand the meaning. And "latency under load" is much easier than
"bufferbloat".
As a side note, if our backend is merely decent, I'm curious which of the
existing speed tests have backends that are great :)
SERGEY FEDOROV
Director of Engineering
sfedorov@netflix.com
121 Albright Way | Los Gatos, CA 95032
On Fri, May 1, 2020 at 12:48 PM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Dave,
>
> well, it was a free service and it lasted a long time. I want to raise a
> toast to Justin and convey my sincere thanks for years of investing into
> the "good" of the internet.
>
> Now, the question is which test is going to be the rightful successor?
>
> Short of running netperf/irtt/iper2/iperf3 on a hosted server, I see lots
> of potential but none of the tests are really there yet (grievances in now
> particular order):
>
> OOKLA: speedtest.net.
> Pros: ubiquitious, allows selection of single flow versus
> multi-flow test, allows server selection
> Cons: only IPv4, only static unloaded RTT measurement, no control
> over measurement duration
> BUFFERBLOAT verdict: incomplete, maybe usable as load generator
>
>
> NETFLIX: fast.com.
> Pros: allows selection of upload testing, supposedly decent
> back-end, duration configurable
> allows unloaded, loaded download and loaded upload RTT
> measurements (but reports sinlge numbers for loaded and unloaded RTT, that
> are not the max)
> Cons: RTT report as two numbers one for the loaded and one for
> unloaded RTT, time-course of RTTs missing
> BUFFERBLOAT verdict: incomplete, but oh, so close...
>
>
> NPERF: nperf.com
> Pros: allows server selection, RTT measurement and report as time
> course, also reports average rates and static RTT/jitter for Up- and
> Download
> Cons: RTT measurement for unloaded only, reported RTT static only
> , no control over measurement duration
> BUFFERBLOAT verdict: incomplete,
>
>
> THINKBROADBAND: www.thinkbroadband.com/speedtest
> Pros: IPv6, reports coarse RTT time courses for all three
> measurement phases
> Cons: only static unloaded RTT report in final results, time
> courses only visible immediately after testing, no control over measurement
> duration
> BUFFERBLOAT verdict: a bit coarse, might work for users within a
> reasonable distance to the UK for acute de-bloating sessions (history
> reporting is bad though)
>
>
> honorable mentioning:
> BREITBANDMESSUNG: breitbandmessung.de
> Pros: query of contracted internet access speed before
> measurement, with a scheduler that will only start a test when the backend
> has sufficient capacity to saturate the user-supplied contracted rates,
> IPv6 (happy-eyeballs)
> Cons: only static unloaded RTT measurement, no control over
> measurement duration
> BUFFERBLOAT verdict: unsuitable, exceot as load generator, but the
> bandwidth reservation feature is quite nice.
>
> Best Regards
> Sebastian
>
>
> > On May 1, 2020, at 18:44, Dave Taht <dave.taht@gmail.com> wrote:
> >
> >
> https://www.reddit.com/r/HomeNetworking/comments/gbd6g0/dsl_reports_speed_test_no_longer_free/
> >
> > They ran out of bandwidth.
> >
> > Message to users here:
> >
> > http://www.dslreports.com/speedtest
> >
> >
> > --
> > Make Music, Not War
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-435-0729
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 7843 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] dslreports is no longer free
2020-05-01 20:09 ` [Bloat] " Sergey Fedorov
@ 2020-05-01 21:11 ` Sebastian Moeller
2020-05-01 21:37 ` Sergey Fedorov
[not found] ` <mailman.191.1588369068.24343.bloat@lists.bufferbloat.net>
0 siblings, 2 replies; 26+ messages in thread
From: Sebastian Moeller @ 2020-05-01 21:11 UTC (permalink / raw)
To: Sergey Fedorov
Cc: Dave Täht, Cake List, Make-Wifi-fast, cerowrt-devel, bloat
Hi Sergey,
> On May 1, 2020, at 22:09, Sergey Fedorov <sfedorov@netflix.com> wrote:
>
> Great review, Sebastian!
>
> NETFLIX: fast.com.
> Pros: allows selection of upload testing, supposedly decent back-end, duration configurable
> allows unloaded, loaded download and loaded upload RTT measurements (but reports sinlge numbers for loaded and unloaded RTT, that are not the max)
> Cons: RTT report as two numbers one for the loaded and one for unloaded RTT, time-course of RTTs missing
> BUFFERBLOAT verdict: incomplete, but oh, so close...
> Just a note that I have a plan to separate the loaded latency into upload/download. It's not great UX now they way it's implemented.
Great! I really appreciate the way fast.com evolves carefully, so as not to confuse the intended users and to stay true to its core mission, while still gaining additional features that are not directly part of Netflix's business case for operating that test in the first place. Don't get me wrong, I can easily understand why you would be interested in getting reliable, robust speed tests from all existing or potential customers to your back-end; and unlike an ISP's internal speedtest, you are not likely to sugarcoat things ;) as your goal and the end-user's goal are fully aligned.
> The timeline view is a bit more nuanced, in the spirit of the simplistic UX, but I've been thinking on a good way to show that for super users as well.
Great again! I see the beauty of keeping things simple while maybe hiding optional information behind an additional "click".
> Two latency numbers - that's more user friendly, we want the general user to understand the meaning.
+1; for normal users that is already bliss. For de-bloating a link, however, a bit more time resolution generally makes things easier to reason about ;)
> And latency under load is much easier than bufferbloat.
+1; as far as I can tell that term was a decent description of the observed phenomenon that then took on a life of its own; in retrospect it was not the most self-explanatory term. I like to talk about the latency-under-load increase when helping people de-bloat their links, but that also is a tad on the long side.
>
> As a side note, if our backend is decent, I'm curious what are the backends for the speed tests that exist that are great :)
Ah, I might have tried too hard at understatement; this was the only back-end worth mentioning in the "pros" section...
(Well, I also like how breitbandmessung.de deals with its purposefully limited backend (all located in a single data center in Germany, in an AS that is not directly owned by any ISP); it's the German regulator's official speed test for Germany, against which we can effectively measure and get an early exit from contracts if the ISP cannot deliver the contracted rates (with a bit of slack).)
Best Regards
Sebastian
>
> SERGEY FEDOROV
> Director of Engineering
> sfedorov@netflix.com
> 121 Albright Way | Los Gatos, CA 95032
>
>
>
> On Fri, May 1, 2020 at 12:48 PM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Dave,
>
> well, it was a free service and it lasted a long time. I want to raise a toast to Justin and convey my sincere thanks for years of investing into the "good" of the internet.
>
> Now, the question is which test is going to be the rightful successor?
>
> Short of running netperf/irtt/iper2/iperf3 on a hosted server, I see lots of potential but none of the tests are really there yet (grievances in now particular order):
>
> OOKLA: speedtest.net.
> Pros: ubiquitious, allows selection of single flow versus multi-flow test, allows server selection
> Cons: only IPv4, only static unloaded RTT measurement, no control over measurement duration
> BUFFERBLOAT verdict: incomplete, maybe usable as load generator
>
>
> NETFLIX: fast.com.
> Pros: allows selection of upload testing, supposedly decent back-end, duration configurable
> allows unloaded, loaded download and loaded upload RTT measurements (but reports sinlge numbers for loaded and unloaded RTT, that are not the max)
> Cons: RTT report as two numbers one for the loaded and one for unloaded RTT, time-course of RTTs missing
> BUFFERBLOAT verdict: incomplete, but oh, so close...
>
>
> NPERF: nperf.com
> Pros: allows server selection, RTT measurement and report as time course, also reports average rates and static RTT/jitter for Up- and Download
> Cons: RTT measurement for unloaded only, reported RTT static only , no control over measurement duration
> BUFFERBLOAT verdict: incomplete,
>
>
> THINKBROADBAND: www.thinkbroadband.com/speedtest
> Pros: IPv6, reports coarse RTT time courses for all three measurement phases
> Cons: only static unloaded RTT report in final results, time courses only visible immediately after testing, no control over measurement duration
> BUFFERBLOAT verdict: a bit coarse, might work for users within a reasonable distance to the UK for acute de-bloating sessions (history reporting is bad though)
>
>
> honorable mentioning:
> BREITBANDMESSUNG: breitbandmessung.de
> Pros: query of contracted internet access speed before measurement, with a scheduler that will only start a test when the backend has sufficient capacity to saturate the user-supplied contracted rates, IPv6 (happy-eyeballs)
> Cons: only static unloaded RTT measurement, no control over measurement duration
> BUFFERBLOAT verdict: unsuitable, exceot as load generator, but the bandwidth reservation feature is quite nice.
>
> Best Regards
> Sebastian
>
>
> > On May 1, 2020, at 18:44, Dave Taht <dave.taht@gmail.com> wrote:
> >
> > https://www.reddit.com/r/HomeNetworking/comments/gbd6g0/dsl_reports_speed_test_no_longer_free/
> >
> > They ran out of bandwidth.
> >
> > Message to users here:
> >
> > http://www.dslreports.com/speedtest
> >
> >
> > --
> > Make Music, Not War
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-435-0729
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Bloat] [Cake] dslreports is no longer free
2020-05-01 21:11 ` [Make-wifi-fast] " Sebastian Moeller
@ 2020-05-01 21:37 ` Sergey Fedorov
[not found] ` <mailman.191.1588369068.24343.bloat@lists.bufferbloat.net>
1 sibling, 0 replies; 26+ messages in thread
From: Sergey Fedorov @ 2020-05-01 21:37 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Dave Täht, Cake List, Make-Wifi-fast, cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 8302 bytes --]
Thanks for the kind words, Sebastian!
+1; for normal users that is already bliss. For de-bloating a link however
> a bit more time resolution generally makes things a bit easier to reason
> about ;)
Apologies, I misunderstood your original statement. I interpreted it as a
vote to keep a single bufferbloat metric (vs loaded/unloaded latency).
Agreed on time resolution and its value. No question it's useful for
diagnostics. The open question is to what extent browser-based tools should be
used for detailed troubleshooting (given sandboxing limitations), and when it
is time for the big guns (like flent) to enter the scene.
I like to talk about the latency-under-load-increase when helping people
> to debloat their links, but that also is a tad on the long side.
Fully agree on length; I don't like the verbosity either. Still looking
for a term that is shorter and yet generic enough that I can explain it to my
mom.
Ah, I might have tried too hard at understatement, this was the only
> back-end worth mentioning in the "pros" section...
Got it. The breitbandmessung case is indeed interesting.
SERGEY FEDOROV
Director of Engineering
sfedorov@netflix.com
121 Albright Way | Los Gatos, CA 95032
On Fri, May 1, 2020 at 2:11 PM Sebastian Moeller <moeller0@gmx.de> wrote:
> Hi Sergey,
>
>
>
> > On May 1, 2020, at 22:09, Sergey Fedorov <sfedorov@netflix.com> wrote:
> >
> > Great review, Sebastian!
> >
> > NETFLIX: fast.com.
> > Pros: allows selection of upload testing, supposedly decent
> back-end, duration configurable
> > allows unloaded, loaded download and loaded upload RTT
> measurements (but reports sinlge numbers for loaded and unloaded RTT, that
> are not the max)
> > Cons: RTT report as two numbers one for the loaded and one for
> unloaded RTT, time-course of RTTs missing
> > BUFFERBLOAT verdict: incomplete, but oh, so close...
> > Just a note that I have a plan to separate the loaded latency into
> upload/download. It's not great UX now they way it's implemented.
>
> Great! I really appreciate the way fast.com evolves carefully to
> not confuse the intended users and to stay true to its core mission while
> it still gaining additional features that are not directly part of Netflix
> business case to operate that test in the first place. Don't get me wrong,
> I absolutely love that I can easily understand why you should be interested
> in getting reliable robust speedtests from all existing or potential
> customers to your back-end; and unlike an ISP's internal speedtest, you are
> not likely to sugar coat things ;) as your goal and the end-user's goal are
> fully aligned.
>
> > The timeline view is a bit more nuanced, in the spirit of the simplistic
> UX, but I've been thinking on a good way to show that for super users as
> well.
>
> Great again! I see the beauty of keeping things simple while maybe
> hiding optional information behind an additional "click".
>
> > Two latency numbers - that's more user friendly, we want the general
> user to understand the meaning.
>
> +1; for normal users that is already bliss. For de-bloating a link
> however a bit more time resolution generally makes things a bit easier to
> reason about ;)
>
> > And latency under load is much easier than bufferbloat.
>
> +1; as far as I can tell that term sort of was a decent
> description of the observed phenomenon that then got a life of its own; in
> retrospect it was not the most self explanatory term. I like to talk about
> the latency-under-load-increase when helping people to debloat their links,
> but that also is a tad on the long side.
>
> >
> > As a side note, if our backend is decent, I'm curious what are the
> backends for the speed tests that exist that are great :)
>
> Ah, I might have tried too hard at understatement, this was the
> only back-end worth mentioning in the "pros" section...
> (well, I also like how breitbandmessung.de deals with their purposefully
> limited backend (all located in a single" data center in Germany located in
> an AS that is not directly owned by any ISP, it's the german regulators
> official speedtest for germany against which we can effectively measure and
> get an early exit from contracts if the ISPs can not deliver the contracted
> rates (with a bit of slack)))
>
> Best Regards
> Sebastian
>
> >
> > SERGEY FEDOROV
> > Director of Engineering
> > sfedorov@netflix.com
> > 121 Albright Way | Los Gatos, CA 95032
> >
> >
> >
> > On Fri, May 1, 2020 at 12:48 PM Sebastian Moeller <moeller0@gmx.de>
> wrote:
> > Hi Dave,
> >
> > well, it was a free service and it lasted a long time. I want to raise a
> toast to Justin and convey my sincere thanks for years of investing into
> the "good" of the internet.
> >
> > Now, the question is which test is going to be the rightful successor?
> >
> > Short of running netperf/irtt/iper2/iperf3 on a hosted server, I see
> lots of potential but none of the tests are really there yet (grievances in
> now particular order):
> >
> > OOKLA: speedtest.net.
> > Pros: ubiquitious, allows selection of single flow versus
> multi-flow test, allows server selection
> > Cons: only IPv4, only static unloaded RTT measurement, no
> control over measurement duration
> > BUFFERBLOAT verdict: incomplete, maybe usable as load generator
> >
> >
> > NETFLIX: fast.com.
> > Pros: allows selection of upload testing, supposedly decent
> back-end, duration configurable
> > allows unloaded, loaded download and loaded upload RTT
> measurements (but reports sinlge numbers for loaded and unloaded RTT, that
> are not the max)
> > Cons: RTT report as two numbers one for the loaded and one for
> unloaded RTT, time-course of RTTs missing
> > BUFFERBLOAT verdict: incomplete, but oh, so close...
> >
> >
> > NPERF: nperf.com
> > Pros: allows server selection, RTT measurement and report as
> time course, also reports average rates and static RTT/jitter for Up- and
> Download
> > Cons: RTT measurement for unloaded only, reported RTT static
> only , no control over measurement duration
> > BUFFERBLOAT verdict: incomplete,
> >
> >
> > THINKBROADBAND: www.thinkbroadband.com/speedtest
> > Pros: IPv6, reports coarse RTT time courses for all three
> measurement phases
> > Cons: only static unloaded RTT report in final results, time
> courses only visible immediately after testing, no control over measurement
> duration
> > BUFFERBLOAT verdict: a bit coarse, might work for users within a
> reasonable distance to the UK for acute de-bloating sessions (history
> reporting is bad though)
> >
> >
> > honorable mentioning:
> > BREITBANDMESSUNG: breitbandmessung.de
> > Pros: query of contracted internet access speed before
> measurement, with a scheduler that will only start a test when the backend
> has sufficient capacity to saturate the user-supplied contracted rates,
> IPv6 (happy-eyeballs)
> > Cons: only static unloaded RTT measurement, no control over
> measurement duration
> > BUFFERBLOAT verdict: unsuitable, exceot as load generator, but
> the bandwidth reservation feature is quite nice.
> >
> > Best Regards
> > Sebastian
> >
> >
> > > On May 1, 2020, at 18:44, Dave Taht <dave.taht@gmail.com> wrote:
> > >
> > >
> https://www.reddit.com/r/HomeNetworking/comments/gbd6g0/dsl_reports_speed_test_no_longer_free/
> > >
> > > They ran out of bandwidth.
> > >
> > > Message to users here:
> > >
> > > http://www.dslreports.com/speedtest
> > >
> > >
> > > --
> > > Make Music, Not War
> > >
> > > Dave Täht
> > > CTO, TekLibre, LLC
> > > http://www.teklibre.com
> > > Tel: 1-831-435-0729
> > > _______________________________________________
> > > Cake mailing list
> > > Cake@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/cake
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
[-- Attachment #2: Type: text/html, Size: 12423 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] dslreports is no longer free
[not found] ` <mailman.170.1588363787.24343.bloat@lists.bufferbloat.net>
@ 2020-05-01 22:07 ` Michael Richardson
2020-05-01 23:35 ` Sergey Fedorov
2020-05-02 1:14 ` [Make-wifi-fast] " Jannie Hanekom
0 siblings, 2 replies; 26+ messages in thread
From: Michael Richardson @ 2020-05-01 22:07 UTC (permalink / raw)
To: Sergey Fedorov, Sebastian Moeller, Cake List, Make-Wifi-fast,
cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 1440 bytes --]
{Do I need all the lists?}
Sergey Fedorov via Bloat <bloat@lists.bufferbloat.net> wrote:
> Just a note that I have a plan to separate the loaded latency into
> upload/download. It's not great UX now they way it's implemented.
> The timeline view is a bit more nuanced, in the spirit of the simplistic
> UX, but I've been thinking on a good way to show that for super users as
> well.
> Two latency numbers - that's more user friendly, we want the general user
> to understand the meaning. And latency under load is much easier than
> bufferbloat.
> As a side note, if our backend is decent, I'm curious what are the backends
> for the speed tests that exist that are great :)
Does it find/use my nearest Netflix cache?
As others have asked, it would be great if we could put the settings into a URL,
and "latency under upload" is probably the most important number
that people trying to videoconference need to know.
(It's also the thing that they can mostly fix directly and cheaply.)
> SERGEY FEDOROV
> Director of Engineering
> sfedorov@netflix.com
> 121 Albright Way | Los Gatos, CA 95032
Very happy that you are looped in here.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | IoT architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Bloat] [Cake] dslreports is no longer free
2020-05-01 22:07 ` Michael Richardson
@ 2020-05-01 23:35 ` Sergey Fedorov
2020-05-02 1:14 ` [Make-wifi-fast] " Jannie Hanekom
1 sibling, 0 replies; 26+ messages in thread
From: Sergey Fedorov @ 2020-05-01 23:35 UTC (permalink / raw)
To: Michael Richardson
Cc: Sebastian Moeller, Cake List, Make-Wifi-fast, cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 1917 bytes --]
Hi Michael,
This blog post <https://netflixtechblog.com/building-fast-com-4857fe0f8adb>
describes how the test steers traffic to the server(s).
As noted on the other thread, I hope to add the URL-parameter option reasonably
soon.
SERGEY FEDOROV
Director of Engineering
sfedorov@netflix.com
121 Albright Way | Los Gatos, CA 95032
On Fri, May 1, 2020 at 3:07 PM Michael Richardson <mcr@sandelman.ca> wrote:
>
> {Do I need all the lists?}
>
> Sergey Fedorov via Bloat <bloat@lists.bufferbloat.net> wrote:
> > Just a note that I have a plan to separate the loaded latency into
> > upload/download. It's not great UX now they way it's implemented.
> > The timeline view is a bit more nuanced, in the spirit of the
> simplistic
> > UX, but I've been thinking on a good way to show that for super
> users as
> > well.
> > Two latency numbers - that's more user friendly, we want the general
> user
> > to understand the meaning. And latency under load is much easier than
> > bufferbloat.
>
> > As a side note, if our backend is decent, I'm curious what are the
> backends
> > for the speed tests that exist that are great :)
>
> Does it find/use my nearest Netflix cache?
>
> As others asked, it would be great if we could put the settings into a URL,
> and having the "latency under upload" is probably the most important number
> that people trying to videoconference need to know.
>
> (it's also the thing that they can mostly directly/cheaply fix)
>
> > SERGEY FEDOROV
> > Director of Engineering
> > sfedorov@netflix.com
> > 121 Albright Way | Los Gatos, CA 95032
>
> Very happy that you are looped in here.
>
> --
> ] Never tell me the odds! | ipv6 mesh
> networks [
> ] Michael Richardson, Sandelman Software Works | IoT
> architect [
> ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on
> rails [
>
>
[-- Attachment #2: Type: text/html, Size: 4039 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] dslreports is no longer free
[not found] ` <mailman.191.1588369068.24343.bloat@lists.bufferbloat.net>
@ 2020-05-01 23:59 ` Michael Richardson
0 siblings, 0 replies; 26+ messages in thread
From: Michael Richardson @ 2020-05-01 23:59 UTC (permalink / raw)
To: Cake List, Make-Wifi-fast, cerowrt-devel, bloat
[-- Attachment #1: Type: text/plain, Size: 681 bytes --]
Given that QUIC uses UDP and does congestion control essentially within the
browser, it seems that maybe one could build latency-under-load measurement
into the QUIC infrastructure in the browser.
Maybe we don't have to create JS tools like fast.com to get good and
regular measurements of bufferbloat. Maybe it could be a part of
browsers. Maybe web site designers could ask for the current
"latency-under-load" value from the browser DOM.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | IoT architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] dslreports is no longer free
2020-05-01 22:07 ` Michael Richardson
2020-05-01 23:35 ` Sergey Fedorov
@ 2020-05-02 1:14 ` Jannie Hanekom
2020-05-02 16:37 ` [Make-wifi-fast] [Cake] [Bloat] " Benjamin Cronce
1 sibling, 1 reply; 26+ messages in thread
From: Jannie Hanekom @ 2020-05-02 1:14 UTC (permalink / raw)
To: 'Michael Richardson'
Cc: 'Sergey Fedorov', 'Sebastian Moeller',
'Cake List', 'Make-Wifi-fast', 'bloat'
Michael Richardson <mcr@sandelman.ca>:
> Does it find/use my nearest Netflix cache?
Thankfully, it appears so. The DSLReports bloat test was interesting, but
the jitter on the ~240ms base latency from South Africa (and other parts of
the world) was significant enough that the figures returned were often
unreliable and largely unusable - at least in my experience.
Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms and
mentions servers located in local cities. I finally have a test I can share
with local non-technical people!
(Agreed, upload test would be nice, but this is a huge step forward from
what I had access to before.)
Jannie Hanekom
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-02 1:14 ` [Make-wifi-fast] " Jannie Hanekom
@ 2020-05-02 16:37 ` Benjamin Cronce
2020-05-02 16:52 ` Dave Taht
0 siblings, 1 reply; 26+ messages in thread
From: Benjamin Cronce @ 2020-05-02 16:37 UTC (permalink / raw)
To: Jannie Hanekom
Cc: Michael Richardson, Cake List, Sergey Fedorov, bloat, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 1246 bytes --]
> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
For download, I show 6 ms unloaded and 6-7 ms loaded. But for upload the loaded
latency shows as 7-8 ms and I see it blip upwards of 12 ms. But I am no longer using
any traffic shaping; any anti-bufferbloat is from my ISP. A graph of the
bloat would be nice.
On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net> wrote:
> Michael Richardson <mcr@sandelman.ca>:
> > Does it find/use my nearest Netflix cache?
>
> Thankfully, it appears so. The DSLReports bloat test was interesting, but
> the jitter on the ~240ms base latency from South Africa (and other parts of
> the world) was significant enough that the figures returned were often
> unreliable and largely unusable - at least in my experience.
>
> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms and
> mentions servers located in local cities. I finally have a test I can
> share
> with local non-technical people!
>
> (Agreed, upload test would be nice, but this is a huge step forward from
> what I had access to before.)
>
> Jannie Hanekom
>
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
>
[-- Attachment #2: Type: text/html, Size: 1875 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-02 16:37 ` [Make-wifi-fast] [Cake] [Bloat] " Benjamin Cronce
@ 2020-05-02 16:52 ` Dave Taht
2020-05-02 17:38 ` David P. Reed
0 siblings, 1 reply; 26+ messages in thread
From: Dave Taht @ 2020-05-02 16:52 UTC (permalink / raw)
To: Benjamin Cronce
Cc: Jannie Hanekom, Cake List, Sergey Fedorov, Make-Wifi-fast,
Michael Richardson, bloat
On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com> wrote:
>
> > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
I guess one of my questions is that with a switch to BBR, Netflix is
going to do pretty well. If fast.com is using BBR, well... that
excludes much of the current side of the internet.
> For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would be nice.
The tests do need to last a fairly long time.
> On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net> wrote:
>>
>> Michael Richardson <mcr@sandelman.ca>:
>> > Does it find/use my nearest Netflix cache?
>>
>> Thankfully, it appears so. The DSLReports bloat test was interesting, but
>> the jitter on the ~240ms base latency from South Africa (and other parts of
>> the world) was significant enough that the figures returned were often
>> unreliable and largely unusable - at least in my experience.
>>
>> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms and
>> mentions servers located in local cities. I finally have a test I can share
>> with local non-technical people!
>>
>> (Agreed, upload test would be nice, but this is a huge step forward from
>> what I had access to before.)
>>
>> Jannie Hanekom
>>
>> _______________________________________________
>> Cake mailing list
>> Cake@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cake
>
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
--
Make Music, Not War
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-02 16:52 ` Dave Taht
@ 2020-05-02 17:38 ` David P. Reed
2020-05-02 19:00 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
2020-05-02 20:19 ` [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free Sebastian Moeller
0 siblings, 2 replies; 26+ messages in thread
From: David P. Reed @ 2020-05-02 17:38 UTC (permalink / raw)
To: Dave Taht
Cc: Benjamin Cronce, Michael Richardson, Jannie Hanekom, bloat,
Cake List, Sergey Fedorov, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 4600 bytes --]
I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
I once built a test that fixed this issue reasonably well. It carefully created a TCP-based RTT measurement channel (over HTTP) that made the echo traverse the whole end-to-end path, which is the best and only way to accurately define lag under load from the user's perspective. The client end of an unloaded TCP connection can depend on TCP (properly prepared by getting it past slow start) to generate a single-packet response.
This "TCP ping" is thus compatible with getting the end-to-end measurement of a true RTT on the server end.
It's like the tcp-traceroute tool, in that it tricks any middleboxes into thinking this is a real, serious packet, not an optional low-priority packet.
The same issue comes up with non-browser-based techniques for measuring true lag-under-load.
Now as we move HTTP to QUIC, this actually gets easier to do.
One other opportunity I haven't explored, but which is pregnant with potential, is the use of WebRTC, which runs over UDP internally. Since JavaScript has direct access to create WebRTC connections (multiple ones), this makes detailed testing in the browser quite reasonable.
And the time measurements can resolve well below 100 microseconds if the JS is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine-code speed if the code is restricted and in a loop). Then again, there is WebAssembly if you want to write C code that runs fast in the browser. WebAssembly is a low-level language that compiles to machine code in the browser and still has access to all the browser networking facilities.
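
(Editorial aside: a minimal sketch of the WebRTC idea in TypeScript, timing echoes over an unreliable, unordered data channel so the probes behave UDP-like and are not head-of-line blocked by a loaded TCP flow. For brevity both peers live in the same page; a real lag-under-load test would exchange the offer/answer with a cooperative server instead. This illustrates the approach only, not anyone's actual implementation.)

// RTT probes over a WebRTC data channel; the echo peer is local only for brevity.
async function webrtcPingDemo(count = 5): Promise<number[]> {
  const a = new RTCPeerConnection();
  const b = new RTCPeerConnection();
  // Trickle candidates to the other peer; ignore early/late candidate errors in this demo.
  a.onicecandidate = e => { if (e.candidate) b.addIceCandidate(e.candidate).catch(() => {}); };
  b.onicecandidate = e => { if (e.candidate) a.addIceCandidate(e.candidate).catch(() => {}); };

  // Unordered, no retransmits: closest to a raw UDP probe.
  const probe = a.createDataChannel("probe", { ordered: false, maxRetransmits: 0 });
  b.ondatachannel = ev => { ev.channel.onmessage = m => ev.channel.send(m.data); }; // echo back

  await a.setLocalDescription(await a.createOffer());
  await b.setRemoteDescription(a.localDescription!);
  await b.setLocalDescription(await b.createAnswer());
  await a.setRemoteDescription(b.localDescription!);
  await new Promise<void>(resolve => { probe.onopen = () => resolve(); });

  const rtts: number[] = [];
  for (let i = 0; i < count; i++) {
    const t0 = performance.now();
    await new Promise<void>(resolve => {
      probe.onmessage = () => { rtts.push(performance.now() - t0); resolve(); };
      probe.send(String(i));
    });
  }
  a.close();
  b.close();
  return rtts;
}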
On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:
> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com> wrote:
> >
> > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>
> I guess one of my questions is that with a switch to BBR netflix is
> going to do pretty well. If fast.com is using bbr, well... that
> excludes much of the current side of the internet.
>
> > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded
> shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any
> traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would
> be nice.
>
> The tests do need to last a fairly long time.
>
> > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net>
> wrote:
> >>
> >> Michael Richardson <mcr@sandelman.ca>:
> >> > Does it find/use my nearest Netflix cache?
> >>
> >> Thankfully, it appears so. The DSLReports bloat test was interesting,
> but
> >> the jitter on the ~240ms base latency from South Africa (and other parts
> of
> >> the world) was significant enough that the figures returned were often
> >> unreliable and largely unusable - at least in my experience.
> >>
> >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
> and
> >> mentions servers located in local cities. I finally have a test I can
> share
> >> with local non-technical people!
> >>
> >> (Agreed, upload test would be nice, but this is a huge step forward from
> >> what I had access to before.)
> >>
> >> Jannie Hanekom
> >>
> >> _______________________________________________
> >> Cake mailing list
> >> Cake@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/cake
> >
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
>
>
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
>
[-- Attachment #2: Type: text/html, Size: 7316 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Make-wifi-fast] [Bloat] dslreports is no longer free
2020-05-02 17:38 ` David P. Reed
@ 2020-05-02 19:00 ` Sergey Fedorov
2020-05-02 23:23 ` [Make-wifi-fast] [Cake] " David P. Reed
2020-05-03 15:31 ` [Make-wifi-fast] fast.com quality David P. Reed
2020-05-02 20:19 ` [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free Sebastian Moeller
1 sibling, 2 replies; 26+ messages in thread
From: Sergey Fedorov @ 2020-05-02 19:00 UTC (permalink / raw)
To: David P. Reed
Cc: Dave Taht, Benjamin Cronce, Michael Richardson, Jannie Hanekom,
bloat, Cake List, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 6670 bytes --]
Dave, thanks for sharing interesting thoughts and context.
> I am still a bit worried about properly defining "latency under load" for
> a NAT routed situation. If the test is based on ICMP Ping packets *from the
> server*, it will NOT be measuring the full path latency, and if the
> potential congestion is in the uplink path from the access provider's
> residential box to the access provider's router/switch, it will NOT measure
> congestion caused by bufferbloat reliably on either side, since the
> bufferbloat will be outside the ICMP Ping path.
>
> I realize that a browser based speed test has to be basically run from the
> "server" end, because browsers are not that good at time measurement on a
> packet basis. However, there are ways to solve this and avoid the ICMP Ping
> issue, with a cooperative server.
This erroneously assumes that fast.com measures latency from the server
side. It does not. The measurements are done from the client, over HTTP,
with parallel connection(s) to the same or a similar set of servers, by
sending empty requests over a previously established connection (you can
see that in the browser web inspector).
It should be noted that the value is not precisely the "RTT on a
TCP/UDP flow that is loaded with traffic", but "user delay given the
presence of heavy parallel flows". With that, some of the challenges you
mentioned do not apply.
In line with another point I've shared earlier - the goal is to measure and
explain the user experience, not to be a diagnostic tool showing internal
transport metrics.
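
(Editorial aside: the "previously established connection" detail is visible from script as well. A hedged TypeScript sketch using the standard Resource Timing API: if connectStart equals connectEnd and there is no DNS time, the probe reused an existing connection, and requestStart-to-responseStart approximates the per-request delay. The probe URL is a placeholder and should be absolute so it matches the timing entry name; cross-origin servers must send Timing-Allow-Origin for these fields to be populated.)

// Time one small probe and check, via Resource Timing, that it reused an
// already-established connection rather than paying a fresh handshake.
async function timedProbe(url: string): Promise<{ reusedConnection: boolean; delayMs: number } | undefined> {
  await fetch(url, { cache: "no-store" });
  const entries = performance.getEntriesByName(url) as PerformanceResourceTiming[];
  const e = entries[entries.length - 1];
  if (!e || e.requestStart === 0) return undefined; // timing withheld (no Timing-Allow-Origin)
  return {
    // connectStart === connectEnd and no DNS lookup time => connection was reused
    reusedConnection: e.connectStart === e.connectEnd && e.domainLookupStart === e.domainLookupEnd,
    delayMs: e.responseStart - e.requestStart,
  };
}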
SERGEY FEDOROV
Director of Engineering
sfedorov@netflix.com
121 Albright Way | Los Gatos, CA 95032
On Sat, May 2, 2020 at 10:38 AM David P. Reed <dpreed@deepplum.com> wrote:
> I am still a bit worried about properly defining "latency under load" for
> a NAT routed situation. If the test is based on ICMP Ping packets *from the
> server*, it will NOT be measuring the full path latency, and if the
> potential congestion is in the uplink path from the access provider's
> residential box to the access provider's router/switch, it will NOT measure
> congestion caused by bufferbloat reliably on either side, since the
> bufferbloat will be outside the ICMP Ping path.
>
>
>
> I realize that a browser based speed test has to be basically run from the
> "server" end, because browsers are not that good at time measurement on a
> packet basis. However, there are ways to solve this and avoid the ICMP Ping
> issue, with a cooperative server.
>
>
>
> I once built a test that fixed this issue reasonably well. It carefully
> created a TCP based RTT measurement channel (over HTTP) that made the echo
> have to traverse the whole end-to-end path, which is the best and only way
> to accurately define lag under load from the user's perspective. The client
> end of an unloaded TCP connection can depend on TCP (properly prepared by
> getting it past slowstart) to generate a single packet response.
>
>
>
> This "TCP ping" is thus compatible with getting the end-to-end measurement
> on the server end of a true RTT.
>
>
>
> It's like tcp-traceroute tool, in that it tricks anyone in the middle
> boxes into thinking this is a real, serious packet, not an optional low
> priority packet.
>
>
>
> The same issue comes up with non-browser-based techniques for measuring
> true lag-under-load.
>
>
>
> Now as we move HTTP to QUIC, this actually gets easier to do.
>
>
>
> One other opportunity I haven't explored, but which is pregnant with
> potential is the use of WebRTC, which runs over UDP internally. Since
> JavaScript has direct access to create WebRTC connections (multiple ones),
> this makes detailed testing in the browser quite reasonable.
>
>
>
> And the time measurements can resolve well below 100 microseconds, if the
> JS is based on modern JIT compilation (Chrome, Firefox, Edge all compile to
> machine code speed if the code is restricted and in a loop). Then again,
> there is Web Assembly if you want to write C code that runs in the brower
> fast. WebAssembly is a low level language that compiles to machine code in
> the browser execution, and still has access to all the browser networking
> facilities.
>
>
>
> On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:
>
> > On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com>
> wrote:
> > >
> > > > Fast.com reports my unloaded latency as 4ms, my loaded latency as
> ~7ms
> >
> > I guess one of my questions is that with a switch to BBR netflix is
> > going to do pretty well. If fast.com is using bbr, well... that
> > excludes much of the current side of the internet.
> >
> > > For download, I show 6ms unloaded and 6-7 loaded. But for upload the
> loaded
> > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using
> any
> > traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
> bloat would
> > be nice.
> >
> > The tests do need to last a fairly long time.
> >
> > > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net>
> > wrote:
> > >>
> > >> Michael Richardson <mcr@sandelman.ca>:
> > >> > Does it find/use my nearest Netflix cache?
> > >>
> > >> Thankfully, it appears so. The DSLReports bloat test was interesting,
> > but
> > >> the jitter on the ~240ms base latency from South Africa (and other
> parts
> > of
> > >> the world) was significant enough that the figures returned were often
> > >> unreliable and largely unusable - at least in my experience.
> > >>
> > >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
> > and
> > >> mentions servers located in local cities. I finally have a test I can
> > share
> > >> with local non-technical people!
> > >>
> > >> (Agreed, upload test would be nice, but this is a huge step forward
> from
> > >> what I had access to before.)
> > >>
> > >> Jannie Hanekom
> > >>
> > >> _______________________________________________
> > >> Cake mailing list
> > >> Cake@lists.bufferbloat.net
> > >> https://lists.bufferbloat.net/listinfo/cake
> > >
> > > _______________________________________________
> > > Cake mailing list
> > > Cake@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/cake
> >
> >
> >
> > --
> > Make Music, Not War
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-435-0729
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
> >
>
[-- Attachment #2: Type: text/html, Size: 11006 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-02 17:38 ` David P. Reed
2020-05-02 19:00 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
@ 2020-05-02 20:19 ` Sebastian Moeller
1 sibling, 0 replies; 26+ messages in thread
From: Sebastian Moeller @ 2020-05-02 20:19 UTC (permalink / raw)
To: David P. Reed
Cc: Dave Täht, Michael Richardson, Make-Wifi-fast,
Jannie Hanekom, Cake List, Sergey Fedorov, bloat
Hi David,
in principle I agree, a NATed IPv4 ICMP probe will be at best reflected at the NAT router (CPE) (some commercial home gateways do not respond to ICMP echo requests in the name of security theatre). So it is pretty hard to measure the full end to end path in that configuration. I believe that IPv6 should make that easier/simpler in that NAT hopefully will be out of the path (but let's see what ingenuity ISPs will come up with).
Then again, traditionally the relevant bottlenecks often are a) the internet access link itself, where the CPE is in a reasonable position as a reflector on the far side of the bottleneck as seen from an internet server, and b) the home network between CPE and end-host, often with variable-rate wifi; here I agree that reflecting echoes at the CPE hides part of the issue.
> On May 2, 2020, at 19:38, David P. Reed <dpreed@deepplum.com> wrote:
>
> I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
Puzzled, as I believe it is going to be the residential box that will respond here - or will it be the AFTRs for CG-NAT that reflect the ICMP echo requests?
>
> I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
>
> I once built a test that fixed this issue reasonably well. It carefully created a TCP based RTT measurement channel (over HTTP) that made the echo have to traverse the whole end-to-end path, which is the best and only way to accurately define lag under load from the user's perspective. The client end of an unloaded TCP connection can depend on TCP (properly prepared by getting it past slowstart) to generate a single packet response.
>
> This "TCP ping" is thus compatible with getting the end-to-end measurement on the server end of a true RTT.
>
> It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes into thinking this is a real, serious packet, not an optional low priority packet.
>
> The same issue comes up with non-browser-based techniques for measuring true lag-under-load.
>
> Now as we move HTTP to QUIC, this actually gets easier to do.
>
> One other opportunity I haven't explored, but which is pregnant with potential is the use of WebRTC, which runs over UDP internally. Since JavaScript has direct access to create WebRTC connections (multiple ones), this makes detailed testing in the browser quite reasonable.
>
> And the time measurements can resolve well below 100 microseconds, if the JS is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine code speed if the code is restricted and in a loop). Then again, there is Web Assembly if you want to write C code that runs in the brower fast. WebAssembly is a low level language that compiles to machine code in the browser execution, and still has access to all the browser networking facilities.
Mmmh, according to https://github.com/w3c/hr-time/issues/56, due to Spectre side-channel vulnerabilities many browsers seem to have lowered the timer resolution, but even ~1 ms resolution should be fine for typical RTTs.
Best Regards
Sebastian
P.S.: I assume that I simply do not see/understand the full scope of the issue at hand yet.
>
> On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:
>
> > On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com> wrote:
> > >
> > > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
> >
> > I guess one of my questions is that with a switch to BBR netflix is
> > going to do pretty well. If fast.com is using bbr, well... that
> > excludes much of the current side of the internet.
> >
> > > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded
> > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any
> > traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would
> > be nice.
> >
> > The tests do need to last a fairly long time.
> >
> > > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net>
> > wrote:
> > >>
> > >> Michael Richardson <mcr@sandelman.ca>:
> > >> > Does it find/use my nearest Netflix cache?
> > >>
> > >> Thankfully, it appears so. The DSLReports bloat test was interesting,
> > but
> > >> the jitter on the ~240ms base latency from South Africa (and other parts
> > of
> > >> the world) was significant enough that the figures returned were often
> > >> unreliable and largely unusable - at least in my experience.
> > >>
> > >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
> > and
> > >> mentions servers located in local cities. I finally have a test I can
> > share
> > >> with local non-technical people!
> > >>
> > >> (Agreed, upload test would be nice, but this is a huge step forward from
> > >> what I had access to before.)
> > >>
> > >> Jannie Hanekom
> > >>
> > >> _______________________________________________
> > >> Cake mailing list
> > >> Cake@lists.bufferbloat.net
> > >> https://lists.bufferbloat.net/listinfo/cake
> > >
> > > _______________________________________________
> > > Cake mailing list
> > > Cake@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/cake
> >
> >
> >
> > --
> > Make Music, Not War
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-435-0729
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
> >
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-02 19:00 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
@ 2020-05-02 23:23 ` David P. Reed
2020-05-03 15:31 ` [Make-wifi-fast] fast.com quality David P. Reed
1 sibling, 0 replies; 26+ messages in thread
From: David P. Reed @ 2020-05-02 23:23 UTC (permalink / raw)
To: Sergey Fedorov
Cc: Dave Taht, Benjamin Cronce, Michael Richardson, Jannie Hanekom,
bloat, Cake List, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 7376 bytes --]
Sergey - I wasn't assuming anything about fast.com. The document you shared wasn't clear about the methodology's details here. Others, sadly, have actually used ICMP pings in the way I described. I was making a generic comment of concern.
That said, it sounds like what you are doing is really helpful (esp. given that your measure is aimed at end user experiential qualities).
Good luck!
On Saturday, May 2, 2020 3:00pm, "Sergey Fedorov" <sfedorov@netflix.com> said:
Dave, thanks for sharing interesting thoughts and context. I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
This erroneously assumes that [ fast.com ]( http://fast.com ) measures latency from the server side. It does not. The measurements are done from the client, over http, with a parallel connection(s) to the same or similar set of servers, by sending empty requests over a previously established connection (you can see that in the browser web inspector).
It should be noted that the value is not precisely the "RTT on a TCP/UDP flow that is loaded with traffic", but "user delay given the presence of heavy parallel flows". With that, some of the challenges you mentioned do not apply.
In line with another point I've shared earlier - the goal is to measure and explain the user experience, not to be a diagnostic tool showing internal transport metrics.
SERGEY FEDOROV
Director of Engineering
[ sfedorov@netflix.com ]( mailto:sfedorov@netflix.com )
121 Albright Way | Los Gatos, CA 95032
On Sat, May 2, 2020 at 10:38 AM David P. Reed <[ dpreed@deepplum.com ]( mailto:dpreed@deepplum.com )> wrote:
I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
I once built a test that fixed this issue reasonably well. It carefully created a TCP based RTT measurement channel (over HTTP) that made the echo have to traverse the whole end-to-end path, which is the best and only way to accurately define lag under load from the user's perspective. The client end of an unloaded TCP connection can depend on TCP (properly prepared by getting it past slowstart) to generate a single packet response.
This "TCP ping" is thus compatible with getting the end-to-end measurement on the server end of a true RTT.
It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes into thinking this is a real, serious packet, not an optional low priority packet.
The same issue comes up with non-browser-based techniques for measuring true lag-under-load.
Now as we move HTTP to QUIC, this actually gets easier to do.
One other opportunity I haven't explored, but which is pregnant with potential is the use of WebRTC, which runs over UDP internally. Since JavaScript has direct access to create WebRTC connections (multiple ones), this makes detailed testing in the browser quite reasonable.
And the time measurements can resolve well below 100 microseconds, if the JS is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine code speed if the code is restricted and in a loop). Then again, there is Web Assembly if you want to write C code that runs in the brower fast. WebAssembly is a low level language that compiles to machine code in the browser execution, and still has access to all the browser networking facilities.
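(Again only a sketch, not a worked-out test: the two RTCPeerConnections below are wired back-to-back in the same page purely to show the data-channel and timing mechanics; a real measurement would negotiate the second peer on a cooperative server, and could run several such channels alongside bulk flows to observe latency under load.)

async function dataChannelRttSketch(): Promise<void> {
  const a = new RTCPeerConnection();
  const b = new RTCPeerConnection();

  // Hand ICE candidates straight across, since both peers live in this page.
  a.onicecandidate = e => { if (e.candidate) b.addIceCandidate(e.candidate); };
  b.onicecandidate = e => { if (e.candidate) a.addIceCandidate(e.candidate); };

  // Peer b just echoes whatever arrives on the channel.
  b.ondatachannel = e => {
    e.channel.onmessage = msg => e.channel.send(msg.data);
  };

  const probe = a.createDataChannel("probe");
  let sentAt = 0;
  probe.onopen = () => {
    sentAt = performance.now();
    probe.send("ping");
  };
  probe.onmessage = () => {
    const rtt = performance.now() - sentAt;
    console.log(`data channel RTT: ${rtt.toFixed(3)} ms`);
  };

  // The usual offer/answer exchange, done locally instead of via a signaling server.
  const offer = await a.createOffer();
  await a.setLocalDescription(offer);
  await b.setRemoteDescription(offer);
  const answer = await b.createAnswer();
  await b.setLocalDescription(answer);
  await a.setRemoteDescription(answer);
}

dataChannelRttSketch();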
On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:
> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com> wrote:
> >
> > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>
> I guess one of my questions is that with a switch to BBR netflix is
> going to do pretty well. If fast.com is using bbr, well... that
> excludes much of the current side of the internet.
>
> > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded
> shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any
> traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would
> be nice.
>
> The tests do need to last a fairly long time.
>
> > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net>
> wrote:
> >>
> >> Michael Richardson <mcr@sandelman.ca>:
> >> > Does it find/use my nearest Netflix cache?
> >>
> >> Thankfully, it appears so. The DSLReports bloat test was interesting,
> but
> >> the jitter on the ~240ms base latency from South Africa (and other parts
> of
> >> the world) was significant enough that the figures returned were often
> >> unreliable and largely unusable - at least in my experience.
> >>
> >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
> and
> >> mentions servers located in local cities. I finally have a test I can
> share
> >> with local non-technical people!
> >>
> >> (Agreed, upload test would be nice, but this is a huge step forward from
> >> what I had access to before.)
> >>
> >> Jannie Hanekom
> >>
> >> _______________________________________________
> >> Cake mailing list
> >> Cake@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/cake
> >
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
>
>
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
>
[-- Attachment #2: Type: text/html, Size: 13135 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* [Make-wifi-fast] fast.com quality
2020-05-02 19:00 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
2020-05-02 23:23 ` [Make-wifi-fast] [Cake] " David P. Reed
@ 2020-05-03 15:31 ` David P. Reed
2020-05-03 15:37 ` Dave Taht
1 sibling, 1 reply; 26+ messages in thread
From: David P. Reed @ 2020-05-03 15:31 UTC (permalink / raw)
To: Sergey Fedorov
Cc: Dave Taht, Benjamin Cronce, Michael Richardson, Jannie Hanekom,
bloat, Cake List, Make-Wifi-fast
[-- Attachment #1: Type: text/plain, Size: 7571 bytes --]
Sergey -
I am very happy to report that fast.com reports the following from my inexpensive Chromebook, over 802.11ac, my Linux-on-Celeron cake entry router setup, through RCN's "Gigabit service". It's a little surprising, only in how good it is.
460 Mbps down/17 Mbps up, 11 ms. unloaded, 18 ms. loaded.
I'm a little bit curious about the extra 7 ms. due to load. I'm wondering if it is in my WiFi path, or whether Cake is building a queue.
The 11 ms. to South Boston from my Needham home seems a bit high. I used to be about 7 msec. away from that switch. But I'm not complaining.
On Saturday, May 2, 2020 3:00pm, "Sergey Fedorov" <sfedorov@netflix.com> said:
Dave, thanks for sharing interesting thoughts and context. I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
This erroneously assumes that fast.com measures latency from the server side. It does not. The measurements are done from the client, over http, with a parallel connection(s) to the same or similar set of servers, by sending empty requests over a previously established connection (you can see that in the browser web inspector).
It should be noted that the value is not precisely the "RTT on a TCP/UDP flow that is loaded with traffic", but "user delay given the presence of heavy parallel flows". With that, some of the challenges you mentioned do not apply.
In line with another point I've shared earlier - the goal is to measure and explain the user experience, not to be a diagnostic tool showing internal transport metrics.
SERGEY FEDOROV
Director of Engineering
sfedorov@netflix.com
121 Albright Way | Los Gatos, CA 95032
On Sat, May 2, 2020 at 10:38 AM David P. Reed <dpreed@deepplum.com> wrote:
I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
I once built a test that fixed this issue reasonably well. It carefully created a TCP based RTT measurement channel (over HTTP) that made the echo have to traverse the whole end-to-end path, which is the best and only way to accurately define lag under load from the user's perspective. The client end of an unloaded TCP connection can depend on TCP (properly prepared by getting it past slowstart) to generate a single packet response.
This "TCP ping" is thus compatible with getting the end-to-end measurement on the server end of a true RTT.
It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes into thinking this is a real, serious packet, not an optional low priority packet.
The same issue comes up with non-browser-based techniques for measuring true lag-under-load.
Now as we move HTTP to QUIC, this actually gets easier to do.
One other opportunity I haven't explored, but which is pregnant with potential is the use of WebRTC, which runs over UDP internally. Since JavaScript has direct access to create WebRTC connections (multiple ones), this makes detailed testing in the browser quite reasonable.
And the time measurements can resolve well below 100 microseconds, if the JS is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine code speed if the code is restricted and in a loop). Then again, there is Web Assembly if you want to write C code that runs in the brower fast. WebAssembly is a low level language that compiles to machine code in the browser execution, and still has access to all the browser networking facilities.
On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:
> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com> wrote:
> >
> > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>
> I guess one of my questions is that with a switch to BBR netflix is
> going to do pretty well. If fast.com is using bbr, well... that
> excludes much of the current side of the internet.
>
> > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded
> shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any
> traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would
> be nice.
>
> The tests do need to last a fairly long time.
>
> > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net>
> wrote:
> >>
> >> Michael Richardson <mcr@sandelman.ca>:
> >> > Does it find/use my nearest Netflix cache?
> >>
> >> Thankfully, it appears so. The DSLReports bloat test was interesting,
> but
> >> the jitter on the ~240ms base latency from South Africa (and other parts
> of
> >> the world) was significant enough that the figures returned were often
> >> unreliable and largely unusable - at least in my experience.
> >>
> >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
> and
> >> mentions servers located in local cities. I finally have a test I can
> share
> >> with local non-technical people!
> >>
> >> (Agreed, upload test would be nice, but this is a huge step forward from
> >> what I had access to before.)
> >>
> >> Jannie Hanekom
> >>
> >> _______________________________________________
> >> Cake mailing list
> >> Cake@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/cake
> >
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
>
>
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
>
[-- Attachment #2: Type: text/html, Size: 13526 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] fast.com quality
2020-05-03 15:31 ` [Make-wifi-fast] fast.com quality David P. Reed
@ 2020-05-03 15:37 ` Dave Taht
0 siblings, 0 replies; 26+ messages in thread
From: Dave Taht @ 2020-05-03 15:37 UTC (permalink / raw)
To: David P. Reed
Cc: Sergey Fedorov, Benjamin Cronce, Michael Richardson,
Jannie Hanekom, bloat, Cake List, Make-Wifi-fast
turn off cake, do it over wired. :) Take a packet cap of before and after. Thx.
On Sun, May 3, 2020 at 8:31 AM David P. Reed <dpreed@deepplum.com> wrote:
>
> Sergey -
>
>
>
> I am very happy to report that fast.com reports the following from my inexpensive Chromebook, over 802.11ac, my Linux-on-Celeron cake entry router setup, through RCN's "Gigabit service". It's a little surprising, only in how good it is.
>
>
>
> 460 Mbps down/17 Mbps up, 11 ms. unloaded, 18 ms. loaded.
>
>
>
> I'm a little bit curious about the extra 7 ms. due to load. I'm wondering if it is in my WiFi path, or whether Cake is building a queue.
>
>
>
> The 11 ms. to South Boston from my Needham home seems a bit high. I used to be about 7 msec. away from that switch. But I'm not complaiing.
>
> On Saturday, May 2, 2020 3:00pm, "Sergey Fedorov" <sfedorov@netflix.com> said:
>
> Dave, thanks for sharing interesting thoughts and context.
>>
>> I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
>>
>> I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
>
> This erroneously assumes that fast.com measures latency from the server side. It does not. The measurements are done from the client, over http, with a parallel connection(s) to the same or similar set of servers, by sending empty requests over a previously established connection (you can see that in the browser web inspector).
> It should be noted that the value is not precisely the "RTT on a TCP/UDP flow that is loaded with traffic", but "user delay given the presence of heavy parallel flows". With that, some of the challenges you mentioned do not apply.
> In line with another point I've shared earlier - the goal is to measure and explain the user experience, not to be a diagnostic tool showing internal transport metrics.
>
> SERGEY FEDOROV
>
> Director of Engineering
>
> sfedorov@netflix.com
>
> 121 Albright Way | Los Gatos, CA 95032
>
>
> On Sat, May 2, 2020 at 10:38 AM David P. Reed <dpreed@deepplum.com> wrote:
>>
>> I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
>>
>>
>>
>> I realize that a browser based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
>>
>>
>>
>> I once built a test that fixed this issue reasonably well. It carefully created a TCP based RTT measurement channel (over HTTP) that made the echo have to traverse the whole end-to-end path, which is the best and only way to accurately define lag under load from the user's perspective. The client end of an unloaded TCP connection can depend on TCP (properly prepared by getting it past slowstart) to generate a single packet response.
>>
>>
>>
>> This "TCP ping" is thus compatible with getting the end-to-end measurement on the server end of a true RTT.
>>
>>
>>
>> It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes into thinking this is a real, serious packet, not an optional low priority packet.
>>
>>
>>
>> The same issue comes up with non-browser-based techniques for measuring true lag-under-load.
>>
>>
>>
>> Now as we move HTTP to QUIC, this actually gets easier to do.
>>
>>
>>
>> One other opportunity I haven't explored, but which is pregnant with potential is the use of WebRTC, which runs over UDP internally. Since JavaScript has direct access to create WebRTC connections (multiple ones), this makes detailed testing in the browser quite reasonable.
>>
>>
>>
>> And the time measurements can resolve well below 100 microseconds, if the JS is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine code speed if the code is restricted and in a loop). Then again, there is Web Assembly if you want to write C code that runs in the brower fast. WebAssembly is a low level language that compiles to machine code in the browser execution, and still has access to all the browser networking facilities.
>>
>>
>>
>> On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:
>>
>> > On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com> wrote:
>> > >
>> > > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>> >
>> > I guess one of my questions is that with a switch to BBR netflix is
>> > going to do pretty well. If fast.com is using bbr, well... that
>> > excludes much of the current side of the internet.
>> >
>> > > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded
>> > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any
>> > traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would
>> > be nice.
>> >
>> > The tests do need to last a fairly long time.
>> >
>> > > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net>
>> > wrote:
>> > >>
>> > >> Michael Richardson <mcr@sandelman.ca>:
>> > >> > Does it find/use my nearest Netflix cache?
>> > >>
>> > >> Thankfully, it appears so. The DSLReports bloat test was interesting,
>> > but
>> > >> the jitter on the ~240ms base latency from South Africa (and other parts
>> > of
>> > >> the world) was significant enough that the figures returned were often
>> > >> unreliable and largely unusable - at least in my experience.
>> > >>
>> > >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>> > and
>> > >> mentions servers located in local cities. I finally have a test I can
>> > share
>> > >> with local non-technical people!
>> > >>
>> > >> (Agreed, upload test would be nice, but this is a huge step forward from
>> > >> what I had access to before.)
>> > >>
>> > >> Jannie Hanekom
>> > >>
>> > >> _______________________________________________
>> > >> Cake mailing list
>> > >> Cake@lists.bufferbloat.net
>> > >> https://lists.bufferbloat.net/listinfo/cake
>> > >
>> > > _______________________________________________
>> > > Cake mailing list
>> > > Cake@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/cake
>> >
>> >
>> >
>> > --
>> > Make Music, Not War
>> >
>> > Dave Täht
>> > CTO, TekLibre, LLC
>> > http://www.teklibre.com
>> > Tel: 1-831-435-0729
>> > _______________________________________________
>> > Cake mailing list
>> > Cake@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/cake
>> >
--
Make Music, Not War
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] dslreports is no longer free
2020-05-01 19:48 ` [Make-wifi-fast] [Cake] " Sebastian Moeller
2020-05-01 20:09 ` [Bloat] " Sergey Fedorov
[not found] ` <mailman.170.1588363787.24343.bloat@lists.bufferbloat.net>
@ 2020-05-27 9:08 ` Matthew Ford
2020-05-27 9:28 ` Toke Høiland-Jørgensen
2020-05-27 9:32 ` Sebastian Moeller
2 siblings, 2 replies; 26+ messages in thread
From: Matthew Ford @ 2020-05-27 9:08 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Dave Täht, Cake List, Make-Wifi-fast, cerowrt-devel, bloat
What's the bufferbloat verdict on https://speed.cloudflare.com/ ?
Mat
> On 1 May 2020, at 20:48, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Dave,
>
> well, it was a free service and it lasted a long time. I want to raise a toast to Justin and convey my sincere thanks for years of investing into the "good" of the internet.
>
> Now, the question is which test is going to be the rightful successor?
>
> Short of running netperf/irtt/iper2/iperf3 on a hosted server, I see lots of potential but none of the tests are really there yet (grievances in now particular order):
>
> OOKLA: speedtest.net.
> Pros: ubiquitious, allows selection of single flow versus multi-flow test, allows server selection
> Cons: only IPv4, only static unloaded RTT measurement, no control over measurement duration
> BUFFERBLOAT verdict: incomplete, maybe usable as load generator
>
>
> NETFLIX: fast.com.
> Pros: allows selection of upload testing, supposedly decent back-end, duration configurable
> allows unloaded, loaded download and loaded upload RTT measurements (but reports sinlge numbers for loaded and unloaded RTT, that are not the max)
> Cons: RTT report as two numbers one for the loaded and one for unloaded RTT, time-course of RTTs missing
> BUFFERBLOAT verdict: incomplete, but oh, so close...
>
>
> NPERF: nperf.com
> Pros: allows server selection, RTT measurement and report as time course, also reports average rates and static RTT/jitter for Up- and Download
> Cons: RTT measurement for unloaded only, reported RTT static only , no control over measurement duration
> BUFFERBLOAT verdict: incomplete,
>
>
> THINKBROADBAND: www.thinkbroadband.com/speedtest
> Pros: IPv6, reports coarse RTT time courses for all three measurement phases
> Cons: only static unloaded RTT report in final results, time courses only visible immediately after testing, no control over measurement duration
> BUFFERBLOAT verdict: a bit coarse, might work for users within a reasonable distance to the UK for acute de-bloating sessions (history reporting is bad though)
>
>
> honorable mentioning:
> BREITBANDMESSUNG: breitbandmessung.de
> Pros: query of contracted internet access speed before measurement, with a scheduler that will only start a test when the backend has sufficient capacity to saturate the user-supplied contracted rates, IPv6 (happy-eyeballs)
> Cons: only static unloaded RTT measurement, no control over measurement duration
> BUFFERBLOAT verdict: unsuitable, exceot as load generator, but the bandwidth reservation feature is quite nice.
>
> Best Regards
> Sebastian
>
>
>> On May 1, 2020, at 18:44, Dave Taht <dave.taht@gmail.com> wrote:
>>
>> https://www.reddit.com/r/HomeNetworking/comments/gbd6g0/dsl_reports_speed_test_no_longer_free/
>>
>> They ran out of bandwidth.
>>
>> Message to users here:
>>
>> http://www.dslreports.com/speedtest
>>
>>
>> --
>> Make Music, Not War
>>
>> Dave Täht
>> CTO, TekLibre, LLC
>> http://www.teklibre.com
>> Tel: 1-831-435-0729
>> _______________________________________________
>> Cake mailing list
>> Cake@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cake
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] dslreports is no longer free
2020-05-27 9:08 ` [Make-wifi-fast] [Bloat] [Cake] " Matthew Ford
@ 2020-05-27 9:28 ` Toke Høiland-Jørgensen
2020-05-27 9:32 ` Sebastian Moeller
1 sibling, 0 replies; 26+ messages in thread
From: Toke Høiland-Jørgensen @ 2020-05-27 9:28 UTC (permalink / raw)
To: Matthew Ford, Sebastian Moeller
Cc: Cake List, Make-Wifi-fast, cerowrt-devel, bloat
Matthew Ford <ford@isoc.org> writes:
> What's the bufferbloat verdict on https://speed.cloudflare.com/ ?
Huh, didn't know about that. Seems they're measuring the latency before
the download test, though, so no bufferbloat numbers. If anyone knows
someone at Cloudflare we could try to bug to get this fixed, that would
be awesome!
Their FAQ links to https://www.speedcheck.org/ for "troubleshooting
tips". And of course that page doesn't seem to mention latency or
bufferbloat at all :(
-Toke
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Bloat] [Cake] dslreports is no longer free
2020-05-27 9:08 ` [Make-wifi-fast] [Bloat] [Cake] " Matthew Ford
2020-05-27 9:28 ` Toke Høiland-Jørgensen
@ 2020-05-27 9:32 ` Sebastian Moeller
1 sibling, 0 replies; 26+ messages in thread
From: Sebastian Moeller @ 2020-05-27 9:32 UTC (permalink / raw)
To: Matthew Ford
Cc: Dave Täht, Cake List, Make-Wifi-fast, cerowrt-devel, bloat
Hi Mat,
> On May 27, 2020, at 11:08, Matthew Ford <ford@isoc.org> wrote:
>
> What's the bufferbloat verdict on https://speed.cloudflare.com/ ?
Not a verdict per se, but this has potential; it is not quite there yet though.
Pros: Decent reporting of the Download rates including intermediate values
Decent reporting for the idle latency (I like the box whisker plots, and the details revealed on mouse-over, as well as the individual samples)
Cons: Upload seems missing
Latency is only measured in a pre-download idle phase; that is important, but for bufferbloat testing we really need to see the latency-under-load numbers (separately for down- and upload).
Test duration not configurable. A number of ISP techniques, like power-boost, can give higher throughput for a limited amount of time, which often accidentally coincides with typical durations of speedtests*, so being able to confirm bufferbloat remedies at longer test run times is really helpful (nothing crazy, but if a test can run 30-60 seconds instead of just 10-20 seconds that already helps a lot).
Best Regards
Sebastian
*) I believe this to be accidental, as the durations for "fair" power-boosting are naturally in the same few dozens of seconds range as typical speedtests take; nothing nefarious here.
>
> Mat
>
>> On 1 May 2020, at 20:48, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Dave,
>>
>> well, it was a free service and it lasted a long time. I want to raise a toast to Justin and convey my sincere thanks for years of investing into the "good" of the internet.
>>
>> Now, the question is which test is going to be the rightful successor?
>>
>> Short of running netperf/irtt/iper2/iperf3 on a hosted server, I see lots of potential but none of the tests are really there yet (grievances in now particular order):
>>
>> OOKLA: speedtest.net.
>> Pros: ubiquitious, allows selection of single flow versus multi-flow test, allows server selection
>> Cons: only IPv4, only static unloaded RTT measurement, no control over measurement duration
>> BUFFERBLOAT verdict: incomplete, maybe usable as load generator
>>
>>
>> NETFLIX: fast.com.
>> Pros: allows selection of upload testing, supposedly decent back-end, duration configurable
>> allows unloaded, loaded download and loaded upload RTT measurements (but reports sinlge numbers for loaded and unloaded RTT, that are not the max)
>> Cons: RTT report as two numbers one for the loaded and one for unloaded RTT, time-course of RTTs missing
>> BUFFERBLOAT verdict: incomplete, but oh, so close...
>>
>>
>> NPERF: nperf.com
>> Pros: allows server selection, RTT measurement and report as time course, also reports average rates and static RTT/jitter for Up- and Download
>> Cons: RTT measurement for unloaded only, reported RTT static only , no control over measurement duration
>> BUFFERBLOAT verdict: incomplete,
>>
>>
>> THINKBROADBAND: www.thinkbroadband.com/speedtest
>> Pros: IPv6, reports coarse RTT time courses for all three measurement phases
>> Cons: only static unloaded RTT report in final results, time courses only visible immediately after testing, no control over measurement duration
>> BUFFERBLOAT verdict: a bit coarse, might work for users within a reasonable distance to the UK for acute de-bloating sessions (history reporting is bad though)
>>
>>
>> honorable mentioning:
>> BREITBANDMESSUNG: breitbandmessung.de
>> Pros: query of contracted internet access speed before measurement, with a scheduler that will only start a test when the backend has sufficient capacity to saturate the user-supplied contracted rates, IPv6 (happy-eyeballs)
>> Cons: only static unloaded RTT measurement, no control over measurement duration
>> BUFFERBLOAT verdict: unsuitable, exceot as load generator, but the bandwidth reservation feature is quite nice.
>>
>> Best Regards
>> Sebastian
>>
>>
>>> On May 1, 2020, at 18:44, Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>> https://www.reddit.com/r/HomeNetworking/comments/gbd6g0/dsl_reports_speed_test_no_longer_free/
>>>
>>> They ran out of bandwidth.
>>>
>>> Message to users here:
>>>
>>> http://www.dslreports.com/speedtest
>>>
>>>
>>> --
>>> Make Music, Not War
>>>
>>> Dave Täht
>>> CTO, TekLibre, LLC
>>> http://www.teklibre.com
>>> Tel: 1-831-435-0729
>>> _______________________________________________
>>> Cake mailing list
>>> Cake@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cake
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-04 17:04 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
2020-05-05 21:02 ` [Make-wifi-fast] [Cake] " David P. Reed
@ 2020-05-06 8:19 ` Sebastian Moeller
1 sibling, 0 replies; 26+ messages in thread
From: Sebastian Moeller @ 2020-05-06 8:19 UTC (permalink / raw)
To: Sergey Fedorov
Cc: David P. Reed, Dave Täht, Make-Wifi-fast, Jannie Hanekom,
Cake List, bloat
Hi Sergey,
> On May 4, 2020, at 19:04, Sergey Fedorov <sfedorov@netflix.com> wrote:
>
> Sergey - I wasn't assuming anything about fast.com. The document you shared wasn't clear about the methodology's details here. Others sadly, have actually used ICMP pings in the way I described. I was making a generic comment of concern.
>
> That said, it sounds like what you are doing is really helpful (esp. given that your measure is aimed at end user experiential qualities).
> David - my apologies, I incorrectly interpreted your statement as being said in context of fast.com measurements. The blog post linked indeed doesn't provide the latency measurement details - was written before we added the extra metrics. We'll see if we can publish an update.
>
> 1) a clear definition of lag under load that is from end-to-end in latency, and involves, ideally, independent traffic from multiple sources through the bottleneck.
> Curious if by multiple sources you mean multiple clients (devices) or multiple connections sending data?
Not trying to speak for David obviously, but the dslreports speedtest, when using multiple streams, mostly recruited streams from different server locations and reported these locations in some of the detailed report parts. For normal use that level of detail is overkill, but for problematic cases that was really elucidating (they reported the retransmit count for up to 5 server sites):
Server                                   Nett Speed  Avg RTT / Jitter  Avg Re-xmit  Avg Cwnd
Singapore (softlayer) d1                 7.3 Mb/s    200.5±7ms         0.1%         154
Houston, USA (softlayer) d3              3.07 Mb/s   157.6±3.6ms       0.4%         125
Dallas, USA (softlayer) d3               2.65 Mb/s   150.1±3.3ms       0.6%         131
San Jose, USA (softlayer) d3             2.77 Mb/s   185.6±5ms         0.5%         126
Nashville, TN, USA (Twinlakes coop) d3   2.34 Mb/s   127.6±4ms         0.6%         76
Run Log:
0.00s setting download file size to 40mb max for Safari
0.00s Start testing DSL
00.43s Servers available: 10
00.46s pinging 10 locations
01.66s geo location failed
05.47s 19ms Amsterdam, Netherlands, EU
05.47s 63ms Nashville, TN, USA
05.47s 72ms Dallas, USA
05.47s 75ms Houston, USA
05.47s 89ms San Jose, USA
05.47s 96ms Singapore
05.47s could not reach Silver Spring, MD, USA https://t70.dslreports.com
05.47s could not reach Newcastle, Delaware, USA https://t68.dslreports.com
05.47s could not reach Westland, Michigan, USA https://t67.dslreports.com
05.47s could not reach Beaverton, Oregon, USA https://t69.dslreports.com
05.48s 5 seconds measuring idle buffer bloat
10.96s Trial download normal
10.99s Using GET for upload testing
10.99s preference https set to 1
10.99s preference fixrids set to 1
10.99s preference streamsDown set to 16
10.99s preference dnmethod set to websocket
10.99s preference upmethod set to websocket
10.99s preference upduration set to 30
10.99s preference streamsUp set to 16
10.99s preference dnduration set to 30
10.99s preference bloathf set to 1
10.99s preference rids set to [object Object]
10.99s preference compress set to 1
19.11s stream0 4.71 megabit Amsterdam, Netherlands, EU
19.11s stream1 2.74 megabit Dallas, USA
19.11s stream2 4.68 megabit Singapore
19.11s stream3 2.23 megabit Dallas, USA
19.11s stream4 3.31 megabit Houston, USA
19.11s stream5 3.19 megabit Houston, USA
19.11s stream6 2.83 megabit Amsterdam, Netherlands, EU
19.11s stream7 1.13 megabit Dallas, USA
19.11s stream8 2.15 megabit Amsterdam, Netherlands, EU
19.11s stream9 2.35 megabit San Jose, USA
19.11s stream10 1.46 megabit Nashville, TN, USA
19.11s stream11 1.42 megabit Nashville, TN, USA
19.11s stream12 2.92 megabit Nashville, TN, USA
19.11s stream13 2.19 megabit Houston, USA
19.11s stream14 2.16 megabit San Jose, USA
19.11s stream15 1.2 megabit San Jose, USA
41.26s End of download testing. Starting upload in 2 seconds
43.27s Capping upload streams to 6 because of download result
43.27s starting websocket upload with 16 streams
43.27s minimum upload speed of 0.3 per stream
43.48s sent first packet to t56.dslreports.com
44.08s sent first packet to t59.dslreports.com
44.48s sent first packet to t59.dslreports.com
44.48s sent first packet to t57.dslreports.com
44.68s sent first packet to t56.dslreports.com
44.78s sent first packet to t58.dslreports.com
44.79s got first reply from t56.dslreports.com 221580
44.98s sent first packet to t58.dslreports.com
45.08s sent first packet to t56.dslreports.com
45.14s got first reply from t59.dslreports.com 221580
45.28s sent first packet to t59.dslreports.com
45.53s got first reply from t59.dslreports.com 155106
45.55s got first reply from t57.dslreports.com 70167
45.78s got first reply from t58.dslreports.com 210501
45.85s got first reply from t56.dslreports.com 162492
45.88s sent first packet to t60.dslreports.com
45.88s sent first packet to t71.dslreports.com
46.00s got first reply from t58.dslreports.com 44316
46.08s sent first packet to t71.dslreports.com
46.26s got first reply from t56.dslreports.com 177264
46.28s sent first packet to t71.dslreports.com
46.41s got first reply from t59.dslreports.com 221580
46.58s sent first packet to t58.dslreports.com
46.88s sent first packet to t60.dslreports.com
46.89s got first reply from t60.dslreports.com 99711
47.08s sent first packet to t60.dslreports.com
47.61s got first reply from t58.dslreports.com 221580
47.93s got first reply from t60.dslreports.com 158799
48.09s got first reply from t60.dslreports.com 107097
62.87s Recording upload 21.45
62.87s Timer drops: frames=0 total ms=0
62.87s END TEST
64.88s Total megabytes consumed: 198.8 (down:155 up:43.8)
Not sure how trustworthy these numbers were, but high retransmit counts correlated with relatively low measured goodput...
I realize that this level of detail is explicitly out of scope for fast.com, but if you collect similar data, exposing it for interested parties following a chain of links would be swell. I am thinking along the lines of Douglas Adams' "It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’." here ;)
Best Regards
Sebastian
>
> SERGEY FEDOROV
> Director of Engineering
> sfedorov@netflix.com
> 121 Albright Way | Los Gatos, CA 95032
>
>
>
>
> On Sun, May 3, 2020 at 8:07 AM David P. Reed <dpreed@deepplum.com> wrote:
> Thanks Sebastian. I do agree that in many cases, reflecting the ICMP off the entry device that has the external IP address for the NAT gets most of the RTT measure, and if there's no queueing built up in the NAT device, that's a reasonable measure. But...
>
> However, if the router has "taken up the queueing delay" by rate limiting its uplink traffic to slightly less than the capacity (as with Cake and other TC shaping that isn't as good as cake), then there is a queue in the TC layer itself. This is what concerns me as a distortion in the measurement that can fool one into thinking the TC shaper is doing a good job, when in fact, lag under load may be quite high from inside the routed domain (the home).
>
> As you point out this unmeasured queueing delay can also be a problem with WiFi inside the home. But it isn't limited to that.
>
> A badly set up shaping/congestion management subsystem inside the NAT can look "very good" in its echo of ICMP packets, but be terrible in response time to trivial HTTP requests from inside, or equally terrible in twitch games and video conferencing.
>
> So, for example, for tuning settings with "Cake" it is useless.
>
> To be fair, usually the Access Provider has no control of what is done after the cable is terminated at the home, so as a way to decide if the provider is badly engineering its side, a ping from a server is a reasonable quality measure of the provider.
>
> But not a good measure of the user experience, and if the provider provides the NAT box, even if it has a good shaper in it, like Cake or fq_codel, it will just confuse the user and create the opportunity for a "finger pointing" argument where neither side understands what is going on.
>
> This is why we need
>
> 1) a clear definition of lag under load that is from end-to-end in latency, and involves, ideally, independent traffic from multiple sources through the bottleneck.
>
> 2) ideally, a better way to localize where the queues are building up and present that to users and access providers. The flent graphs are not interpretable by most non-experts. What we need is a simple visualization of a sketch-map of the path (like traceroute might provide) with queueing delay measures shown at key points that the user can understand.
> On Saturday, May 2, 2020 4:19pm, "Sebastian Moeller" <moeller0@gmx.de> said:
>
>> Hi David,
>>
>> in principle I agree, a NATed IPv4 ICMP probe will be at best reflected at the NAT
>> router (CPE) (some commercial home gateways do not respond to ICMP echo requests
>> in the name of security theatre). So it is pretty hard to measure the full end to
>> end path in that configuration. I believe that IPv6 should make that
>> easier/simpler in that NAT hopefully will be out of the path (but let's see what
>> ingenuity ISPs will come up with).
>> Then again, traditionally the relevant bottlenecks often are a) the internet
>> access link itself and there the CPE is in a reasonable position as a reflector on
>> the other side of the bottleneck as seen from an internet server, b) the home
>> network between CPE and end-host, often with variable rate wifi, here I agree
>> reflecting echos at the CPE hides part of the issue.
>>
>>
>>
>>> On May 2, 2020, at 19:38, David P. Reed <dpreed@deepplum.com> wrote:
>>>
>>> I am still a bit worried about properly defining "latency under load" for a
>> NAT routed situation. If the test is based on ICMP Ping packets *from the server*,
>> it will NOT be measuring the full path latency, and if the potential congestion
>> is in the uplink path from the access provider's residential box to the access
>> provider's router/switch, it will NOT measure congestion caused by bufferbloat
>> reliably on either side, since the bufferbloat will be outside the ICMP Ping
>> path.
>>
>> Puzzled, as i believe it is going to be the residential box that will respond
>> here, or will it be the AFTRs for CG-NAT that reflect the ICMP echo requests?
>>
>>>
>>> I realize that a browser based speed test has to be basically run from the
>> "server" end, because browsers are not that good at time measurement on a packet
>> basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a
>> cooperative server.
>>>
>>> I once built a test that fixed this issue reasonably well. It carefully
>> created a TCP based RTT measurement channel (over HTTP) that made the echo have to
>> traverse the whole end-to-end path, which is the best and only way to accurately
>> define lag under load from the user's perspective. The client end of an unloaded
>> TCP connection can depend on TCP (properly prepared by getting it past slowstart)
>> to generate a single packet response.
>>>
>>> This "TCP ping" is thus compatible with getting the end-to-end measurement on
>> the server end of a true RTT.
>>>
>>> It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes
>> into thinking this is a real, serious packet, not an optional low priority
>> packet.
>>>
>>> The same issue comes up with non-browser-based techniques for measuring true
>> lag-under-load.
>>>
>>> Now as we move HTTP to QUIC, this actually gets easier to do.
>>>
>>> One other opportunity I haven't explored, but which is pregnant with
>> potential is the use of WebRTC, which runs over UDP internally. Since JavaScript
>> has direct access to create WebRTC connections (multiple ones), this makes
>> detailed testing in the browser quite reasonable.
>>>
>>> And the time measurements can resolve well below 100 microseconds, if the JS
>> is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine
>> code speed if the code is restricted and in a loop). Then again, there is Web
>> Assembly if you want to write C code that runs in the brower fast. WebAssembly is
>> a low level language that compiles to machine code in the browser execution, and
>> still has access to all the browser networking facilities.
>>
>> Mmmh, according to https://github.com/w3c/hr-time/issues/56 due to spectre
>> side-channel vulnerabilities many browsers seemed to have lowered the timer
>> resolution, but even the ~1ms resolution should be fine for typical RTTs.
>>
>> Best Regards
>> Sebastian
>>
>> P.S.: I assume that I simply do not see/understand the full scope of the issue at
>> hand yet.
>>
>>
>>>
>>> On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com>
>> said:
>>>
>>>> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com>
>> wrote:
>>>>>
>>>>>> Fast.com reports my unloaded latency as 4ms, my loaded latency
>> as ~7ms
>>>>
>>>> I guess one of my questions is that with a switch to BBR netflix is
>>>> going to do pretty well. If fast.com is using bbr, well... that
>>>> excludes much of the current side of the internet.
>>>>
>>>>> For download, I show 6ms unloaded and 6-7 loaded. But for upload
>> the loaded
>>>> shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using
>> any
>>>> traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
>> bloat would
>>>> be nice.
>>>>
>>>> The tests do need to last a fairly long time.
>>>>
>>>>> On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom
>> <jannie@hanekom.net>
>>>> wrote:
>>>>>>
>>>>>> Michael Richardson <mcr@sandelman.ca>:
>>>>>>> Does it find/use my nearest Netflix cache?
>>>>>>
>>>>>> Thankfully, it appears so. The DSLReports bloat test was
>> interesting,
>>>> but
>>>>>> the jitter on the ~240ms base latency from South Africa (and
>> other parts
>>>> of
>>>>>> the world) was significant enough that the figures returned
>> were often
>>>>>> unreliable and largely unusable - at least in my experience.
>>>>>>
>>>>>> Fast.com reports my unloaded latency as 4ms, my loaded latency
>> as ~7ms
>>>> and
>>>>>> mentions servers located in local cities. I finally have a test
>> I can
>>>> share
>>>>>> with local non-technical people!
>>>>>>
>>>>>> (Agreed, upload test would be nice, but this is a huge step
>> forward from
>>>>>> what I had access to before.)
>>>>>>
>>>>>> Jannie Hanekom
>>>>>>
>>>>>> _______________________________________________
>>>>>> Cake mailing list
>>>>>> Cake@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/cake
>>>>>
>>>>> _______________________________________________
>>>>> Cake mailing list
>>>>> Cake@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/cake
>>>>
>>>>
>>>>
>>>> --
>>>> Make Music, Not War
>>>>
>>>> Dave Täht
>>>> CTO, TekLibre, LLC
>>>> http://www.teklibre.com
>>>> Tel: 1-831-435-0729
>>>> _______________________________________________
>>>> Cake mailing list
>>>> Cake@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/cake
>>>>
>>> _______________________________________________
>>> Cake mailing list
>>> Cake@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cake
>>
>>
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-03 15:06 [Make-wifi-fast] [Cake] [Bloat] " David P. Reed
` (2 preceding siblings ...)
[not found] ` <mailman.253.1588611897.24343.make-wifi-fast@lists.bufferbloat.net>
@ 2020-05-06 8:08 ` Sebastian Moeller
3 siblings, 0 replies; 26+ messages in thread
From: Sebastian Moeller @ 2020-05-06 8:08 UTC (permalink / raw)
To: David P. Reed
Cc: Dave Täht, Make-Wifi-fast, Jannie Hanekom, Cake List,
Sergey Fedorov, bloat, Toke Høiland-Jørgensen
Dear David,
Thanks for the elaboration below, and indeed I was not appreciating the full scope of the challenge.
> On May 3, 2020, at 17:06, David P. Reed <dpreed@deepplum.com> wrote:
>
> Thanks Sebastian. I do agree that in many cases, reflecting the ICMP off the entry device that has the external IP address for the NAT gets most of the RTT measure, and if there's no queueing built up in the NAT device, that's a reasonable measure. But...
Yes, I see; I really hope that with IPv6 coming more and more online, and hence less NAT, end-to-end RTT measurements will be simpler in the future. But cue the people who will, for example, recommend to drop/ignore ICMP in the name of security theater... It's the same mindset that basically recommends to ignore ICMP and/or IP timestamps because of "information leakage", while all the information that leaks for a standards-conformant host is the time since midnight UTC (and potentially an idea about the offset of the local clock)... I fail to understand the rationale/threat model behind eschewing this... For our purposes, one-way timestamps would be most excellent to have, to be able to assess on which "leg" the overload actually happens.
>
> However, if the router has "taken up the queueing delay" by rate limiting its uplink traffic to slightly less than the capacity (as with Cake and other TC shaping that isn't as good as cake), then there is a queue in the TC layer itself. This is what concerns me as a distortion in the measurement that can fool one into thinking the TC shaper is doing a good job, when in fact, lag under load may be quite high from inside the routed domain (the home).
As long as the shaper is instantiated on the NAT box, the latency probes reflected by that NAT box will also travel through the shaper. But now that you mention it, in SQM we do ingress shaping via an IFB and hence will also shape the incoming latency probes; however, I have started to recommend doing ingress shaping as egress shaping on the LAN-wards interface of the router (to avoid the computational cost of the IFB redirection dance, and to allow people to use iptables for ingress*), and in such a configuration router-reflected/emitted WAN probes will avoid the ingress TC queues...
*) With nftables having a hook at ingress, that second rationale will become moot in the near future...
>
> As you point out this unmeasured queueing delay can also be a problem with WiFi inside the home. But it isn't limited to that.
>
> A badly set up shaping/congestion management subsystem inside the NAT can look "very good" in its echo of ICMP packets, but be terrible in response time to trivial HTTP requests from inside, or equally terrible in twitch games and video conferencing.
Good point, and one of Dave's pet peeves; in former times people recommended to up-prioritize ICMP packets to make RTTs look good, falling exactly into the trap you described.
>
> So, for example, for tuning settings with "Cake" it is useless.
I believe that at least for the way we instantiate things by default in SQM-scripts we avoid that pit-fall. What do you think @Toke?
>
> To be fair, usually the Access Provider has no control of what is done after the cable is terminated at the home, so as a way to decide if the provider is badly engineering its side, a ping from a server is a reasonable quality measure of the provider.
Most providers in Germany will try to steer customers to rent a wifi router from the ISP, so bloat in the wifi link would also be under the responsibility of the ISP to some degree, no?
>
> But not a good measure of the user experience, and if the provider provides the NAT box, even if it has a good shaper in it, like Cake or fq_codel, it will just confuse the user and create the opportunity for a "finger pointing" argument where neither side understands what is going on.
>
> This is why we need
>
> 1) a clear definition of lag under load that is from end-to-end in latency, and involves, ideally, independent traffic from multiple sources through the bottleneck.
I am all for it. In addition, in the past we also reasoned that this definition needs to be relatively simple, so it can be easily explained to turn naive laypersons into informed amateurs ;) The multiple sources thing is something that dslreports did well; they typically tried to serve from multiple server sites and reported some stats per site. Now that it is basically gone, it becomes clear how much clue went into that speedtest; a pity that most of the competition has not followed their lead yet (I am especially looking at you, Ookla...).
>
> 2) ideally, a better way to localize where the queues are building up and present that to users and access providers.
Yes. Now how to do this robustly and reliably escapes me, although enabling one-way timestamps might help; then a saturating speedtest could be accompanied not by a conceptually "simple" ICMP echo request, but by a repeated traceroute that gets there-and-back delay measurements for the approximated path (approximated because of the complications of interpreting traceroute results).
> The flent graphs are not interpretable by most non-experts.
And sometimes not even by experts ;)
> What we need is a simple visualization of a sketch-map of the path (like traceroute might provide) with queueing delay measures shown at key points that the user can understand.
I am on the fence; personally I would absolutely love that, but I am not sure how the rest of my family would receive something like that? I guess it depends on the simplicity of the representation and probably, following fast.com's lead, a way to also compress those expanded results into a reasonable one-number representation. I hate one-number representations for complex issues, but people generally will come up with one themselves if none is supplied. (And I get this: outside our areas of expertise we all prefer the world to be simple.)
Best Regards
Sebastian
> On Saturday, May 2, 2020 4:19pm, "Sebastian Moeller" <moeller0@gmx.de> said:
>
>> Hi David,
>>
>> in principle I agree, a NATed IPv4 ICMP probe will be at best reflected at the NAT
>> router (CPE) (some commercial home gateways do not respond to ICMP echo requests
>> in the name of security theatre). So it is pretty hard to measure the full end to
>> end path in that configuration. I believe that IPv6 should make that
>> easier/simpler in that NAT hopefully will be out of the path (but let's see what
>> ingenuity ISPs will come up with).
>> Then again, traditionally the relevant bottlenecks often are a) the internet
>> access link itself and there the CPE is in a reasonable position as a reflector on
>> the other side of the bottleneck as seen from an internet server, b) the home
>> network between CPE and end-host, often with variable rate wifi, here I agree
>> reflecting echos at the CPE hides part of the issue.
>>
>>
>>
>>> On May 2, 2020, at 19:38, David P. Reed <dpreed@deepplum.com> wrote:
>>>
>>> I am still a bit worried about properly defining "latency under load" for a
>> NAT routed situation. If the test is based on ICMP Ping packets *from the server*,
>> it will NOT be measuring the full path latency, and if the potential congestion
>> is in the uplink path from the access provider's residential box to the access
>> provider's router/switch, it will NOT measure congestion caused by bufferbloat
>> reliably on either side, since the bufferbloat will be outside the ICMP Ping
>> path.
>>
>> Puzzled, as i believe it is going to be the residential box that will respond
>> here, or will it be the AFTRs for CG-NAT that reflect the ICMP echo requests?
>>
>>>
>>> I realize that a browser based speed test has to be basically run from the
>> "server" end, because browsers are not that good at time measurement on a packet
>> basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a
>> cooperative server.
>>>
>>> I once built a test that fixed this issue reasonably well. It carefully
>> created a TCP based RTT measurement channel (over HTTP) that made the echo have to
>> traverse the whole end-to-end path, which is the best and only way to accurately
>> define lag under load from the user's perspective. The client end of an unloaded
>> TCP connection can depend on TCP (properly prepared by getting it past slowstart)
>> to generate a single packet response.
>>>
>>> This "TCP ping" is thus compatible with getting the end-to-end measurement on
>> the server end of a true RTT.
>>>
>>> It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes
>> into thinking this is a real, serious packet, not an optional low priority
>> packet.
>>>
>>> The same issue comes up with non-browser-based techniques for measuring true
>> lag-under-load.
>>>
>>> Now as we move HTTP to QUIC, this actually gets easier to do.
>>>
>>> One other opportunity I haven't explored, but which is pregnant with
>> potential is the use of WebRTC, which runs over UDP internally. Since JavaScript
>> has direct access to create WebRTC connections (multiple ones), this makes
>> detailed testing in the browser quite reasonable.
>>>
>>> And the time measurements can resolve well below 100 microseconds, if the JS
>> is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine
>> code speed if the code is restricted and in a loop). Then again, there is Web
>> Assembly if you want to write C code that runs in the brower fast. WebAssembly is
>> a low level language that compiles to machine code in the browser execution, and
>> still has access to all the browser networking facilities.
>>
>> Mmmh, according to https://github.com/w3c/hr-time/issues/56 due to spectre
>> side-channel vulnerabilities many browsers seemed to have lowered the timer
>> resolution, but even the ~1ms resolution should be fine for typical RTTs.
>>
>> Best Regards
>> Sebastian
>>
>> P.S.: I assume that I simply do not see/understand the full scope of the issue at
>> hand yet.
>>
>>
>>>
>>> On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com>
>> said:
>>>
>>>> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com>
>> wrote:
>>>>>
>>>>>> Fast.com reports my unloaded latency as 4ms, my loaded latency
>> as ~7ms
>>>>
>>>> I guess one of my questions is that with a switch to BBR netflix is
>>>> going to do pretty well. If fast.com is using bbr, well... that
>>>> excludes much of the current side of the internet.
>>>>
>>>>> For download, I show 6ms unloaded and 6-7 loaded. But for upload
>> the loaded
>>>> shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using
>> any
>>>> traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
>> bloat would
>>>> be nice.
>>>>
>>>> The tests do need to last a fairly long time.
>>>>
>>>>> On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom
>> <jannie@hanekom.net>
>>>> wrote:
>>>>>>
>>>>>> Michael Richardson <mcr@sandelman.ca>:
>>>>>>> Does it find/use my nearest Netflix cache?
>>>>>>
>>>>>> Thankfully, it appears so. The DSLReports bloat test was
>> interesting,
>>>> but
>>>>>> the jitter on the ~240ms base latency from South Africa (and
>> other parts
>>>> of
>>>>>> the world) was significant enough that the figures returned
>> were often
>>>>>> unreliable and largely unusable - at least in my experience.
>>>>>>
>>>>>> Fast.com reports my unloaded latency as 4ms, my loaded latency
>> as ~7ms
>>>> and
>>>>>> mentions servers located in local cities. I finally have a test
>> I can
>>>> share
>>>>>> with local non-technical people!
>>>>>>
>>>>>> (Agreed, upload test would be nice, but this is a huge step
>> forward from
>>>>>> what I had access to before.)
>>>>>>
>>>>>> Jannie Hanekom
>>>>>>
>>>>>> _______________________________________________
>>>>>> Cake mailing list
>>>>>> Cake@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/cake
>>>>>
>>>>> _______________________________________________
>>>>> Cake mailing list
>>>>> Cake@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/cake
>>>>
>>>>
>>>>
>>>> --
>>>> Make Music, Not War
>>>>
>>>> Dave Täht
>>>> CTO, TekLibre, LLC
>>>> http://www.teklibre.com
>>>> Tel: 1-831-435-0729
>>>> _______________________________________________
>>>> Cake mailing list
>>>> Cake@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/cake
>>>>
>>> _______________________________________________
>>> Cake mailing list
>>> Cake@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cake
>>
>>
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-04 17:04 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
@ 2020-05-05 21:02 ` David P. Reed
2020-05-06 8:19 ` Sebastian Moeller
1 sibling, 0 replies; 26+ messages in thread
From: David P. Reed @ 2020-05-05 21:02 UTC (permalink / raw)
To: Sergey Fedorov
Cc: Sebastian Moeller, Dave Täht, Michael Richardson,
Make-Wifi-fast, Jannie Hanekom, Cake List, bloat
[-- Attachment #1: Type: text/plain, Size: 12966 bytes --]
I think the real test should be multiple clients, not multiple sources, but coordinating that is hard. The middleboxes on the way may treat distinct IP host addresses specially, and of course there is an edge case because a single NIC by definition never sends two datagrams at once, which distorts things when you look at edge performance issues.
The classic problem (Jim Gettys' "Daddy, why is the Internet broken?" - uploading a big file from Dad's computer wrecks the web performance of the kid in the kid's bedroom) is an example of a UX issue that *really matters*. At HP Cambridge Research Lab, I used to have the local network management come to my office and yell at me because I was often uploading huge datasets to other HP locations, and it absolutely destroyed every other person's web usability when I did. (As usual, RTT went to multiple seconds, not affecting my file uploads at all, but it was the first example of what was later called Bufferbloat that got me focused on the issue of overbuffering.) It turned out that the problem was the choice of a Frame Relay link with the "don't ever discard packets" setting.
That was ALSO the first time I encountered "network experts" who absolutely denied that more buffering was bad. They thought that more buffering was GOOD. This was shocking; I realized that almost no one understood that congestion was about excess queueing delay.
I still see badly misconfigured networks that destroy the ability to do Zoom or any other teleconferencing when someone is uploading files. And for some weird, weird reason, the work done by the Bloat team is constantly disparaged at the IETF, to the point that their work isn't influencing anyone outside the Linux-based-router community. (Including Arista Networks, where they build overbuffered high-speed switches and claim that is "a feature", and Andy Bechtolsheim refuses to listen to me or anyone else about it.)
On Monday, May 4, 2020 1:04pm, "Sergey Fedorov" <sfedorov@netflix.com> said:
Sergey - I wasn't assuming anything about fast.com. The document you shared wasn't clear about the methodology's details here. Others, sadly, have actually used ICMP pings in the way I described. I was making a generic comment of concern.
That said, it sounds like what you are doing is really helpful (esp. given that your measure is aimed at end user experiential qualities).
David - my apologies, I incorrectly interpreted your statement as being said in context of fast.com measurements. The blog post linked indeed doesn't provide the latency measurement details - it was written before we added the extra metrics. We'll see if we can publish an update.
> 1) a clear definition of lag under load that is from end-to-end in latency, and involves, ideally, independent traffic from multiple sources through the bottleneck.
Curious if by multiple sources you mean multiple clients (devices) or multiple connections sending data?
SERGEY FEDOROV
Director of Engineering
sfedorov@netflix.com
121 Albright Way | Los Gatos, CA 95032
On Sun, May 3, 2020 at 8:07 AM David P. Reed <dpreed@deepplum.com> wrote:
Thanks Sebastian. I do agree that in many cases, reflecting the ICMP off the entry device that has the external IP address for the NAT gets most of the RTT measure, and if there's no queueing built up in the NAT device, that's a reasonable measure. But...
However, if the router has "taken up the queueing delay" by rate limiting its uplink traffic to slightly less than the capacity (as with Cake and other TC shaping that isn't as good as cake), then there is a queue in the TC layer itself. This is what concerns me as a distortion in the measurement that can fool one into thinking the TC shaper is doing a good job, when in fact, lag under load may be quite high from inside the routed domain (the home).
As you point out this unmeasured queueing delay can also be a problem with WiFi inside the home. But it isn't limited to that.
A badly set up shaping/congestion management subsystem inside the NAT can look "very good" in its echo of ICMP packets, but be terrible in response time to trivial HTTP requests from inside, or equally terrible in twitch games and video conferencing.
So, for example, for tuning settings with "Cake" it is useless.
To be fair, usually the Access Provider has no control of what is done after the cable is terminated at the home, so as a way to decide if the provider is badly engineering its side, a ping from a server is a reasonable quality measure of the provider.
But not a good measure of the user experience, and if the provider provides the NAT box, even if it has a good shaper in it, like Cake or fq_codel, it will just confuse the user and create the opportunity for a "finger pointing" argument where neither side understands what is going on.
This is why we need
1) a clear definition of lag under load that is from end-to-end in latency, and involves, ideally, independent traffic from multiple sources through the bottleneck.
2) ideally, a better way to localize where the queues are building up and present that to users and access providers. The flent graphs are not interpretable by most non-experts. What we need is a simple visualization of a sketch-map of the path (like traceroute might provide) with queueing delay measures shown at key points that the user can understand.
On Saturday, May 2, 2020 4:19pm, "Sebastian Moeller" <moeller0@gmx.de> said:
> Hi David,
>
> in principle I agree, a NATed IPv4 ICMP probe will be at best reflected at the NAT
> router (CPE) (some commercial home gateways do not respond to ICMP echo requests
> in the name of security theatre). So it is pretty hard to measure the full end to
> end path in that configuration. I believe that IPv6 should make that
> easier/simpler in that NAT hopefully will be out of the path (but let's see what
> ingenuity ISPs will come up with).
> Then again, traditionally the relevant bottlenecks often are a) the internet
> access link itself and there the CPE is in a reasonable position as a reflector on
> the other side of the bottleneck as seen from an internet server, b) the home
> network between CPE and end-host, often with variable rate wifi, here I agree
> reflecting echos at the CPE hides part of the issue.
>
>
>
> > On May 2, 2020, at 19:38, David P. Reed <dpreed@deepplum.com> wrote:
> >
> > I am still a bit worried about properly defining "latency under load" for a
> NAT routed situation. If the test is based on ICMP Ping packets *from the server*,
> it will NOT be measuring the full path latency, and if the potential congestion
> is in the uplink path from the access provider's residential box to the access
> provider's router/switch, it will NOT measure congestion caused by bufferbloat
> reliably on either side, since the bufferbloat will be outside the ICMP Ping
> path.
>
> Puzzled, as I believe it is going to be the residential box that will respond
> here, or will it be the AFTRs for CG-NAT that reflect the ICMP echo requests?
>
> >
> > I realize that a browser based speed test has to be basically run from the
> "server" end, because browsers are not that good at time measurement on a packet
> basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a
> cooperative server.
> >
> > I once built a test that fixed this issue reasonably well. It carefully
> created a TCP based RTT measurement channel (over HTTP) that made the echo have to
> traverse the whole end-to-end path, which is the best and only way to accurately
> define lag under load from the user's perspective. The client end of an unloaded
> TCP connection can depend on TCP (properly prepared by getting it past slowstart)
> to generate a single packet response.
> >
> > This "TCP ping" is thus compatible with getting the end-to-end measurement on
> the server end of a true RTT.
> >
> > It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes
> into thinking this is a real, serious packet, not an optional low priority
> packet.
> >
> > The same issue comes up with non-browser-based techniques for measuring true
> lag-under-load.
> >
> > Now as we move HTTP to QUIC, this actually gets easier to do.
> >
> > One other opportunity I haven't explored, but which is pregnant with
> potential is the use of WebRTC, which runs over UDP internally. Since JavaScript
> has direct access to create WebRTC connections (multiple ones), this makes
> detailed testing in the browser quite reasonable.
> >
> > And the time measurements can resolve well below 100 microseconds, if the JS
> is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine
> code speed if the code is restricted and in a loop). Then again, there is Web
> Assembly if you want to write C code that runs in the browser fast. WebAssembly is
> a low level language that compiles to machine code in the browser execution, and
> still has access to all the browser networking facilities.
>
> Mmmh, according to https://github.com/w3c/hr-time/issues/56 due to spectre
> side-channel vulnerabilities many browsers seemed to have lowered the timer
> resolution, but even the ~1ms resolution should be fine for typical RTTs.
>
> Best Regards
> Sebastian
>
> P.S.: I assume that I simply do not see/understand the full scope of the issue at
> hand yet.
>
>
> >
> > On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com>
> said:
> >
> > > On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com>
> wrote:
> > > >
> > > > > Fast.com reports my unloaded latency as 4ms, my loaded latency
> as ~7ms
> > >
> > > I guess one of my questions is that with a switch to BBR netflix is
> > > going to do pretty well. If fast.com is using bbr, well... that
> > > excludes much of the current side of the internet.
> > >
> > > > For download, I show 6ms unloaded and 6-7 loaded. But for upload
> the loaded
> > > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using
> any
> > > traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
> bloat would
> > > be nice.
> > >
> > > The tests do need to last a fairly long time.
> > >
> > > > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom
> <jannie@hanekom.net>
> > > wrote:
> > > >>
> > > >> Michael Richardson <mcr@sandelman.ca>:
> > > >> > Does it find/use my nearest Netflix cache?
> > > >>
> > > >> Thankfully, it appears so. The DSLReports bloat test was
> interesting,
> > > but
> > > >> the jitter on the ~240ms base latency from South Africa (and
> other parts
> > > of
> > > >> the world) was significant enough that the figures returned
> were often
> > > >> unreliable and largely unusable - at least in my experience.
> > > >>
> > > >> Fast.com reports my unloaded latency as 4ms, my loaded latency
> as ~7ms
> > > and
> > > >> mentions servers located in local cities. I finally have a test
> I can
> > > share
> > > >> with local non-technical people!
> > > >>
> > > >> (Agreed, upload test would be nice, but this is a huge step
> forward from
> > > >> what I had access to before.)
> > > >>
> > > >> Jannie Hanekom
> > > >>
> > > >> _______________________________________________
> > > >> Cake mailing list
> > > >> Cake@lists.bufferbloat.net
> > > >> https://lists.bufferbloat.net/listinfo/cake
> > > >
> > > > _______________________________________________
> > > > Cake mailing list
> > > > Cake@lists.bufferbloat.net
> > > > https://lists.bufferbloat.net/listinfo/cake
> > >
> > >
> > >
> > > --
> > > Make Music, Not War
> > >
> > > Dave Täht
> > > CTO, TekLibre, LLC
> > > http://www.teklibre.com
> > > Tel: 1-831-435-0729
> > > _______________________________________________
> > > Cake mailing list
> > > Cake@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/cake
> > >
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
>
>
[-- Attachment #2: Type: text/html, Size: 20682 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
[not found] ` <mailman.253.1588611897.24343.make-wifi-fast@lists.bufferbloat.net>
@ 2020-05-05 0:03 ` Bob McMahon
0 siblings, 0 replies; 26+ messages in thread
From: Bob McMahon @ 2020-05-05 0:03 UTC (permalink / raw)
To: Sergey Fedorov
Cc: David P. Reed, Michael Richardson, Make-Wifi-fast, bloat,
Cake List, Jannie Hanekom
[-- Attachment #1: Type: text/plain, Size: 13260 bytes --]
Sorry for being a bit off topic, but we find average latency not all that
useful. A full CDF is. The next best thing is a box plot with outliers,
which can be presented parametrically as a few numbers. Most customers
want visibility into the tail of the PDF.
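To make that concrete, here is a small sketch (with made-up sample data) of
the kind of summary meant here: the box-plot percentiles plus the tail
percentiles that an average hides but a CDF exposes.

import statistics

def latency_summary(samples_ms):
    # Box-plot style parametric summary plus tail percentiles of a latency distribution.
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")  # 99 cut points
    return {
        "min": min(samples_ms),
        "p25": qs[24],
        "median": qs[49],
        "p75": qs[74],
        "p99": qs[98],                        # the PDF tail that the mean hides
        "max": max(samples_ms),
        "mean": statistics.fmean(samples_ms),
    }

# Made-up samples: mostly ~5 ms with a bloated tail.
samples = [5.0] * 90 + [40.0, 80.0, 120.0] * 3 + [5.5]
print(latency_summary(samples))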
Also, we're moving to socket write() to read() latencies for our end/end
measurements (using the iperf 2.0.14 --trip-times option, which assumes
synchronized clocks). We now measure TCP connect (3WHS) times as well.
Finally, since we have the trip times and the application write rates we
can compute the amount of "end/end bytes in queue" per Little's law.
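The Little's law arithmetic itself is trivial; a sketch with made-up numbers
(just the bookkeeping, not iperf code): average bytes queued end to end =
application write rate x measured write-to-read trip time.

def bytes_in_queue(write_rate_bps, trip_time_s):
    # Little's law: average "end/end bytes in queue" = arrival rate * time in system.
    # write_rate_bps: application write rate in bits per second
    # trip_time_s:    measured socket write()-to-read() latency in seconds
    return (write_rate_bps / 8.0) * trip_time_s

# Example with made-up numbers: a 50 Mbit/s writer seeing a 40 ms trip time
print(bytes_in_queue(50e6, 0.040))  # -> 250000.0 bytes (~244 KiB) queued end to end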
For fault isolation, in-band network telemetry (or something similar) can
be useful. https://p4.org/assets/INT-current-spec.pdf
Bob
On Mon, May 4, 2020 at 10:05 AM Sergey Fedorov via Make-wifi-fast <
make-wifi-fast@lists.bufferbloat.net> wrote:
>
>
>
> ---------- Forwarded message ----------
> From: Sergey Fedorov <sfedorov@netflix.com>
> To: "David P. Reed" <dpreed@deepplum.com>
> Cc: Sebastian Moeller <moeller0@gmx.de>, "Dave Täht" <dave.taht@gmail.com>,
> Michael Richardson <mcr@sandelman.ca>, Make-Wifi-fast <
> make-wifi-fast@lists.bufferbloat.net>, Jannie Hanekom <jannie@hanekom.net>,
> Cake List <cake@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net
> >
> Bcc:
> Date: Mon, 4 May 2020 10:04:19 -0700
> Subject: Re: [Cake] [Make-wifi-fast] [Bloat] dslreports is no longer free
>
>> Sergey - I wasn't assuming anything about fast.com. The document you
>> shared wasn't clear about the methodology's details here. Others, sadly,
>> have actually used ICMP pings in the way I described. I was making a
>> generic comment of concern.
>>
>> That said, it sounds like what you are doing is really helpful (esp.
>> given that your measure is aimed at end user experiential qualities).
>
> David - my apologies, I incorrectly interpreted your statement as being
> said in context of fast.com measurements. The blog post linked indeed
> doesn't provide the latency measurement details - it was written before we
> added the extra metrics. We'll see if we can publish an update.
>
> 1) a clear definition of lag under load that is from end-to-end in
>> latency, and involves, ideally, independent traffic from multiple sources
>> through the bottleneck.
>
> Curious if by multiple sources you mean multiple clients (devices) or
> multiple connections sending data?
>
>
> SERGEY FEDOROV
>
> Director of Engineering
>
> sfedorov@netflix.com
>
> 121 Albright Way | Los Gatos, CA 95032
>
>
>
>
> On Sun, May 3, 2020 at 8:07 AM David P. Reed <dpreed@deepplum.com> wrote:
>
>> Thanks Sebastian. I do agree that in many cases, reflecting the ICMP off
>> the entry device that has the external IP address for the NAT gets most of
>> the RTT measure, and if there's no queueing built up in the NAT device,
>> that's a reasonable measure. But...
>>
>>
>>
>> However, if the router has "taken up the queueing delay" by rate limiting
>> its uplink traffic to slightly less than the capacity (as with Cake and
>> other TC shaping that isn't as good as cake), then there is a queue in the
>> TC layer itself. This is what concerns me as a distortion in the
>> measurement that can fool one into thinking the TC shaper is doing a good
>> job, when in fact, lag under load may be quite high from inside the routed
>> domain (the home).
>>
>>
>>
>> As you point out this unmeasured queueing delay can also be a problem
>> with WiFi inside the home. But it isn't limited to that.
>>
>>
>>
>> A badly set up shaping/congestion management subsystem inside the NAT can
>> look "very good" in its echo of ICMP packets, but be terrible in response
>> time to trivial HTTP requests from inside, or equally terrible in twitch
>> games and video conferencing.
>>
>>
>>
>> So, for example, for tuning settings with "Cake" it is useless.
>>
>>
>>
>> To be fair, usually the Access Provider has no control of what is done
>> after the cable is terminated at the home, so as a way to decide if the
>> provider is badly engineering its side, a ping from a server is a
>> reasonable quality measure of the provider.
>>
>>
>>
>> But not a good measure of the user experience, and if the provider
>> provides the NAT box, even if it has a good shaper in it, like Cake or
>> fq_codel, it will just confuse the user and create the opportunity for a
>> "finger pointing" argument where neither side understands what is going on.
>>
>>
>>
>> This is why we need
>>
>>
>>
>> 1) a clear definition of lag under load that is from end-to-end in
>> latency, and involves, ideally, independent traffic from multiple sources
>> through the bottleneck.
>>
>>
>>
>> 2) ideally, a better way to localize where the queues are building up and
>> present that to users and access providers. The flent graphs are not
>> interpretable by most non-experts. What we need is a simple visualization
>> of a sketch-map of the path (like traceroute might provide) with queueing
>> delay measures shown at key points that the user can understand.
>>
>> On Saturday, May 2, 2020 4:19pm, "Sebastian Moeller" <moeller0@gmx.de>
>> said:
>>
>> > Hi David,
>> >
>> > in principle I agree, a NATed IPv4 ICMP probe will be at best reflected
>> at the NAT
>> > router (CPE) (some commercial home gateways do not respond to ICMP echo
>> requests
>> > in the name of security theatre). So it is pretty hard to measure the
>> full end to
>> > end path in that configuration. I believe that IPv6 should make that
>> > easier/simpler in that NAT hopefully will be out of the path (but let's
>> see what
>> > ingenuity ISPs will come up with).
>> > Then again, traditionally the relevant bottlenecks often are a) the
>> internet
>> > access link itself and there the CPE is in a reasonable position as a
>> reflector on
>> > the other side of the bottleneck as seen from an internet server, b)
>> the home
>> > network between CPE and end-host, often with variable rate wifi, here I
>> agree
>> > reflecting echos at the CPE hides part of the issue.
>> >
>> >
>> >
>> > > On May 2, 2020, at 19:38, David P. Reed <dpreed@deepplum.com> wrote:
>> > >
>> > > I am still a bit worried about properly defining "latency under load"
>> for a
>> > NAT routed situation. If the test is based on ICMP Ping packets *from
>> the server*,
>> > it will NOT be measuring the full path latency, and if the potential
>> congestion
>> > is in the uplink path from the access provider's residential box to the
>> access
>> > provider's router/switch, it will NOT measure congestion caused by
>> bufferbloat
>> > reliably on either side, since the bufferbloat will be outside the ICMP
>> Ping
>> > path.
>> >
>> > Puzzled, as I believe it is going to be the residential box that will
>> respond
>> > here, or will it be the AFTRs for CG-NAT that reflect the ICMP echo
>> requests?
>> >
>> > >
>> > > I realize that a browser based speed test has to be basically run
>> from the
>> > "server" end, because browsers are not that good at time measurement on
>> a packet
>> > basis. However, there are ways to solve this and avoid the ICMP Ping
>> issue, with a
>> > cooperative server.
>> > >
>> > > I once built a test that fixed this issue reasonably well. It
>> carefully
>> > created a TCP based RTT measurement channel (over HTTP) that made the
>> echo have to
>> > traverse the whole end-to-end path, which is the best and only way to
>> accurately
>> > define lag under load from the user's perspective. The client end of an
>> unloaded
>> > TCP connection can depend on TCP (properly prepared by getting it past
>> slowstart)
>> > to generate a single packet response.
>> > >
>> > > This "TCP ping" is thus compatible with getting the end-to-end
>> measurement on
>> > the server end of a true RTT.
>> > >
>> > > It's like tcp-traceroute tool, in that it tricks anyone in the middle
>> boxes
>> > into thinking this is a real, serious packet, not an optional low
>> priority
>> > packet.
>> > >
>> > > The same issue comes up with non-browser-based techniques for
>> measuring true
>> > lag-under-load.
>> > >
>> > > Now as we move HTTP to QUIC, this actually gets easier to do.
>> > >
>> > > One other opportunity I haven't explored, but which is pregnant with
>> > potential is the use of WebRTC, which runs over UDP internally. Since
>> JavaScript
>> > has direct access to create WebRTC connections (multiple ones), this
>> makes
>> > detailed testing in the browser quite reasonable.
>> > >
>> > > And the time measurements can resolve well below 100 microseconds, if
>> the JS
>> > is based on modern JIT compilation (Chrome, Firefox, Edge all compile
>> to machine
>> > code speed if the code is restricted and in a loop). Then again, there
>> is Web
>> > Assembly if you want to write C code that runs in the browser fast.
>> WebAssembly is
>> > a low level language that compiles to machine code in the browser
>> execution, and
>> > still has access to all the browser networking facilities.
>> >
>> > Mmmh, according to https://github.com/w3c/hr-time/issues/56 due to
>> spectre
>> > side-channel vulnerabilities many browsers seemed to have lowered the
>> timer
>> > resolution, but even the ~1ms resolution should be fine for typical
>> RTTs.
>> >
>> > Best Regards
>> > Sebastian
>> >
>> > P.S.: I assume that I simply do not see/understand the full scope of
>> the issue at
>> > hand yet.
>> >
>> >
>> > >
>> > > On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com>
>> > said:
>> > >
>> > > > On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com>
>> > wrote:
>> > > > >
>> > > > > > Fast.com reports my unloaded latency as 4ms, my loaded latency
>> > as ~7ms
>> > > >
>> > > > I guess one of my questions is that with a switch to BBR netflix is
>> > > > going to do pretty well. If fast.com is using bbr, well... that
>> > > > excludes much of the current side of the internet.
>> > > >
>> > > > > For download, I show 6ms unloaded and 6-7 loaded. But for upload
>> > the loaded
>> > > > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer
>> using
>> > any
>> > > > traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
>> > bloat would
>> > > > be nice.
>> > > >
>> > > > The tests do need to last a fairly long time.
>> > > >
>> > > > > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom
>> > <jannie@hanekom.net>
>> > > > wrote:
>> > > > >>
>> > > > >> Michael Richardson <mcr@sandelman.ca>:
>> > > > >> > Does it find/use my nearest Netflix cache?
>> > > > >>
>> > > > >> Thankfully, it appears so. The DSLReports bloat test was
>> > interesting,
>> > > > but
>> > > > >> the jitter on the ~240ms base latency from South Africa (and
>> > other parts
>> > > > of
>> > > > >> the world) was significant enough that the figures returned
>> > were often
>> > > > >> unreliable and largely unusable - at least in my experience.
>> > > > >>
>> > > > >> Fast.com reports my unloaded latency as 4ms, my loaded latency
>> > as ~7ms
>> > > > and
>> > > > >> mentions servers located in local cities. I finally have a test
>> > I can
>> > > > share
>> > > > >> with local non-technical people!
>> > > > >>
>> > > > >> (Agreed, upload test would be nice, but this is a huge step
>> > forward from
>> > > > >> what I had access to before.)
>> > > > >>
>> > > > >> Jannie Hanekom
>> > > > >>
>> > > > >> _______________________________________________
>> > > > >> Cake mailing list
>> > > > >> Cake@lists.bufferbloat.net
>> > > > >> https://lists.bufferbloat.net/listinfo/cake
>> > > > >
>> > > > > _______________________________________________
>> > > > > Cake mailing list
>> > > > > Cake@lists.bufferbloat.net
>> > > > > https://lists.bufferbloat.net/listinfo/cake
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Make Music, Not War
>> > > >
>> > > > Dave Täht
>> > > > CTO, TekLibre, LLC
>> > > > http://www.teklibre.com
>> > > > Tel: 1-831-435-0729
>> > > > _______________________________________________
>> > > > Cake mailing list
>> > > > Cake@lists.bufferbloat.net
>> > > > https://lists.bufferbloat.net/listinfo/cake
>> > > >
>> > > _______________________________________________
>> > > Cake mailing list
>> > > Cake@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/cake
>> >
>> >
>>
>>
>>
>
>
>
> ---------- Forwarded message ----------
> From: Sergey Fedorov via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net>
> To: "David P. Reed" <dpreed@deepplum.com>
> Cc: Michael Richardson <mcr@sandelman.ca>, Make-Wifi-fast <
> make-wifi-fast@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>,
> Cake List <cake@lists.bufferbloat.net>, Jannie Hanekom <jannie@hanekom.net
> >
> Bcc:
> Date: Mon, 04 May 2020 10:05:04 -0700 (PDT)
> Subject: Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
[-- Attachment #2: Type: text/html, Size: 20487 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
2020-05-03 15:06 [Make-wifi-fast] [Cake] [Bloat] " David P. Reed
@ 2020-05-03 17:22 ` Dave Taht
2020-05-04 17:04 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
` (2 subsequent siblings)
3 siblings, 0 replies; 26+ messages in thread
From: Dave Taht @ 2020-05-03 17:22 UTC (permalink / raw)
To: David P. Reed
Cc: Sebastian Moeller, Michael Richardson, Make-Wifi-fast,
Jannie Hanekom, Cake List, Sergey Fedorov, bloat
Hmm. Can WebRTC set/see the TTL field? DSCP? ECN?
I figure it might be able to on Linux and OS X, but not Windows.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free
@ 2020-05-03 15:06 David P. Reed
2020-05-03 17:22 ` Dave Taht
` (3 more replies)
0 siblings, 4 replies; 26+ messages in thread
From: David P. Reed @ 2020-05-03 15:06 UTC (permalink / raw)
To: Sebastian Moeller
Cc: Dave Täht, Michael Richardson, Make-Wifi-fast,
Jannie Hanekom, Cake List, Sergey Fedorov, bloat
[-- Attachment #1: Type: text/plain, Size: 9033 bytes --]
Thanks Sebastian. I do agree that in many cases, reflecting the ICMP off the entry device that has the external IP address for the NAT gets most of the RTT measure, and if there's no queueing built up in the NAT device, that's a reasonable measure. But...
However, if the router has "taken up the queueing delay" by rate limiting its uplink traffic to slightly less than the capacity (as with Cake and other TC shaping that isn't as good as cake), then there is a queue in the TC layer itself. This is what concerns me as a distortion in the measurement that can fool one into thinking the TC shaper is doing a good job, when in fact, lag under load may be quite high from inside the routed domain (the home).
As you point out this unmeasured queueing delay can also be a problem with WiFi inside the home. But it isn't limited to that.
A badly set up shaping/congestion management subsystem inside the NAT can look "very good" in its echo of ICMP packets, but be terrible in response time to trivial HTTP requests from inside, or equally terrible in twitch games and video conferencing.
So, for example, for tuning settings with "Cake" it is useless.
To be fair, usually the Access Provider has no control of what is done after the cable is terminated at the home, so as a way to decide if the provider is badly engineering its side, a ping from a server is a reasonable quality measure of the provider.
But not a good measure of the user experience, and if the provider provides the NAT box, even if it has a good shaper in it, like Cake or fq_codel, it will just confuse the user and create the opportunity for a "finger pointing" argument where neither side understands what is going on.
This is why we need
1) a clear definition of lag under load that is from end-to-end in latency, and involves, ideally, independent traffic from multiple sources through the bottleneck.
2) ideally, a better way to localize where the queues are building up and present that to users and access providers. The flent graphs are not interpretable by most non-experts. What we need is a simple visualization of a sketch-map of the path (like traceroute might provide) with queueing delay measures shown at key points that the user can understand.
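As a toy illustration of point 2 (the hop names and numbers below are invented, and real per-hop probing is the hard part this glosses over), the presentation could be as simple as a traceroute-style table showing loaded minus idle RTT at each point:

def sketch_map(hops):
    # Print a traceroute-like path with per-hop queueing delay (loaded RTT - idle RTT).
    print(f"{'hop':<16} {'idle RTT':>10} {'loaded RTT':>11} {'queueing':>10}")
    for name, idle_ms, loaded_ms in hops:
        bloat = loaded_ms - idle_ms
        flag = "  <-- queue building here" if bloat > 20 else ""
        print(f"{name:<16} {idle_ms:>8.1f}ms {loaded_ms:>9.1f}ms {bloat:>8.1f}ms{flag}")

# Invented example path: home router, CPE/modem, ISP gateway, first transit hop.
sketch_map([
    ("home router",  1.0,   2.0),
    ("CPE / modem",  3.0,   5.0),
    ("ISP gateway",  8.0, 180.0),   # the fat upstream queue shows up here
    ("transit hop", 12.0, 185.0),
])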
On Saturday, May 2, 2020 4:19pm, "Sebastian Moeller" <moeller0@gmx.de> said:
> Hi David,
>
> in principle I agree, a NATed IPv4 ICMP probe will be at best reflected at the NAT
> router (CPE) (some commercial home gateways do not respond to ICMP echo requests
> in the name of security theatre). So it is pretty hard to measure the full end to
> end path in that configuration. I believe that IPv6 should make that
> easier/simpler in that NAT hopefully will be out of the path (but let's see what
> ingenuity ISPs will come up with).
> Then again, traditionally the relevant bottlenecks often are a) the internet
> access link itself and there the CPE is in a reasonable position as a reflector on
> the other side of the bottleneck as seen from an internet server, b) the home
> network between CPE and end-host, often with variable rate wifi, here I agree
> reflecting echos at the CPE hides part of the issue.
>
>
>
> > On May 2, 2020, at 19:38, David P. Reed <dpreed@deepplum.com> wrote:
> >
> > I am still a bit worried about properly defining "latency under load" for a
> NAT routed situation. If the test is based on ICMP Ping packets *from the server*,
> it will NOT be measuring the full path latency, and if the potential congestion
> is in the uplink path from the access provider's residential box to the access
> provider's router/switch, it will NOT measure congestion caused by bufferbloat
> reliably on either side, since the bufferbloat will be outside the ICMP Ping
> path.
>
> Puzzled, as I believe it is going to be the residential box that will respond
> here, or will it be the AFTRs for CG-NAT that reflect the ICMP echo requests?
>
> >
> > I realize that a browser based speed test has to be basically run from the
> "server" end, because browsers are not that good at time measurement on a packet
> basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a
> cooperative server.
> >
> > I once built a test that fixed this issue reasonably well. It carefully
> created a TCP based RTT measurement channel (over HTTP) that made the echo have to
> traverse the whole end-to-end path, which is the best and only way to accurately
> define lag under load from the user's perspective. The client end of an unloaded
> TCP connection can depend on TCP (properly prepared by getting it past slowstart)
> to generate a single packet response.
> >
> > This "TCP ping" is thus compatible with getting the end-to-end measurement on
> the server end of a true RTT.
> >
> > It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes
> into thinking this is a real, serious packet, not an optional low priority
> packet.
> >
> > The same issue comes up with non-browser-based techniques for measuring true
> lag-under-load.
> >
> > Now as we move HTTP to QUIC, this actually gets easier to do.
> >
> > One other opportunity I haven't explored, but which is pregnant with
> potential is the use of WebRTC, which runs over UDP internally. Since JavaScript
> has direct access to create WebRTC connections (multiple ones), this makes
> detailed testing in the browser quite reasonable.
> >
> > And the time measurements can resolve well below 100 microseconds, if the JS
> is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine
> code speed if the code is restricted and in a loop). Then again, there is Web
> Assembly if you want to write C code that runs in the browser fast. WebAssembly is
> a low level language that compiles to machine code in the browser execution, and
> still has access to all the browser networking facilities.
>
> Mmmh, according to https://github.com/w3c/hr-time/issues/56 due to spectre
> side-channel vulnerabilities many browsers seemed to have lowered the timer
> resolution, but even the ~1ms resolution should be fine for typical RTTs.
>
> Best Regards
> Sebastian
>
> P.S.: I assume that I simply do not see/understand the full scope of the issue at
> hand yet.
>
>
> >
> > On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com>
> said:
> >
> > > On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com>
> wrote:
> > > >
> > > > > Fast.com reports my unloaded latency as 4ms, my loaded latency
> as ~7ms
> > >
> > > I guess one of my questions is that with a switch to BBR netflix is
> > > going to do pretty well. If fast.com is using bbr, well... that
> > > excludes much of the current side of the internet.
> > >
> > > > For download, I show 6ms unloaded and 6-7 loaded. But for upload
> the loaded
> > > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using
> any
> > > traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
> bloat would
> > > be nice.
> > >
> > > The tests do need to last a fairly long time.
> > >
> > > > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom
> <jannie@hanekom.net>
> > > wrote:
> > > >>
> > > >> Michael Richardson <mcr@sandelman.ca>:
> > > >> > Does it find/use my nearest Netflix cache?
> > > >>
> > > >> Thankfully, it appears so. The DSLReports bloat test was
> interesting,
> > > but
> > > >> the jitter on the ~240ms base latency from South Africa (and
> other parts
> > > of
> > > >> the world) was significant enough that the figures returned
> were often
> > > >> unreliable and largely unusable - at least in my experience.
> > > >>
> > > >> Fast.com reports my unloaded latency as 4ms, my loaded latency
> as ~7ms
> > > and
> > > >> mentions servers located in local cities. I finally have a test
> I can
> > > share
> > > >> with local non-technical people!
> > > >>
> > > >> (Agreed, upload test would be nice, but this is a huge step
> forward from
> > > >> what I had access to before.)
> > > >>
> > > >> Jannie Hanekom
> > > >>
> > > >> _______________________________________________
> > > >> Cake mailing list
> > > >> Cake@lists.bufferbloat.net
> > > >> https://lists.bufferbloat.net/listinfo/cake
> > > >
> > > > _______________________________________________
> > > > Cake mailing list
> > > > Cake@lists.bufferbloat.net
> > > > https://lists.bufferbloat.net/listinfo/cake
> > >
> > >
> > >
> > > --
> > > Make Music, Not War
> > >
> > > Dave Täht
> > > CTO, TekLibre, LLC
> > > http://www.teklibre.com
> > > Tel: 1-831-435-0729
> > > _______________________________________________
> > > Cake mailing list
> > > Cake@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/cake
> > >
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
>
>
[-- Attachment #2: Type: text/html, Size: 13609 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread, other threads:[~2020-05-27 9:32 UTC | newest]
Thread overview: 26+ messages
2020-05-01 16:44 [Make-wifi-fast] dslreports is no longer free Dave Taht
2020-05-01 19:48 ` [Make-wifi-fast] [Cake] " Sebastian Moeller
2020-05-01 20:09 ` [Bloat] " Sergey Fedorov
2020-05-01 21:11 ` [Make-wifi-fast] " Sebastian Moeller
2020-05-01 21:37 ` Sergey Fedorov
[not found] ` <mailman.191.1588369068.24343.bloat@lists.bufferbloat.net>
2020-05-01 23:59 ` [Make-wifi-fast] " Michael Richardson
[not found] ` <mailman.170.1588363787.24343.bloat@lists.bufferbloat.net>
2020-05-01 22:07 ` Michael Richardson
2020-05-01 23:35 ` Sergey Fedorov
2020-05-02 1:14 ` [Make-wifi-fast] " Jannie Hanekom
2020-05-02 16:37 ` [Make-wifi-fast] [Cake] [Bloat] " Benjamin Cronce
2020-05-02 16:52 ` Dave Taht
2020-05-02 17:38 ` David P. Reed
2020-05-02 19:00 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
2020-05-02 23:23 ` [Make-wifi-fast] [Cake] " David P. Reed
2020-05-03 15:31 ` [Make-wifi-fast] fast.com quality David P. Reed
2020-05-03 15:37 ` Dave Taht
2020-05-02 20:19 ` [Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free Sebastian Moeller
2020-05-27 9:08 ` [Make-wifi-fast] [Bloat] [Cake] " Matthew Ford
2020-05-27 9:28 ` Toke Høiland-Jørgensen
2020-05-27 9:32 ` Sebastian Moeller
2020-05-03 15:06 [Make-wifi-fast] [Cake] [Bloat] " David P. Reed
2020-05-03 17:22 ` Dave Taht
2020-05-04 17:04 ` [Cake] [Make-wifi-fast] " Sergey Fedorov
2020-05-05 21:02 ` [Make-wifi-fast] [Cake] " David P. Reed
2020-05-06 8:19 ` Sebastian Moeller
[not found] ` <mailman.253.1588611897.24343.make-wifi-fast@lists.bufferbloat.net>
2020-05-05 0:03 ` Bob McMahon
2020-05-06 8:08 ` Sebastian Moeller