[Bloat] fast.com quality
Dave Taht
dave.taht at gmail.com
Sun May 3 11:37:43 EDT 2020
Turn off cake, do it over wired. :) Take a packet capture of before and after. Thx.
On Sun, May 3, 2020 at 8:31 AM David P. Reed <dpreed at deepplum.com> wrote:
>
> Sergey -
>
>
>
> I am very happy to report that fast.com reports the following from my inexpensive Chromebook, over 802.11ac, my Linux-on-Celeron cake entry router setup, through RCN's "Gigabit service". It's a little surprising, only in how good it is.
>
>
>
> 460 Mbps down/17 Mbps up, 11 ms. unloaded, 18 ms. loaded.
>
>
>
> I'm a little bit curious about the extra 7 ms. due to load. I'm wondering if it is in my WiFi path, or whether Cake is building a queue.
>
>
>
> The 11 ms. to South Boston from my Needham home seems a bit high. I used to be about 7 ms. away from that switch. But I'm not complaining.
>
> On Saturday, May 2, 2020 3:00pm, "Sergey Fedorov" <sfedorov at netflix.com> said:
>
> Dave, thanks for sharing interesting thoughts and context.
>>
>> I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
>>
>> I realize that a browser-based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
>
> This erroneously assumes that fast.com measures latency from the server side. It does not. The measurements are done from the client, over HTTP, with parallel connection(s) to the same or a similar set of servers, by sending empty requests over a previously established connection (you can see that in the browser web inspector).
> It should be noted that the value is not precisely the "RTT on a TCP/UDP flow that is loaded with traffic", but "user delay given the presence of heavy parallel flows". With that, some of the challenges you mentioned do not apply.
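>
> In rough terms, the probe can be sketched in a few lines of browser TypeScript (a sketch of the approach only, not our actual client code; the /ping URL below stands in for a tiny endpoint that returns an empty body):
>
>   // Time a small request on a connection that is already established,
>   // once while the link is idle (unloaded) and again while the parallel
>   // download flows are running (loaded).
>   async function httpRtt(url: string): Promise<number> {
>     const t0 = performance.now();
>     await fetch(url, { cache: "no-store" });   // empty response body
>     return performance.now() - t0;
>   }
>
>   async function medianRtt(url: string, samples = 10): Promise<number> {
>     await httpRtt(url);                        // warm up the connection first
>     const rtts: number[] = [];
>     for (let i = 0; i < samples; i++) rtts.push(await httpRtt(url));
>     rtts.sort((a, b) => a - b);
>     return rtts[Math.floor(rtts.length / 2)];  // median of the samples
>   }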
> In line with another point I've shared earlier - the goal is to measure and explain the user experience, not to be a diagnostic tool showing internal transport metrics.
>
> SERGEY FEDOROV
>
> Director of Engineering
>
> sfedorov at netflix.com
>
> 121 Albright Way | Los Gatos, CA 95032
>
>
> On Sat, May 2, 2020 at 10:38 AM David P. Reed <dpreed at deepplum.com> wrote:
>>
>> I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
>>
>>
>>
>> I realize that a browser-based speed test has to be basically run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.
>>
>>
>>
>> I once built a test that fixed this issue reasonably well. It carefully created a TCP-based RTT measurement channel (over HTTP) that made the echo have to traverse the whole end-to-end path, which is the best and only way to accurately define lag under load from the user's perspective. The client end of an unloaded TCP connection can depend on TCP (properly prepared by getting it past slow start) to generate a single-packet response.
>>
>>
>>
>> This "TCP ping" is thus compatible with getting a true end-to-end RTT measurement at the server end.
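>>
>> A minimal sketch of the same idea, using a raw TCP echo server rather than HTTP for brevity (the server is assumed to write back every byte it receives; the host, port, and warm-up size are placeholders, not from any real deployment):
>>
>>   import * as net from "node:net";
>>
>>   function tcpPing(host: string, port: number): Promise<number> {
>>     return new Promise<number>((resolve, reject) => {
>>       const warmup = 64 * 1024;           // push the connection past slow start
>>       let echoed = 0;
>>       let t0 = 0n;
>>       const sock = net.connect(port, host, () => sock.write(Buffer.alloc(warmup)));
>>       sock.on("data", (chunk) => {
>>         echoed += chunk.length;
>>         if (t0 === 0n && echoed >= warmup) {
>>           t0 = process.hrtime.bigint();   // warm-up fully echoed: send the 1-byte ping
>>           sock.write("x");
>>         } else if (t0 !== 0n && echoed > warmup) {
>>           resolve(Number(process.hrtime.bigint() - t0) / 1e6); // RTT in ms
>>           sock.end();
>>         }
>>       });
>>       sock.on("error", reject);
>>     });
>>   }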
>>
>>
>>
>> It's like the tcptraceroute tool, in that it tricks any middleboxes into thinking this is a real, serious packet, not an optional low-priority packet.
>>
>>
>>
>> The same issue comes up with non-browser-based techniques for measuring true lag-under-load.
>>
>>
>>
>> Now as we move HTTP to QUIC, this actually gets easier to do.
>>
>>
>>
>> One other opportunity I haven't explored, but which is pregnant with potential is the use of WebRTC, which runs over UDP internally. Since JavaScript has direct access to create WebRTC connections (multiple ones), this makes detailed testing in the browser quite reasonable.
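>>
>> As a rough sketch of what that could look like (connectToEchoServer is a hypothetical helper standing in for the offer/answer and ICE exchange with a cooperative echo server; it is not a real API):
>>
>>   // Hypothetical signaling helper; performs the offer/answer + ICE exchange.
>>   declare function connectToEchoServer(pc: RTCPeerConnection): Promise<void>;
>>
>>   async function webrtcRtt(): Promise<number> {
>>     const pc = new RTCPeerConnection();
>>     // Unordered, zero-retransmit channel: the closest thing to raw UDP a browser offers.
>>     const dc = pc.createDataChannel("probe", { ordered: false, maxRetransmits: 0 });
>>     const rtt = new Promise<number>((resolve) => {
>>       dc.onopen = () => {
>>         const t0 = performance.now();
>>         dc.onmessage = () => resolve(performance.now() - t0);
>>         dc.send("ping");                  // the far end is assumed to echo it back
>>       };
>>     });
>>     await connectToEchoServer(pc);
>>     return rtt;
>>   }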
>>
>>
>>
>> And the time measurements can resolve well below 100 microseconds, if the JS is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine-code speed if the code is restricted and in a loop). Then again, there is WebAssembly if you want to write C code that runs fast in the browser. WebAssembly is a low-level language that compiles to machine code in the browser's execution environment, and still has access to all the browser networking facilities.
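>>
>> One caveat worth checking in practice: browsers may deliberately coarsen performance.now(), so a quick calibration loop tells you what granularity the script actually gets (sketch only):
>>
>>   function timerResolutionMs(samples = 1000): number {
>>     let min = Infinity;
>>     let prev = performance.now();
>>     for (let i = 0; i < samples; i++) {
>>       let t = performance.now();
>>       while (t === prev) t = performance.now(); // spin until the clock ticks
>>       min = Math.min(min, t - prev);
>>       prev = t;
>>     }
>>     return min;                                 // smallest observable tick, in ms
>>   }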
>>
>>
>>
>> On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht at gmail.com> said:
>>
>> > On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce at gmail.com> wrote:
>> > >
>> > > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>> >
>> > I guess one of my questions is that with a switch to BBR, Netflix is
>> > going to do pretty well. If fast.com is using BBR, well... that
>> > excludes much of the current side of the internet.
>> >
>> > > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded
>> > > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any
>> > > traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would
>> > > be nice.
>> >
>> > The tests do need to last a fairly long time.
>> >
>> > > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie at hanekom.net> wrote:
>> > >>
>> > >> Michael Richardson <mcr at sandelman.ca>:
>> > >> > Does it find/use my nearest Netflix cache?
>> > >>
>> > >> Thankfully, it appears so. The DSLReports bloat test was interesting, but
>> > >> the jitter on the ~240ms base latency from South Africa (and other parts of
>> > >> the world) was significant enough that the figures returned were often
>> > >> unreliable and largely unusable - at least in my experience.
>> > >>
>> > >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms and
>> > >> mentions servers located in local cities. I finally have a test I can share
>> > >> with local non-technical people!
>> > >>
>> > >> (Agreed, upload test would be nice, but this is a huge step forward from
>> > >> what I had access to before.)
>> > >>
>> > >> Jannie Hanekom
>> > >>
>> >
>> >
>> >
>> > --
>> > Make Music, Not War
>> >
>> > Dave Täht
>> > CTO, TekLibre, LLC
>> > http://www.teklibre.com
>> > Tel: 1-831-435-0729
>> >
--
Make Music, Not War
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729