Date: Sun, 3 May 2020 11:31:52 -0400 (EDT)
From: "David P. Reed" <dpreed@deepplum.com>
To: "Sergey Fedorov" <sfedorov@netflix.com>
Cc: "Dave Taht" <dave.taht@gmail.com>, "Benjamin Cronce" <bcronce@gmail.com>, "Michael Richardson" <mcr@sandelman.ca>, "Jannie Hanekom" <jannie@hanekom.net>, bloat <bloat@lists.bufferbloat.net>, Cake List <cake@lists.bufferbloat.net>, Make-Wifi-fast
Subject: [Bloat] fast.com quality
Message-ID: <1588519912.070420298@apps.rackspace.com>

Sergey -

I am very happy to report that fast.com reports the following from my inexpensive Chromebook, over 802.11ac, through my Linux-on-Celeron Cake entry router setup, on RCN's "Gigabit service". It's a little surprising, only in how good it is.

460 Mbps down / 17 Mbps up, 11 ms unloaded, 18 ms loaded.

I'm a little curious about the extra 7 ms due to load. I'm wondering whether it is in my WiFi path, or whether Cake is building a queue.

The 11 ms to South Boston from my Needham home seems a bit high. I used to be about 7 ms away from that switch. But I'm not complaining.

On Saturday, May 2, 2020 3:00pm, "Sergey Fedorov" <sfedorov@netflix.com> said:

Dave, thanks for sharing interesting thoughts and context.

> I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.
>
> I realize that a browser-based speed test basically has to be run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.

This erroneously assumes that fast.com measures latency from the server side. It does not. The measurements are done from the client, over HTTP, with parallel connection(s) to the same or similar set of servers, by sending empty requests over a previously established connection (you can see that in the browser web inspector).

It should be noted that the value is not precisely the "RTT on a TCP/UDP flow that is loaded with traffic", but rather "user delay given the presence of heavy parallel flows". With that, some of the challenges you mentioned do not apply.

In line with another point I've shared earlier: the goal is to measure and explain the user experience, not to be a diagnostic tool showing internal transport metrics.

SERGEY FEDOROV
Director of Engineering
sfedorov@netflix.com
121 Albright Way | Los Gatos, CA 95032
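For concreteness, a rough browser-side sketch of the client-timed measurement Sergey describes might look like the following; the /ping and bulk-download URLs are placeholders, not fast.com's actual endpoints, and a real test would pick servers, parallelism, and durations much more carefully.

// Rough sketch only: measure "user delay under load" from the client, in the
// spirit of what Sergey describes. The URLs below are placeholders, not
// fast.com's real endpoints.

const PROBE_URL = "https://testserver.example/ping";      // tiny/empty response
const BULK_URL = "https://testserver.example/bulk-25MB";  // large object, creates load

// Time one small HTTP request; after the first request the browser reuses the
// warm connection, so this approximates request/response delay, not setup cost.
async function probeOnce(): Promise<number> {
  const t0 = performance.now();
  await fetch(`${PROBE_URL}?r=${Math.random()}`, { cache: "no-store" });
  return performance.now() - t0;
}

function median(samples: number[]): number {
  const s = [...samples].sort((a, b) => a - b);
  return s[Math.floor(s.length / 2)];
}

async function run(): Promise<void> {
  await probeOnce(); // warm up the connection

  // Unloaded delay: probes with no competing traffic.
  const idle: number[] = [];
  for (let i = 0; i < 10; i++) idle.push(await probeOnce());

  // Start parallel bulk downloads to load the downlink...
  const load = Array.from({ length: 4 }, () =>
    fetch(`${BULK_URL}?r=${Math.random()}`, { cache: "no-store" })
      .then(r => r.arrayBuffer()));

  // ...and keep probing while they run.
  const loaded: number[] = [];
  for (let i = 0; i < 20; i++) loaded.push(await probeOnce());

  await Promise.all(load);
  console.log(`unloaded ~${median(idle).toFixed(1)} ms, loaded ~${median(loaded).toFixed(1)} ms`);
}

run();

The essential point is that every timestamp is taken in the client with performance.now(), over already-established HTTP connections, while the parallel bulk transfers supply the load.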
On Sat, May 2, 2020 at 10:38 AM David P. Reed <dpreed@deepplum.com> wrote:

I am still a bit worried about properly defining "latency under load" for a NAT routed situation. If the test is based on ICMP Ping packets *from the server*, it will NOT be measuring the full path latency, and if the potential congestion is in the uplink path from the access provider's residential box to the access provider's router/switch, it will NOT measure congestion caused by bufferbloat reliably on either side, since the bufferbloat will be outside the ICMP Ping path.

I realize that a browser-based speed test basically has to be run from the "server" end, because browsers are not that good at time measurement on a packet basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a cooperative server.

I once built a test that fixed this issue reasonably well. It carefully created a TCP-based RTT measurement channel (over HTTP) that made the echo traverse the whole end-to-end path, which is the best and only way to accurately define lag under load from the user's perspective. The client end of an unloaded TCP connection can be relied on (once the connection is properly prepared by getting it past slow start) to generate a single-packet response.

This "TCP ping" is thus compatible with getting a true end-to-end RTT measured at the server end.

It's like the tcp-traceroute tool, in that it tricks any middleboxes into thinking this is a real, serious packet, not an optional low-priority packet.
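Roughly, the shape of that measurement - not the original tool, just the timing idea - could be sketched at the server end like this in Node; the port and one-byte probe are arbitrary, the cooperative client simply echoes each byte back, and the HTTP framing plus the step of pushing the connection past slow start are left out:

// Rough sketch of the "TCP ping" idea with a cooperative server (Node.js).
// Not the original tool: it only shows the server-end timing of a single small
// echo on an established TCP connection. The port and 1-byte probe are
// arbitrary; the HTTP framing, and priming the connection past slow start,
// are omitted.

import * as net from "net";

const PORT = 8099; // arbitrary port for the sketch

const server = net.createServer((sock: net.Socket) => {
  sock.setNoDelay(true); // don't let Nagle hold the probe back

  let sentAt: bigint | null = null;

  // The cooperative client echoes every byte straight back.
  sock.on("data", () => {
    if (sentAt !== null) {
      const rttMs = Number(process.hrtime.bigint() - sentAt) / 1e6;
      console.log(`end-to-end RTT ${rttMs.toFixed(2)} ms`);
    }
    scheduleProbe();
  });

  function scheduleProbe(): void {
    setTimeout(() => {
      sentAt = process.hrtime.bigint();
      sock.write("!"); // one small, genuine data segment on the live connection
    }, 200);           // a few probes per second
  }

  scheduleProbe();
});

server.listen(PORT, () => console.log(`TCP-ping echo timer listening on :${PORT}`));

Because the probe is ordinary payload on a live connection, middleboxes handle it like any other data segment rather than like ICMP.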
The same issue comes up with non-browser-based techniques for measuring true lag-under-load.

Now, as we move HTTP to QUIC, this actually gets easier to do.

One other opportunity I haven't explored, but which is pregnant with potential, is the use of WebRTC, which runs over UDP internally. Since JavaScript has direct access to create WebRTC connections (multiple ones), this makes detailed testing in the browser quite reasonable.

And the time measurements can resolve well below 100 microseconds, if the JS is compiled by a modern JIT (Chrome, Firefox, and Edge all compile restricted, hot loop code down to machine speed). Then again, there is WebAssembly if you want to write C code that runs fast in the browser. WebAssembly is a low-level language that the browser compiles to machine code, and it still has access to all the browser networking facilities.
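As a sketch of how little JavaScript that takes, the following times probes over a WebRTC data channel; both peers live in the same page here, so it only measures the local stack, and in a real test the answering peer would be the measurement server, with offer/answer signaling over HTTP or a WebSocket:

// Sketch: timing probes over a WebRTC data channel from JavaScript.
// For brevity both peers live in the same page (loopback), so the measured RTT
// is only the local stack; in a real test the answering peer would be the
// measurement server, with offer/answer signaling over HTTP or a WebSocket.

async function dataChannelPing(): Promise<void> {
  const a = new RTCPeerConnection();
  const b = new RTCPeerConnection();

  // Hand ICE candidates across directly instead of via a signaling server.
  a.onicecandidate = e => { if (e.candidate) b.addIceCandidate(e.candidate).catch(() => {}); };
  b.onicecandidate = e => { if (e.candidate) a.addIceCandidate(e.candidate).catch(() => {}); };

  // Unordered, no retransmits: as close to raw UDP timing as the browser allows.
  const channel = a.createDataChannel("probe", { ordered: false, maxRetransmits: 0 });

  // The far end just echoes whatever it receives.
  b.ondatachannel = e => { e.channel.onmessage = m => e.channel.send(m.data); };

  await a.setLocalDescription(await a.createOffer());
  await b.setRemoteDescription(a.localDescription!);
  await b.setLocalDescription(await b.createAnswer());
  await a.setRemoteDescription(b.localDescription!);

  channel.onopen = () => {
    channel.onmessage = m => {
      const rtt = performance.now() - Number(m.data);
      console.log(`data-channel RTT ${rtt.toFixed(3)} ms`);
    };
    // A timestamped probe a few times per second.
    setInterval(() => channel.send(String(performance.now())), 250);
  };
}

dataChannelPing();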
On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:

> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce@gmail.com> wrote:
> >
> > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>
> I guess one of my questions is that with a switch to BBR netflix is going to do pretty well. If fast.com is using bbr, well... that excludes much of the current side of the internet.
>
> > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using any traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloat would be nice.
>
> The tests do need to last a fairly long time.
>
> > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <jannie@hanekom.net> wrote:
> >>
> >> Michael Richardson <mcr@sandelman.ca>:
> >> > Does it find/use my nearest Netflix cache?
> >>
> >> Thankfully, it appears so. The DSLReports bloat test was interesting, but the jitter on the ~240ms base latency from South Africa (and other parts of the world) was significant enough that the figures returned were often unreliable and largely unusable - at least in my experience.
> >>
> >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms and mentions servers located in local cities. I finally have a test I can share with local non-technical people!
> >>
> >> (Agreed, upload test would be nice, but this is a huge step forward from what I had access to before.)
> >>
> >> Jannie Hanekom
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729

Sergey -

=0A

 

=0A

I am very happy to report t= hat fast.com reports the following from my inexpensive Chromebook, over 802= .11ac, my Linux-on-Celeron cake entry router setup, through RCN's "Gigabit = service". It's a little surprising, only in how good it is.

=0A

 

=0A

460 Mbps down/17 Mbps up, 11= ms. unloaded, 18 ms. loaded.

=0A

 

=0A

I'm a little bit curious about the extra 7 ms. due to load.= I'm wondering if it is in my WiFi path, or whether Cake is building a queu= e.

=0A

 

=0A

The 11 ms. = to South Boston from my Needham home seems a bit high. I used to be about 7= msec. away from that switch. But I'm not complaiing.

=0A

On Saturday, May 2, 2020 3:00pm, "Sergey Fedorov" <sfedorov@netfli= x.com> said:

=0A
=0A
=0A
=0A
Dave, thanks for sharing interesting thoughts a= nd context. 
=0A
I = am still a bit worried about properly defining "latency under load" for a N= AT routed situation. If the test is based on ICMP Ping packets *from the se= rver*,  it will NOT be measuring the full path latency, and if the pot= ential congestion is in the uplink path from the access provider's resident= ial box to the access provider's router/switch, it will NOT measure congest= ion caused by bufferbloat reliably on either side, since the bufferbloat wi= ll be outside the ICMP Ping path.
 
I realize that a browser= based speed test has to be basically run from the "server" end, because br= owsers are not that good at time measurement on a packet basis. However, th= ere are ways to solve this and avoid the ICMP Ping issue, with a cooperativ= e server.
=0A
=0A
This erroneously assumes that fast.com measures latency from the server side. I= t does not. The measurements are done from the client, over http, with a pa= rallel connection(s) to the same or similar set of servers, by sending= empty requests over a previously established connection (you can see = that in the browser web inspector).
=0A
It should be noted that th= e value is not precisely the "RTT on a TCP/UDP flow that is loaded wit= h traffic", but "user delay given the presence of heavy parallel flows= ". With that, some of the challenges you mentioned do not apply.
= =0A
In line with another point I've shared earlier - the goal is t= o measure and explain the user experience, not to be a diagnostic tool show= ing internal transport metrics.
=0A
=0A
=0A
=0A
=0A
=0A
=0A

SERGEY FEDOROV

=0A

Director of Engineering

=0A

= sfedorov@netflix.com

=0A

3D""

=0A
=0A
=0A
=0A
=0A
=0A
= =0A
=0A
On S= at, May 2, 2020 at 10:38 AM David P. Reed <dpreed@deepplum.com> wrote:
=0A
=0A

I am still a bit worried a= bout properly defining "latency under load" for a NAT routed situation. If = the test is based on ICMP Ping packets *from the server*,  it will NOT= be measuring the full path latency, and if the potential congestion is in = the uplink path from the access provider's residential box to the access pr= ovider's router/switch, it will NOT measure congestion caused by bufferbloa= t reliably on either side, since the bufferbloat will be outside the ICMP P= ing path.

=0A

 

=0A

I re= alize that a browser based speed test has to be basically run from the "ser= ver" end, because browsers are not that good at time measurement on a packe= t basis. However, there are ways to solve this and avoid the ICMP Ping issu= e, with a cooperative server.

=0A

 

=0A

I once built a test that fixed this issue reasonably well. = It carefully created a TCP based RTT measurement channel (over HTTP) that m= ade the echo have to traverse the whole end-to-end path, which is the best = and only way to accurately define lag under load from the user's perspectiv= e. The client end of an unloaded TCP connection can depend on TCP (properly= prepared by getting it past slowstart) to generate a single packet respons= e.

=0A

 

=0A

This "TCP p= ing" is thus compatible with getting the end-to-end measurement on the serv= er end of a true RTT.

=0A

 

=0A

It's like tcp-traceroute tool, in that it tricks anyone in the midd= le boxes into thinking this is a real, serious packet, not an optional low = priority packet.

=0A

 

=0A

The same issue comes up with non-browser-based techniques for measuring = true lag-under-load.

=0A

 

=0A

Now as we move HTTP to QUIC, this actually gets easier to do.

=0A=

 

=0A

One other opportunit= y I haven't explored, but which is pregnant with potential is the use of We= bRTC, which runs over UDP internally. Since JavaScript has direct access to= create WebRTC connections (multiple ones), this makes detailed testing in = the browser quite reasonable.

=0A

 

=0A

And the time measurements can resolve well below 100 micros= econds, if the JS is based on modern JIT compilation (Chrome, Firefox, Edge= all compile to machine code speed if the code is restricted and in a loop)= . Then again, there is Web Assembly if you want to write C code that runs i= n the brower fast. WebAssembly is a low level language that compiles to mac= hine code in the browser execution, and still has access to all the browser= networking facilities.

=0A

 

=0A

On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht@gmail.com> said:=

=0A
=0A

> On Sat, May 2, 2020 at 9:37 AM Benjamin C= ronce <bcronce@gm= ail.com> wrote:
> >
> > > Fast.com reports = my unloaded latency as 4ms, my loaded latency as ~7ms
>
> = I guess one of my questions is that with a switch to BBR netflix is
&g= t; going to do pretty well. If fast.com is using bbr, well... that
> excludes much of the cu= rrent side of the internet.
>
> > For download, I show = 6ms unloaded and 6-7 loaded. But for upload the loaded
> shows as 7= -8 and I see it blip upwards of 12ms. But I am no longer using any
>= ; traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the bloa= t would
> be nice.
>
> The tests do need to last a= fairly long time.
>
> > On Sat, May 2, 2020 at 9:51 AM= Jannie Hanekom <jannie@hanekom.net>
> wrote:
> >>
> &= gt;> Michael Richardson <mcr@sandelman.ca>:
> >> > Does it find/= use my nearest Netflix cache?
> >>
> >> Thankfu= lly, it appears so. The DSLReports bloat test was interesting,
> bu= t
> >> the jitter on the ~240ms base latency from South Afric= a (and other parts
> of
> >> the world) was significa= nt enough that the figures returned were often
> >> unreliabl= e and largely unusable - at least in my experience.
> >>
> >> Fast.com reports my unloaded latency as 4ms, my loaded laten= cy as ~7ms
> and
> >> mentions servers located in loc= al cities. I finally have a test I can
> share
> >> w= ith local non-technical people!
> >>
> >> (Agre= ed, upload test would be nice, but this is a huge step forward from
&g= t; >> what I had access to before.)
> >>
> >= > Jannie Hanekom
> >>
> >> _________________= ______________________________
> >> Cake mailing list
&g= t; >> Cake@lists.bufferbloat.net
> >> https://lists.bufferbloat= .net/listinfo/cake
> >
> > ______________________= _________________________
> > Cake mailing list
> > <= a href=3D"mailto:Cake@lists.bufferbloat.net" target=3D"_blank">Cake@lists.b= ufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cak= e
>
>
>
> --
> Make Music, N= ot War
>
> Dave T=C3=A4ht
> CTO, TekLibre, LLC
> http://www.tek= libre.com
> Tel: 1-831-435-0729
> _____________________= __________________________
> Cake mailing list
> Cake@lists.bufferbloa= t.net
> https://lists.bufferbloat.net/listinfo/cake
>= ;

=0A
=0A
=0A
=0A
------=_20200503113152000000_85510--