[Make-wifi-fast] [Cake] [Bloat] dslreports is no longer free

Sebastian Moeller moeller0 at gmx.de
Wed May 6 04:08:53 EDT 2020


Dear David,

Thanks for the elaboration below; indeed, I had not appreciated the full scope of the challenge.

> On May 3, 2020, at 17:06, David P. Reed <dpreed at deepplum.com> wrote:
> 
> Thanks Sebastian. I do agree that in many cases, reflecting the ICMP off the entry device that has the external IP address for the NAT gets most of the RTT measure, and if there's no queueing built up in the NAT device, that's a reasonable measure. But...

	Yes, I see; I really hope that with IPv6 coming more and more online, and hence less NAT, end-to-end RTT measurements will become simpler in the future. But cue the people who will, for example, recommend dropping/ignoring ICMP in the name of security theater... It is the same mindset that recommends ignoring ICMP and/or IP timestamps because of "information leakage", while all that a standards-conformant host actually leaks is the time since midnight UTC (and potentially an idea of how far off its local clock is)... I fail to see a rational threat model behind eschewing this... For our purposes, one-way timestamps would be most excellent to have, so we could assess on which "leg" the overload actually happens.
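
	To make that concrete, here is a minimal sketch of such a timestamp probe, assuming scapy is installed, the script runs with raw-socket privileges, and both ends keep their clocks roughly NTP-synchronised; the target address is only a placeholder, and plenty of hosts will simply not answer ICMP timestamp requests at all:

    #!/usr/bin/env python3
    # Hedged sketch: send an ICMP timestamp request (type 13) and split the RTT
    # into rough "there" and "back" legs from the reply's time-since-midnight-UTC
    # fields. Requires scapy and root; 192.0.2.1 is a documentation placeholder.
    import time
    from scapy.all import IP, ICMP, sr1

    def ms_since_midnight_utc():
        t = time.gmtime()
        return ((t.tm_hour * 60 + t.tm_min) * 60 + t.tm_sec) * 1000 \
            + int((time.time() % 1) * 1000)

    def icmp_timestamp_probe(target="192.0.2.1"):
        t_orig = ms_since_midnight_utc()
        reply = sr1(IP(dst=target) / ICMP(type=13, ts_ori=t_orig, ts_rx=0, ts_tx=0),
                    timeout=2, verbose=False)
        if reply is None or reply[ICMP].type != 14:
            return None
        t_back = ms_since_midnight_utc()
        # Only meaningful when both clocks agree to within a few milliseconds.
        there = reply[ICMP].ts_rx - t_orig   # rough forward one-way delay
        back = t_back - reply[ICMP].ts_tx    # rough return one-way delay
        return there, back, t_back - t_orig  # one-way estimates plus full RTT

    print(icmp_timestamp_probe())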

> 
> However, if the router has "taken up the queueing delay" by rate limiting its uplink traffic to slightly less than the capacity (as with Cake and other TC shaping that isn't as good as cake), then there is a queue in the TC layer itself. This is what concerns me as a distortion in the measurement that can fool one into thinking the TC shaper is doing a good job, when in fact, lag under load may be quite high from inside the routed domain (the home).

	As long as the shaper is instantiated on the NAT box, latency probes reflected by that NAT box will also travel through the shaper. But now that you mention it: in SQM we do ingress shaping via an IFB and hence also shape the incoming latency probes. However, I have started to recommend doing ingress shaping as egress shaping on the LAN-facing interface of the router (to avoid the computational cost of the IFB redirection dance, and to allow people to use iptables for ingress*), and in such a configuration WAN probes reflected/emitted by the router will bypass the ingress TC queues...

*) With nftables having a hook at ingress, that second rationale will become moot in the near future...
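
	For illustration, a minimal sketch of the two download-shaping layouts mentioned above, assuming root, a cake-capable tc, a WAN interface eth0, a LAN interface br-lan, and a 95 Mbit/s download rate; the interface names and the rate are placeholders, not a recommendation, and this is not the actual SQM-scripts code:

    #!/usr/bin/env python3
    # Hedged sketch of the two download-shaping layouts discussed above; it just
    # issues the corresponding ip/tc commands. Names and rate are placeholders.
    import subprocess

    WAN, LAN, RATE = "eth0", "br-lan", "95mbit"   # placeholders

    def sh(cmd):
        print("+", cmd)
        subprocess.run(cmd.split(), check=True)

    def ingress_via_ifb():
        # Classic SQM layout: redirect WAN ingress traffic to an IFB device and
        # shape it there; probes reflected at the router pass through this queue.
        sh(f"ip link add ifb4{WAN} type ifb")
        sh(f"ip link set ifb4{WAN} up")
        sh(f"tc qdisc add dev {WAN} handle ffff: ingress")
        sh(f"tc filter add dev {WAN} parent ffff: protocol all matchall "
           f"action mirred egress redirect dev ifb4{WAN}")
        sh(f"tc qdisc add dev ifb4{WAN} root cake bandwidth {RATE} ingress")

    def egress_on_lan():
        # Alternative layout: shape the download direction as plain egress on the
        # LAN-facing interface; probes reflected at the router bypass this queue.
        sh(f"tc qdisc add dev {LAN} root cake bandwidth {RATE} ingress")

    if __name__ == "__main__":
        egress_on_lan()   # pick exactly one of the two layouts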


> 
> As you point out this unmeasured queueing delay can also be a problem with WiFi inside the home. But it isn't limited to that.
> 
> A badly set up shaping/congestion management subsystem inside the NAT can look "very good" in its echo of ICMP packets, but be terrible in response time to trivial HTTP requests from inside, or equally terrible in twitch games and video conferencing.

	Good point, and one of Dave's pet peeves: in former times, people recommended prioritizing ICMP packets to make RTTs look good, falling exactly into the trap you describe.

> 
> So, for example, for tuning settings with "Cake" it is useless.

	I believe that, at least for the way we instantiate things by default in SQM-scripts, we avoid that pitfall. What do you think, @Toke?

> 
> To be fair, usually the Access Provider has no control of what is done after the cable is terminated at the home, so as a way to decide if the provider is badly engineering its side, a ping from a server is a reasonable quality measure of the provider. 

	Most providers in Germany will try to steer customers toward renting a WiFi router from the ISP, so bloat on the WiFi link would also be the ISP's responsibility to some degree, no?


> 
> But not a good measure of the user experience, and if the provider provides the NAT box, even if it has a good shaper in it, like Cake or fq_codel, it will just confuse the user and create the opportunity for a "finger pointing" argument where neither side understands what is going on.
> 
> This is why we need 
> 
> 1) a clear definition of lag under load that is from end-to-end in latency, and involves, ideally, independent traffic from multiple sources through the bottleneck.

	I am all for it. In addition, in the past we also reasoned that this definition needs to be relatively simple, so it can be easily explained to turn naive laypersons into informed amateurs ;) The multiple-sources idea is something that dslreports did well: they typically served from multiple server sites and reported some statistics per site. Now that it is basically gone, it becomes clear how much clue went into that speedtest; a pity that most of the competition has not followed their lead yet (I am especially looking at you, Ookla...).
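
	As a rough, hedged sketch of such a multi-source lag-under-load measurement (standard library only; the probe hosts and download URLs are placeholders, and the load generation is far cruder than what dslreports actually did):

    #!/usr/bin/env python3
    # Hedged sketch: compare idle vs. loaded latency against several probe hosts
    # while parallel bulk downloads saturate the link. Hosts/URLs are placeholders.
    import socket, statistics, threading, time, urllib.request

    PROBE_HOSTS = ["example.com", "example.net", "example.org"]   # placeholders
    LOAD_URLS = ["https://example.com/large-file.bin"] * 4        # placeholder bulk flows

    def tcp_rtt_ms(host, port=443, timeout=2.0):
        # Crude probe: time a fresh TCP handshake (includes DNS lookup time).
        t0 = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - t0) * 1000.0

    def probe(duration_s=10.0):
        samples = {h: [] for h in PROBE_HOSTS}
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            for h in PROBE_HOSTS:
                try:
                    samples[h].append(tcp_rtt_ms(h))
                except OSError:
                    pass
            time.sleep(0.2)
        return {h: statistics.median(v) for h, v in samples.items() if v}

    def bulk_download(url):
        try:
            with urllib.request.urlopen(url) as r:
                while r.read(65536):
                    pass
        except OSError:
            pass

    idle = probe()
    threads = [threading.Thread(target=bulk_download, args=(u,), daemon=True)
               for u in LOAD_URLS]
    for t in threads:
        t.start()
    loaded = probe()
    for h in idle:
        print(f"{h}: idle {idle[h]:.1f} ms, loaded {loaded.get(h, float('nan')):.1f} ms")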

> 
> 2) ideally, a better way to localize where the queues are building up and present that to users and access providers.  

	Yes. How to do this robustly and reliably escapes me, although enabling one-way timestamps might help: a saturating speedtest could then be accompanied not by a conceptually "simple" ICMP echo request, but by a repeated traceroute that gets there-and-back delay measurements for the approximate path (approximate because of the complications of interpreting traceroute results).
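
	As a hedged sketch of such a repeated timing traceroute (scapy, raw-socket privileges, a placeholder target; a real tool would run this continuously during a saturating transfer and cope with rate-limited or silent hops):

    #!/usr/bin/env python3
    # Hedged sketch: a "timing traceroute" that records the there-and-back delay
    # to every responding hop, so the hop where delay inflates under load can be
    # spotted. Requires scapy and root; the target address is a placeholder.
    import time
    from scapy.all import IP, ICMP, sr1

    def hop_rtts(target="192.0.2.1", max_ttl=20, timeout=1.0):
        rtts = {}
        for ttl in range(1, max_ttl + 1):
            t0 = time.monotonic()
            reply = sr1(IP(dst=target, ttl=ttl) / ICMP(),
                        timeout=timeout, verbose=False)
            if reply is None:
                continue
            rtts[(ttl, reply.src)] = (time.monotonic() - t0) * 1000.0
            if reply.src == target:      # reached the destination
                break
        return rtts

    # Run this repeatedly (e.g. once per second) while the link is saturated and
    # compare against an idle baseline to see at which hop the queue builds up.
    for (ttl, hop), rtt in hop_rtts().items():
        print(f"{ttl:2d}  {hop:15s}  {rtt:6.1f} ms")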


> The flent graphs are not interpretable by most non-experts.

	And sometimes not even by experts ;)

> What we need is a simple visualization of a sketch-map of the path (like traceroute might provide) with queueing delay measures  shown at key points that the user can understand.

	I am on the fence: personally I would absolutely love that, but I am not sure how the rest of my family would receive something like that. I guess it depends on the simplicity of the representation and probably, following fast.com's lead, on a way to also compress those expanded results into a reasonable one-number representation. I hate one-number representations for complex issues, but people will generally come up with one themselves if none is supplied. (And I get this; outside our areas of expertise we all prefer the world to be simple.)
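
	Purely for illustration, one possible way to compress an idle/loaded latency pair into a single grade; the thresholds below are invented for the example and are not an established grading scheme:

    # Hedged, purely illustrative one-number summary: grade the latency added
    # under load. The thresholds are made up for illustration, not a standard.
    def bloat_grade(idle_ms, loaded_ms):
        added = max(0.0, loaded_ms - idle_ms)
        for grade, limit in (("A+", 5), ("A", 15), ("B", 50), ("C", 100), ("D", 200)):
            if added <= limit:
                return grade
        return "F"

    print(bloat_grade(12.0, 68.0))   # -> "C" with these example numbers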

Best Regards
	Sebastian


> On Saturday, May 2, 2020 4:19pm, "Sebastian Moeller" <moeller0 at gmx.de> said:
> 
>> Hi David,
>> 
>> in principle I agree, a NATed IPv4 ICMP probe will be at best reflected at the NAT
>> router (CPE) (some commercial home gateways do not respond to ICMP echo requests
>> in the name of security theatre). So it is pretty hard to measure the full end to
>> end path in that configuration. I believe that IPv6 should make that
>> easier/simpler in that NAT hopefully will be out of the path (but let's see what
>> ingenuity ISPs will come up with).
>> Then again, traditionally the relevant bottlenecks often are a) the internet
>> access link itself and there the CPE is in a reasonable position as a reflector on
>> the other side of the bottleneck as seen from an internet server, b) the home
>> network between CPE and end-host, often with variable rate wifi, here I agree
>> reflecting echos at the CPE hides part of the issue.
>> 
>> 
>> 
>>> On May 2, 2020, at 19:38, David P. Reed <dpreed at deepplum.com> wrote:
>>> 
>>> I am still a bit worried about properly defining "latency under load" for a
>> NAT routed situation. If the test is based on ICMP Ping packets *from the server*,
>> it will NOT be measuring the full path latency, and if the potential congestion
>> is in the uplink path from the access provider's residential box to the access
>> provider's router/switch, it will NOT measure congestion caused by bufferbloat
>> reliably on either side, since the bufferbloat will be outside the ICMP Ping
>> path.
>> 
>> Puzzled, as I believe it is going to be the residential box that will respond
>> here, or will it be the AFTRs for CG-NAT that reflect the ICMP echo requests?
>> 
>>> 
>>> I realize that a browser based speed test has to be basically run from the
>> "server" end, because browsers are not that good at time measurement on a packet
>> basis. However, there are ways to solve this and avoid the ICMP Ping issue, with a
>> cooperative server.
>>> 
>>> I once built a test that fixed this issue reasonably well. It carefully
>> created a TCP based RTT measurement channel (over HTTP) that made the echo have to
>> traverse the whole end-to-end path, which is the best and only way to accurately
>> define lag under load from the user's perspective. The client end of an unloaded
>> TCP connection can depend on TCP (properly prepared by getting it past slowstart)
>> to generate a single packet response.
>>> 
>>> This "TCP ping" is thus compatible with getting the end-to-end measurement on
>> the server end of a true RTT.
>>> 
>>> It's like tcp-traceroute tool, in that it tricks anyone in the middle boxes
>> into thinking this is a real, serious packet, not an optional low priority
>> packet.
>>> 
>>> The same issue comes up with non-browser-based techniques for measuring true
>> lag-under-load.
>>> 
>>> Now as we move HTTP to QUIC, this actually gets easier to do.
>>> 
>>> One other opportunity I haven't explored, but which is pregnant with
>> potential is the use of WebRTC, which runs over UDP internally. Since JavaScript
>> has direct access to create WebRTC connections (multiple ones), this makes
>> detailed testing in the browser quite reasonable.
>>> 
>>> And the time measurements can resolve well below 100 microseconds, if the JS
>> is based on modern JIT compilation (Chrome, Firefox, Edge all compile to machine
>> code speed if the code is restricted and in a loop). Then again, there is Web
>> Assembly if you want to write C code that runs in the browser fast. WebAssembly is
>> a low level language that compiles to machine code in the browser execution, and
>> still has access to all the browser networking facilities.
>> 
>> Mmmh, according to https://github.com/w3c/hr-time/issues/56 due to spectre
>> side-channel vulnerabilities many browsers seemed to have lowered the timer
>> resolution, but even the ~1ms resolution should be fine for typical RTTs.
>> 
>> Best Regards
>> Sebastian
>> 
>> P.S.: I assume that I simply do not see/understand the full scope of the issue at
>> hand yet.
>> 
>> 
>>> 
>>> On Saturday, May 2, 2020 12:52pm, "Dave Taht" <dave.taht at gmail.com>
>> said:
>>> 
>>>> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <bcronce at gmail.com>
>> wrote:
>>>>> 
>>>>>> Fast.com reports my unloaded latency as 4ms, my loaded latency
>> as ~7ms
>>>> 
>>>> I guess one of my questions is that with a switch to BBR netflix is
>>>> going to do pretty well. If fast.com is using bbr, well... that
>>>> excludes much of the current side of the internet.
>>>> 
>>>>> For download, I show 6ms unloaded and 6-7 loaded. But for upload
>> the loaded
>>>> shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using
>> any
>>>> traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
>> bloat would
>>>> be nice.
>>>> 
>>>> The tests do need to last a fairly long time.
>>>> 
>>>>> On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom
>> <jannie at hanekom.net>
>>>> wrote:
>>>>>> 
>>>>>> Michael Richardson <mcr at sandelman.ca>:
>>>>>>> Does it find/use my nearest Netflix cache?
>>>>>> 
>>>>>> Thankfully, it appears so. The DSLReports bloat test was
>> interesting,
>>>> but
>>>>>> the jitter on the ~240ms base latency from South Africa (and
>> other parts
>>>> of
>>>>>> the world) was significant enough that the figures returned
>> were often
>>>>>> unreliable and largely unusable - at least in my experience.
>>>>>> 
>>>>>> Fast.com reports my unloaded latency as 4ms, my loaded latency
>> as ~7ms
>>>> and
>>>>>> mentions servers located in local cities. I finally have a test
>> I can
>>>> share
>>>>>> with local non-technical people!
>>>>>> 
>>>>>> (Agreed, upload test would be nice, but this is a huge step
>> forward from
>>>>>> what I had access to before.)
>>>>>> 
>>>>>> Jannie Hanekom
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> Make Music, Not War
>>>> 
>>>> Dave Täht
>>>> CTO, TekLibre, LLC
>>>> http://www.teklibre.com
>>>> Tel: 1-831-435-0729
>>>> 
>> 
>> 
> 


