* Re: [Rpm] cloudflare on a roll
2023-04-18 15:31 [Rpm] cloudflare on a roll Dave Taht
@ 2023-04-18 18:05 ` rjmcmahon
2023-04-19 6:36 ` [Rpm] [Bloat] " Sebastian Moeller
1 sibling, 0 replies; 3+ messages in thread
From: rjmcmahon @ 2023-04-18 18:05 UTC (permalink / raw)
To: Dave Taht; +Cc: Rpm, libreqos, bloat
> https://blog.cloudflare.com/making-home-internet-faster/
I wonder if we're all still missing it a bit. We complain that internet
providers market "speed" as a rated link capacity, and then we say to
use latency or responsiveness in its place. It's like saying a road has
a speed and a latency. The road really has neither as a direct
attribute; it's stationary, as are waveguides. Sure, we can come up with
a rating of link capacity and link delay based on what's attached to
those waveguides, and we also need to add the highly variable "working
conditions" in order to take a synthetic measurement.
Now saying "speed is the wrong metric, use network latency instead" can
be equally confusing and equally wrong, e.g. if the app thread is CPU
limited.
I think it's the travel times that matter to end users. But the user
doesn't know their destinations, A to B so to speak, so instead of
execution times we need to find a metric that hints at users waiting on
their devices (helping engineers mitigate and eliminate the dreaded
indeterminate progress indicators, which are a sad way to spend device
energy).
An indirect way of measuring travel times may be to measure thread
write delays. A thread will run as fast as possible (AFAP) when it
doesn't block, e.g. on network i/o.
I've added support for --tcp-write-times in iperf 2. This gives the
amount of time the thread's select() blocks while awaiting the ability
to write (or, with Linux, the amount of time awaiting the write()
syscall to complete). This, along with --tcp-write-prefetch (which sets
TCP_NOTSENT_LOWAT), should give an idea of the amount of time a thread
spends awaiting network availability.
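For anyone curious about the mechanism outside of iperf, here's a
minimal sketch (my own illustration, not iperf 2 source) of timing how
long each socket write keeps the sending thread blocked, with
TCP_NOTSENT_LOWAT set so the blocking tracks network availability
rather than just local socket buffer space. The 128K write buffer, 16K
watermark, loop count and the host/port arguments are assumptions for
illustration; any TCP sink (e.g. an iperf server) will do as the peer.

/* Sketch only: per-write blocking time with TCP_NOTSENT_LOWAT set.
 * Build: cc -O2 -o writetimes writetimes.c */
#include <netdb.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25      /* Linux value */
#endif

static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

/* sock must already be a connected TCP socket. */
static void timed_writes(int sock) {
    int lowat = 16 * 1024;        /* like --tcp-write-prefetch 16K */
    setsockopt(sock, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));

    static char buf[128 * 1024];  /* like the 128 KByte write buffer */

    for (int i = 0; i < 40; i++) {
        double start = now_ms();

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        /* Block until unsent bytes drop below the watermark and the
         * kernel will accept more data. */
        if (select(sock + 1, NULL, &wfds, NULL, NULL) < 0)
            break;

        /* The write itself may still block briefly on a blocking socket. */
        ssize_t n = write(sock, buf, sizeof(buf));
        if (n < 0)
            break;
        printf("write %d blocked %.3f ms, wrote %zd bytes\n",
               i, now_ms() - start, n);
    }
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s host port\n", argv[0]);
        return 1;
    }
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(argv[1], argv[2], &hints, &res) != 0)
        return 1;
    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock < 0 || connect(sock, res->ai_addr, res->ai_addrlen) < 0)
        return 1;
    freeaddrinfo(res);
    timed_writes(sock);
    close(sock);
    return 0;
}

The iperf run below reports this same per-write blocking time,
aggregated into 1 ms histogram bins.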
[rjmcmahon@ryzen3950 iperf2-code]$ src/iperf -c mail.rjmcmahon.com
--tcp-write-times --histograms=1m --tcp-write-prefetch 16K -i 1 -t4
------------------------------------------------------------
Client connecting to mail.rjmcmahon.com, TCP port 5001 with pid 212310
(1 flows)
Write buffer size: 131072 Byte (writetimer-enabled)
TCP congestion control using cubic
TOS set to 0x0 (Nagle on)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
Enabled write histograms bin-width=1.000 ms, bins=100000
------------------------------------------------------------
[ 1] local 192.168.1.99%enp7s0 port 38538 connected with 45.33.58.123
port 5001 (prefetch=16384) (sock=3) (icwnd/mss/irtt=14/1448/12335)
(ct=12.45 ms) on 2023-04-18 10:52:02.956 (PDT)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT NetPwr write-times avg/min/max/stdev (cnt)
[ 1] 0.00-1.00 sec 5.25 MBytes 44.0 Mbits/sec 42/1 0
360K/65206 us 84.43 24.173/13.509/39.347/4.196 ms (42)
[ 1] 0.00-1.00 sec W8-PDF:
bin(w=1ms):cnt(42)=14:1,15:1,19:1,21:2,23:5,24:12,25:11,26:2,27:2,28:1,29:1,33:1,35:1,40:1
(5.00/95.00/99.7%=19/33/40,Outliers=0,obl/obu=0/0)
[ 1] 1.00-2.00 sec 4.75 MBytes 39.8 Mbits/sec 38/0 6
173K/35105 us 142 26.079/22.403/39.766/4.142 ms (38)
[ 1] 1.00-2.00 sec W8-PDF:
bin(w=1ms):cnt(38)=23:6,24:7,25:9,26:6,27:1,28:1,30:1,32:1,33:3,34:1,35:1,40:1
(5.00/95.00/99.7%=23/35/40,Outliers=0,obl/obu=0/0)
[ 1] 2.00-3.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 4
100K/19518 us 262 25.673/22.276/35.668/2.602 ms (39)
[ 1] 2.00-3.00 sec W8-PDF:
bin(w=1ms):cnt(39)=23:2,24:6,25:10,26:9,27:5,28:5,35:1,36:1
(5.00/95.00/99.7%=23/35/36,Outliers=0,obl/obu=0/0)
[ 1] 3.00-4.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 1
101K/19337 us 271 25.073/14.430/35.911/2.864 ms (40)
[ 1] 3.00-4.00 sec W8-PDF:
bin(w=1ms):cnt(40)=15:1,23:3,24:7,25:13,26:4,27:6,28:3,29:1,30:1,36:1
(5.00/95.00/99.7%=23/30/36,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.06 sec 20.0 MBytes 41.3 Mbits/sec 160/2 11
103K/20126 us 257 25.230/13.509/39.766/3.563 ms (160)
[ 1] 0.00-4.06 sec W8(f)-PDF:
bin(w=1ms):cnt(160)=14:1,15:2,19:1,21:2,23:16,24:32,25:43,26:21,27:15,28:10,29:2,30:2,32:1,33:4,34:1,35:3,36:2,40:2
(5.00/95.00/99.7%=23/34/40,Outliers=0,obl/obu=0/0)
Bob
https://developer.android.com/reference/android/widget/ProgressBar
Indeterminate Progress
Use indeterminate mode for the progress bar when you do not know how
long an operation will take. Indeterminate mode is the default for
progress bar and shows a cyclic animation without a specific amount of
progress indicated. The following example shows an indeterminate
progress bar.
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: [Rpm] [Bloat] cloudflare on a roll
2023-04-18 15:31 [Rpm] cloudflare on a roll Dave Taht
2023-04-18 18:05 ` rjmcmahon
@ 2023-04-19 6:36 ` Sebastian Moeller
1 sibling, 0 replies; 3+ messages in thread
From: Sebastian Moeller @ 2023-04-19 6:36 UTC (permalink / raw)
To: Dave Taht, Dave Taht via Bloat, Rpm, libreqos, bloat
[-- Attachment #1: Type: text/plain, Size: 2467 bytes --]
Also this:
https://blog.cloudflare.com/aim-database-for-internet-quality/
A pretty decent article explaining the issue in a very accessible way.
Nitpicks (in case David Tuber is on this list):
A) Jitter is important for gaming (if only because it affects the necessary added delay for de-jitter buffers); just think of extreme cases like packet reordering. So I think the gaming score should include the jitter sub-score.
B) "You have to make video calls to her clients all day and you sit in the office farthest away from the wireless access point." Probably "calls to your clients" instead.
Also, I am a big fan of the cloudflare test including a packet loss test under idle conditions. This was really helpful in getting enough users to chip in loss data to convince my ISP that they had a severe loss problem (sustained packet loss of ~1-10% even under idle conditions like 1 AM), which in turn resulted in the ISP and an undisclosed manufacturer going on a deep debug session and apparently squelching the issue.
Sidenotes:
A) Even at 0.8% random loss TCPs struggle to keep utilisation high, and it takes an ungodly number of concurrent flows to saturate even a moderate link (a rough back-of-the-envelope estimate is sketched after these sidenotes).
B) Tying into an earlier discussion about monitoring latency/loss along network paths and the utility of doing so for end users, I caught my ISP's attention by posting a series of MTR traces taken in the middle of the night against a set of DNS servers (including my ISP's), all showing increased sustained loss after a single gateway in Hamburg (and no such loss for other gateways of the same ISP). There had been complaints of reduced download rates before, but apparently it was the longitudinal set of traces that made their NOC take the matter seriously and spring into action. Yes, that ISP should have found/fixed the issue proactively, but to paraphrase D. Rumsfeld, you go on the internet with the ISP you have, not the one you want to have....
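(To put a rough number on sidenote A — this illustration is mine, not
Sebastian's — the Mathis et al. approximation, rate ~= (MSS/RTT) *
(1.22/sqrt(loss)), puts a single loss-driven TCP flow at 0.8% random
loss and an assumed 20 ms RTT at roughly 8 Mbit/s, so a 1 Gbit/s link
would need on the order of a hundred such flows. The MSS, RTT and link
rate below are assumed values.)

/* Rough illustration only: Mathis et al. single-flow TCP estimate.
 * Build: cc -O2 -o mathis mathis.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    double mss_bits = 1448 * 8;   /* assumed 1448-byte MSS */
    double rtt_s    = 0.020;      /* assumed 20 ms RTT */
    double loss     = 0.008;      /* 0.8% random loss, as in sidenote A */
    double link_bps = 1e9;        /* assumed 1 Gbit/s link */

    double per_flow_bps = (mss_bits / rtt_s) * (1.22 / sqrt(loss));
    printf("per-flow estimate: %.1f Mbit/s\n", per_flow_bps / 1e6);
    printf("flows to saturate %.0f Mbit/s: ~%.0f\n",
           link_bps / 1e6, ceil(link_bps / per_flow_bps));
    return 0;
}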
Regards
Sebastian
On 18 April 2023 17:31:44 CEST, Dave Taht via Bloat <bloat@lists.bufferbloat.net> wrote:
>https://blog.cloudflare.com/making-home-internet-faster/
>
>
>--
>AMA March 31: https://www.broadband.io/c/broadband-grant-events/dave-taht
>Dave Täht CEO, TekLibre, LLC
>_______________________________________________
>Bloat mailing list
>Bloat@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/bloat
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 3+ messages in thread