Starlink has bufferbloat. Bad.
* [Starlink] so great to see ISPs that care
@ 2023-03-12 19:31 Dave Taht
  2023-03-12 20:43 ` [Starlink] [Rpm] " Sebastian Moeller
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Taht @ 2023-03-12 19:31 UTC (permalink / raw)
  To: libreqos, Dave Taht via Starlink, Rpm

https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3

-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Starlink] [Rpm] so great to see ISPs that care
  2023-03-12 19:31 [Starlink] so great to see ISPs that care Dave Taht
@ 2023-03-12 20:43 ` Sebastian Moeller
  2023-03-12 21:02   ` rjmcmahon
  0 siblings, 1 reply; 7+ messages in thread
From: Sebastian Moeller @ 2023-03-12 20:43 UTC (permalink / raw)
  To: Dave Täht; +Cc: Dave Taht via Starlink, Rpm, bloat, Cake List

Dave,

your presentation was awesome, I fully agree with you ;). I very much liked your practical funnel demonstration, which was boiled down to the bare minimum (I did briefly ask myself whether the liquid would spill into your laptop's keyboard, and if so whether it is waterproof, but you had clearly rehearsed/tried that before).
BTW, somehow I always have to think of this h++ps://www.youtube.com/watch?v=R7yfISlGLNU when you present live from the marina ;)


I am still not through watching all of the presentations and panels, but I can already say that team L4S continues to over-promise and under-deliver; Koen's presentation itself was done well, though, and might (sadly) convince people to buy into L4(S) = 2L2L = too little, too late.

Stuart's RPM presentation was great, making a convincing point. (Except for pitching L4S and LLD as "solutions"; I will accept them as a step in the right direction, but why not go all the way and embrace proper scheduling?)

In detail though, I am not fully convinced by the decision to take the inverse of the loaded delay as the singular measure here, as I consider that a bit of a squandered opportunity for public outreach/education: comparing idle and working RPM is non-intuitive, while idle and working RTT can immediately be subtracted to see the extent of the queueing damage in actionable terms. 

Try the same with RPM values:

123-1234567:~ user$ networkQuality -v
==== SUMMARY ====                                                                                         
Upload capacity: 22.208 Mbps
Download capacity: 88.054 Mbps
Upload flows: 12
Download flows: 12
Responsiveness: High (2622 RPM)
Base RTT: 18
Start: 3/12/23, 21:00:58
End: 3/12/23, 21:01:08
OS Version: Version 12.6.3 (Build 21G419)

here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min] to get: 60000/2622 = 22.88 ms of working (loaded) RTT, and subtract the base RTT of 18 ms for 60000/2622 - 18 = 4.88 ≈ 5 ms of added queueing delay, which is a useful quantity when managing a delay budget (this test was performed over wired ethernet with competent AQM and traffic shaping on the link, so no surprise about the outcome there). Let's look at the reverse and convert the base RTT into a base RPM score instead: 60000/18 ≈ 3333 RPM; what exactly does the delta of 3333 - 2622 = 711 RPM now tell us about the difference between idle and working conditions? [Well, since the conversion is not witchcraft, I will be fine, as will others interested in the actual evoked delay, but we could have gotten a better measure*]
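
For anyone who wants to script this conversion, here is a minimal sketch in Python (values taken from the networkQuality run above; the helper names are my own, not part of any tool):

  def rpm_to_rtt_ms(rpm):
      # RPM counts round trips per minute, so one round trip takes 60000/RPM milliseconds
      return 60000.0 / rpm

  def rtt_ms_to_rpm(rtt_ms):
      return 60000.0 / rtt_ms

  working_rpm = 2622      # "Responsiveness: High (2622 RPM)"
  base_rtt_ms = 18.0      # "Base RTT: 18"

  working_rtt_ms = rpm_to_rtt_ms(working_rpm)      # ~22.88 ms
  added_delay_ms = working_rtt_ms - base_rtt_ms    # ~4.88 ms of queueing delay
  base_rpm = rtt_ms_to_rpm(base_rtt_ms)            # ~3333 RPM idle
  delta_rpm = base_rpm - working_rpm               # ~711 RPM, hard to interpret directly

  print(f"working RTT {working_rtt_ms:.2f} ms, added delay {added_delay_ms:.2f} ms, "
        f"idle {base_rpm:.0f} RPM, delta {delta_rpm:.0f} RPM")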

And all for the somewhat unhelpful car analogy... (it is not as if, for internal combustion engines, higher RPM is necessarily better, either for torque or for fuel efficiency).

I guess that ship has sailed, though, and RPM it is.

*) Stuart notes that milliseconds and Hertz sound too sciency, but they could simply have given the delay increase in milliseconds a fancier name to solve that specific problem... 


> On Mar 12, 2023, at 20:31, Dave Taht via Rpm <rpm@lists.bufferbloat.net> wrote:
> 
> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3
> 
> -- 
> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Starlink] [Rpm] so great to see ISPs that care
  2023-03-12 20:43 ` [Starlink] [Rpm] " Sebastian Moeller
@ 2023-03-12 21:02   ` rjmcmahon
  2023-03-12 21:20     ` rjmcmahon
                       ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: rjmcmahon @ 2023-03-12 21:02 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Dave Täht, Dave Taht via Starlink, Rpm, Cake List, bloat

iperf 2 uses responses per second and also provides the bounce-back 
times as well as one-way delays.

The hypothesis is that network engineers have to fix KPI issues, 
including latency, ahead of shipping products.

Asking companies to act on consumer complaints is way too late. It's 
also extremely costly. Those running Amazon customer service can explain 
how these consumer calls about their devices cause things like device 
returns (as that's all the call support can provide). This wastes energy 
to physically ship things back, creates a stack of working items that 
now go to e-waste, etc.

It's really on network operators, suppliers and device manufacturers to 
get ahead of this years before consumers get their stuff.

As a side note, many devices select their WiFi chanspec (AP channel+) 
based on the strongest RSSI. Network path selection should instead be 
based on KPIs like low latency. A strong signal just means an AP is 
yelling too loudly and interfering with the neighbors. Pick the optimal 
AP chanspec with 10 dB of separation per spatial dimension and the whole 
apartment complex would be better for it.
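
A toy illustration of picking the AP by a latency KPI instead of the 
loudest RSSI (hypothetical Python, with made-up AP names and measured 
values; not any vendor's actual selection logic):

  # (ap_name, rssi_dbm, idle_rtt_ms, loaded_rtt_ms) - invented scan results
  candidates = [
      ("ap-lobby",  -45, 12.0, 95.0),  # loudest, but badly bloated under load
      ("ap-hall",   -60,  8.0, 14.0),  # quieter, far better latency under load
      ("ap-garage", -72, 20.0, 28.0),
  ]

  def pick_by_rssi(aps):
      return max(aps, key=lambda ap: ap[1])

  def pick_by_latency(aps):
      # prefer the AP with the lowest working (loaded) RTT, not the strongest signal
      return min(aps, key=lambda ap: ap[3])

  print("strongest RSSI picks:", pick_by_rssi(candidates)[0])    # ap-lobby
  print("latency KPI picks:   ", pick_by_latency(candidates)[0]) # ap-hall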

We're so focused on bufferbloat that we're ignoring everything else where 
incremental engineering has led to poor products & offerings.

[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e 
--bounceback --trip-times
------------------------------------------------------------
Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 
flows)
Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[  1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72 
port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) 
(sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12 
14:01:24.820 (PDT)
[ ID] Interval        Transfer    Bandwidth         BB 
cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
[  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.311/0.209/0.755/0.159 ms    0   14K/202 us    3220 rps
[  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.254/0.180/0.335/0.051 ms    0   14K/210 us    3934 rps
[  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.266/0.168/0.468/0.088 ms    0   14K/210 us    3754 rps
[  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.294/0.184/0.442/0.078 ms    0   14K/233 us    3396 rps
[  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.263/0.150/0.427/0.077 ms    0   14K/215 us    3802 rps
[  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.325/0.237/0.409/0.056 ms    0   14K/258 us    3077 rps
[  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.259/0.165/0.410/0.077 ms    0   14K/219 us    3857 rps
[  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.277/0.193/0.415/0.068 ms    0   14K/224 us    3608 rps
[  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.292/0.206/0.465/0.072 ms    0   14K/231 us    3420 rps
[  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.256/0.157/0.439/0.082 ms    0   14K/211 us    3908 rps
[  1] 0.00-10.01 sec  19.5 KBytes  16.0 Kbits/sec    
100=0.280/0.150/0.755/0.085 ms    0   14K/1033 us    3573 rps
[  1] 0.00-10.01 sec  OWD Delays (ms) Cnt=100 To=0.169/0.074/0.318/0.056 
 From=0.105/0.055/0.162/0.024 Asymmetry=0.065/0.000/0.172/0.049    3573 
rps
[  1] 0.00-10.01 sec BB8(f)-PDF: 
bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1 
(5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)
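
(As a reading aid for the RPS column -- my interpretation, not an 
official iperf 2 definition: the reported rps is roughly the reciprocal 
of the mean bounce-back time for the interval.)

  # e.g. the 1.00-2.00 sec interval above: 10=0.254/0.180/0.335/0.051 ms ... 3934 rps
  mean_bb_ms = 0.254
  rps_estimate = 1000.0 / mean_bb_ms   # ~3937, close to the reported 3934 rps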


Bob
> Dave,
> 
> your presentation was awesome, I fully agree with you ;). I very much
> liked your practical funnel demonstration which was boiled down to the
> bare minimum (I only partly asked myself, will the liquid spill in in
> your laptops keyboard, and if so is it water-proof, but you clearly
> had rehearsed/tried that before).
> BTW, I always have to think of this
> h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present
> live from the marina ;)
> 
> 
> I am still not through watching all of the presentations and panels,
> but can already say, team L4S continues to over-promise and
> under-deliver, but Koen's presentation itself was done well and might
> (sadly) convince people to buy-in into L4(S) = 2L2L = too little, too
> late.
> 
> Stuart's RPM presentation was great, making a convincing point.
> (Except for pitching L4S and LLD as "solutions", I will accept them as
> a step in the right direction, but why not go in all the way and
> embrace proper scheduling?)
> 
> In detail though, I am not fully convinced about the decision of
> taking the inverse of delay increase as singular measure here as I
> consider that as a bit of a squandered opportunity at public
> outreach/education and as comparing idle and working RPM is
> non-intuitive, while idle and working RTT can immediately subtracted
> to see the extent of the queueing damage in actionable terms.
> 
> Try the same with RPM values:
> 
> 123-1234567:~ user$ networkQuality -v
> ==== SUMMARY ====
> 
> Upload capacity: 22.208 Mbps
> Download capacity: 88.054 Mbps
> Upload flows: 12
> Download flows: 12
> Responsiveness: High (2622 RPM)
> Base RTT: 18
> Start: 3/12/23, 21:00:58
> End: 3/12/23, 21:01:08
> OS Version: Version 12.6.3 (Build 21G419)
> 
> here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min]
> to get: 60000/2622 = 22.88 ms loaded delay and subtract the base RTT
> of 18 for 60000/2622 - 18 = 4.88 ~5ms of loaded delay which is a
> useful quantity when managing a delay budget (this test was performed
> over wired ethernet with competent AQM and traffic shaping on the
> link, so no surprise about the outcome there). Let's look at the
> reverse and convert the base RTT into a base RPM score instead:
> 6000/18 = 333 rpm, what exactly does the delta RPM of 2622-333 =
> 2289rpm now tell us about the difference between idle and working
> conditions? [Well, since conversion is not witchcraft, I will be fine
> as will other interested in actual evoked delay, but we could have
> gotten a better measure*]
> 
> And all for the somewhat unhelpful car analogy... (it is not that for
> internal combustion engines bigger is necessarily better for RPM,
> either for torque or fuel efficiency).
> 
> I guess that ship has sailed though and RPM it is
> 
> *) Stuart notes that milliseconds and Hertz sound to sciency, but they
> could simply have given the delay increase in milliseconds a fancier
> name to solve that specific problem...
> 
> 
>> On Mar 12, 2023, at 20:31, Dave Taht via Rpm 
>> <rpm@lists.bufferbloat.net> wrote:
>> 
>> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3
>> 
>> --
>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> 
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Starlink] [Rpm] so great to see ISPs that care
  2023-03-12 21:02   ` rjmcmahon
@ 2023-03-12 21:20     ` rjmcmahon
  2023-03-12 21:37     ` Sebastian Moeller
  2023-03-12 22:39     ` Ben Greear
  2 siblings, 0 replies; 7+ messages in thread
From: rjmcmahon @ 2023-03-12 21:20 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Sebastian Moeller, Dave Taht via Starlink, Rpm, Cake List, bloat

for completeness, here is a concurrent "working load" example:

  [root@ryzen3950 iperf2-code]# iperf -c 192.168.1.58%enp4s0 -i 1 -e 
--bounceback --working-load=up,4 -t 3
------------------------------------------------------------
Client connecting to 192.168.1.58, TCP port 5001 with pid 3125575 via 
enp4s0 (1 flows)
Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  2] local 192.168.1.69%enp4s0 port 49268 connected with 192.168.1.58 
port 5001 (bb w/quickack len/hold=100/0) (sock=7) 
(icwnd/mss/irtt=14/1448/243) (ct=0.29 ms) on 2023-03-12 14:18:25.658 
(PDT)
[  5] local 192.168.1.69%enp4s0 port 49244 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=3) (qack) (icwnd/mss/irtt=14/1448/260) 
(ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[  4] local 192.168.1.69%enp4s0 port 49254 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=4) (qack) (icwnd/mss/irtt=14/1448/295) 
(ct=0.35 ms) on 2023-03-12 14:18:25.658 (PDT)
[  1] local 192.168.1.69%enp4s0 port 49256 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=6) (qack) (icwnd/mss/irtt=14/1448/270) 
(ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[  3] local 192.168.1.69%enp4s0 port 49252 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=5) (qack) (icwnd/mss/irtt=14/1448/263) 
(ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     
Cwnd/RTT(var)        NetPwr
[  5] 0.00-1.00 sec  41.8 MBytes   351 Mbits/sec  438252/0         3     
   73K/53(3) us  826892
[  1] 0.00-1.00 sec  39.3 MBytes   330 Mbits/sec  412404/0        24     
   39K/45(3) us  916455
[ ID] Interval        Transfer    Bandwidth         BB 
cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
[  2] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.323/0.093/2.147/0.641 ms    0   14K/119 us    3098 rps
[  4] 0.00-1.00 sec  34.2 MBytes   287 Mbits/sec  358210/0        15     
   55K/53(3) us  675869
[  3] 0.00-1.00 sec  33.4 MBytes   280 Mbits/sec  349927/0        11     
  127K/53(4) us  660241
[SUM] 0.00-1.00 sec   109 MBytes   917 Mbits/sec  1146389/0        29
[  5] 1.00-2.00 sec  42.1 MBytes   353 Mbits/sec  441376/0         1     
   73K/55(9) us  802502
[  1] 1.00-2.00 sec  39.6 MBytes   333 Mbits/sec  415644/0         0     
   39K/51(6) us  814988
[  2] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.079/0.056/0.127/0.019 ms    0   14K/67 us    12658 rps
[  4] 1.00-2.00 sec  33.8 MBytes   283 Mbits/sec  354150/0         0     
   55K/58(7) us  610603
[  3] 1.00-2.00 sec  33.7 MBytes   283 Mbits/sec  353392/0         2     
  127K/53(6) us  666777
[SUM] 1.00-2.00 sec   110 MBytes   919 Mbits/sec  1148918/0         3
[  5] 2.00-3.00 sec  42.2 MBytes   354 Mbits/sec  442685/0         0     
   73K/50(8) us  885370
[  1] 2.00-3.00 sec  36.9 MBytes   310 Mbits/sec  387381/0         0     
   39K/48(4) us  807044
[  2] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    
10=0.073/0.058/0.093/0.012 ms    0   14K/60 us    13774 rps
[  4] 2.00-3.00 sec  33.9 MBytes   284 Mbits/sec  355533/0         0     
   55K/52(4) us  683717
[  3] 2.00-3.00 sec  29.4 MBytes   247 Mbits/sec  308725/0         1     
  127K/54(4) us  571713
[SUM] 2.00-3.00 sec   106 MBytes   886 Mbits/sec  1106943/0         1
[  5] 0.00-3.00 sec   126 MBytes   353 Mbits/sec  1322314/0         4    
    73K/57(18) us  773072
[  2] 0.00-3.00 sec  7.81 KBytes  21.3 Kbits/sec    
40=0.134/0.053/2.147/0.328 ms    0   14K/58 us    7489 rps
[  2] 0.00-3.00 sec BB8(f)-PDF: bin(w=100us):cnt(40)=1:31,2:8,22:1 
(5.00/95.00/99.7%=1/2/22,Outliers=1,obl/obu=0/0)
[  3] 0.00-3.00 sec  96.5 MBytes   270 Mbits/sec  1012045/0        14    
   127K/57(6) us  591693
[  1] 0.00-3.00 sec   116 MBytes   324 Mbits/sec  1215431/0        24    
    39K/51(5) us  794234
[  4] 0.00-3.00 sec   102 MBytes   285 Mbits/sec  1067895/0        15    
    55K/55(9) us  647061
[SUM] 0.00-3.00 sec   324 MBytes   907 Mbits/sec  3402254/0        33
[ CT] final connect times (min/avg/max/stdev) = 0.292/0.316/0.352/22.075 
ms (tot/err) = 5/0

> iperf 2 uses responses per second and also provides the bounce back
> times as well as one way delays.
> 
> The hypothesis is that network engineers have to fix KPI issues,
> including latency, ahead of shipping products.
> 
> Asking companies to act on consumer complaints is way too late. It's
> also extremely costly. Those running Amazon customer service can
> explain how these consumer calls about their devices cause things like
> device returns (as that's all the call support can provide.) This
> wastes energy to physically ship things back, causes a stack of
> working items that now go to ewaste, etc.
> 
> It's really on network operators, suppliers and device mfgs to get
> ahead of this years before consumers get their stuff.
> 
> As a side note, many devices select their WiFi chanspec (AP channel+)
> based on the strongest RSSI. The network paths should be based on KPIs
> like low latency. Strong signal just means an AP is yelling to loudly
> and interfering with the neighbors. Try the optimal AP chanspec that
> has 10dB separation per spatial dimension and the whole apartment
> complex would be better for it.
> 
> We're so focused on buffer bloat we're ignoring everything else where
> incremental engineering has led to poor products & offerings.
> 
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e
> --bounceback --trip-times
> ------------------------------------------------------------
> Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 
> flows)
> Write buffer size:  100 Byte
> Bursting:  100 Byte writes 10 times every 1.00 second(s)
> Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
> tcp_quickack)
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 16.0 KByte (default)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [  1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72
> port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times)
> (sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12
> 14:01:24.820 (PDT)
> [ ID] Interval        Transfer    Bandwidth         BB
> cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
> [  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.311/0.209/0.755/0.159 ms    0   14K/202 us    3220 rps
> [  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.254/0.180/0.335/0.051 ms    0   14K/210 us    3934 rps
> [  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.266/0.168/0.468/0.088 ms    0   14K/210 us    3754 rps
> [  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.294/0.184/0.442/0.078 ms    0   14K/233 us    3396 rps
> [  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.263/0.150/0.427/0.077 ms    0   14K/215 us    3802 rps
> [  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.325/0.237/0.409/0.056 ms    0   14K/258 us    3077 rps
> [  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.259/0.165/0.410/0.077 ms    0   14K/219 us    3857 rps
> [  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.277/0.193/0.415/0.068 ms    0   14K/224 us    3608 rps
> [  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.292/0.206/0.465/0.072 ms    0   14K/231 us    3420 rps
> [  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec
> 10=0.256/0.157/0.439/0.082 ms    0   14K/211 us    3908 rps
> [  1] 0.00-10.01 sec  19.5 KBytes  16.0 Kbits/sec
> 100=0.280/0.150/0.755/0.085 ms    0   14K/1033 us    3573 rps
> [  1] 0.00-10.01 sec  OWD Delays (ms) Cnt=100
> To=0.169/0.074/0.318/0.056 From=0.105/0.055/0.162/0.024
> Asymmetry=0.065/0.000/0.172/0.049    3573 rps
> [  1] 0.00-10.01 sec BB8(f)-PDF:
> bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1
> (5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)
> 
> 
> Bob
>> Dave,
>> 
>> your presentation was awesome, I fully agree with you ;). I very much
>> liked your practical funnel demonstration which was boiled down to the
>> bare minimum (I only partly asked myself, will the liquid spill in in
>> your laptops keyboard, and if so is it water-proof, but you clearly
>> had rehearsed/tried that before).
>> BTW, I always have to think of this
>> h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present
>> live from the marina ;)
>> 
>> 
>> I am still not through watching all of the presentations and panels,
>> but can already say, team L4S continues to over-promise and
>> under-deliver, but Koen's presentation itself was done well and might
>> (sadly) convince people to buy-in into L4(S) = 2L2L = too little, too
>> late.
>> 
>> Stuart's RPM presentation was great, making a convincing point.
>> (Except for pitching L4S and LLD as "solutions", I will accept them as
>> a step in the right direction, but why not go in all the way and
>> embrace proper scheduling?)
>> 
>> In detail though, I am not fully convinced about the decision of
>> taking the inverse of delay increase as singular measure here as I
>> consider that as a bit of a squandered opportunity at public
>> outreach/education and as comparing idle and working RPM is
>> non-intuitive, while idle and working RTT can immediately subtracted
>> to see the extent of the queueing damage in actionable terms.
>> 
>> Try the same with RPM values:
>> 
>> 123-1234567:~ user$ networkQuality -v
>> ==== SUMMARY ====
>> 
>> Upload capacity: 22.208 Mbps
>> Download capacity: 88.054 Mbps
>> Upload flows: 12
>> Download flows: 12
>> Responsiveness: High (2622 RPM)
>> Base RTT: 18
>> Start: 3/12/23, 21:00:58
>> End: 3/12/23, 21:01:08
>> OS Version: Version 12.6.3 (Build 21G419)
>> 
>> here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min]
>> to get: 60000/2622 = 22.88 ms loaded delay and subtract the base RTT
>> of 18 for 60000/2622 - 18 = 4.88 ~5ms of loaded delay which is a
>> useful quantity when managing a delay budget (this test was performed
>> over wired ethernet with competent AQM and traffic shaping on the
>> link, so no surprise about the outcome there). Let's look at the
>> reverse and convert the base RTT into a base RPM score instead:
>> 6000/18 = 333 rpm, what exactly does the delta RPM of 2622-333 =
>> 2289rpm now tell us about the difference between idle and working
>> conditions? [Well, since conversion is not witchcraft, I will be fine
>> as will other interested in actual evoked delay, but we could have
>> gotten a better measure*]
>> 
>> And all for the somewhat unhelpful car analogy... (it is not that for
>> internal combustion engines bigger is necessarily better for RPM,
>> either for torque or fuel efficiency).
>> 
>> I guess that ship has sailed though and RPM it is
>> 
>> *) Stuart notes that milliseconds and Hertz sound to sciency, but they
>> could simply have given the delay increase in milliseconds a fancier
>> name to solve that specific problem...
>> 
>> 
>>> On Mar 12, 2023, at 20:31, Dave Taht via Rpm 
>>> <rpm@lists.bufferbloat.net> wrote:
>>> 
>>> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3
>>> 
>>> --
>>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
>>> Dave Täht CEO, TekLibre, LLC
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> 
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Starlink] [Rpm] so great to see ISPs that care
  2023-03-12 21:02   ` rjmcmahon
  2023-03-12 21:20     ` rjmcmahon
@ 2023-03-12 21:37     ` Sebastian Moeller
  2023-03-13  2:56       ` rjmcmahon
  2023-03-12 22:39     ` Ben Greear
  2 siblings, 1 reply; 7+ messages in thread
From: Sebastian Moeller @ 2023-03-12 21:37 UTC (permalink / raw)
  To: rjmcmahon; +Cc: Dave Täht, Dave Taht via Starlink, Rpm, Cake List, bloat

Hi Bob,


> On Mar 12, 2023, at 22:02, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
> 
> iperf 2 uses responses per second and also provides the bounce back times as well as one way delays.
> 
> The hypothesis is that network engineers have to fix KPI issues, including latency, ahead of shipping products.
> 
> Asking companies to act on consumer complaints is way too late. It's also extremely costly. Those running Amazon customer service can explain how these consumer calls about their devices cause things like device returns (as that's all the call support can provide.) This wastes energy to physically ship things back, causes a stack of working items that now go to ewaste, etc.
> 
> It's really on network operators, suppliers and device mfgs to get ahead of this years before consumers get their stuff.

	[SM] As much as I like to tinker, I agree with you: to make an impact, doing this one network at a time scales poorly, and a joint effort seems way more effective; and yes, that had better have started yesterday rather than today ;)


> 
> As a side note, many devices select their WiFi chanspec (AP channel+) based on the strongest RSSI. The network paths should be based on KPIs like low latency. Strong signal just means an AP is yelling to loudly and interfering with the neighbors. Try the optimal AP chanspec that has 10dB separation per spatial dimension and the whole apartment complex would be better for it.

	[SM] Sidenote: with DSL, ISPs actively optimize the per-link transmit power in both directions. They seem to do this partially to save energy/cost and partially to optimize group transmission rates. Ever since vectoring was introduced to deal with crosstalk, all links connected to a DSLAM share a partial common signal fate. In the DSLAM-to-CPE direction the DSLAM will "pre-distort" each line's signal dynamically so that, after the unavoidable crosstalk interaction between the lines, the resulting "pulse shapes" are clean(er) again when they reach the CPE (I am simplifying, but the principle holds). In the CPE-to-DSLAM direction that is not possible (since there is no entity seeing all concurrent transmissions, and hence no possibility to calculate or apply the pre-distortion), so the method of choice is to simply try to decode all lines together, and to help with that the CPE transmit power seems to be adjusted so that the signal level at the DSLAM is equalized. (For very short links that often results in less than the maximally possible capacity, but over the whole set of links that method seems to increase total capacity.) I would guess that in theory these methods could also be applied on RF links (except that RF with its 3D propagation is probably way more challenging).
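
To make the downstream "pre-distortion" idea concrete, here is a toy zero-forcing sketch in Python/NumPy (my own illustration; real vectoring per G.993.5 works per tone and is far more involved):

  import numpy as np

  # Toy crosstalk channel for 3 DSL lines: H[i, j] is how strongly a signal
  # sent on line j shows up at the CPE of line i.
  H = np.array([
      [1.00, 0.10, 0.05],
      [0.08, 1.00, 0.12],
      [0.04, 0.09, 1.00],
  ])

  s = np.array([1.0, -1.0, 0.5])   # symbols intended for each CPE

  # Downstream the DSLAM sees the whole channel, so it can pre-distort:
  # send x such that H @ x reproduces s after the crosstalk has done its work.
  x = np.linalg.solve(H, s)

  print("without precoding:", H @ s)  # symbols smeared by crosstalk
  print("with precoding:   ", H @ x)  # ~= s, i.e. clean(er) signals at each CPE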



> 
> We're so focused on buffer bloat we're ignoring everything else where incremental engineering has led to poor products & offerings.
> 
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e --bounceback --trip-times
> ------------------------------------------------------------
> Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 flows)
> Write buffer size:  100 Byte
> Bursting:  100 Byte writes 10 times every 1.00 second(s)
> Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 16.0 KByte (default)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [  1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72 port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12 14:01:24.820 (PDT)
> [ ID] Interval        Transfer    Bandwidth         BB cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
> [  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.311/0.209/0.755/0.159 ms    0   14K/202 us    3220 rps
> [  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.254/0.180/0.335/0.051 ms    0   14K/210 us    3934 rps
> [  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.266/0.168/0.468/0.088 ms    0   14K/210 us    3754 rps
> [  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.294/0.184/0.442/0.078 ms    0   14K/233 us    3396 rps
> [  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.263/0.150/0.427/0.077 ms    0   14K/215 us    3802 rps
> [  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.325/0.237/0.409/0.056 ms    0   14K/258 us    3077 rps
> [  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.259/0.165/0.410/0.077 ms    0   14K/219 us    3857 rps
> [  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.277/0.193/0.415/0.068 ms    0   14K/224 us    3608 rps
> [  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.292/0.206/0.465/0.072 ms    0   14K/231 us    3420 rps
> [  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.256/0.157/0.439/0.082 ms    0   14K/211 us    3908 rps
> [  1] 0.00-10.01 sec  19.5 KBytes  16.0 Kbits/sec    100=0.280/0.150/0.755/0.085 ms    0   14K/1033 us    3573 rps
> [  1] 0.00-10.01 sec  OWD Delays (ms) Cnt=100 To=0.169/0.074/0.318/0.056 From=0.105/0.055/0.162/0.024 Asymmetry=0.065/0.000/0.172/0.049    3573 rps
> [  1] 0.00-10.01 sec BB8(f)-PDF: bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1 (5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)
> 
> 
> Bob
>> Dave,
>> your presentation was awesome, I fully agree with you ;). I very much
>> liked your practical funnel demonstration which was boiled down to the
>> bare minimum (I only partly asked myself, will the liquid spill in in
>> your laptops keyboard, and if so is it water-proof, but you clearly
>> had rehearsed/tried that before).
>> BTW, I always have to think of this
>> h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present
>> live from the marina ;)
>> I am still not through watching all of the presentations and panels,
>> but can already say, team L4S continues to over-promise and
>> under-deliver, but Koen's presentation itself was done well and might
>> (sadly) convince people to buy-in into L4(S) = 2L2L = too little, too
>> late.
>> Stuart's RPM presentation was great, making a convincing point.
>> (Except for pitching L4S and LLD as "solutions", I will accept them as
>> a step in the right direction, but why not go in all the way and
>> embrace proper scheduling?)
>> In detail though, I am not fully convinced about the decision of
>> taking the inverse of delay increase as singular measure here as I
>> consider that as a bit of a squandered opportunity at public
>> outreach/education and as comparing idle and working RPM is
>> non-intuitive, while idle and working RTT can immediately subtracted
>> to see the extent of the queueing damage in actionable terms.
>> Try the same with RPM values:
>> 123-1234567:~ user$ networkQuality -v
>> ==== SUMMARY ====
>> Upload capacity: 22.208 Mbps
>> Download capacity: 88.054 Mbps
>> Upload flows: 12
>> Download flows: 12
>> Responsiveness: High (2622 RPM)
>> Base RTT: 18
>> Start: 3/12/23, 21:00:58
>> End: 3/12/23, 21:01:08
>> OS Version: Version 12.6.3 (Build 21G419)
>> here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min]
>> to get: 60000/2622 = 22.88 ms loaded delay and subtract the base RTT
>> of 18 for 60000/2622 - 18 = 4.88 ~5ms of loaded delay which is a
>> useful quantity when managing a delay budget (this test was performed
>> over wired ethernet with competent AQM and traffic shaping on the
>> link, so no surprise about the outcome there). Let's look at the
>> reverse and convert the base RTT into a base RPM score instead:
>> 6000/18 = 333 rpm, what exactly does the delta RPM of 2622-333 =
>> 2289rpm now tell us about the difference between idle and working
>> conditions? [Well, since conversion is not witchcraft, I will be fine
>> as will other interested in actual evoked delay, but we could have
>> gotten a better measure*]
>> And all for the somewhat unhelpful car analogy... (it is not that for
>> internal combustion engines bigger is necessarily better for RPM,
>> either for torque or fuel efficiency).
>> I guess that ship has sailed though and RPM it is
>> *) Stuart notes that milliseconds and Hertz sound to sciency, but they
>> could simply have given the delay increase in milliseconds a fancier
>> name to solve that specific problem...
>>> On Mar 12, 2023, at 20:31, Dave Taht via Rpm <rpm@lists.bufferbloat.net> wrote:
>>> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3
>>> --
>>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
>>> Dave Täht CEO, TekLibre, LLC
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Starlink] [Rpm] so great to see ISPs that care
  2023-03-12 21:02   ` rjmcmahon
  2023-03-12 21:20     ` rjmcmahon
  2023-03-12 21:37     ` Sebastian Moeller
@ 2023-03-12 22:39     ` Ben Greear
  2 siblings, 0 replies; 7+ messages in thread
From: Ben Greear @ 2023-03-12 22:39 UTC (permalink / raw)
  To: starlink

On 3/12/23 2:02 PM, rjmcmahon via Starlink wrote:
> iperf 2 uses responses per second and also provides the bounce back times as well as one way delays.
> 
> The hypothesis is that network engineers have to fix KPI issues, including latency, ahead of shipping products.
> 
> Asking companies to act on consumer complaints is way too late. It's also extremely costly. Those running Amazon customer service can explain how these consumer 
> calls about their devices cause things like device returns (as that's all the call support can provide.) This wastes energy to physically ship things back, 
> causes a stack of working items that now go to ewaste, etc.
> 
> It's really on network operators, suppliers and device mfgs to get ahead of this years before consumers get their stuff.
> 
> As a side note, many devices select their WiFi chanspec (AP channel+) based on the strongest RSSI. The network paths should be based on KPIs like low latency. 
> Strong signal just means an AP is yelling to loudly and interfering with the neighbors. Try the optimal AP chanspec that has 10dB separation per spatial 
> dimension and the whole apartment complex would be better for it.

How are you going to make the latency determination?

I guess that anywhere there are a lot of APs you can connect to, they are a managed
entity with some sort of global controller for that location.  So the controller can make
the decision.  That general ability for a controller to manage stations already
exists in wifi, but someone will have to make a clever controller...

If it is wired/fiber backhaul, then probably everyone shares the same uplink, so selecting
the strongest AP is the best option.

If it is wifi backhaul, you need a lot more cleverness to manage stations properly.

Thanks,
Ben

-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Starlink] [Rpm] so great to see ISPs that care
  2023-03-12 21:37     ` Sebastian Moeller
@ 2023-03-13  2:56       ` rjmcmahon
  0 siblings, 0 replies; 7+ messages in thread
From: rjmcmahon @ 2023-03-13  2:56 UTC (permalink / raw)
  To: Sebastian Moeller
  Cc: Dave Täht, Dave Taht via Starlink, Rpm, Cake List, bloat

Our current WiFi designs, at least in residential, are like garden hoses 
attached to rectangular sprinklers - flexible and suboptimal. What's 
needed is an irrigation-system approach, where physical dimensions and 
spray patterns are designed in by a qualified designer. (I was 16 when I 
got my Texas irrigation license - needed it for summer work.) WiFi 
designers can learn from irrigation, e.g. things like just enough spray 
overlap and don't spray down the street.

Also, by fire code, CPE smoke detectors can be no farther than 30' from a 
habitable space, as humans need to be alerted. A 20' radius is better.

It is silly that we don't really take advantage of this and design a 
proper WiFi network. The distances, EMF patterns, and local devices are 
known ahead of time (as are our plants, yard, main pipes, etc. with 
irrigation).

I started my career working on a network design for the International 
Space Station. The *first* requirement was to carry "life support use 
cases" for the astronauts. None of this "well, it's just entertainment, 
so we don't need to worry about downtime, and rebooting a device is just 
fine." Also, none of the hand waving Elon Musk does conflating recycling 
with life support. 
https://www.youtube.com/watch?v=sOpMrVnjYeY&t=4619s

I believe skilled engineers must take the lead here. It's not going to 
come from customers complaining, nor from executive management looking 
for the next increment. Not all problems are bufferbloat, either.

We as engineers can do better. I'm not sure why it's been so hard to 
date, but it seems to be the case. My hope is that we figure it out 
sooner rather than later. I also think most ISPs actually do care, 
despite the supposition in the subject line. Rather, we just haven't 
figured out as a group how to do our engineering at a world-class level.

Sometimes an increment is ok. Other times we need to rethink our design. 
Maybe we need to do a bit more of the latter.

Bob

> Hi Bob,
> 
> 
>> On Mar 12, 2023, at 22:02, rjmcmahon <rjmcmahon@rjmcmahon.com> wrote:
>> 
>> iperf 2 uses responses per second and also provides the bounce back 
>> times as well as one way delays.
>> 
>> The hypothesis is that network engineers have to fix KPI issues, 
>> including latency, ahead of shipping products.
>> 
>> Asking companies to act on consumer complaints is way too late. It's 
>> also extremely costly. Those running Amazon customer service can 
>> explain how these consumer calls about their devices cause things like 
>> device returns (as that's all the call support can provide.) This 
>> wastes energy to physically ship things back, causes a stack of 
>> working items that now go to ewaste, etc.
>> 
>> It's really on network operators, suppliers and device mfgs to get 
>> ahead of this years before consumers get their stuff.
> 
> 	[SM] As much as I like to tinker, I agree with you to make an impact,
> doing this one network at a time scaled poorly, and a joined effort
> seems way more effective and yes that better started yesterday than
> today ;)
> 
> 
>> 
>> As a side note, many devices select their WiFi chanspec (AP channel+) 
>> based on the strongest RSSI. The network paths should be based on KPIs 
>> like low latency. Strong signal just means an AP is yelling to loudly 
>> and interfering with the neighbors. Try the optimal AP chanspec that 
>> has 10dB separation per spatial dimension and the whole apartment 
>> complex would be better for it.
> 
> 	[SM] Sidenote, with DSL ISP are actively optimizing the per link
> transmit power in both directions. They seem to do this partially to
> save energy/cost and partially to optimize group transmission rates.
> Ever since vectoring was introduced to deal with crosstalk the signal
> fate of all links connected to a DSLAM agare a partial common fate. In
> the DSLAM to CPE direction the DSLAM will "pre-distort" each lines
> signal dynamically so that after the unavoidable crosstalk interaction
> between the lines the resulting "pulse shapes" are clean(er) again
> when they reach the CPE (I am simplifying but the principle holds). In
> CPE to DSLAM direction that is not possible (since there is no entity
> seeing all concurrent transmissions and hence no possibility to
> calculate or apply the pre-distortion, so the method of choice is to
> simply try to decode all lines together, and to help with that CPE
> transmit power sees to be adjusted that signal level at the DSLAM is
> equalized. (For very short links that often results in less than
> maximally possible capacity, but over the whole set of links that
> method seems to increase total capacity). I would guess in theory
> these methods are also applied on RF links (except RF with its 3D
> propagation is probably way more challenging).
> 
> 
> 
>> 
>> We're so focused on buffer bloat we're ignoring everything else where 
>> incremental engineering has led to poor products & offerings.
>> 
>> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e 
>> --bounceback --trip-times
>> ------------------------------------------------------------
>> Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 
>> flows)
>> Write buffer size:  100 Byte
>> Bursting:  100 Byte writes 10 times every 1.00 second(s)
>> Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
>> tcp_quickack)
>> TOS set to 0x0 and nodelay (Nagle off)
>> TCP window size: 16.0 KByte (default)
>> Event based writes (pending queue watermark at 16384 bytes)
>> ------------------------------------------------------------
>> [  1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72 
>> port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) 
>> (sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12 
>> 14:01:24.820 (PDT)
>> [ ID] Interval        Transfer    Bandwidth         BB 
>> cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
>> [  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.311/0.209/0.755/0.159 ms    0   14K/202 us    3220 rps
>> [  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.254/0.180/0.335/0.051 ms    0   14K/210 us    3934 rps
>> [  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.266/0.168/0.468/0.088 ms    0   14K/210 us    3754 rps
>> [  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.294/0.184/0.442/0.078 ms    0   14K/233 us    3396 rps
>> [  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.263/0.150/0.427/0.077 ms    0   14K/215 us    3802 rps
>> [  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.325/0.237/0.409/0.056 ms    0   14K/258 us    3077 rps
>> [  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.259/0.165/0.410/0.077 ms    0   14K/219 us    3857 rps
>> [  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.277/0.193/0.415/0.068 ms    0   14K/224 us    3608 rps
>> [  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.292/0.206/0.465/0.072 ms    0   14K/231 us    3420 rps
>> [  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec    
>> 10=0.256/0.157/0.439/0.082 ms    0   14K/211 us    3908 rps
>> [  1] 0.00-10.01 sec  19.5 KBytes  16.0 Kbits/sec    
>> 100=0.280/0.150/0.755/0.085 ms    0   14K/1033 us    3573 rps
>> [  1] 0.00-10.01 sec  OWD Delays (ms) Cnt=100 
>> To=0.169/0.074/0.318/0.056 From=0.105/0.055/0.162/0.024 
>> Asymmetry=0.065/0.000/0.172/0.049    3573 rps
>> [  1] 0.00-10.01 sec BB8(f)-PDF: 
>> bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1 
>> (5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)
>> 
>> 
>> Bob
>>> Dave,
>>> your presentation was awesome, I fully agree with you ;). I very much
>>> liked your practical funnel demonstration which was boiled down to 
>>> the
>>> bare minimum (I only partly asked myself, will the liquid spill in in
>>> your laptops keyboard, and if so is it water-proof, but you clearly
>>> had rehearsed/tried that before).
>>> BTW, I always have to think of this
>>> h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present
>>> live from the marina ;)
>>> I am still not through watching all of the presentations and panels,
>>> but can already say, team L4S continues to over-promise and
>>> under-deliver, but Koen's presentation itself was done well and might
>>> (sadly) convince people to buy-in into L4(S) = 2L2L = too little, too
>>> late.
>>> Stuart's RPM presentation was great, making a convincing point.
>>> (Except for pitching L4S and LLD as "solutions", I will accept them 
>>> as
>>> a step in the right direction, but why not go in all the way and
>>> embrace proper scheduling?)
>>> In detail though, I am not fully convinced about the decision of
>>> taking the inverse of delay increase as singular measure here as I
>>> consider that as a bit of a squandered opportunity at public
>>> outreach/education and as comparing idle and working RPM is
>>> non-intuitive, while idle and working RTT can immediately subtracted
>>> to see the extent of the queueing damage in actionable terms.
>>> Try the same with RPM values:
>>> 123-1234567:~ user$ networkQuality -v
>>> ==== SUMMARY ====
>>> Upload capacity: 22.208 Mbps
>>> Download capacity: 88.054 Mbps
>>> Upload flows: 12
>>> Download flows: 12
>>> Responsiveness: High (2622 RPM)
>>> Base RTT: 18
>>> Start: 3/12/23, 21:00:58
>>> End: 3/12/23, 21:01:08
>>> OS Version: Version 12.6.3 (Build 21G419)
>>> here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min]
>>> to get: 60000/2622 = 22.88 ms loaded delay and subtract the base RTT
>>> of 18 for 60000/2622 - 18 = 4.88 ~5ms of loaded delay which is a
>>> useful quantity when managing a delay budget (this test was performed
>>> over wired ethernet with competent AQM and traffic shaping on the
>>> link, so no surprise about the outcome there). Let's look at the
>>> reverse and convert the base RTT into a base RPM score instead:
>>> 6000/18 = 333 rpm, what exactly does the delta RPM of 2622-333 
>>> =
>>> 2289rpm now tell us about the difference between idle and working
>>> conditions? [Well, since conversion is not witchcraft, I will be fine
>>> as will other interested in actual evoked delay, but we could have
>>> gotten a better measure*]
>>> And all for the somewhat unhelpful car analogy... (it is not that for
>>> internal combustion engines bigger is necessarily better for RPM,
>>> either for torque or fuel efficiency).
>>> I guess that ship has sailed though and RPM it is
>>> *) Stuart notes that milliseconds and Hertz sound to sciency, but 
>>> they
>>> could simply have given the delay increase in milliseconds a 
>>> fancier
>>> name to solve that specific problem...
>>>> On Mar 12, 2023, at 20:31, Dave Taht via Rpm 
>>>> <rpm@lists.bufferbloat.net> wrote:
>>>> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3
>>>> --
>>>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
>>>> Dave Täht CEO, TekLibre, LLC
>>>> _______________________________________________
>>>> Rpm mailing list
>>>> Rpm@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/rpm
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread (newest: 2023-03-13  2:56 UTC)

Thread overview: 7+ messages
2023-03-12 19:31 [Starlink] so great to see ISPs that care Dave Taht
2023-03-12 20:43 ` [Starlink] [Rpm] " Sebastian Moeller
2023-03-12 21:02   ` rjmcmahon
2023-03-12 21:20     ` rjmcmahon
2023-03-12 21:37     ` Sebastian Moeller
2023-03-13  2:56       ` rjmcmahon
2023-03-12 22:39     ` Ben Greear
