[Starlink] [Rpm] [M-Lab-Discuss] misery metrics & consequences

rjmcmahon rjmcmahon at rjmcmahon.com
Mon Oct 24 20:08:33 EDT 2022


Be careful about assuming that network load always worsens latency. 
Below is an example over Wi-Fi from a Raspberry Pi through a Netgear 
Nighthawk RAXE500 to a 1G wired Linux host, first without a load and 
then with an upstream load. I've noticed similar behavior with some 
hardware forwarding planes, where a busier AP outperforms a lightly 
loaded one in terms of latency. (Note: iperf 2 reports responses per 
second (RPS), since the SI unit of time is the second.)
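
For reference, the RPS column in the output below is just the reciprocal
of that interval's mean bounce-back RTT. A minimal sketch of the
conversion (my own illustration, not iperf 2 code; small differences
from the printed rps values come from rounding of the displayed mean):

# Sketch only: RPS = 1 / mean bounce-back RTT, e.g. 0.340 ms -> ~2941 rps
def rtt_ms_to_rps(mean_rtt_ms: float) -> float:
    return 1000.0 / mean_rtt_ms

for rtt in (0.340, 0.438, 0.194):  # means taken from the runs below
    print(f"{rtt:.3f} ms -> {rtt_ms_to_rps(rtt):.0f} rps")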

rjmcmahon at ubuntu:/usr/local/src/iperf2-code$ iperf -c 192.168.1.69 -i 1 --bounceback
------------------------------------------------------------
Client connecting to 192.168.1.69, TCP port 5001 with pid 4148 (1 flows)
Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.40%eth0 port 53750 connected with 192.168.1.69 port 5001 (bb w/quickack len/hold=100/0) (sock=3) (icwnd/mss/irtt=14/1448/341) (ct=0.44 ms) on 2022-10-25 00:00:48 (UTC)
[ ID] Interval        Transfer    Bandwidth         BB cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
[  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.340/0.206/0.852/0.188 ms    0   14K/220 us    2941 rps
[  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.481/0.362/0.572/0.057 ms    0   14K/327 us    2078 rps
[  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.471/0.344/0.694/0.089 ms    0   14K/340 us    2123 rps
[  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.406/0.330/0.595/0.072 ms    0   14K/318 us    2465 rps
[  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.471/0.405/0.603/0.057 ms    0   14K/348 us    2124 rps
[  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.428/0.355/0.641/0.079 ms    0   14K/324 us    2337 rps
[  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.429/0.329/0.616/0.086 ms    0   14K/306 us    2329 rps
[  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.445/0.325/0.673/0.092 ms    0   14K/321 us    2248 rps
[  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.423/0.348/0.604/0.074 ms    0   14K/299 us    2366 rps
[  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.463/0.369/0.729/0.108 ms    0   14K/306 us    2159 rps
[  1] 0.00-10.01 sec  19.7 KBytes  16.1 Kbits/sec    101=0.438/0.206/0.852/0.102 ms    0   14K/1192 us    2285 rps
[  1] 0.00-10.01 sec BB8-PDF: bin(w=100us):cnt(101)=3:5,4:30,5:48,6:9,7:7,8:1,9:1 (5.00/95.00/99.7%=4/7/9,Outliers=0,obl/obu=0/0)
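
The BB8-PDF line above is a histogram of the 101 bounce-back RTT samples
in 100 us wide bins (bin index : count), with the listed percentiles
reported as bin indices. A rough parser sketch, assuming the bin index
marks the bin's upper edge in units of the bin width (my reading of the
format, not taken from the iperf 2 sources):

# Rough sketch: walk the cumulative counts of "bin:count" pairs to find
# which bin holds each requested percentile. Format assumptions are mine.
def pdf_percentile_bins(hist, percentiles=(5.0, 95.0, 99.7)):
    pairs = [tuple(map(int, p.split(":"))) for p in hist.rsplit("=", 1)[1].split(",")]
    total = sum(cnt for _, cnt in pairs)
    out = {}
    for pct in percentiles:
        target = pct / 100.0 * total
        cum = 0
        for idx, cnt in pairs:
            cum += cnt
            if cum >= target:
                out[pct] = idx  # bin upper edge = idx * 100 us
                break
    return out

print(pdf_percentile_bins("bin(w=100us):cnt(101)=3:5,4:30,5:48,6:9,7:7,8:1,9:1"))
# -> {5.0: 4, 95.0: 7, 99.7: 9}, matching 5.00/95.00/99.7%=4/7/9 above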


rjmcmahon at ubuntu:/usr/local/src/iperf2-code$ iperf -c 192.168.1.69 -i 1 --bounceback --bounceback-congest=up,1
------------------------------------------------------------
Client connecting to 192.168.1.69, TCP port 5001 with pid 4152 (1 flows)
Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  2] local 192.168.1.40%eth0 port 50462 connected with 192.168.1.69 port 5001 (sock=4) (qack) (icwnd/mss/irtt=14/1448/475) (ct=0.63 ms) on 2022-10-25 00:01:07 (UTC)
[  1] local 192.168.1.40%eth0 port 50472 connected with 192.168.1.69 port 5001 (bb w/quickack len/hold=100/0) (sock=3) (icwnd/mss/irtt=14/1448/375) (ct=0.67 ms) on 2022-10-25 00:01:07 (UTC)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT(var)        NetPwr
[  2] 0.00-1.00 sec  3.73 MBytes  31.3 Mbits/sec  39069/0          0        59K/133(1) us  29376
[ ID] Interval        Transfer    Bandwidth         BB cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS
[  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.282/0.178/0.887/0.214 ms    0   14K/191 us    3547 rps
[  2] 1.00-2.00 sec  3.77 MBytes  31.6 Mbits/sec  39512/0          0        59K/133(1) us  29708
[  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.196/0.149/0.240/0.024 ms    0   14K/161 us    5115 rps
[  2] 2.00-3.00 sec  3.77 MBytes  31.6 Mbits/sec  39558/0          0        59K/125(8) us  31646
[  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.163/0.134/0.192/0.018 ms    0   14K/136 us    6124 rps
[  2] 3.00-4.00 sec  3.77 MBytes  31.6 Mbits/sec  39560/0          0        59K/133(1) us  29744
[  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.171/0.149/0.216/0.019 ms    0   14K/133 us    5838 rps
[  2] 4.00-5.00 sec  3.76 MBytes  31.6 Mbits/sec  39460/0          0        59K/131(2) us  30122
[  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.188/0.131/0.242/0.029 ms    0   14K/143 us    5308 rps
[  2] 5.00-6.00 sec  3.77 MBytes  31.6 Mbits/sec  39545/0          0        59K/133(0) us  29733
[  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.197/0.147/0.255/0.031 ms    0   14K/149 us    5079 rps
[  2] 6.00-7.00 sec  3.78 MBytes  31.7 Mbits/sec  39631/0          0        59K/133(1) us  29798
[  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.196/0.146/0.229/0.025 ms    0   14K/151 us    5102 rps
[  2] 7.00-8.00 sec  3.77 MBytes  31.6 Mbits/sec  39497/0          0        59K/133(0) us  29697
[  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.190/0.147/0.225/0.028 ms    0   14K/155 us    5260 rps
[  2] 8.00-9.00 sec  3.77 MBytes  31.6 Mbits/sec  39533/0          0        59K/126(4) us  31375
[  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.185/0.158/0.208/0.017 ms    0   14K/148 us    5414 rps
[  2] 9.00-10.00 sec  3.77 MBytes  31.6 Mbits/sec  39519/0          0        59K/133(1) us  29714
[  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec    10=0.165/0.134/0.232/0.028 ms    0   14K/131 us    6064 rps
[  2] 0.00-10.01 sec  37.7 MBytes  31.6 Mbits/sec  394886/0          0        59K/1430(2595) us  2759
[  1] 0.00-10.01 sec  19.7 KBytes  16.1 Kbits/sec    101=0.194/0.131/0.887/0.075 ms    0   14K/1385 us    5167 rps
[  1] 0.00-10.01 sec BB8-PDF: bin(w=100us):cnt(101)=2:69,3:31,9:1 (5.00/95.00/99.7%=2/3/9,Outliers=0,obl/obu=0/0)
[ CT] final connect times (min/avg/max/stdev) = 0.627/0.647/0.666/27.577 ms (tot/err) = 2/0
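
Comparing the two 10-second totals: the mean bounce-back RTT drops from
0.438 ms unloaded to 0.194 ms with the upstream load running, and the
aggregate RPS roughly doubles. Putting a number on it (plain arithmetic,
nothing iperf-specific):

unloaded_ms, loaded_ms = 0.438, 0.194  # 10 s means from the two runs above
print(f"mean RTT change under load: {100 * (loaded_ms - unloaded_ms) / unloaded_ms:+.1f}%")
# -> about -55.7%: on this Wi-Fi path, latency improved under upstream load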


Bob
>> On Oct 24, 2022, at 1:57 PM, Sebastian Moeller <moeller0 at gmx.de>
>> wrote:
>> Hi Christoph
>> 
>> On Oct 24, 2022, at 22:08, Christoph Paasch <cpaasch at apple.com>
>> wrote:
>> 
>> Hello Sebastian,
>> 
>> On Oct 23, 2022, at 4:57 AM, Sebastian Moeller via Starlink
>> <starlink at lists.bufferbloat.net> wrote:
>> 
>> Hi Glenn,
>> 
>> On Oct 23, 2022, at 02:17, Glenn Fishbine via Rpm
>> <rpm at lists.bufferbloat.net> wrote:
>> 
>> As a classic dyed-in-the-wool empiricist, granted that you can
>> identify "misery" factors, given a population of 1,000 users, how do
>> you propose deriving a misery index for that population?
>> 
>> We can measure download, upload, ping, and jitter pretty much without
>> user intervention. For the measurements you hypothesize, how do you
>> automatically extract those indices without subjective user
>> contamination?
>> 
>> I.e.  my download speed sucks. Measure the download speed.
>> 
>> My isp doesn't fix my problem. Measure what? How?
>> 
>> Human survey technology is 70+ years old and it still has problems
>> figuring out how to correlate opinion with fact.
>> 
>> Without an objective measurement scheme that doesn't require human
>> interaction, the misery index is a cool hypothesis with no way to
>> link to actual data.  What objective measurements can be made?
>> Answer that and the index becomes useful. Otherwise it's just
>> consumer whining.
>> 
>> Not trying to be combative here, in fact I like the concept you
>> support, but I'm hard pressed to see how the concept can lead to
>> data, and the data lead to policy proposals.
>> 
>> [SM] So it seems that outside of the seemingly simple-to-test
>> throughput numbers*, the next most important quality number (or the
>> most important, depending on subjective ranking) is how latency
>> changes under "load". Absolute latency is also important, albeit
>> static high latency can be worked around within limits, so the change
>> under load seems more relevant.
>> All of flent's RRUL test, Apple's networkQuality/RPM, and iperf2's
>> bounceback test offer methods to assess latency change under load**,
>> as do Waveform's bufferbloat test and even, to a degree, Ookla's
>> speedtest.net. IMHO something like latency increase under load, or
>> Apple's responsiveness measure RPM (basically the inverse of the
>> latency under load calculated on a per-minute basis, so it scales in
>> the typical higher-numbers-are-better way, unlike raw latency under
>> load numbers where smaller is better), would make a good metric.
>> IMHO what networkQuality is missing ATM is to measure and report
>> the unloaded RPM as well as the loaded one; the first gives a measure
>> of the static latency, the second of how well things keep working
>> when capacity gets tight. They do report the base RTT, which can be
>> converted to RPM. As an example:
>> 
>> macbook:~ user$ networkQuality -v
>> ==== SUMMARY ====
>> 
>> Upload capacity: 24.341 Mbps
>> Download capacity: 91.951 Mbps
>> Upload flows: 20
>> Download flows: 16
>> Responsiveness: High (2123 RPM)
>> Base RTT: 16
>> Start: 10/23/22, 13:44:39
>> End: 10/23/22, 13:44:53
>> OS Version: Version 12.6 (Build 21G115)
> 
> You should update to latest macOS:
> 
> $ networkQuality
> ==== SUMMARY ====
> Uplink capacity: 326.789 Mbps
> Downlink capacity: 446.359 Mbps
> Responsiveness: High (2195 RPM)
> Idle Latency: 5.833 milli-seconds
> 
> ;-)
> 
>  [SM] I wish... just updated to the latest and greatest for this
> hardware (A1398):
> 
> macbook-pro:DPZ smoeller$ networkQuality
> ==== SUMMARY ====
> 
> Upload capacity: 7.478 Mbps
> Download capacity: 2.415 Mbps
> Upload flows: 16
> Download flows: 20
> Responsiveness: Low (90 RPM)
> macbook-pro:DPZ smoeller$ networkQuality -v
> ==== SUMMARY ====
> 
> Upload capacity: 5.830 Mbps
> Download capacity: 6.077 Mbps
> Upload flows: 12
> Download flows: 20
> Responsiveness: Low (56 RPM)
> Base RTT: 134
> Start: 10/24/22, 22:47:48
> End: 10/24/22, 22:48:09
> OS Version: Version 12.6.1 (Build 21G217)
> macbook-pro:DPZ smoeller$
> 
> Still, I only see the "Base RTT" with the -v switch and I am not sure
> whether that is identical to your "Idle Latency".
> 
> I guess I need to convince my employer to exchange that MacBook
> (actually because the battery is starting to bulge, not because I am
> behind on networkQuality versions ;) )
> 
> Yes, you would need macOS Ventura to get the latest and greatest.
> 
>>> But, what I read is: You are suggesting that “Idle Latency”
>>> should be expressed in RPM as well? Or, Responsiveness expressed
>>> in milliseconds?
>> 
>> [SM] Yes, I am fine with either (or both); the idea is to make it
>> really easy to see whether/how much "working conditions" deteriorate
>> the responsiveness / increase the latency-under-load. At least in
>> verbose mode it would be sweet if networkQuality could expose that
>> information.
> 
> I see - let me think about that…
> 
> Christoph
> 
>> Here an RPM of 2123 corresponds to 60000/2123 = 28.26 ms latency under
>> load, while the Base RTT of 16 ms corresponds to 60000/16 = 3750 RPM,
>> so on this link load reduces the responsiveness by 3750-2123 = 1627
>> RPM, a reduction of 100-100*2123/3750 = 43.4%, and that is with
>> competent AQM and scheduling on the router.
>> 
>> Without competent AQM/shaping I get:
>> ==== SUMMARY ====
>> 
>> Upload capacity: 15.101 Mbps
>> Download capacity: 97.664 Mbps
>> Upload flows: 20
>> Download flows: 12
>> Responsiveness: Medium (427 RPM)
>> Base RTT: 16
>> Start: 10/23/22, 13:51:50
>> End: 10/23/22, 13:52:06
>> OS Version: Version 12.6 (Build 21G115)
>> latency under load: 60000/427 = 140.52 ms
>> base RPM: 60000/16 = 3750 RPM
>> reduction RPM: 100-100*427/3750 = 88.6%
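
A small helper to reproduce these conversions (just the arithmetic
described above, RPM = 60000 / latency-in-ms and vice versa; not code
from the networkQuality tool):

# RPM <-> latency conversion plus the load-induced reduction used above.
def rpm_to_ms(rpm: float) -> float:
    return 60000.0 / rpm

def ms_to_rpm(ms: float) -> float:
    return 60000.0 / ms

def reduction_pct(base_rtt_ms: float, loaded_rpm: float) -> float:
    return 100.0 - 100.0 * loaded_rpm / ms_to_rpm(base_rtt_ms)

# With competent AQM: 2123 RPM loaded, 16 ms base RTT
print(rpm_to_ms(2123), ms_to_rpm(16), reduction_pct(16, 2123))  # ~28.3 ms, 3750 RPM, ~43.4 %
# Without AQM/shaping: 427 RPM loaded, 16 ms base RTT
print(rpm_to_ms(427), ms_to_rpm(16), reduction_pct(16, 427))    # ~140.5 ms, 3750 RPM, ~88.6 %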
>> 
>> I understand Apple's desire to have a single reported number with a
>> single qualifier (medium/high/...), because in the end a link is only
>> reliably usable if responsiveness under load stays acceptable, but
>> with two numbers it is easier to see what one's ISP could do to
>> help. (I guess some ISPs might already be unhappy with the single
>> number, so this needs some diplomacy/tact.)
>> 
>> Regards
>> Sebastian
>> 
>> *) Seemingly, as quite a few ISPs operate their own speedtest servers
>> inside their network and ignore customers who do not reach the
>> contracted rates against speedtest servers located in different ASes.
>> As the product is called internet access, I am inclined to expect that
>> my ISP maintains sufficient peering/transit capacity to reach the next
>> tier of ASes at my contracted rate (the EU legislature seems to agree,
>> see EU directive 2015/2120).
>> 
>> **) Most do so by creating load themselves and measuring throughput at
>> the same time; bounceback, IIUC, focuses on the latency measurement
>> and leaves the load generation optional (so it offers a mode to
>> measure the responsiveness of a live network with minimal measurement
>> traffic). @Bob, please correct me if this is wrong.
>> 
>> On Fri, Oct 21, 2022, 5:20 PM Dave Taht <dave.taht at gmail.com> wrote:
>> One of the best talks I've ever seen on how to measure customer
>> satisfaction properly just went up after the P99 Conference.
>> 
>> It's called Misery Metrics.
>> 
>> After a deep dive into why the ways we think about and act on
>> percentiles, bins, and other statistical methods for measuring the
>> web and internet are *so wrong* (well worth watching and thinking
>> about if you are relying on or creating network metrics today), it
>> then points to the real metrics that matter to users and the
>> ultimate success of an internet business: timeouts, retries, misses,
>> failed queries, angry phone calls, abandoned shopping carts, and
>> loss of engagement.
>> 
>> https://www.p99conf.io/session/misery-metrics-consequences/
>> 
>> The ending advice was - don't aim to make a specific percentile
>> acceptable, aim for an acceptable % of misery.
>> 
>> I enjoyed the p99 conference more than any conference I've attended
>> in years.
>> 
>> --
>> This song goes out to all the folk that thought Stadia would work:
>> 
>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> Dave Täht CEO, TekLibre, LLC
>> 

