[Starlink] [NNagain] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)

rjmcmahon rjmcmahon at rjmcmahon.com
Fri Mar 8 15:30:22 EST 2024


This isn't the definition of latency:

"Latency refers to the amount of time, usually measured in milliseconds, 
that it takes for a packet to be sent from your Starlink router to the 
internet and for the response to be received. This is also known as 
“round-trip time”, or RTT."

A better definition is the time to move a message from memory A to memory B 
over a channel. Iperf 2 measures this from the first socket write to the 
final read and defaults the message size to 128 KBytes. An example over a 
Wi-Fi link is below. Notice that the TCP RTT averages about 8 ms while the 
128 KByte write-to-read latency averages about 5.5 ms.

Packets are mostly a transport artifact and aren't the relevant measurable unit.
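Before the iperf example, here is a toy Python sketch of the first-write-to-
final-read idea (mine, not iperf code; the names and the 128 KByte message 
size are just for illustration). The sender timestamps right before its first 
write, the receiver timestamps right after the read that completes the 
message, and the difference is the one-way message latency. Both ends share a 
clock here because they run in one process; iperf 2's --trip-times needs the 
client and server clocks synchronized for the same reason.

# Toy sketch: one-way latency of a 128 KByte message, first write to final read.
# Not iperf code; both endpoints share one clock because they live in one process.
import socket, threading, time

MSG_SIZE = 128 * 1024  # 128 KBytes, matching iperf 2's default write size

def receiver(listener, results):
    conn, _ = listener.accept()
    got = 0
    while got < MSG_SIZE:
        chunk = conn.recv(65536)
        if not chunk:
            break
        got += len(chunk)
    results['t_final_read'] = time.monotonic()  # after the final read completes
    conn.close()

listener = socket.socket()
listener.bind(('127.0.0.1', 0))
listener.listen(1)
results = {}
t = threading.Thread(target=receiver, args=(listener, results))
t.start()

client = socket.socket()
client.connect(listener.getsockname())
t_first_write = time.monotonic()        # just before the first write
client.sendall(b'\x00' * MSG_SIZE)      # may take several writes under the hood
client.close()
t.join()

print("message latency: %.3f ms" % ((results['t_final_read'] - t_first_write) * 1e3))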

[root at fedora ~]# iperf -s -i 1  -e
------------------------------------------------------------
Server listening on TCP port 5001 with pid 931640
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
TCP congestion control default reno
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.232%eth1 port 5001 connected with 192.168.1.15 
port 43814 (trip-times) (sock=4) (peer 2.1.10-dev) 
(icwnd/mss/irtt=14/1448/3048) on 2024-03-08 12:11:36.777 (PST)
[ ID] Interval        Transfer    Bandwidth    Burst Latency 
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   201 MBytes  1.69 Gbits/sec  
5.206/1.523/21.693/2.130 ms (1609/131131) 1.05 MByte 40531  
3847=1072:289:268:967:433:134:108:576
[  1] 1.00-2.00 sec   210 MBytes  1.76 Gbits/sec  
5.720/1.598/14.741/2.382 ms (1682/131086) 1.21 MByte 38544  
3808=997:285:287:859:416:163:133:668
[  1] 2.00-3.00 sec   212 MBytes  1.78 Gbits/sec  
5.435/1.371/13.913/2.195 ms (1695/131048) 1.15 MByte 40873  
3833=999:255:271:901:456:169:136:646
[  1] 3.00-4.00 sec   211 MBytes  1.77 Gbits/sec  
5.514/1.496/13.218/2.244 ms (1687/131070) 1.16 MByte 40100  
3934=1056:263:315:937:467:154:102:640
[  1] 4.00-5.00 sec   212 MBytes  1.78 Gbits/sec  
5.444/1.494/12.440/2.171 ms (1696/131050) 1.16 MByte 40826  
3931=1018:302:320:918:452:168:128:625
[  1] 5.00-6.00 sec   210 MBytes  1.76 Gbits/sec  
5.387/1.515/13.567/2.229 ms (1682/131067) 1.13 MByte 40925  
3808=977:278:295:869:453:153:124:659
[  1] 6.00-7.00 sec   210 MBytes  1.77 Gbits/sec  
5.526/1.439/16.116/2.250 ms (1683/131123) 1.17 MByte 39935  
3740=927:284:280:835:435:172:145:662
[  1] 7.00-8.00 sec   209 MBytes  1.75 Gbits/sec  
5.659/1.441/13.146/2.320 ms (1674/131017) 1.18 MByte 38759  
3822=987:284:306:883:445:167:106:644
[  1] 8.00-9.00 sec   211 MBytes  1.77 Gbits/sec  
5.465/1.481/13.540/2.256 ms (1686/131123) 1.16 MByte 40453  
3815=975:275:303:866:438:172:144:642
[  1] 9.00-10.00 sec   210 MBytes  1.76 Gbits/sec  
5.579/1.519/14.028/2.233 ms (1683/131005) 1.17 MByte 39519  
3798=965:282:284:881:460:143:119:664
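A rough sanity check on my reading of the server columns above (the Little's 
law relation and the NetPwr scale factor are inferred from the printed 
numbers, not taken from iperf source): inP should be roughly throughput times 
the average burst latency, and NetPwr looks like throughput divided by 
latency. Using the first interval:

# Values copied from the first server interval above.
rate_bps  = 1.69e9      # 1.69 Gbits/sec
latency_s = 5.206e-3    # 5.206 ms average burst latency

inP_bytes = rate_bps / 8 * latency_s           # Little's law: bytes in flight = byte rate * latency
netpwr    = (rate_bps / 8) / latency_s / 1e6   # bytes/sec per sec; the 1e-6 scale is inferred

print("inP ~ %.2f MByte (reported 1.05 MByte)" % (inP_bytes / 2**20))
print("NetPwr ~ %.0f (reported 40531)" % netpwr)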

[root at ctrl1fc35 iperf-2.1.n]# iperf -c 192.168.1.232 -i 1 --trip-times
------------------------------------------------------------
Client connecting to 192.168.1.232, TCP port 5001 with pid 2669821 (1/0 
flows/load)
Write buffer size: 131072 Byte
TCP congestion control using reno
TOS set to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[  1] local 192.168.1.15%enp2s0 port 43814 connected with 192.168.1.232 
port 5001 (prefetch=16384) (trip-times) (sock=3) 
(icwnd/mss/irtt=14/1448/3925) (ct=3.97 ms) on 2024-03-08 12:11:36.771 
(PST)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     
Cwnd/RTT(var)        NetPwr
[  1] 0.00-1.00 sec   202 MBytes  1.69 Gbits/sec  1614/0         0     
5677K/8098(2526) us  26124
[  1] 1.00-2.00 sec   212 MBytes  1.78 Gbits/sec  1693/0         0     
5677K/8827(1836) us  25139
[  1] 2.00-3.00 sec   211 MBytes  1.77 Gbits/sec  1688/0         0     
5677K/9734(603) us  22730
[  1] 3.00-4.00 sec   210 MBytes  1.76 Gbits/sec  1681/0         0     
5677K/8224(2476) us  26791
[  1] 4.00-5.00 sec   213 MBytes  1.79 Gbits/sec  1705/0         0     
5677K/8649(2945) us  25839
[  1] 5.00-6.00 sec   210 MBytes  1.77 Gbits/sec  1684/0         0     
5677K/7896(1909) us  27954
[  1] 6.00-7.00 sec   210 MBytes  1.76 Gbits/sec  1683/0         0     
5677K/7974(2579) us  27664
[  1] 7.00-8.00 sec   209 MBytes  1.76 Gbits/sec  1675/0         0     
5677K/7949(1678) us  27619
[  1] 8.00-9.00 sec   210 MBytes  1.76 Gbits/sec  1680/0         0     
5677K/7841(1992) us  28083
[  1] 9.00-10.00 sec   211 MBytes  1.77 Gbits/sec  1688/0         0     
5677K/7933(1578) us  27890
[  1] 0.00-10.02 sec  2.05 GBytes  1.76 Gbits/sec  16792/0         0     
5677K/8631(2951) us  25439
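Pulling the per-interval numbers from the two outputs above (same connection, 
client port 43814) shows the gap called out at the top: the sampled TCP RTT 
averages a bit over 8 ms while the 128 KByte write-to-read latency averages 
about 5.5 ms. A short Python tally, with the values copied by hand:

# Per-interval values copied from the client RTT column (us) and the
# server burst-latency averages (ms) above.
rtt_us   = [8098, 8827, 9734, 8224, 8649, 7896, 7974, 7949, 7841, 7933]
burst_ms = [5.206, 5.720, 5.435, 5.514, 5.444, 5.387, 5.526, 5.659, 5.465, 5.579]

print("mean TCP RTT:           %.2f ms" % (sum(rtt_us) / len(rtt_us) / 1000))
print("mean write-to-read lat: %.2f ms" % (sum(burst_ms) / len(burst_ms)))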

Use --histograms to get the binned data without CLT (central limit theorem) 
averaging. Three standard deviations out is about 12.2 ms; a quick cross-check 
against the histogram's 99.7% mark follows the output below.

[root at fedora ~]# iperf -s -i 1  -e --histograms
------------------------------------------------------------
Server listening on TCP port 5001 with pid 931657
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
TCP congestion control default reno
Enabled receive histograms bin-width=0.100 ms, bins=100000 (clients 
should use --trip-times)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.232%eth1 port 5001 connected with 192.168.1.15 
port 43822 (trip-times) (sock=4) (peer 2.1.10-dev) 
(icwnd/mss/irtt=14/1448/4065) on 2024-03-08 12:17:48.149 (PST)
[ ID] Interval        Transfer    Bandwidth    Burst Latency 
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   202 MBytes  1.70 Gbits/sec  
5.403/1.441/15.760/2.185 ms (1617/131118) 1.10 MByte 39241  
4166=1194:310:328:1258:418:126:78:454
[  1] 0.00-1.00 sec F8-PDF: 
bin(w=100us):cnt(1617)=15:1,17:2,18:2,19:5,20:9,21:13,22:12,23:17,24:24,25:23,26:15,27:23,28:28,29:32,30:32,31:29,32:25,33:33,34:18,35:18,36:19,37:25,38:27,39:30,40:33,41:32,42:28,43:30,44:20,45:23,46:22,47:23,48:23,49:32,50:23,51:31,52:37,53:30,54:15,55:23,56:25,57:34,58:32,59:33,60:30,61:24,62:31,63:19,64:18,65:18,66:13,67:28,68:19,69:28,70:25,71:20,72:18,73:10,74:7,75:16,76:13,77:11,78:12,79:13,80:11,81:16,82:16,83:10,84:11,85:11,86:5,87:10,88:9,89:12,90:11,91:10,92:7,93:10,94:8,95:6,96:7,97:7,98:6,99:2,100:4,101:7,102:1,103:4,105:6,106:1,107:1,108:1,111:1,112:3,113:3,114:1,115:1,117:1,119:2,120:2,121:1,122:2,123:2,124:1,125:1,130:1,158:1 
(5.00/95.00/99.7%=24/94/123,Outliers=0,obl/obu=0/0) (15.760 
ms/1709929068.147166)
[  1] 1.00-2.00 sec   212 MBytes  1.78 Gbits/sec  
5.458/1.613/13.918/2.215 ms (1694/131068) 1.15 MByte 40681  
4570=1361:330:392:1431:440:135:71:410
[  1] 1.00-2.00 sec F8-PDF: 
bin(w=100us):cnt(1694)=17:2,18:5,19:10,20:10,21:17,22:21,23:20,24:19,25:24,26:20,27:20,28:32,29:32,30:27,31:27,32:29,33:27,34:23,35:18,36:15,37:30,38:22,39:20,40:26,41:27,42:30,43:31,44:24,45:24,46:19,47:26,48:33,49:35,50:27,51:35,52:23,53:26,54:28,55:26,56:25,57:23,58:29,59:22,60:38,61:24,62:29,63:26,64:26,65:18,66:18,67:24,68:31,69:15,70:28,71:34,72:24,73:14,74:13,75:13,76:17,77:10,78:12,79:15,80:15,81:23,82:15,83:9,84:12,85:14,86:11,87:14,88:8,89:9,90:12,91:7,92:9,93:9,94:8,95:4,96:10,97:4,98:4,99:3,100:7,101:3,102:7,103:6,104:1,105:1,106:3,108:2,109:2,110:1,111:1,112:1,113:1,115:2,117:3,118:2,119:4,121:1,122:2,123:1,124:2,125:1,131:1,140:1 
(5.00/95.00/99.7%=23/94/123,Outliers=0,obl/obu=0/0) (13.918 
ms/1709929069.596075)
[  1] 2.00-3.00 sec   212 MBytes  1.78 Gbits/sec  
5.375/1.492/12.891/2.103 ms (1693/131105) 1.14 MByte 41298  
4448=1274:306:352:1453:455:117:73:418
[  1] 2.00-3.00 sec F8-PDF: 
bin(w=100us):cnt(1693)=15:1,16:2,17:1,18:3,19:9,20:11,21:11,22:10,23:22,24:27,25:29,26:33,27:22,28:28,29:27,30:25,31:16,32:26,33:30,34:25,35:24,36:19,37:20,38:28,39:24,40:27,41:22,42:33,43:31,44:28,45:26,46:33,47:32,48:30,49:31,50:25,51:21,52:25,53:35,54:35,55:36,56:21,57:28,58:27,59:20,60:18,61:24,62:23,63:30,64:34,65:19,66:22,67:28,68:32,69:25,70:18,71:14,72:21,73:14,74:22,75:13,76:14,77:17,78:13,79:15,80:11,81:12,82:14,83:16,84:14,85:20,86:20,87:13,88:10,89:9,90:14,91:6,92:9,93:14,94:6,95:3,96:5,97:6,98:5,99:2,100:3,101:2,102:3,103:4,105:3,106:1,107:1,108:1,109:1,110:2,111:1,113:1,114:1,118:2,120:1,121:1,129:1 
(5.00/95.00/99.7%=24/91/114,Outliers=0,obl/obu=0/0) (12.891 
ms/1709929070.548830)
[  1] 0.00-3.01 sec   627 MBytes  1.75 Gbits/sec  
5.410/1.441/15.760/2.167 ms (5018/131072)  858 KByte 40393  
13218=3837:952:1075:4148:1317:379:225:1285
[  1] 0.00-3.01 sec F8(f)-PDF: 
bin(w=100us):cnt(5018)=15:2,16:2,17:5,18:10,19:24,20:31,21:42,22:43,23:59,24:70,25:76,26:68,27:65,28:88,29:91,30:85,31:72,32:80,33:90,34:66,35:60,36:53,37:75,38:77,39:75,40:86,41:82,42:92,43:92,44:72,45:73,46:74,47:81,48:86,49:99,50:76,51:87,52:85,53:91,54:78,55:85,56:72,57:85,58:88,59:76,60:87,61:72,62:83,63:75,64:78,65:55,66:53,67:80,68:84,69:68,70:71,71:68,72:63,73:38,74:42,75:42,76:44,77:38,78:38,79:43,80:37,81:51,82:45,83:35,84:37,85:45,86:36,87:37,88:27,89:30,90:37,91:23,92:25,93:33,94:22,95:13,96:22,97:17,98:15,99:7,100:14,101:12,102:11,103:14,104:1,105:10,106:5,107:2,108:4,109:3,110:3,111:3,112:4,113:5,114:2,115:3,117:4,118:4,119:6,120:3,121:3,122:4,123:3,124:3,125:2,129:1,130:1,131:1,140:1,158:1 
(5.00/95.00/99.7%=24/93/122,Outliers=0,obl/obu=0/0) (15.760 
ms/1709929068.147166)
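As that cross-check (the mean + 3*stdev reading is my interpretation; the 
numbers are copied from the 0.00-3.01 sec summary and its F8(f)-PDF line 
above):

# 0.00-3.01 sec summary from the histogram run above.
mean_ms, stdev_ms = 5.410, 2.167
gauss_3sigma = mean_ms + 3 * stdev_ms   # Gaussian tail estimate from the CLT-style summary
hist_p997    = 122 * 0.100              # 99.7th percentile bin, bin width 0.100 ms

print("mean + 3*stdev:  %.1f ms" % gauss_3sigma)  # ~11.9 ms
print("histogram 99.7%%: %.1f ms" % hist_p997)    # 12.2 ms

The two agree to within a few hundred microseconds on this link; the binned 
data is what exposes it when they don't.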

Bob

> they benefited a lot from this mailing list and the research and even
> user community at large
> --
> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan at UVic.CA, 
> Web.UVic.CA/~pan
> 
> 
> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
> Starlink <starlink at lists.bufferbloat.net> wrote:
>> 
>> Super excited to be able to share some of what we have been working on 
>> over the last few months!
>> EXCERPT:
>> 
>> Starlink engineering teams have been focused on improving the 
>> performance of our network with the goal of delivering a service with 
>> stable 20 millisecond (ms) median latency and minimal packet loss.
>> 
>> Over the past month, we have meaningfully reduced median and 
>> worst-case latency for users around the world. In the United States 
>> alone, we reduced median latency by more than 30%, from 48.5ms to 33ms 
>> during hours of peak usage. Worst-case peak hour latency (p99) has 
>> dropped by over 60%, from over 150ms to less than 65ms. Outside of the 
>> United States, we have also reduced median latency by up to 25% and 
>> worst-case latencies by up to 35%...
>> 
>> [...]
>> https://api.starlink.com/public-files/StarlinkLatency.pdf
>> via
>> https://twitter.com/Starlink/status/1766179308887028005
>> &
>> https://twitter.com/VirtuallyNathan/status/1766179789927522460
>> 
>> 
>> --
>> Geoff.Goodfellow at iconia.com
>> living as The Truth is True
>> 