* [NNagain] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)
@ 2024-03-08 19:40 the keyboard of geoff goodfellow
2024-03-08 19:50 ` [NNagain] [Starlink] " J Pan
0 siblings, 1 reply; 12+ messages in thread
From: the keyboard of geoff goodfellow @ 2024-03-08 19:40 UTC (permalink / raw)
To: Starlink,
Network Neutrality is back! Let's make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 1009 bytes --]
*Super excited to be able to share some of what we have been working on
over the last few months!*
EXCERPT:
*Starlink engineering teams have been focused on improving the performance
of our network with the goal of delivering a service with stable 20
millisecond (ms) median latency and minimal packet loss. *
*Over the past month, we have meaningfully reduced median and worst-case
latency for users around the world. In the United States alone, we reduced
median latency by more than 30%, from 48.5ms to 33ms during hours of peak
usage. Worst-case peak hour latency (p99) has dropped by over 60%, from
over 150ms to less than 65ms. Outside of the United States, we have also
reduced median latency by up to 25% and worst-case latencies by up to
35%...*
[...]
https://api.starlink.com/public-files/StarlinkLatency.pdf
via
https://twitter.com/Starlink/status/1766179308887028005
&
https://twitter.com/VirtuallyNathan/status/1766179789927522460
--
Geoff.Goodfellow@iconia.com
living as The Truth is True
[-- Attachment #2: Type: text/html, Size: 2960 bytes --]
* Re: [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)
2024-03-08 19:40 [NNagain] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens) the keyboard of geoff goodfellow
@ 2024-03-08 19:50 ` J Pan
2024-03-08 20:09 ` the keyboard of geoff goodfellow
2024-03-08 20:30 ` rjmcmahon
0 siblings, 2 replies; 12+ messages in thread
From: J Pan @ 2024-03-08 19:50 UTC (permalink / raw)
To: the keyboard of geoff goodfellow
Cc: Starlink,
Network Neutrality is back! Let's make the technical
aspects heard this time!
they benefited a lot from this mailing list and the research and even
user community at large
--
J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
Starlink <starlink@lists.bufferbloat.net> wrote:
>
> Super excited to be able to share some of what we have been working on over the last few months!
> EXCERPT:
>
> Starlink engineering teams have been focused on improving the performance of our network with the goal of delivering a service with stable 20 millisecond (ms) median latency and minimal packet loss.
>
> Over the past month, we have meaningfully reduced median and worst-case latency for users around the world. In the United States alone, we reduced median latency by more than 30%, from 48.5ms to 33ms during hours of peak usage. Worst-case peak hour latency (p99) has dropped by over 60%, from over 150ms to less than 65ms. Outside of the United States, we have also reduced median latency by up to 25% and worst-case latencies by up to 35%...
>
> [...]
> https://api.starlink.com/public-files/StarlinkLatency.pdf
> via
> https://twitter.com/Starlink/status/1766179308887028005
> &
> https://twitter.com/VirtuallyNathan/status/1766179789927522460
>
>
> --
> Geoff.Goodfellow@iconia.com
> living as The Truth is True
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
* Re: [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)
2024-03-08 19:50 ` [NNagain] [Starlink] " J Pan
@ 2024-03-08 20:09 ` the keyboard of geoff goodfellow
2024-03-08 20:25 ` Frantisek Borsik
2024-03-08 20:31 ` Dave Taht
2024-03-08 20:30 ` rjmcmahon
1 sibling, 2 replies; 12+ messages in thread
From: the keyboard of geoff goodfellow @ 2024-03-08 20:09 UTC (permalink / raw)
To: J Pan
Cc: Starlink,
Network Neutrality is back! Let's make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 1907 bytes --]
it would be a super good and appreciative gesture if they would disclose
what/if any of the stuff they are making use of and then also to make a
donation :)
On Fri, Mar 8, 2024 at 12:50 PM J Pan <Pan@uvic.ca> wrote:
> they benefited a lot from this mailing list and the research and even
> user community at large
> --
> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA,
> Web.UVic.CA/~pan
>
>
> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
> Starlink <starlink@lists.bufferbloat.net> wrote:
> >
> > Super excited to be able to share some of what we have been working on
> over the last few months!
> > EXCERPT:
> >
> > Starlink engineering teams have been focused on improving the
> performance of our network with the goal of delivering a service with
> stable 20 millisecond (ms) median latency and minimal packet loss.
> >
> > Over the past month, we have meaningfully reduced median and worst-case
> latency for users around the world. In the United States alone, we reduced
> median latency by more than 30%, from 48.5ms to 33ms during hours of peak
> usage. Worst-case peak hour latency (p99) has dropped by over 60%, from
> over 150ms to less than 65ms. Outside of the United States, we have also
> reduced median latency by up to 25% and worst-case latencies by up to 35%...
> >
> > [...]
> > https://api.starlink.com/public-files/StarlinkLatency.pdf
> > via
> > https://twitter.com/Starlink/status/1766179308887028005
> > &
> > https://twitter.com/VirtuallyNathan/status/1766179789927522460
> >
> >
> > --
> > Geoff.Goodfellow@iconia.com
> > living as The Truth is True
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
>
--
Geoff.Goodfellow@iconia.com
living as The Truth is True
[-- Attachment #2: Type: text/html, Size: 3585 bytes --]
* Re: [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)
2024-03-08 20:09 ` the keyboard of geoff goodfellow
@ 2024-03-08 20:25 ` Frantisek Borsik
2024-03-08 20:31 ` Dave Taht
1 sibling, 0 replies; 12+ messages in thread
From: Frantisek Borsik @ 2024-03-08 20:25 UTC (permalink / raw)
To: the keyboard of geoff goodfellow
Cc: J Pan, Starlink,
Network Neutrality is back! Let's make the technical
aspects heard this time!
[-- Attachment #1: Type: text/plain, Size: 2559 bytes --]
Indeed. To Dave Taht: https://www.gofundme.com/f/savewifi
All the best,
Frank
Frantisek (Frank) Borsik
https://www.linkedin.com/in/frantisekborsik
Signal, Telegram, WhatsApp: +421919416714
iMessage, mobile: +420775230885
Skype: casioa5302ca
frantisek.borsik@gmail.com
On Fri, Mar 8, 2024 at 9:10 PM the keyboard of geoff goodfellow via
Starlink <starlink@lists.bufferbloat.net> wrote:
> it would be a super good and appreciative gesture if they would disclose
> what/if any of the stuff they are making use of and then also to make a
> donation :)
>
> On Fri, Mar 8, 2024 at 12:50 PM J Pan <Pan@uvic.ca> wrote:
>
>> they benefited a lot from this mailing list and the research and even
>> user community at large
>> --
>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA,
>> Web.UVic.CA/~pan
>>
>>
>> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
>> Starlink <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Super excited to be able to share some of what we have been working on
>> over the last few months!
>> > EXCERPT:
>> >
>> > Starlink engineering teams have been focused on improving the
>> performance of our network with the goal of delivering a service with
>> stable 20 millisecond (ms) median latency and minimal packet loss.
>> >
>> > Over the past month, we have meaningfully reduced median and worst-case
>> latency for users around the world. In the United States alone, we reduced
>> median latency by more than 30%, from 48.5ms to 33ms during hours of peak
>> usage. Worst-case peak hour latency (p99) has dropped by over 60%, from
>> over 150ms to less than 65ms. Outside of the United States, we have also
>> reduced median latency by up to 25% and worst-case latencies by up to 35%...
>> >
>> > [...]
>> > https://api.starlink.com/public-files/StarlinkLatency.pdf
>> > via
>> > https://twitter.com/Starlink/status/1766179308887028005
>> > &
>> > https://twitter.com/VirtuallyNathan/status/1766179789927522460
>> >
>> >
>> > --
>> > Geoff.Goodfellow@iconia.com
>> > living as The Truth is True
>> >
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>
> --
> Geoff.Goodfellow@iconia.com
> living as The Truth is True
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 5890 bytes --]
* Re: [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)
2024-03-08 19:50 ` [NNagain] [Starlink] " J Pan
2024-03-08 20:09 ` the keyboard of geoff goodfellow
@ 2024-03-08 20:30 ` rjmcmahon
1 sibling, 0 replies; 12+ messages in thread
From: rjmcmahon @ 2024-03-08 20:30 UTC (permalink / raw)
To: Network Neutrality is back! Let's make the technical
aspects heard this time!
Cc: the keyboard of geoff goodfellow, J Pan, Starlink
This isn't the definition of latency:
"Latency refers to the amount of time, usually measured in milliseconds,
that it takes for a packet to be sent from your Starlink router to the
internet and for the response to be received. This is also known as
“round-trip time”, or RTT."
A better definition is the time to move a message from memory A to memory B
over a channel. Iperf 2 measures first write to final read and defaults the
message size to 128K bytes. Example over a Wi-Fi link below. Notice the TCP
RTT is about 8 ms, but the 128K write-to-read latency averages about 5 ms.
Packets are mostly an artifact and aren't the relevant measurable unit.
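Before the actual iperf run, for illustration only, a minimal Python sketch of
the same "first write to final read" idea for a 128 KByte message (a toy over
loopback in a single process, so one clock covers both ends; it is not iperf,
whose --trip-times needs synchronized clocks across real hosts):
# Toy write-to-read latency for one 128 KB message over loopback.
# One process, one clock; illustration only, not iperf itself.
import socket, threading, time

MSG_SIZE = 128 * 1024  # 128 KByte message, matching the iperf 2 default

def receiver(srv, results):
    conn, _ = srv.accept()
    remaining = MSG_SIZE
    while remaining > 0:
        chunk = conn.recv(min(65536, remaining))
        if not chunk:
            break
        remaining -= len(chunk)
    results['final_read'] = time.monotonic()   # time of the final read
    conn.close()

srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen(1)
results = {}
t = threading.Thread(target=receiver, args=(srv, results))
t.start()

cli = socket.socket()
cli.connect(srv.getsockname())
first_write = time.monotonic()                 # time of the first write
cli.sendall(b'x' * MSG_SIZE)
t.join()
cli.close(); srv.close()

print("write-to-read latency: %.3f ms"
      % ((results['final_read'] - first_write) * 1e3))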
[root@fedora ~]# iperf -s -i 1 -e
------------------------------------------------------------
Server listening on TCP port 5001 with pid 931640
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP congestion control default reno
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.232%eth1 port 5001 connected with 192.168.1.15
port 43814 (trip-times) (sock=4) (peer 2.1.10-dev)
(icwnd/mss/irtt=14/1448/3048) on 2024-03-08 12:11:36.777 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 1] 0.00-1.00 sec 201 MBytes 1.69 Gbits/sec
5.206/1.523/21.693/2.130 ms (1609/131131) 1.05 MByte 40531
3847=1072:289:268:967:433:134:108:576
[ 1] 1.00-2.00 sec 210 MBytes 1.76 Gbits/sec
5.720/1.598/14.741/2.382 ms (1682/131086) 1.21 MByte 38544
3808=997:285:287:859:416:163:133:668
[ 1] 2.00-3.00 sec 212 MBytes 1.78 Gbits/sec
5.435/1.371/13.913/2.195 ms (1695/131048) 1.15 MByte 40873
3833=999:255:271:901:456:169:136:646
[ 1] 3.00-4.00 sec 211 MBytes 1.77 Gbits/sec
5.514/1.496/13.218/2.244 ms (1687/131070) 1.16 MByte 40100
3934=1056:263:315:937:467:154:102:640
[ 1] 4.00-5.00 sec 212 MBytes 1.78 Gbits/sec
5.444/1.494/12.440/2.171 ms (1696/131050) 1.16 MByte 40826
3931=1018:302:320:918:452:168:128:625
[ 1] 5.00-6.00 sec 210 MBytes 1.76 Gbits/sec
5.387/1.515/13.567/2.229 ms (1682/131067) 1.13 MByte 40925
3808=977:278:295:869:453:153:124:659
[ 1] 6.00-7.00 sec 210 MBytes 1.77 Gbits/sec
5.526/1.439/16.116/2.250 ms (1683/131123) 1.17 MByte 39935
3740=927:284:280:835:435:172:145:662
[ 1] 7.00-8.00 sec 209 MBytes 1.75 Gbits/sec
5.659/1.441/13.146/2.320 ms (1674/131017) 1.18 MByte 38759
3822=987:284:306:883:445:167:106:644
[ 1] 8.00-9.00 sec 211 MBytes 1.77 Gbits/sec
5.465/1.481/13.540/2.256 ms (1686/131123) 1.16 MByte 40453
3815=975:275:303:866:438:172:144:642
[ 1] 9.00-10.00 sec 210 MBytes 1.76 Gbits/sec
5.579/1.519/14.028/2.233 ms (1683/131005) 1.17 MByte 39519
3798=965:282:284:881:460:143:119:664
[root@ctrl1fc35 iperf-2.1.n]# iperf -c 192.168.1.232 -i 1 --trip-times
------------------------------------------------------------
Client connecting to 192.168.1.232, TCP port 5001 with pid 2669821 (1/0
flows/load)
Write buffer size: 131072 Byte
TCP congestion control using reno
TOS set to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local 192.168.1.15%enp2s0 port 43814 connected with 192.168.1.232
port 5001 (prefetch=16384) (trip-times) (sock=3)
(icwnd/mss/irtt=14/1448/3925) (ct=3.97 ms) on 2024-03-08 12:11:36.771
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
[ 1] 0.00-1.00 sec 202 MBytes 1.69 Gbits/sec 1614/0 0
5677K/8098(2526) us 26124
[ 1] 1.00-2.00 sec 212 MBytes 1.78 Gbits/sec 1693/0 0
5677K/8827(1836) us 25139
[ 1] 2.00-3.00 sec 211 MBytes 1.77 Gbits/sec 1688/0 0
5677K/9734(603) us 22730
[ 1] 3.00-4.00 sec 210 MBytes 1.76 Gbits/sec 1681/0 0
5677K/8224(2476) us 26791
[ 1] 4.00-5.00 sec 213 MBytes 1.79 Gbits/sec 1705/0 0
5677K/8649(2945) us 25839
[ 1] 5.00-6.00 sec 210 MBytes 1.77 Gbits/sec 1684/0 0
5677K/7896(1909) us 27954
[ 1] 6.00-7.00 sec 210 MBytes 1.76 Gbits/sec 1683/0 0
5677K/7974(2579) us 27664
[ 1] 7.00-8.00 sec 209 MBytes 1.76 Gbits/sec 1675/0 0
5677K/7949(1678) us 27619
[ 1] 8.00-9.00 sec 210 MBytes 1.76 Gbits/sec 1680/0 0
5677K/7841(1992) us 28083
[ 1] 9.00-10.00 sec 211 MBytes 1.77 Gbits/sec 1688/0 0
5677K/7933(1578) us 27890
[ 1] 0.00-10.02 sec 2.05 GBytes 1.76 Gbits/sec 16792/0 0
5677K/8631(2951) us 25439
Use --histograms to get the binned data without CLT averaging. Mean plus 3
stdev is about 12.2 ms (roughly 5.5 ms + 3 x 2.2 ms).
[root@fedora ~]# iperf -s -i 1 -e --histograms
------------------------------------------------------------
Server listening on TCP port 5001 with pid 931657
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP congestion control default reno
Enabled receive histograms bin-width=0.100 ms, bins=100000 (clients
should use --trip-times)
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.232%eth1 port 5001 connected with 192.168.1.15
port 43822 (trip-times) (sock=4) (peer 2.1.10-dev)
(icwnd/mss/irtt=14/1448/4065) on 2024-03-08 12:17:48.149 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 1] 0.00-1.00 sec 202 MBytes 1.70 Gbits/sec
5.403/1.441/15.760/2.185 ms (1617/131118) 1.10 MByte 39241
4166=1194:310:328:1258:418:126:78:454
[ 1] 0.00-1.00 sec F8-PDF:
bin(w=100us):cnt(1617)=15:1,17:2,18:2,19:5,20:9,21:13,22:12,23:17,24:24,25:23,26:15,27:23,28:28,29:32,30:32,31:29,32:25,33:33,34:18,35:18,36:19,37:25,38:27,39:30,40:33,41:32,42:28,43:30,44:20,45:23,46:22,47:23,48:23,49:32,50:23,51:31,52:37,53:30,54:15,55:23,56:25,57:34,58:32,59:33,60:30,61:24,62:31,63:19,64:18,65:18,66:13,67:28,68:19,69:28,70:25,71:20,72:18,73:10,74:7,75:16,76:13,77:11,78:12,79:13,80:11,81:16,82:16,83:10,84:11,85:11,86:5,87:10,88:9,89:12,90:11,91:10,92:7,93:10,94:8,95:6,96:7,97:7,98:6,99:2,100:4,101:7,102:1,103:4,105:6,106:1,107:1,108:1,111:1,112:3,113:3,114:1,115:1,117:1,119:2,120:2,121:1,122:2,123:2,124:1,125:1,130:1,158:1
(5.00/95.00/99.7%=24/94/123,Outliers=0,obl/obu=0/0) (15.760
ms/1709929068.147166)
[ 1] 1.00-2.00 sec 212 MBytes 1.78 Gbits/sec
5.458/1.613/13.918/2.215 ms (1694/131068) 1.15 MByte 40681
4570=1361:330:392:1431:440:135:71:410
[ 1] 1.00-2.00 sec F8-PDF:
bin(w=100us):cnt(1694)=17:2,18:5,19:10,20:10,21:17,22:21,23:20,24:19,25:24,26:20,27:20,28:32,29:32,30:27,31:27,32:29,33:27,34:23,35:18,36:15,37:30,38:22,39:20,40:26,41:27,42:30,43:31,44:24,45:24,46:19,47:26,48:33,49:35,50:27,51:35,52:23,53:26,54:28,55:26,56:25,57:23,58:29,59:22,60:38,61:24,62:29,63:26,64:26,65:18,66:18,67:24,68:31,69:15,70:28,71:34,72:24,73:14,74:13,75:13,76:17,77:10,78:12,79:15,80:15,81:23,82:15,83:9,84:12,85:14,86:11,87:14,88:8,89:9,90:12,91:7,92:9,93:9,94:8,95:4,96:10,97:4,98:4,99:3,100:7,101:3,102:7,103:6,104:1,105:1,106:3,108:2,109:2,110:1,111:1,112:1,113:1,115:2,117:3,118:2,119:4,121:1,122:2,123:1,124:2,125:1,131:1,140:1
(5.00/95.00/99.7%=23/94/123,Outliers=0,obl/obu=0/0) (13.918
ms/1709929069.596075)
[ 1] 2.00-3.00 sec 212 MBytes 1.78 Gbits/sec
5.375/1.492/12.891/2.103 ms (1693/131105) 1.14 MByte 41298
4448=1274:306:352:1453:455:117:73:418
[ 1] 2.00-3.00 sec F8-PDF:
bin(w=100us):cnt(1693)=15:1,16:2,17:1,18:3,19:9,20:11,21:11,22:10,23:22,24:27,25:29,26:33,27:22,28:28,29:27,30:25,31:16,32:26,33:30,34:25,35:24,36:19,37:20,38:28,39:24,40:27,41:22,42:33,43:31,44:28,45:26,46:33,47:32,48:30,49:31,50:25,51:21,52:25,53:35,54:35,55:36,56:21,57:28,58:27,59:20,60:18,61:24,62:23,63:30,64:34,65:19,66:22,67:28,68:32,69:25,70:18,71:14,72:21,73:14,74:22,75:13,76:14,77:17,78:13,79:15,80:11,81:12,82:14,83:16,84:14,85:20,86:20,87:13,88:10,89:9,90:14,91:6,92:9,93:14,94:6,95:3,96:5,97:6,98:5,99:2,100:3,101:2,102:3,103:4,105:3,106:1,107:1,108:1,109:1,110:2,111:1,113:1,114:1,118:2,120:1,121:1,129:1
(5.00/95.00/99.7%=24/91/114,Outliers=0,obl/obu=0/0) (12.891
ms/1709929070.548830)
[ 1] 0.00-3.01 sec 627 MBytes 1.75 Gbits/sec
5.410/1.441/15.760/2.167 ms (5018/131072) 858 KByte 40393
13218=3837:952:1075:4148:1317:379:225:1285
[ 1] 0.00-3.01 sec F8(f)-PDF:
bin(w=100us):cnt(5018)=15:2,16:2,17:5,18:10,19:24,20:31,21:42,22:43,23:59,24:70,25:76,26:68,27:65,28:88,29:91,30:85,31:72,32:80,33:90,34:66,35:60,36:53,37:75,38:77,39:75,40:86,41:82,42:92,43:92,44:72,45:73,46:74,47:81,48:86,49:99,50:76,51:87,52:85,53:91,54:78,55:85,56:72,57:85,58:88,59:76,60:87,61:72,62:83,63:75,64:78,65:55,66:53,67:80,68:84,69:68,70:71,71:68,72:63,73:38,74:42,75:42,76:44,77:38,78:38,79:43,80:37,81:51,82:45,83:35,84:37,85:45,86:36,87:37,88:27,89:30,90:37,91:23,92:25,93:33,94:22,95:13,96:22,97:17,98:15,99:7,100:14,101:12,102:11,103:14,104:1,105:10,106:5,107:2,108:4,109:3,110:3,111:3,112:4,113:5,114:2,115:3,117:4,118:4,119:6,120:3,121:3,122:4,123:3,124:3,125:2,129:1,130:1,131:1,140:1,158:1
(5.00/95.00/99.7%=24/93/122,Outliers=0,obl/obu=0/0) (15.760
ms/1709929068.147166)
Bob
> they benefited a lot from this mailing list and the research and even
> user community at large
> --
> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA,
> Web.UVic.CA/~pan
>
>
> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
> Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> Super excited to be able to share some of what we have been working on
>> over the last few months!
>> EXCERPT:
>>
>> Starlink engineering teams have been focused on improving the
>> performance of our network with the goal of delivering a service with
>> stable 20 millisecond (ms) median latency and minimal packet loss.
>>
>> Over the past month, we have meaningfully reduced median and
>> worst-case latency for users around the world. In the United States
>> alone, we reduced median latency by more than 30%, from 48.5ms to 33ms
>> during hours of peak usage. Worst-case peak hour latency (p99) has
>> dropped by over 60%, from over 150ms to less than 65ms. Outside of the
>> United States, we have also reduced median latency by up to 25% and
>> worst-case latencies by up to 35%...
>>
>> [...]
>> https://api.starlink.com/public-files/StarlinkLatency.pdf
>> via
>> https://twitter.com/Starlink/status/1766179308887028005
>> &
>> https://twitter.com/VirtuallyNathan/status/1766179789927522460
>>
>>
>> --
>> Geoff.Goodfellow@iconia.com
>> living as The Truth is True
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
* Re: [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)
2024-03-08 20:09 ` the keyboard of geoff goodfellow
2024-03-08 20:25 ` Frantisek Borsik
@ 2024-03-08 20:31 ` Dave Taht
2024-03-08 23:44 ` [NNagain] When Flows Collide? Jack Haverty
2024-03-10 17:41 ` [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens) Michael Richardson
1 sibling, 2 replies; 12+ messages in thread
From: Dave Taht @ 2024-03-08 20:31 UTC (permalink / raw)
To: the keyboard of geoff goodfellow
Cc: J Pan, Starlink,
Network Neutrality is back! Let´s make the technical
aspects heard this time!
I am deeply appreciative of everyone's efforts here over the past 3
years, and within starlink burning the midnight oil on their 20ms
goal, (especially nathan!!!!) to make all the progress made on their
systems in these past few months. I was so happy to burn about 12
minutes, publicly, taking apart Oleg's results here, last week:
https://www.youtube.com/watch?v=N0Tmvv5jJKs&t=1760s
But couldn't then and still can't talk better to the whys and the
problems remaining. (It's not a kernel problem, actually)
As for starlink/space support of us, bufferbloat.net, and/or lowering
latency across the internet in general, I don't know. I keep hoping a
used tesla motor for my boat will arrive in the mail one day, that's
all. :)
It is my larger hope that with this news, all the others doing FWA,
and for that matter, cable, and fiber, will also get on the stick,
finally. Maybe someone in the press will explain bufferbloat. Who
knows what the coming days hold!?
13 herbs and spices....
On Fri, Mar 8, 2024 at 3:10 PM the keyboard of geoff goodfellow via
Starlink <starlink@lists.bufferbloat.net> wrote:
>
> it would be a super good and appreciative gesture if they would disclose what/if any of the stuff they are making use of and then also to make a donation :)
>
> On Fri, Mar 8, 2024 at 12:50 PM J Pan <Pan@uvic.ca> wrote:
>>
>> they benefited a lot from this mailing list and the research and even
>> user community at large
>> --
>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
>>
>>
>> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
>> Starlink <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Super excited to be able to share some of what we have been working on over the last few months!
>> > EXCERPT:
>> >
>> > Starlink engineering teams have been focused on improving the performance of our network with the goal of delivering a service with stable 20 millisecond (ms) median latency and minimal packet loss.
>> >
>> > Over the past month, we have meaningfully reduced median and worst-case latency for users around the world. In the United States alone, we reduced median latency by more than 30%, from 48.5ms to 33ms during hours of peak usage. Worst-case peak hour latency (p99) has dropped by over 60%, from over 150ms to less than 65ms. Outside of the United States, we have also reduced median latency by up to 25% and worst-case latencies by up to 35%...
>> >
>> > [...]
>> > https://api.starlink.com/public-files/StarlinkLatency.pdf
>> > via
>> > https://twitter.com/Starlink/status/1766179308887028005
>> > &
>> > https://twitter.com/VirtuallyNathan/status/1766179789927522460
>> >
>> >
>> > --
>> > Geoff.Goodfellow@iconia.com
>> > living as The Truth is True
>> >
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>
>
> --
> Geoff.Goodfellow@iconia.com
> living as The Truth is True
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
https://www.youtube.com/watch?v=N0Tmvv5jJKs Epik Mellon Podcast
Dave Täht CSO, LibreQos
* [NNagain] When Flows Collide?
2024-03-08 20:31 ` Dave Taht
@ 2024-03-08 23:44 ` Jack Haverty
2024-03-09 2:57 ` David Lang
2024-03-10 17:41 ` [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens) Michael Richardson
1 sibling, 1 reply; 12+ messages in thread
From: Jack Haverty @ 2024-03-08 23:44 UTC (permalink / raw)
To: nnagain, Starlink
[-- Attachment #1.1.1.1: Type: text/plain, Size: 8907 bytes --]
It's great to see that latency is getting attention as well as action to
control it. But it's only part of the bigger picture of Internet
performance.
While performance across a particular network is interesting, most uses
of the Internet involve data flowing through several separate networks.
That's pretty much the definition of "Internet". The endpoints might be
some kind of LAN in a home or corporate IT facility or public venue.
In between there might be fiber, radio, satellite, or other (even
whimsically avian!?) networks carrying a user's data. This kind of
system configuration has existed since the genesis of The Internet and
seems likely to continue. Technology has advanced a lot, with bigger and
bigger "pipes" invented to carry more data, but fundamental issues remain.
System configurations we used in the early research days were real
experiments to be measured and tested, or often just "thought
experiments" to imagine how the system would behave, what algorithms
would be appropriate, and what protocols had to exist to coordinate the
activities of all the components.
One such configuration was very simple. Imagine there are three very
fast computers, each attached to a very fast LAN. The computers and
LAN can send and receive data as fast as you can imagine, so that they
are not a limiting factor. The LANs are attached to some "ISP" which
isn't as fast (in bandwidth or latency) as a LAN. ISPs are
interconnected at various points, forming a somewhat rich mesh of
topology with several, or many, possible routes from any source to any
destination.
Now imagine a user configuration in which two of the computers send a
constant stream of data to the third computer at a predefined rate.
Perhaps it is a UDP datagram every N milliseconds, each datagram
containing a frame of video. If N=20 it corresponds to a 50Hz frame
rate, which is common for video.
Somewhere along the way to that common destination, those two data
streams collide, and there is a bottleneck. All the data coming in
cannot fit in the pipe going out. Something has to give.
Thought experiment -- What should happen? Does the bottleneck discard
datagrams it can't handle? How does it decide which ones to discard?
Does the bottleneck buffer the excess datagrams, hoping that the
situation is just temporary? Does the bottleneck somehow signal back
to the sources to reduce their data rate? Does the bottleneck discard
datagrams that it knows won't reach the destination in time to be
useful? Does the bottleneck trigger some kind of network
reconfiguration, perhaps to route "low priority" data along some
alternate path to free up capacity for the video streams that require
low latency?
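(For anyone who wants to poke at that first thought experiment numerically,
here is a rough Python sketch with entirely made-up numbers: two 50 Hz
sources converge on a FIFO tail-drop bottleneck whose output link carries
only 4 of the offered 5 Mbit/s. It illustrates one possible answer -- tail
drop -- not what any particular router actually does.)
# Toy model of the thought experiment: two 50 Hz sources (one frame every
# N=20 ms, 6250 bytes each, i.e. 2.5 Mbit/s per source) converge on a
# bottleneck whose output link carries only 4 Mbit/s. The bottleneck here
# is a plain FIFO with tail drop; all numbers are invented for illustration.

FRAME_BYTES  = 6250
INTERVAL_S   = 0.020          # N = 20 ms  ->  50 Hz per source
LINK_BPS     = 4_000_000      # outgoing "pipe" at the bottleneck
BUFFER_BYTES = 30_000         # how much the bottleneck buffers before dropping
SIM_SECONDS  = 2.0

queue, queued_bytes = [], 0   # FIFO of (arrival_time, size)
drops = delivered = 0
worst_delay = 0.0
link_free_at = 0.0            # when the link finishes its current datagram

t = 0.0
while t < SIM_SECONDS:
    for _ in range(2):                        # both sources emit a frame at time t
        if queued_bytes + FRAME_BYTES <= BUFFER_BYTES:
            queue.append((t, FRAME_BYTES))
            queued_bytes += FRAME_BYTES
        else:
            drops += 1                        # buffer full: the bottleneck discards it
    # let the output link send whatever it can before the next arrival instant
    while queue and link_free_at <= t + INTERVAL_S:
        arrival, size = queue.pop(0)
        start = max(link_free_at, arrival)
        link_free_at = start + size * 8 / LINK_BPS
        worst_delay = max(worst_delay, link_free_at - arrival)
        queued_bytes -= size
        delivered += 1
    t += INTERVAL_S

print("delivered=%d dropped=%d worst delay=%.1f ms"
      % (delivered, drops, worst_delay * 1000))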
Real experiment -- set up such a configuration and observe what happens,
especially from the end-users' perspectives. What kind of video does
the end-user see?
Second thought experiment -- Using the same configuration, send data
using TCP instead of UDP. This adds more mechanisms, but now in the
end-users' computers. How should the ISPs and TCPs involved behave?
How should they cooperate? What should happen? What mechanisms
(algorithms, protocols, etc.) are needed to make the system behave that way?
Second real Experiment -- How do the specific TCP implementations
actually behave? What kind of video quality do the end users
experience? What kind of data flows actually travel through the network
components?
Of course we all observe such real experiments every day, whenever we
see or participate in various kinds of videoconferences. Perhaps
someone has instrumented and gathered performance data...?
These questions were discussed and debated at great length more than 40
years ago as TCP V4 was designed. We couldn't figure out the
appropriate algorithms and protocols, and didn't have computer equipment
or communications capabilities to implement anything more than the
simplest mechanisms anyway. So the topic became an item on the "future
study" list.
But we did put various "placeholder" mechanisms in place in TCP/IP V4,
as a reminder that a "real" solution was needed for some future next
generation release. Time-to-live (TTL) would likely need to be based on
actual time instead of hops - which were silly but the best we could do
with available equipment at the time. Source Quench (SQ) needed to be
replaced by a more effective mechanism, and include details of how all
the components should act when sending or receiving an SQ. Routing
needed to be expanded to add the ability to send different data flows
over different routes, so that bulk and interactive data could more
readily coexist. Lots of such issues to be resolved.
In the meanwhile, the general consensus was that everything would work
OK as long as the traffic flows only rarely created "bottleneck"
situations, and such events would be short and transitory. There
wasn't a lot of data flow yet; the Internet was still an Experiment. We
figured we'd be OK for a while as the research continued and found
solutions.
Meanwhile, the Web happened. Videoconferencing, vlogs, and other
generators of high traffic exploded. Clouds have formed, with users now
interacting with very remote computers instead of the ones on their
desks or down the hall.
As Dorothy would say, "We're not in Kansas anymore".
Jack Haverty
On 3/8/24 12:31, Dave Taht via Nnagain wrote:
> I am deeply appreciative of everyone's efforts here over the past 3
> years, and within starlink burning the midnight oil on their 20ms
> goal, (especially nathan!!!!) to make all the progress made on their
> systems in these past few months. I was so happy to burn about 12
> minutes, publicly, taking apart Oleg's results here, last week:
>
> https://www.youtube.com/watch?v=N0Tmvv5jJKs&t=1760s
>
> But couldn't then and still can't talk better to the whys and the
> problems remaining. (It's not a kernel problem, actually)
>
> As for starlink/space support of us, bufferbloat.net, and/or lowering
> latency across the internet in general, I don't know. I keep hoping a
> used tesla motor for my boat will arrive in the mail one day, that's
> all. :)
>
> It is my larger hope that with this news, all the others doing FWA,
> and for that matter, cable, and fiber, will also get on the stick,
> finally. Maybe someone in the press will explain bufferbloat. Who
> knows what the coming days hold!?
>
> 13 herbs and spices....
>
> On Fri, Mar 8, 2024 at 3:10 PM the keyboard of geoff goodfellow via
> Starlink<starlink@lists.bufferbloat.net> wrote:
>> it would be a super good and appreciative gesture if they would disclose what/if any of the stuff they are making use of and then also to make a donation :)
>>
>> On Fri, Mar 8, 2024 at 12:50 PM J Pan<Pan@uvic.ca> wrote:
>>> they benefited a lot from this mailing list and the research and even
>>> user community at large
>>> --
>>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM),Pan@UVic.CA, Web.UVic.CA/~pan
>>>
>>>
>>> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
>>> Starlink<starlink@lists.bufferbloat.net> wrote:
>>>> Super excited to be able to share some of what we have been working on over the last few months!
>>>> EXCERPT:
>>>>
>>>> Starlink engineering teams have been focused on improving the performance of our network with the goal of delivering a service with stable 20 millisecond (ms) median latency and minimal packet loss.
>>>>
>>>> Over the past month, we have meaningfully reduced median and worst-case latency for users around the world. In the United States alone, we reduced median latency by more than 30%, from 48.5ms to 33ms during hours of peak usage. Worst-case peak hour latency (p99) has dropped by over 60%, from over 150ms to less than 65ms. Outside of the United States, we have also reduced median latency by up to 25% and worst-case latencies by up to 35%...
>>>>
>>>> [...]
>>>> https://api.starlink.com/public-files/StarlinkLatency.pdf
>>>> via
>>>> https://twitter.com/Starlink/status/1766179308887028005
>>>> &
>>>> https://twitter.com/VirtuallyNathan/status/1766179789927522460
>>>>
>>>>
>>>> --
>>>> Geoff.Goodfellow@iconia.com
>>>> living as The Truth is True
>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> --
>> Geoff.Goodfellow@iconia.com
>> living as The Truth is True
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
>
[-- Attachment #1.1.1.2: Type: text/html, Size: 11375 bytes --]
[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]
* Re: [NNagain] When Flows Collide?
2024-03-08 23:44 ` [NNagain] When Flows Collide? Jack Haverty
@ 2024-03-09 2:57 ` David Lang
2024-03-09 19:08 ` Jack Haverty
0 siblings, 1 reply; 12+ messages in thread
From: David Lang @ 2024-03-09 2:57 UTC (permalink / raw)
To: Jack Haverty via Nnagain; +Cc: Starlink, Jack Haverty
[-- Attachment #1: Type: text/plain, Size: 11303 bytes --]
this is what bufferbloat has been fighting. The default was that 'data is
important, don't throw it away, hang on to it and send it later'
In practice, this has proven to be suboptimal as the buffers grew large enough
that the data being buffered was retransmitted anyway (among other problems)
And because the data was buffered, new data arriving was delayed behind the
buffered data.
This is measurable as 'latency under load' for light connections. So while
latency isn't everything, it turns out to be a good proxy to detect when the
standard queuing mechanisms are failing to give you good performance.
It turns out that not all data is equally important. Active Queue Management is
the art of deciding priorities, both in deciding what data to throw away and
in allowing some later-arriving data to be transmitted ahead of data in
another connection that arrived before it.
With fq_codel and cake, this involves tracking the different connections and
their behavior. Connections that send relatively little data (DNS lookups, video
chat) have priority over connections that send a lot of data (ISO downloads),
not based on classifying the data, but by watching the behavior.
Connections with a lot of data can buffer a bit, but aren't allowed to use all
the available buffer space; after they have used 'their share', packets get
marked/dropped to signal the sender to slow down.
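A very rough Python sketch of that flow-queueing idea (an illustration of the
concept only, not the actual fq_codel or cake algorithms; the per-flow packet
limit and dropping instead of ECN marking are simplifications I'm assuming):
# Very simplified sketch of flow queueing (NOT the real fq_codel/cake code):
# sparse flows jump ahead of bulk flows, and a bulk flow that exceeds its
# assumed share of buffer has packets dropped as a "slow down" signal.
from collections import deque, defaultdict

PER_FLOW_LIMIT = 8                 # assumed cap on what one flow may queue

new_flows, old_flows = deque(), deque()   # sparse/new flows get served first
queues = defaultdict(deque)

def enqueue(flow_id, packet):
    q = queues[flow_id]
    if len(q) >= PER_FLOW_LIMIT:
        return False               # over its share: drop (stands in for mark/drop)
    if not q and flow_id not in new_flows and flow_id not in old_flows:
        new_flows.append(flow_id)  # queue was empty -> behaves like a sparse flow
    q.append(packet)
    return True

def dequeue():
    while new_flows:               # sparse flows jump the line once
        flow_id = new_flows.popleft()
        if queues[flow_id]:
            old_flows.append(flow_id)
            return flow_id, queues[flow_id].popleft()
    while old_flows:               # then round-robin among the bulk flows
        flow_id = old_flows.popleft()
        if queues[flow_id]:
            old_flows.append(flow_id)
            return flow_id, queues[flow_id].popleft()
    return None

# a bulk "ISO download" floods its queue, then a DNS-like flow sends one packet
accepted = sum(enqueue("bulk", "iso-%d" % i) for i in range(20))
enqueue("dns", "query")
order = [dequeue()[0] for _ in range(4)]
print(accepted, order)   # -> 8 ['bulk', 'dns', 'bulk', 'bulk']: the DNS query
                         #    overtakes the bulk backlog instead of waiting behind it
Note that nothing in the sketch looks at port numbers or DSCP bits; the DNS
flow wins only because its queue was empty when its packet arrived, which is
the "watching the behavior" part.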
While it is possible for an implementation to 'cheat' by detecting the latency
probes and prioritizing them, measuring the latency on real data works as well
and they can't cheat on that without actually addressing the problem
That's why you see such a significant focus on latency in this group: it's not
latency for the sake of latency, it's latency as a sign that new and sparse
flows can get a reasonable share of bandwidth even in the face of heavy/hostile
users on the same links.
David Lang
On Fri, 8 Mar 2024, Jack Haverty via Nnagain wrote:
> Date: Fri, 8 Mar 2024 15:44:05 -0800
> From: Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net>
> To: nnagain@lists.bufferbloat.net, Starlink@lists.bufferbloat.net
> Cc: Jack Haverty <jack@3kitty.org>
> Subject: [NNagain] When Flows Collide?
>
> It's great to see that latency is getting attention as well as action to
> control it. But it's only part of the bigger picture of Internet
> performance.
>
> While performance across a particular network is interesting, most uses of
> the Internet involve data flowing through several separate networks. That's
> pretty much the definition of "Internet". The endpoints might be some kind
> of LAN in a home or corporate IT facility or public venue. In between there
> might be fiber, radio, satellite, or other (even whimsically avian!?)
> networks carrying a user's data. This kind of system configuration has
> existed since the genesis of The Internet and seems likely to continue.
> Technology has advanced a lot, with bigger and bigger "pipes" invented to
> carry more data, but fundamental issues remain.
>
> System configurations we used in the early research days were real
> experiments to be measured and tested, or often just "thought experiments" to
> imagine how the system would behave, what algorithms would be appropriate,
> and what protocols had to exist to coordinate the activities of all the
> components.
>
> One such configuration was very simple. Imagine there are three very fast
> computers, each attached to a very fast LAN. The computers and LAN can send
> and receive data as fast as you can imagine, so that they are not a limiting
> factor. The LANs are attached to some "ISP" which isn't as fast (in
> bandwidth or latency) as a LAN. ISPs are interconnected at various points,
> forming a somewhat rich mesh of topology with several, or many, possible
> routes from any source to any destination.
>
> Now imagine a user configuration in which two of the computers send a
> constant stream of data to the third computer at a predefined rate. Perhaps
> it is a UDP datagram every N milliseconds, each datagram containing a frame
> of video. If N=20 it corresponds to a 50Hz frame rate, which is common for
> video.
>
> Somewhere along the way to that common destination, those two data streams
> collide, and there is a bottleneck. All the data coming in cannot fit in
> the pipe going out. Something has to give.
>
> Thought experiment -- What should happen? Does the bottleneck discard
> datagrams it can't handle? How does it decide which ones to discard? Does
> the bottleneck buffer the excess datagrams, hoping that the situation is just
> temporary? Does the bottleneck somehow signal back to the sources to reduce
> their data rate? Does the bottleneck discard datagrams that it knows won't
> reach the destination in time to be useful? Does the bottleneck trigger some
> kind of network reconfiguration, perhaps to route "low priority" data along
> some alternate path to free up capacity for the video streams that require
> low latency?
>
> Real experiment -- set up such a configuration and observe what happens,
> especially from the end-users' perspectives. What kind of video does the
> end-user see?
>
> Second thought experiment -- Using the same configuration, send data using
> TCP instead of UDP. This adds more mechanisms, but now in the end-users'
> computers. How should the ISPs and TCPs involved behave? How should they
> cooperate? What should happen? What mechanisms (algorithms, protocols,
> etc.) are needed to make the system behave that way?
>
> Second real Experiment -- How do the specific TCP implementations actually
> behave? What kind of video quality do the end users experience? What kind
> of data flows actually travel through the network components?
>
> Of course we all observe such real experiments every day, whenever we see or
> participate in various kinds of videoconferences. Perhaps someone has
> instrumented and gathered performance data...?
>
> These questions were discussed and debated at great length more than 40 years
> ago as TCP V4 was designed. We couldn't figure out the appropriate
> algorithms and protocols, and didn't have computer equipment or
> communications capabilities to implement anything more than the simplest
> mechanisms anyway. So the topic became an item on the "future study" list.
>
> But we did put various "placeholder" mechanisms in place in TCP/IP V4, as a
> reminder that a "real" solution was needed for some future next generation
> release. Time-to-live (TTL) would likely need to be based on actual time
> instead of hops - which were silly but the best we could do with available
> equipment at the time. Source Quench (SQ) needed to be replaced by a more
> effective mechanism, and include details of how all the components should act
> when sending or receiving an SQ. Routing needed to be expanded to add the
> ability to send different data flows over different routes, so that bulk and
> interactive data could more readily coexist. Lots of such issues to be
> resolved.
>
> In the meanwhile, the general consensus was that everything would work OK as
> long as the traffic flows only rarely created "bottleneck" situations, and
> such events would be short and transitory. There wasn't a lot of data flow
> yet; the Internet was still an Experiment. We figured we'd be OK for a while
> as the research continued and found solutions.
>
> Meanwhile, the Web happened. Videoconferencing, vlogs, and other generators
> of high traffic exploded. Clouds have formed, with users now interacting
> with very remote computers instead of the ones on their desks or down the
> hall.
>
> As Dorothy would say, "We're not in Kansas anymore".
>
> Jack Haverty
>
>
>
>
>
>
>
>
> On 3/8/24 12:31, Dave Taht via Nnagain wrote:
>> I am deeply appreciative of everyone's efforts here over the past 3
>> years, and within starlink burning the midnight oil on their 20ms
>> goal, (especially nathan!!!!) to make all the progress made on their
>> systems in these past few months. I was so happy to burn about 12
>> minutes, publicly, taking apart Oleg's results here, last week:
>>
>> https://www.youtube.com/watch?v=N0Tmvv5jJKs&t=1760s
>>
>> But couldn't then and still can't talk better to the whys and the
>> problems remaining. (It's not a kernel problem, actually)
>>
>> As for starlink/space support of us, bufferbloat.net, and/or lowering
>> latency across the internet in general, I don't know. I keep hoping a
>> used tesla motor for my boat will arrive in the mail one day, that's
>> all. :)
>>
>> It is my larger hope that with this news, all the others doing FWA,
>> and for that matter, cable, and fiber, will also get on the stick,
>> finally. Maybe someone in the press will explain bufferbloat. Who
>> knows what the coming days hold!?
>>
>> 13 herbs and spices....
>>
>> On Fri, Mar 8, 2024 at 3:10 PM the keyboard of geoff goodfellow via
>> Starlink<starlink@lists.bufferbloat.net> wrote:
>>> it would be a super good and appreciative gesture if they would disclose
>>> what/if any of the stuff they are making use of and then also to make a
>>> donation :)
>>>
>>> On Fri, Mar 8, 2024 at 12:50 PM J Pan<Pan@uvic.ca> wrote:
>>>> they benefited a lot from this mailing list and the research and even
>>>> user community at large
>>>> --
>>>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM),Pan@UVic.CA,
>>>> Web.UVic.CA/~pan
>>>>
>>>>
>>>> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
>>>> Starlink<starlink@lists.bufferbloat.net> wrote:
>>>>> Super excited to be able to share some of what we have been working on
>>>>> over the last few months!
>>>>> EXCERPT:
>>>>>
>>>>> Starlink engineering teams have been focused on improving the
>>>>> performance of our network with the goal of delivering a service with
>>>>> stable 20 millisecond (ms) median latency and minimal packet loss.
>>>>>
>>>>> Over the past month, we have meaningfully reduced median and worst-case
>>>>> latency for users around the world. In the United States alone, we
>>>>> reduced median latency by more than 30%, from 48.5ms to 33ms during
>>>>> hours of peak usage. Worst-case peak hour latency (p99) has dropped by
>>>>> over 60%, from over 150ms to less than 65ms. Outside of the United
>>>>> States, we have also reduced median latency by up to 25% and worst-case
>>>>> latencies by up to 35%...
>>>>>
>>>>> [...]
>>>>> https://api.starlink.com/public-files/StarlinkLatency.pdf
>>>>> via
>>>>> https://twitter.com/Starlink/status/1766179308887028005
>>>>> &
>>>>> https://twitter.com/VirtuallyNathan/status/1766179789927522460
>>>>>
>>>>>
>>>>> --
>>>>> Geoff.Goodfellow@iconia.com
>>>>> living as The Truth is True
>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>> --
>>> Geoff.Goodfellow@iconia.com
>>> living as The Truth is True
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>
>
[-- Attachment #2: Type: text/plain, Size: 146 bytes --]
_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain
* Re: [NNagain] When Flows Collide?
2024-03-09 2:57 ` David Lang
@ 2024-03-09 19:08 ` Jack Haverty
2024-03-09 19:45 ` rjmcmahon
2024-03-09 20:08 ` David Lang
0 siblings, 2 replies; 12+ messages in thread
From: Jack Haverty @ 2024-03-09 19:08 UTC (permalink / raw)
To: David Lang, Jack Haverty via Nnagain; +Cc: Starlink
[-- Attachment #1.1.1.1: Type: text/plain, Size: 14158 bytes --]
Hi David,
Thanks for the explanation. I had heard most of those technology
buzzwords but your message puts them into context together.
Prioritization and queue management certainly help, but I still don't
understand how the system behaves when it hits capacity somewhere deep
inside -- the "thought experiments" I described with a bottleneck
somewhere deep inside the Internet as two flows collide.
The scheme you describe also seems vulnerable to users' innovative
tactics to get better service. E.g., an "ISO download" using some
scheme like Torrents would spread the traffic around a bunch of
connections which may not all go through the same bottleneck and not be
judged as low priority. Also it seems prone to things like DDOS
attacks, e.g., flooding a path with DNS queries from many bot sources
that are judged as high priority.
The behavior "packets get marked/dropped to signal the sender to slow
down" seems essentially the same design as the "Source Quench" behavior
defined in the early 1980s. At the time, I was responsible for a TCP
implementation, and had questions about what my TCP should do when it
received such a "slow down" message. It was especially unclear in
certain situations - e.g., if my TCP sent a datagram to open a new
connection and got a "slow down" response, what exactly should it do?
There were no good answers back then. One TCP implementor decided that
the best reaction on receiving a "slow down" message was to immediately
retransmit the datagram that had just been confirmed to be discarded.
"Slow down" actually meant "Speed up, I threw away your last datagram."
So, I'm still curious about the Internet behavior with the current
mechanisms when the system hits its maximum capacity - the two simple
scenarios I mentioned with bottlenecks and only two data flows involved
that converged at the bottleneck. What's supposed to happen in
theory? Are implementations actually doing what they're supposed to
do? What does happen in a real-world test?
Jack Haverty
On 3/8/24 18:57, David Lang wrote:
> this is what bufferbloat has been fighting. The default was that 'data
> is important, don't throw it away, hang on to it and send it later'
>
> In practice, this has proven to be suboptimal as the buffers grew
> large enough that the data being buffered was retransmitted anyway
> (among other problems)
>
> And because the data was buffered, new data arriving was delayed
> behind the buffered data.
>
> This is measurable as 'latency under load' for light connections. So
> while latency isn't everything, it turns out to be a good proxy to
> detect when the standard queuing mechanisms are failing to give you
> good performance.
>
> It turns out that not all data is equally important. Active Queue
> Management is the art of deciding priorities, both in deciding what
> data to throw away and in allowing some later-arriving data to
> be transmitted ahead of data in another connection that arrived before
> it.
>
> With fq_codel and cake, this involves tracking the different
> connections and their behavior. Connections that send relatively
> little data (DNS lookups, video chat) have priority over connections
> that send a lot of data (ISO downloads), not based on classifying the
> data, but by watching the behavior.
>
> Connections with a lot of data can buffer a bit, but aren't allowed to
> use all the available buffer space; after they have used 'their
> share', packets get marked/dropped to signal the sender to slow down.
>
> While it is possible for an implementation to 'cheat' by detecting the
> latency probes and prioritizing them, measuring the latency on real
> data works as well and they can't cheat on that without actually
> addressing the problem
>
> That's why you see such a significant focus on latency in this group,
> it's not latency for the sake of latency, it's latency as a sign that
> new and sparse flows can get a reasonable share of bandwidth even in
> the face of heavy/hostile users on the same links.
>
> David Lang
>
>
> On Fri, 8 Mar 2024, Jack Haverty via Nnagain wrote:
>
>> Date: Fri, 8 Mar 2024 15:44:05 -0800
>> From: Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net>
>> To: nnagain@lists.bufferbloat.net, Starlink@lists.bufferbloat.net
>> Cc: Jack Haverty <jack@3kitty.org>
>> Subject: [NNagain] When Flows Collide?
>>
>> It's great to see that latency is getting attention as well as action
>> to control it. But it's only part of the bigger picture of Internet
>> performance.
>>
>> While performance across a particular network is interesting, most
>> uses of the Internet involve data flowing through several separate
>> networks. That's pretty much the definition of "Internet". The
>> endpoints might be some kind of LAN in a home or corporate IT
>> facility or public venue. In between there might be fiber, radio,
>> satellite, or other (even whimsically avian!?) networks carrying a
>> user's data. This kind of system configuration has existed since the
>> genesis of The Internet and seems likely to continue. Technology has
>> advanced a lot, with bigger and bigger "pipes" invented to carry more
>> data, but fundamental issues remain.
>>
>> System configurations we used in the early research days were real
>> experiments to be measured and tested, or often just "thought
>> experiments" to imagine how the system would behave, what algorithms
>> would be appropriate, and what protocols had to exist to coordinate
>> the activities of all the components.
>>
>> One such configuration was very simple. Imagine there are three very
>> fast computers, each attached to a very fast LAN. The computers and
>> LAN can send and receive data as fast as you can imagine, so that
>> they are not a limiting factor. The LANs are attached to some "ISP"
>> which isn't as fast (in bandwidth or latency) as a LAN. ISPs are
>> interconnected at various points, forming a somewhat rich mesh of
>> topology with several, or many, possible routes from any source to
>> any destination.
>>
>> Now imagine a user configuration in which two of the computers send a
>> constant stream of data to the third computer at a predefined rate.
>> Perhaps it is a UDP datagram every N milliseconds, each datagram
>> containing a frame of video. If N=20 it corresponds to a 50Hz frame
>> rate, which is common for video.
>>
>> Somewhere along the way to that common destination, those two data
>> streams collide, and there is a bottleneck. All the data coming in
>> cannot fit in the pipe going out. Something has to give.
>>
>> Thought experiment -- What should happen? Does the bottleneck
>> discard datagrams it can't handle? How does it decide which ones to
>> discard? Does the bottleneck buffer the excess datagrams, hoping
>> that the situation is just temporary? Does the bottleneck somehow
>> signal back to the sources to reduce their data rate? Does the
>> bottleneck discard datagrams that it knows won't reach the
>> destination in time to be useful? Does the bottleneck trigger some
>> kind of network reconfiguration, perhaps to route "low priority" data
>> along some alternate path to free up capacity for the video streams
>> that require low latency?
>>
>> Real experiment -- set up such a configuration and observe what
>> happens, especially from the end-users' perspectives. What kind of
>> video does the end-user see?
>>
>> Second thought experiment -- Using the same configuration, send data
>> using TCP instead of UDP. This adds more mechanisms, but now in the
>> end-users' computers. How should the ISPs and TCPs involved behave?
>> How should they cooperate? What should happen? What mechanisms
>> (algorithms, protocols, etc.) are needed to make the system behave
>> that way?
>>
>> Second real Experiment -- How do the specific TCP implementations
>> actually behave? What kind of video quality do the end users
>> experience? What kind of data flows actually travel through the
>> network components?
>>
>> Of course we all observe such real experiments every day, whenever we
>> see or participate in various kinds of videoconferences. Perhaps
>> someone has instrumented and gathered performance data...?
>>
>> These questions were discussed and debated at great length more than
>> 40 years ago as TCP V4 was designed. We couldn't figure out the
>> appropriate algorithms and protocols, and didn't have computer
>> equipment or communications capabilities to implement anything more
>> than the simplest mechanisms anyway. So the topic became an item on
>> the "future study" list.
>>
>> But we did put various "placeholder" mechanisms in place in TCP/IP
>> V4, as a reminder that a "real" solution was needed for some future
>> next generation release. Time-to-live (TTL) would likely need to be
>> based on actual time instead of hops - which were silly but the best
>> we could do with available equipment at the time. Source Quench (SQ)
>> needed to be replaced by a more effective mechanism, and include
>> details of how all the components should act when sending or
>> receiving an SQ. Routing needed to be expanded to add the ability
>> to send different data flows over different routes, so that bulk and
>> interactive data could more readily coexist. Lots of such issues to
>> be resolved.
>>
>> In the meanwhile, the general consensus was that everything would
>> work OK as long as the traffic flows only rarely created "bottleneck"
>> situations, and such events would be short and transitory. There
>> wasn't a lot of data flow yet; the Internet was still an Experiment.
>> We figured we'd be OK for a while as the research continued and found
>> solutions.
>>
>> Meanwhile, the Web happened. Videoconferencing, vlogs, and other
>> generators of high traffic exploded. Clouds have formed, with users
>> now interacting with very remote computers instead of the ones on
>> their desks or down the hall.
>>
>> As Dorothy would say, "We're not in Kansas anymore".
>>
>> Jack Haverty
>>
>>
>>
>>
>>
>>
>>
>>
>> On 3/8/24 12:31, Dave Taht via Nnagain wrote:
>>> I am deeply appreciative of everyone's efforts here over the past 3
>>> years, and within starlink burning the midnight oil on their 20ms
>>> goal, (especially nathan!!!!) to make all the progress made on their
>>> systems in these past few months. I was so happy to burn about 12
>>> minutes, publicly, taking apart Oleg's results here, last week:
>>>
>>> https://www.youtube.com/watch?v=N0Tmvv5jJKs&t=1760s
>>>
>>> But couldn't then and still can't talk better to the whys and the
>>> problems remaining. (It's not a kernel problem, actually)
>>>
>>> As for starlink/space support of us, bufferbloat.net, and/or lowering
>>> latency across the internet in general, I don't know. I keep hoping a
>>> used tesla motor for my boat will arrive in the mail one day, that's
>>> all. :)
>>>
>>> It is my larger hope that with this news, all the others doing FWA,
>>> and for that matter, cable, and fiber, will also get on the stick,
>>> finally. Maybe someone in the press will explain bufferbloat. Who
>>> knows what the coming days hold!?
>>>
>>> 13 herbs and spices....
>>>
>>> On Fri, Mar 8, 2024 at 3:10 PM the keyboard of geoff goodfellow via
>>> Starlink<starlink@lists.bufferbloat.net> wrote:
>>>> it would be a super good and appreciative gesture if they would
>>>> disclose what/if any of the stuff they are making use of and then
>>>> also to make a donation :)
>>>>
>>>> On Fri, Mar 8, 2024 at 12:50 PM J Pan<Pan@uvic.ca> wrote:
>>>>> they benefited a lot from this mailing list and the research and even
>>>>> user community at large
>>>>> --
>>>>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM),Pan@UVic.CA,
>>>>> Web.UVic.CA/~pan
>>>>>
>>>>>
>>>>> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow via
>>>>> Starlink<starlink@lists.bufferbloat.net> wrote:
>>>>>> Super excited to be able to share some of what we have been
>>>>>> working on over the last few months!
>>>>>> EXCERPT:
>>>>>>
>>>>>> Starlink engineering teams have been focused on improving the
>>>>>> performance of our network with the goal of delivering a service
>>>>>> with stable 20 millisecond (ms) median latency and minimal packet
>>>>>> loss.
>>>>>>
>>>>>> Over the past month, we have meaningfully reduced median and
>>>>>> worst-case latency for users around the world. In the United
>>>>>> States alone, we reduced median latency by more than 30%, from
>>>>>> 48.5ms to 33ms during hours of peak usage. Worst-case peak hour
>>>>>> latency (p99) has dropped by over 60%, from over 150ms to less
>>>>>> than 65ms. Outside of the United States, we have also reduced
>>>>>> median latency by up to 25% and worst-case latencies by up to 35%...
>>>>>>
>>>>>> [...]
>>>>>> https://api.starlink.com/public-files/StarlinkLatency.pdf
>>>>>> via
>>>>>> https://twitter.com/Starlink/status/1766179308887028005
>>>>>> &
>>>>>> https://twitter.com/VirtuallyNathan/status/1766179789927522460
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Geoff.Goodfellow@iconia.com
>>>>>> living as The Truth is True
>>>>>>
>>>>>> _______________________________________________
>>>>>> Starlink mailing list
>>>>>> Starlink@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>>> --
>>>> Geoff.Goodfellow@iconia.com
>>>> living as The Truth is True
>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
[-- Attachment #1.1.1.2: Type: text/html, Size: 20089 bytes --]
[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [NNagain] When Flows Collide?
2024-03-09 19:08 ` Jack Haverty
@ 2024-03-09 19:45 ` rjmcmahon
2024-03-09 20:08 ` David Lang
1 sibling, 0 replies; 12+ messages in thread
From: rjmcmahon @ 2024-03-09 19:45 UTC (permalink / raw)
To: Network Neutrality is back! Let´s make the technical
aspects heard this time!
Cc: David Lang, Jack Haverty, Starlink
Here's one of Cisco's switch architects presenting what they did.
https://www.youtube.com/watch?v=YISujYcnbSI
They also have something called HULL (High-bandwidth Ultra-Low Latency). The
idea is to keep the arrival rates slightly under the service rates so
standing queues don't form. This was initially driven by high-frequency
traders and by billionaires that wanted a cut of every 401K's asset
allocation rebalancing. (Being a billionaire is hard work, so they should
get a cut of all our 401Ks.)
Less is More: Trading a little Bandwidth for Ultra-Low Latency in the
Data Center
https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final187.pdf
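For intuition, here is a minimal toy sketch of the phantom-queue idea behind
HULL, in Python with made-up numbers: a virtual counter drains at slightly
less than the real link rate, and packets get ECN-marked once that virtual
backlog crosses a threshold, so senders back off before a real standing
queue forms.

LINK_RATE = 10e9            # bits per second for the real link (made-up number)
GAMMA = 0.95                # phantom queue drains at 95% of the link rate
MARK_THRESHOLD = 30e3 * 8   # ECN-mark once the virtual backlog exceeds ~30 KB

class PhantomQueue:
    """Toy phantom-queue marker in the spirit of HULL (illustrative only)."""
    def __init__(self):
        self.backlog_bits = 0.0
        self.last_t = 0.0
    def on_packet(self, t, size_bits):
        # Drain the virtual backlog at GAMMA * LINK_RATE since the last arrival.
        drained = (t - self.last_t) * GAMMA * LINK_RATE
        self.backlog_bits = max(0.0, self.backlog_bits - drained)
        self.last_t = t
        self.backlog_bits += size_bits
        # Returning True means "set ECN CE on this packet".
        return self.backlog_bits > MARK_THRESHOLD

pq = PhantomQueue()
# 1500-byte packets every 1.23 microseconds: under the real link rate, but
# over the 95% phantom rate, so marking kicks in before a real queue builds.
marks = sum(pq.on_packet(i * 1.23e-6, 1500 * 8) for i in range(10000))
print("marked", marks, "of 10000 packets")
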
All of this is much more difficult for Wi-Fi networks for many reasons,
including that transmits and packets aren't aligned and it's transmits
that are the high cost function. Packets are an artifact.
There are a few major industry design flaws in Wi-Fi networks:
1) Bufferbloat (oversized queue buffers)
2) Mesh networks
3) AP/STA power imbalance
4) Frictionless upgrades (billions of devices in the field are non-compliant
yet work together - but failures occur when a compliant device is introduced)
5) We're missing an entire trade around in-premises fiber and wireless
5) We're missing an entire trade around in premise fiber and wireless
Paul Baran had most of this figured out decades ago, but we still seem to
be missing things even though his knowledge is readily available.
Bob
> Hi David,
>
> Thanks for the explanation. I had heard most of those technology
> buzzwords but your message puts them into context together.
> Prioritization and queue management certainly helps, but I still don't
> understand how the system behaves when it hits capacity somewhere deep
> inside -- the "thought experiments" I described with a bottleneck
> somewhere deep inside the Internet as two flows collide.
>
> The scheme you describe also seems vulnerable to users' innovative
> tactics to get better service. E.g., an "ISO download" using some
> scheme like Torrents would spread the traffic around a bunch of
> connections which may not all go through the same bottleneck and not
> be judged as low priority. Also it seems prone to things like DDOS
> attacks, e.g., flooding a path with DNS queries from many bot sources
> that are judged as high priority.
>
> The behavior "packets get marked/dropped to signal the sender to slow
> down" seems essentially the same design as the "Source Quench"
> behavior defined in the early 1980s. At the time, I was responsible
> for a TCP implementation, and had questions about what my TCP should
> do when it received such a "slow down" message. It was especially
> unclear in certain situations - e.g., if my TCP sent a datagram to
> open a new connection and got a "slow down" response, what exactly
> should it do?
>
> There were no good answers back then. One TCP implementor decided
> that the best reaction on receiving a "slow down" message was to
> immediately retransmit the datagram that had just been confirmed to be
> discarded. "Slow down" actually meant "Speed up, I threw away your
> last datagram."
>
> So, I'm still curious about the Internet behavior with the current
> mechanisms when the system hits its maximum capacity - the two simple
> scenarios I mentioned with bottlenecks and only two data flows
> involved that converged at the bottleneck. What's supposed to happen
> in theory? Are implementations actually doing what they're supposed
> to do? What does happen in a real-world test?
>
> Jack Haverty
>
> On 3/8/24 18:57, David Lang wrote:
>
>> this is what bufferbloat has been fighting. The default was that
>> 'data is important, don't throw it away, hang on to it and send it
>> later'
>>
>> In practice, this has proven to be suboptimal as the buffers grew
>> large enough that the data being buffered was retransmitted anyway
>> (among other problems)
>>
>> And because the data was buffered, new data arriving was delayed
>> behind the buffered data.
>>
>> This is measurable as 'latency under load' for light connections. So
>> while latency isn't everything, it turns out to be a good proxy to
>> detect when the standard queuing mechanisms are failing to give you
>> good performance.
>>
>> It turns out that not all data is equally important. Active Queue
>> Management is the art of deciding priorities, both in deciding what
>> data to throw away, but also in allowing some later arriving data to
>> be transmitted ahead of data in another connection that arrived
>> before it.
>>
>> With fq_codel and cake, this involves tracking the different
>> connections and their behavior. connections that send relatively
>> little data (DNS lookups, video chat) have priority over connections
>> that send a lot of data (ISO downloads), not based on classifying
>> the data, but by watching the behavior.
>>
>> connections with a lot of data can buffer a bit, but aren't allowed
>> to use all the available buffer space, after they have used 'their
>> share', packets get marked/dropped to signal the sender to slow
>> down.
>>
>> While it is possible for an implementation to 'cheat' by detecting
>> the latency probes and prioritizing them, measuring the latency on
>> real data works as well and they can't cheat on that without
>> actually addressing the problem
>>
>> That's why you see such a significant focus on latency in this
>> group, it's not latency for the sake of latency, it's latency as a
>> sign that new and sparse flows can get a reasonable share of
>> bandwith even in the face of heavy/hostile users on the same links.
>>
>> David Lang
>>
>> On Fri, 8 Mar 2024, Jack Haverty via Nnagain wrote:
>>
>> Date: Fri, 8 Mar 2024 15:44:05 -0800
>> From: Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net>
>> To: nnagain@lists.bufferbloat.net, Starlink@lists.bufferbloat.net
>> Cc: Jack Haverty <jack@3kitty.org>
>> Subject: [NNagain] When Flows Collide?
>>
>> It's great to see that latency is getting attention as well as
>> action to control it. But it's only part of the bigger picture of
>> Internet performance.
>>
>> While performance across a particular network is interesting, most
>> uses of the Internet involve data flowing through several separate
>> networks. That's pretty much the definition of "Internet". The
>> endpoints might be some kind of LAN in a home or corporate IT
>> facility or public venue. In between there might be fiber, radio,
>> satellite, or other (even whimsically avian!?) networks carrying a
>> users data. This kind of system configuration has existed since
>> the genesis of The Internet and seems likely to continue. Technology
>> has advanced a lot, with bigger and bigger "pipes" invented to carry
>> more data, but fundamental issues remain.
>>
>> System configurations we used in the early research days were real
>> experiments to be measured and tested, or often just "thought
>> experiments" to imagine how the system would behave, what algorithms
>> would be appropriate, and what protocols had to exist to coordinate
>> the activities of all the components.
>>
>> One such configuration was very simple. Imagine there are three
>> very fast computers, each attached to a very fast LAN. The
>> computers and LAN can send and receive data as fast as you can
>> imagine, so that they are not a limiting factor. The LANs are
>> attached to some "ISP" which isn't as fast (in bandwidth or latency)
>> as a LAN. ISPs are interconnected at various points, forming a
>> somewhat rich mesh of topology with several, or many, possible
>> routes from any source to any destination.
>>
>> Now imagine a user configuration in which two of the computers send
>> a constant stream of data to the third computer at a predefined
>> rate. Perhaps it is a UDP datagram every N milliseconds, each
>> datagram containing a frame of video. If N=20 it corresponds to a
>> 50Hz frame rate, which is common for video.
>>
>> Somewhere along the way to that common destination, those two data
>> streams collide, and there is a bottleneck. All the data coming in
>> cannot fit in the pipe going out. Something has to give.
>>
>> Thought experiment -- What should happen? Does the bottleneck
>> discard datagrams it can't handle? How does it decide which ones to
>> discard? Does the bottleneck buffer the excess datagrams, hoping
>> that the situation is just temporary? Does the bottleneck somehow
>> signal back to the sources to reduce their data rate? Does the
>> bottleneck discard datagrams that it knows won't reach the
>> destination in time to be useful? Does the bottleneck trigger some
>> kind of network reconfiguration, perhaps to route "low priority"
>> data along some alternate path to free up capacity for the video
>> streams that require low latency?
>>
>> Real experiment -- set up such a configuration and observe what
>> happens, especially from the end-users' perspectives. What kind of
>> video does the end-user see?
>>
>> Second thought experiment -- Using the same configuration, send data
>> using TCP instead of UDP. This adds more mechanisms, but now in the
>> end-users' computers. How should the ISPs and TCPs involved behave?
>> How should they cooperate? What should happen? What mechanisms
>> (algorithms, protocols, etc.) are needed to make the system behave
>> that way?
>>
>> Second real Experiment -- How do the specific TCP implementations
>> actually behave? What kind of video quality do the end users
>> experience? What kind of data flows actually travel through the
>> network components?
>>
>> Of course we all observe such real experiments every day, whenever
>> we see or participate in various kinds of videoconferences. Perhaps
>> someone has instrumented and gathered performance data...?
>>
>> These questions were discussed and debated at great length more than
>> 40 years ago as TCP V4 was designed. We couldn't figure out the
>> appropriate algorithms and protocols, and didn't have computer
>> equipment or communications capabilities to implement anything more
>> than the simplest mechanisms anyway. So the topic became an item
>> on the "future study" list.
>>
>> But we did put various "placeholder" mechanisms in place in TCP/IP
>> V4, as a reminder that a "real" solution was needed for some future
>> next generation release. Time-to-live (TTL) would likely need to be
>> based on actual time instead of hops - which were silly but the best
>> we could do with available equipment at the time. Source Quench
>> (SQ) needed to be replaced by a more effective mechanism, and
>> include details of how all the components should act when sending or
>> receiving an SQ. Routing needed to be expanded to add the ability
>> to send different data flows over different routes, so that bulk and
>> interactive data could more readily coexist. Lots of such issues
>> to be resolved.
>>
>> In the meanwhile, the general consensus was that everything would
>> work OK as long as the traffic flows only rarely created
>> "bottleneck" situations, and such events would be short and
>> transitory. There wasn't a lot of data flow yet; the Internet was
>> still an Experiment. We figured we'd be OK for a while as the
>> research continued and found solutions.
>>
>> Meanwhile, the Web happened. Videoconferencing, vlogs, and other
>> generators of high traffic exploded. Clouds have formed, with users
>> now interacting with very remote computers instead of the ones on
>> their desks or down the hall.
>>
>> As Dorothy would say, "We're not in Kansas anymore".
>>
>> Jack Haverty
>>
>> On 3/8/24 12:31, Dave Taht via Nnagain wrote:
>> I am deeply appreciative of everyone's efforts here over the past 3
>> years, and within starlink burning the midnight oil on their 20ms
>> goal, (especially nathan!!!!) to make all the progress made on their
>>
>> systems in these past few months. I was so happy to burn about 12
>> minutes, publicly, taking apart Oleg's results here, last week:
>>
>> https://www.youtube.com/watch?v=N0Tmvv5jJKs&t=1760s
>>
>> But couldn't then and still can't talk better to the whys and the
>> problems remaining. (It's not a kernel problem, actually)
>>
>> As for starlink/space support of us, bufferbloat.net, and/or
>> lowering
>> latency across the internet in general, I don't know. I keep hoping
>> a
>> used tesla motor for my boat will arrive in the mail one day, that's
>>
>> all. :)
>>
>> It is my larger hope that with this news, all the others doing FWA,
>> and for that matter, cable, and fiber, will also get on the stick,
>> finally. Maybe someone in the press will explain bufferbloat. Who
>> knows what the coming days hold!?
>>
>> 13 herbs and spices....
>>
>> On Fri, Mar 8, 2024 at 3:10 PM the keyboard of geoff goodfellow
>> via
>> Starlink<starlink@lists.bufferbloat.net> wrote:
>> it would be a super good and appreciative gesture if they would
>> disclose what/if any of the stuff they are making use of and then
>> also to make a donation :)
>>
>> On Fri, Mar 8, 2024 at 12:50 PM J Pan<Pan@uvic.ca> wrote:
>> they benefited a lot from this mailing list and the research and
>> even
>> user community at large
>> --
>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM),Pan@UVic.CA,
>> Web.UVic.CA/~pan
>>
>> On Fri, Mar 8, 2024 at 11:40 AM the keyboard of geoff goodfellow
>> via
>> Starlink<starlink@lists.bufferbloat.net> wrote:
>> Super excited to be able to share some of what we have been working
>> on over the last few months!
>> EXCERPT:
>>
>> Starlink engineering teams have been focused on improving the
>> performance of our network with the goal of delivering a service
>> with stable 20 millisecond (ms) median latency and minimal packet
>> loss.
>>
>> Over the past month, we have meaningfully reduced median and
>> worst-case latency for users around the world. In the United States
>> alone, we reduced median latency by more than 30%, from 48.5ms to
>> 33ms during hours of peak usage. Worst-case peak hour latency (p99)
>> has dropped by over 60%, from over 150ms to less than 65ms. Outside
>> of the United States, we have also reduced median latency by up to
>> 25% and worst-case latencies by up to 35%...
>>
>> [...]
>> https://api.starlink.com/public-files/StarlinkLatency.pdf
>> via
>> https://twitter.com/Starlink/status/1766179308887028005
>> &
>> https://twitter.com/VirtuallyNathan/status/1766179789927522460
>>
>> --
>> Geoff.Goodfellow@iconia.com
>> living as The Truth is True
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
> --
> Geoff.Goodfellow@iconia.com
> living as The Truth is True
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [NNagain] When Flows Collide?
2024-03-09 19:08 ` Jack Haverty
2024-03-09 19:45 ` rjmcmahon
@ 2024-03-09 20:08 ` David Lang
1 sibling, 0 replies; 12+ messages in thread
From: David Lang @ 2024-03-09 20:08 UTC (permalink / raw)
To: Jack Haverty; +Cc: David Lang, Jack Haverty via Nnagain, Starlink
[-- Attachment #1: Type: text/plain, Size: 6873 bytes --]
> Thanks for the explanation. I had heard most of those technology buzzwords
> but your message puts them into context together. Prioritization and queue
> management certainly helps, but I still don't understand how the system behaves
> when it hits capacity somewhere deep inside -- the "thought experiments" I
> described with a bottleneck somewhere deep inside the Internet as two flows
> collide.
The active queue management I describe needs to take place on the sending side
of any congested link. (There is a limited amount of tweaking that can be done
on the receiving side by not acking, to force a sender to slow down, but it works
FAR better to manage on the sending side.)
It's also not that common for the connections deep in the network to be the
bottleneck. It can happen, but most ISPs watch link capacity and keep
significant headroom (by adding parallel links in many cases). In practice, it's
almost always the 'last mile' link that is the bottleneck.
> The scheme you describe also seems vulnerable to users' innovative tactics to
> get better service. E.g., an "ISO download" using some scheme like Torrents
> would spread the traffic around a bunch of connections which may not all go
> through the same bottleneck and not be judged as low priority. Also it seems
> prone to things like DDOS attacks, e.g., flooding a path with DNS queries
> from many bot sources that are judged as high priority.
First off, I'm sure that the core folks here who write the code will take
exception to my simplifications. I welcome corrections.
Cake and fq_codel are not the result of deep academic research (although they
have spawned quite a bit of it), they are the result of insights and tweaks
looking at real-world behavior, with the approach being 'keep it as simple as
possible, but no simpler'. So some of this is heuristics, but they have been
shown to work and be hard to game over many years.
It is hard to game things, because connections are evaluated based on their
behavior, not based on port or on inspecting them to determine their protocol.
DNS queries are not given high priority because they are DNS; new connections
are given high priority until they start carrying a lot of data. Since DNS tends
to be short queries with short responses, they never transfer enough data to
get impacted. Torrent connections are each passing a significant amount of data,
so they are slowed. Since the torrent connections involve different endpoints,
they will take different paths, and only those that cross the congested link
get slowed there.
Cake also adds a layer that fq_codel doesn't have that can evaluate at a
host/network/customer level to provide fairness at those levels rather than just
at the flow level.
There are multiple queues, and sending rotates between them. Connections are
assigned to a queue based on various logic (connection data and the other things
cake can take into account), so you really only have contention within a queue.
Queues are kept small, and no sender is allowed to use too much of a queue, so
the latency for new data is kept small.
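A rough, hypothetical sketch of that per-flow queuing in Python (flow names
and packets are invented, and this is not the actual fq_codel code): hash each
connection to its own small queue, give a flow that has been quiet a single
priority pass, and round-robin among the rest.

from collections import deque, defaultdict

class ToyFQ:
    """Highly simplified flow-queue scheduler, loosely in the spirit of
    fq_codel.  Hypothetical illustration only: the real thing also runs
    CoDel inside each queue, uses byte deficits, and handles corner cases
    (hash collisions, queue reuse, ECN) that are skipped here."""
    def __init__(self):
        self.queues = defaultdict(deque)   # flow id -> queued packets
        self.sparse = deque()              # flows that just became active
        self.bulk = deque()                # flows that keep sending
    def enqueue(self, flow, pkt):
        q = self.queues[flow]
        if not q and flow not in self.sparse and flow not in self.bulk:
            self.sparse.append(flow)       # a quiet flow earns one priority pass
        q.append(pkt)
    def dequeue(self):
        for active in (self.sparse, self.bulk):
            while active:
                flow = active.popleft()
                q = self.queues[flow]
                if q:
                    pkt = q.popleft()
                    if q:                  # still has data queued: it's bulk now
                        self.bulk.append(flow)
                    return flow, pkt
        return None

fq = ToyFQ()
for i in range(5):
    fq.enqueue("iso-download", "bulk%d" % i)
fq.enqueue("dns", "query")                 # arrives last, but is a sparse flow
for _ in range(3):
    print(fq.dequeue())
# ('iso-download', 'bulk0'), ('dns', 'query'), ('iso-download', 'bulk1'):
# the late DNS packet jumps ahead of most of the queued bulk transfer.
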
In addition, it has been shown that when you do something, there are a lot of
small flows that happen serially, so any per-connection latency gets multiplied
in terms of the end-user experience. (Think about loading a web page: it
references many other resources, each URL needs a DNS lookup, then a check to
see if the cached data for that site is still valid, and other things before
the page can start to be rendered, while the image data can actually arrive
quite a bit later without bothering the user.)
> The behavior "packets get marked/dropped to signal the sender to slow down"
> seems essentially the same design as the "Source Quench" behavior defined in
> the early 1980s. At the time, I was responsiblefor a TCP implementation, and
> had questions about what my TCP should do when it received such a "slow down"
> message. It was especially unclear in certain situations - e.g., if my TCP
> sent a datagram toopen a new connection and got a "slow down" response, what
> exactly should it do?
fq_codel and cake do not invent any new mechanisms to control the flow; they
just leverage the existing TCP backoff (including the half-measure of ECN
marking to signal "slow down" without requiring a retransmit).
> There were no good answers back then. One TCP implementor decided that the
> best reaction on receiving a "slow down" message was to immediately retransmit
> the datagram that had just been confirmed to bediscarded. "Slow down"
> actually meant "Speed up, I threw away your last datagram."
But you first have to find out that the packet didn't arrive, by not getting the
ack before the timeout, and in the meantime you don't send more than your
transmit window. When you retransmit the missing packet, your window stays
full until you get an ack for that packet (and you are supposed to shrink
your window size when a packet is lost or you get an ECN signal).
So at a micro level you are generating more traffic, but at a macro level you
are slowing down.
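Here's a minimal sketch of that sender-side reaction, assuming a generic
AIMD-style TCP; the constants and event names are illustrative, not any
particular stack's API.

def update_cwnd(cwnd, ssthresh, event, mss=1460):
    """Illustrative AIMD congestion-window update: additive growth on ACKs,
    multiplicative decrease on a loss or an ECN CE mark.  Real stacks add
    fast recovery, pacing, and other refinements not modeled here."""
    if event == "ack":
        if cwnd < ssthresh:
            cwnd += mss                    # slow start: roughly doubles per RTT
        else:
            cwnd += mss * mss / cwnd       # congestion avoidance: ~1 MSS per RTT
    elif event in ("loss", "ecn_ce"):
        ssthresh = max(2 * mss, cwnd / 2)  # cut the window in half...
        cwnd = ssthresh                    # ...so the macro-level rate drops,
                                           # even though the lost segment is resent
    return cwnd, ssthresh

cwnd, ssthresh = 10 * 1460, 64 * 1460
for ev in ["ack"] * 20 + ["ecn_ce"] + ["ack"] * 5:
    cwnd, ssthresh = update_cwnd(cwnd, ssthresh, ev)
print(int(cwnd), int(ssthresh))   # window halved by the CE mark, then growing slowly
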
Yes, misbehaving stacks can send too much, but that will just mean that more
packets on the offending connections get dropped.
In terms of packet generators, you can never get a perfect defense against pure
bandwidth flooding, but if you use cake-like mechanisms to ensure fairness
between IPs/customers/etc., you limit the damage.
> So, I'm still curious about the Internet behavior with the current mechanisms
> when the system hits its maximum capacity - the two simple scenarios I
> mentioned with bottlenecks and only two data flowsinvolved that converged at
> the bottleneck. What's supposed to happen in theory? Are implementations
> actually doing what they're supposed to do? What does happen in a real-world
> test?
As noted above, the vast majority of the time the link that hits maximum
capacity is the last-mile hop to the user rather than some ISP <-> ISP hop out
in the middle of the Internet. fq_codel is pretty cheap to implement (cake is a
bit more expensive, so more suitable for the endpoints than core systems).
When trying to define what 'the right thing to do' should be, it's extremely
tempting for academic studies to fall into the trap of deciding what should
happen based on global knowledge about the entire network. fq_codel and cake
work by just looking at the data being fed to the congested link (well, cake at
a last-mile hop can take advantage of some categorization rules/lookups that
would not be available to core Internet routers).
But I think the short answer to your scenario is: if it would exceed your queue
limits, drop a packet from the connection sending the most data.
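As a hypothetical sketch of that overflow rule (on toy per-flow queues like
the ones above, not cake's or fq_codel's actual code):

from collections import deque

def drop_from_fattest(queues, total_limit):
    # Hypothetical overflow policy: when the shared buffer exceeds its limit,
    # shed packets from whichever flow is currently buffering the most,
    # so the heaviest sender absorbs the loss.
    while sum(len(q) for q in queues.values()) > total_limit:
        fattest = max(queues, key=lambda f: len(queues[f]))
        queues[fattest].popleft()

queues = {"iso-download": deque(range(8)), "video-call": deque(range(2))}
drop_from_fattest(queues, total_limit=6)
print({f: len(q) for f, q in queues.items()})   # {'iso-download': 4, 'video-call': 2}
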
A little bit of buffering is a good thing; the key is to keep the buffers from
building up and affecting other connections.
David Lang
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens)
2024-03-08 20:31 ` Dave Taht
2024-03-08 23:44 ` [NNagain] When Flows Collide? Jack Haverty
@ 2024-03-10 17:41 ` Michael Richardson
1 sibling, 0 replies; 12+ messages in thread
From: Michael Richardson @ 2024-03-10 17:41 UTC (permalink / raw)
To: Dave Taht
Cc: the keyboard of geoff goodfellow, Starlink,
Network Neutrality is back! Let´s make the technical
aspects heard this time!
Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
> and for that matter, cable, and fiber, will also get on the stick,
> finally. Maybe someone in the press will explain bufferbloat. Who
> knows what the coming days hold!?
I am imagining a 2005-era Apple iPod shadow dancing ad... one with high
buffers (low RPM), and one with low buffers (high RPM).
What do you think Stuart?
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2024-03-10 17:41 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-08 19:40 [NNagain] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens) the keyboard of geoff goodfellow
2024-03-08 19:50 ` [NNagain] [Starlink] " J Pan
2024-03-08 20:09 ` the keyboard of geoff goodfellow
2024-03-08 20:25 ` Frantisek Borsik
2024-03-08 20:31 ` Dave Taht
2024-03-08 23:44 ` [NNagain] When Flows Collide? Jack Haverty
2024-03-09 2:57 ` David Lang
2024-03-09 19:08 ` Jack Haverty
2024-03-09 19:45 ` rjmcmahon
2024-03-09 20:08 ` David Lang
2024-03-10 17:41 ` [NNagain] [Starlink] SpaceX: "IMPROVING STARLINK’S LATENCY" (via Nathan Owens) Michael Richardson
2024-03-08 20:30 ` rjmcmahon
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox