Development issues regarding the cerowrt test router project
 help / color / mirror / Atom feed
* [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
@ 2021-07-01  0:12 Dave Taht
  2021-07-02  1:16 ` David P. Reed
  0 siblings, 1 reply; 108+ messages in thread
From: Dave Taht @ 2021-07-01  0:12 UTC (permalink / raw)
  To: bloat, Make-Wifi-fast, cerowrt-devel, codel, starlink, Cake List

The program committee members are *amazing*. Perhaps, finally, we can
move the bar for the internet's quality metrics past endless, blind
repetitions of speedtest.

For complete details, please see:
https://www.iab.org/activities/workshops/network-quality/

Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
Invitations Issued by: Monday 16th August 2021

Workshop Date: This will be a virtual workshop, spread over three days:

1400-1800 UTC Tue 14th September 2021
1400-1800 UTC Wed 15th September 2021
1400-1800 UTC Thu 16th September 2021

Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira

The Program Committee members:

Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.

Send Submissions to: network-quality-workshop-pc@iab.org.

Position papers from academia, industry, the open source community and
others that focus on measurements, experiences, observations and
advice for the future are welcome. Papers that reflect experience
based on deployed services are especially welcome. The organizers
understand that specific actions taken by operators are unlikely to be
discussed in detail, so papers discussing general categories of
actions and issues without naming specific technologies, products, or
other players in the ecosystem are expected. Papers should not focus
on specific protocol solutions.

The workshop will be by invitation only. Those wishing to attend
should submit a position paper to the address above; it may take the
form of an Internet-Draft.

All inputs submitted and considered relevant will be published on the
workshop website. The organisers will decide whom to invite based on
the submissions received. Sessions will be organized according to
content, and not every accepted submission or invited attendee will
have an opportunity to present as the intent is to foster discussion
and not simply to have a sequence of presentations.

Position papers from those not planning to attend the virtual sessions
themselves are also encouraged. A workshop report will be published
afterwards.

Overview:

"We believe that one of the major factors behind this lack of progress
is the popular perception that throughput is often the sole measure of
the quality of Internet connectivity. With such a narrow focus, people
don’t consider questions such as:

What is the latency under typical working conditions?
How reliable is the connectivity across longer time periods?
Does the network allow the use of a broad range of protocols?
What services can be run by clients of the network?
What kind of IPv4, NAT or IPv6 connectivity is offered, and are there firewalls?
What security mechanisms are available for local services, such as DNS?
To what degree are the privacy, confidentiality, integrity and
authenticity of user communications guarded?

Improving these aspects of network quality will likely depend on
measurement and exposing metrics to all involved parties, including to
end users in a meaningful way. Such measurements and exposure of the
right metrics will allow service providers and network operators to
focus on the aspects that impact the users’ experience most and at
the same time empower users to choose the Internet service that will
give them the best experience."


-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-01  0:12 [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board Dave Taht
@ 2021-07-02  1:16 ` David P. Reed
  2021-07-02  4:04   ` [Make-wifi-fast] " Bob McMahon
  2021-07-02 17:07   ` [Cerowrt-devel] " Dave Taht
  0 siblings, 2 replies; 108+ messages in thread
From: David P. Reed @ 2021-07-02  1:16 UTC (permalink / raw)
  To: Dave Taht
  Cc: bloat, Make-Wifi-fast, cerowrt-devel, codel, starlink, Cake List

[-- Attachment #1: Type: text/plain, Size: 7359 bytes --]


Well, nice that the folks doing the conference are willing to consider that quality of user experience has little to do with signalling rate at the physical layer or throughput of FTP transfers.
 
But honestly, the fact that they call the problem "network quality" suggests that they REALLY, REALLY don't understand that the Internet isn't the hardware or the routers or even the routing algorithms *to its users*.
 
By ignoring the diversity of applications now and in the future, and the fact that we DON'T KNOW what will be coming up, this conference will likely fall into the usual trap that net-heads fall into - optimizing for some imaginary reality that doesn't exist, and in fact will probably never be what users actually will do given the chance.
 
I saw this issue in 1976 in the group developing the original Internet protocols - a desire to put *into the network* special tricks to optimize ASR33 logins to remote computers from terminal concentrators (aka remote login), bulk file transfers between file systems on different time-sharing systems, and "sessions" (virtual circuits) that required logins. And then trying to exploit underlying "multicast" by building it into the IP layer, because someone thought that TV broadcast would be the dominant application.
 
Frankly, to think of "quality" as something that can be "provided" by "the network" misses the entire point of "end-to-end argument in system design". Quality is not a property defined or created by The Network. If you want to talk about Quality, you need to talk about users - all the users at all times, now and into the future, and that's something you can't do if you don't bother to include current and future users talking about what they might expect to experience that they don't experience.
 
There was much fighting back in 1976 that basically involved "network experts" saying that the network was the place to "solve" such issues as quality, so applications could avoid having to solve such issues.
 
What some of us managed to do was to argue that you can't "solve" such issues. All you can do is provide a framework that enables different uses to *cooperate* in some way.
 
Which is why the Internet drops packets rather than queueing them, and why diffserv cannot work.
(I know the latter is controversial, but at the moment, ALL of diffserv attempts to talk about end-to-end application-specific metrics, but never, ever explains what the diffserv control points actually do w.r.t. what the IP layer can actually control. So it is meaningless - another violation of the so-called end-to-end principle).
 
Networks are about getting packets from here to there, multiplexing the underlying resources. That's it. Quality is a whole different thing. Quality can be improved by end-to-end approaches, if the underlying network provides some kind of thing that actually creates a way for end-to-end applications to affect queueing and routing decisions, and more importantly getting "telemetry" from the network regarding what is actually going on with the other end-to-end users sharing the infrastructure.
 
This conference won't talk about it this way. So don't waste your time.
 
 
 
On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <dave.taht@gmail.com> said:



> The program committee members are *amazing*. Perhaps, finally, we can
> move the bar for the internet's quality metrics past endless, blind
> repetitions of speedtest.
> 
> For complete details, please see:
> https://www.iab.org/activities/workshops/network-quality/
> 
> Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
> Invitations Issued by: Monday 16th August 2021
> 
> Workshop Date: This will be a virtual workshop, spread over three days:
> 
> 1400-1800 UTC Tue 14th September 2021
> 1400-1800 UTC Wed 15th September 2021
> 1400-1800 UTC Thu 16th September 2021
> 
> Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> 
> The Program Committee members:
> 
> Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
> Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
> Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
> Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
> Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> 
> Send Submissions to: network-quality-workshop-pc@iab.org.
> 
> Position papers from academia, industry, the open source community and
> others that focus on measurements, experiences, observations and
> advice for the future are welcome. Papers that reflect experience
> based on deployed services are especially welcome. The organizers
> understand that specific actions taken by operators are unlikely to be
> discussed in detail, so papers discussing general categories of
> actions and issues without naming specific technologies, products, or
> other players in the ecosystem are expected. Papers should not focus
> on specific protocol solutions.
> 
> The workshop will be by invitation only. Those wishing to attend
> should submit a position paper to the address above; it may take the
> form of an Internet-Draft.
> 
> All inputs submitted and considered relevant will be published on the
> workshop website. The organisers will decide whom to invite based on
> the submissions received. Sessions will be organized according to
> content, and not every accepted submission or invited attendee will
> have an opportunity to present as the intent is to foster discussion
> and not simply to have a sequence of presentations.
> 
> Position papers from those not planning to attend the virtual sessions
> themselves are also encouraged. A workshop report will be published
> afterwards.
> 
> Overview:
> 
> "We believe that one of the major factors behind this lack of progress
> is the popular perception that throughput is the often sole measure of
> the quality of Internet connectivity. With such narrow focus, people
> don’t consider questions such as:
> 
> What is the latency under typical working conditions?
> How reliable is the connectivity across longer time periods?
> Does the network allow the use of a broad range of protocols?
> What services can be run by clients of the network?
> What kind of IPv4, NAT or IPv6 connectivity is offered, and are there firewalls?
> What security mechanisms are available for local services, such as DNS?
> To what degree are the privacy, confidentiality, integrity and
> authenticity of user communications guarded?
> 
> Improving these aspects of network quality will likely depend on
> measurement and exposing metrics to all involved parties, including to
> end users in a meaningful way. Such measurements and exposure of the
> right metrics will allow service providers and network operators to
> focus on the aspects that impacts the users’ experience most and at
> the same time empowers users to choose the Internet service that will
> give them the best experience."
> 
> 
> --
> Latest Podcast:
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> 
> Dave Täht CTO, TekLibre, LLC
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 

[-- Attachment #2: Type: text/html, Size: 10572 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-02  1:16 ` David P. Reed
@ 2021-07-02  4:04   ` Bob McMahon
  2021-07-02 16:11     ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Dick Roy
  2021-07-02 17:07   ` [Cerowrt-devel] " Dave Taht
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-07-02  4:04 UTC (permalink / raw)
  To: David P. Reed
  Cc: Dave Taht, Cake List, Make-Wifi-fast, starlink, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 19821 bytes --]

I think even packets are a network construct. End/end protocols don't write
packets. They mostly make writes() and reads() and have no clue about
packets. Except, of course, for UDP, which you know everything about, being
its original designer.

Agreed, the telemetry is most interesting and a huge void. Curious to hear
more of your thoughts on it, metrics, etc.

Note: iperf 2 has write-to-read latencies. It requires clock sync. My
systems sync to the GPS atomic clock as the common reference. I think
end/end queue depths can be calculated per Little's law (shown below in the
inP column). https://sourceforge.net/projects/iperf2/
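
As a rough cross-check, here is a minimal sketch (not taken from the iperf 2
sources; the helper names and the scaling are assumptions inferred from the
sample output below) of how the inP and NetPwr columns relate to the reported
throughput and average burst latency:

def in_p_bytes(throughput_bps, avg_latency_s):
    # Little's law: average bytes in flight = byte rate x time in system
    return (throughput_bps / 8.0) * avg_latency_s

def net_pwr(throughput_bps, avg_latency_s):
    # "network power": throughput divided by delay (bigger is better);
    # this matches the report when the delay is taken in microseconds
    return (throughput_bps / 8.0) / (avg_latency_s * 1e6)

rate = 5e9           # 5.00 Gbits/sec, as in the first transfer below
delay = 0.165e-3     # 0.165 ms average burst latency

print(f"inP    ~ {in_p_bytes(rate, delay) / 1024:.0f} KByte")   # ~101 KByte
print(f"NetPwr ~ {net_pwr(rate, delay):.0f}")                   # ~3.8e6

At 5 Gbit/s and 0.165 ms that works out to roughly the 101 KByte inP and
~3.78M NetPwr shown in the 0.00-10.00 sec summary line of the first run.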

[rjmcmahon@rjm-nas ~]$ iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.94%enp2s0 port 5001 connected with 192.168.1.100 port
59142 (MSS=1448) (trip-times) (sock=4) (peer 2.1.3-rc) on 2021-07-01
20:57:37 (PDT)
[ ID] Interval        Transfer    Bandwidth    Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  0.170/0.153/1.492/0.078
ms (4769/131082)  104 KByte 3674521  22841=787:18657:2467:623:84:41:66:116
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  0.167/0.156/0.434/0.015
ms (4768/131086)  102 KByte 3742630  23346=1307:18975:2171:578:105:53:56:101
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  0.168/0.157/1.337/0.033
ms (4769/131046)  103 KByte 3710006  23263=1470:18602:2148:725:107:53:60:98
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  0.166/0.158/0.241/0.008
ms (4768/131082)  102 KByte 3756478  23960=1452:19714:2123:449:79:32:38:73
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  0.166/0.157/0.247/0.008
ms (4769/131061)  102 KByte 3756193  23653=1234:19529:2206:439:89:36:44:76
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  0.166/0.158/0.245/0.007
ms (4768/131072)  101 KByte 3758826  23478=1081:19356:2284:535:73:35:39:75
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  0.168/0.158/0.283/0.009
ms (4768/131096)  102 KByte 3728988  23477=1338:19301:1995:535:104:46:59:99
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  0.163/0.150/0.400/0.010
ms (4769/131047) 99.7 KByte 3826119  23496=1213:19404:2101:498:83:57:43:97
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.149/0.236/0.008
ms (4768/131082) 96.6 KByte 3951089  23652=1328:19498:2074:493:77:41:53:88
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.149/0.235/0.008
ms (4769/131061) 96.4 KByte 3958720  23725=1509:19410:2051:463:91:46:47:108
[  1] 0.00-10.00 sec  5.82 GBytes  5.00 Gbits/sec  0.165/0.149/1.492/0.028
ms (47685/131072)  101 KByte 3784172
 234891=12719:192446:21620:5338:892:440:505:931

[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.94 -i 1 --trip-times
-b 5g -e
------------------------------------------------------------
Client connecting to 192.168.1.94, TCP port 5001 with pid 68866 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.100%enp4s0 port 59142 connected with 192.168.1.94
port 5001 (MSS=1448) (trip-times) (sock=3) (ct=0.33 ms) on 2021-07-01
20:57:37 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  4770/0          5
 295K/111 us  5631373
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 295K/120 us  5207927
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 306K/110 us  5681375
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
 306K/107 us  5841891
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 306K/110 us  5681375
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 306K/109 us  5733498
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
 306K/115 us  5435499
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 306K/111 us  5630192
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
 306K/110 us  5682567
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 306K/109 us  5733498

[rjmcmahon@rjm-nas ~]$ iperf -s -i 1 --histograms=10u
------------------------------------------------------------
Server listening on TCP port 5001 with pid 5166
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
Enabled rx-histograms bin-width=0.010 ms, bins=1000 (clients must use
--trip-times)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.94%enp2s0 port 5001 connected with 192.168.1.100 port
59146 (MSS=1448) (trip-times) (sock=4) (peer 2.1.3-rc) on 2021-07-01
21:01:42 (PDT)
[ ID] Interval        Transfer    Bandwidth    Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  0.164/0.149/1.832/0.101
ms (4769/131072)  100 KByte 3809846  22370=435:17000:3686:1060:77:35:25:52
[  1] 0.00-1.00 sec F8-PDF:
bin(w=10us):cnt(4769)=15:3,16:4414,17:227,18:49,19:14,20:11,21:6,22:1,23:1,35:1,49:1,55:1,67:1,74:1,85:1,90:2,94:1,95:1,97:1,100:1,103:1,104:1,113:1,114:1,115:2,116:1,118:1,119:2,120:1,125:2,126:1,127:1,132:1,133:1,134:1,137:2,138:1,140:1,142:2,143:1,144:1,149:1,153:1,157:1,159:1,184:1
(5.00/95.00/99.7%=16/17/133,Outliers=352,obl/obu=0/0) (1.832
ms/1625198502.626723)
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  0.156/0.148/0.235/0.006
ms (4768/131094) 95.0 KByte 4018733  21762=498:16581:2918:1512:75:36:56:86
[  1] 1.00-2.00 sec F8-PDF:
bin(w=10us):cnt(4768)=15:6,16:4304,17:287,18:99,19:36,20:21,21:10,22:3,23:1,24:1
(5.00/95.00/99.7%=16/17/21,Outliers=458,obl/obu=0/0) (0.235
ms/1625198503.810735)
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.150/0.515/0.009
ms (4769/131049) 96.2 KByte 3966043  22863=528:18422:3099:571:78:36:47:82
[  1] 2.00-3.00 sec F8-PDF:
bin(w=10us):cnt(4769)=16:4078,17:416,18:182,19:50,20:23,21:9,22:4,23:3,24:1,27:1,30:1,52:1
(5.00/95.00/99.7%=16/18/21,Outliers=0,obl/obu=0/0) (0.515
ms/1625198505.144479)
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.149/0.284/0.007
ms (4768/131082) 95.9 KByte 3978135  22766=472:18044:3360:646:90:37:51:66
[  1] 3.00-4.00 sec F8-PDF:
bin(w=10us):cnt(4768)=15:1,16:4183,17:342,18:159,19:37,20:23,21:13,22:4,23:3,25:1,27:1,29:1
(5.00/95.00/99.7%=16/18/21,Outliers=23,obl/obu=0/0) (0.284
ms/1625198505.973695)
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.149/0.381/0.008
ms (4769/131061) 95.9 KByte 3978347  22759=451:18039:3415:632:57:16:49:100
[  1] 4.00-5.00 sec F8-PDF:
bin(w=10us):cnt(4769)=15:1,16:4253,17:287,18:150,19:31,20:11,21:15,22:6,23:4,24:4,25:1,26:1,27:1,28:2,30:1,39:1
(5.00/95.00/99.7%=16/17/23,Outliers=36,obl/obu=0/0) (0.381
ms/1625198507.119394)
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.151/0.222/0.006
ms (4768/131072) 96.0 KByte 3974720  22661=422:17875:3411:723:95:29:44:62
[  1] 5.00-6.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4166,17:405,18:130,19:30,20:21,21:8,22:7,23:1
(5.00/95.00/99.7%=16/17/21,Outliers=0,obl/obu=0/0) (0.222
ms/1625198508.350409)
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.150/0.302/0.008
ms (4768/131082) 96.3 KByte 3962779  22723=453:17930:3414:699:93:24:33:77
[  1] 6.00-7.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4179,17:323,18:152,19:50,20:33,21:18,22:6,23:1,24:2,26:1,27:1,28:1,31:1
(5.00/95.00/99.7%=16/18/21,Outliers=0,obl/obu=0/0) (0.302
ms/1625198509.416997)
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.150/0.217/0.006
ms (4769/131061) 96.0 KByte 3974060  22923=489:18132:3533:568:78:23:36:64
[  1] 7.00-8.00 sec F8-PDF:
bin(w=10us):cnt(4769)=16:4228,17:317,18:137,19:45,20:21,21:14,22:7
(5.00/95.00/99.7%=16/17/21,Outliers=0,obl/obu=0/0) (0.217
ms/1625198510.34875)
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.150/0.363/0.009
ms (4768/131072) 96.3 KByte 3960477  22677=472:17988:3377:533:92:50:64:101
[  1] 8.00-9.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4194,17:253,18:173,19:62,20:32,21:27,22:12,23:8,24:3,25:2,28:1,37:1
(5.00/95.00/99.7%=16/18/23,Outliers=0,obl/obu=0/0) (0.363
ms/1625198511.392746)
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  0.156/0.150/0.232/0.005
ms (4768/131082) 95.5 KByte 3993997  23174=396:18593:3590:461:50:13:25:46
[  1] 9.00-10.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4378,17:234,18:113,19:21,20:10,21:6,22:4,24:2
(5.00/95.00/99.7%=16/17/20,Outliers=0,obl/obu=0/0) (0.232
ms/1625198512.528385)
[  1] 0.00-10.00 sec  5.82 GBytes  5.00 Gbits/sec  0.158/0.148/1.832/0.033
ms (47685/131072) 96.3 KByte 3961002
 226681=4616:178607:33803:7405:785:299:430:736
[  1] 0.00-10.00 sec F8(f)-PDF:
bin(w=10us):cnt(47685)=15:11,16:42378,17:3091,18:1344,19:376,20:206,21:126,22:54,23:22,24:13,25:4,26:2,27:4,28:4,29:1,30:2,31:1,35:1,37:1,39:1,49:1,52:1,55:1,67:1,74:1,85:1,90:2,94:1,95:1,97:1,100:1,103:1,104:1,113:1,114:1,115:2,116:1,118:1,119:2,120:1,125:2,126:1,127:1,132:1,133:1,134:1,137:2,138:1,140:1,142:2,143:1,144:1,149:1,153:1,157:1,159:1,184:1
(5.00/95.00/99.7%=16/17/22,Outliers=279,obl/obu=0/0) (1.832
ms/1625198502.626723)


[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.94 -i 1 --trip-times
-b 5g -e
------------------------------------------------------------
Client connecting to 192.168.1.94, TCP port 5001 with pid 69171 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.100%enp4s0 port 59146 connected with 192.168.1.94
port 5001 (MSS=1448) (trip-times) (sock=3) (ct=0.30 ms) on 2021-07-01
21:01:42 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  4770/0          8
 231K/111 us  5631373
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 240K/120 us  5207927
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 257K/114 us  5482029
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
 257K/110 us  5682567
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 257K/108 us  5786586
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 257K/136 us  4595230
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
 257K/111 us  5631373
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 257K/131 us  4770621
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
 257K/110 us  5682567
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
 257K/110 us  5681375
[  1] 0.00-10.01 sec  5.82 GBytes  5.00 Gbits/sec  47687/0          8
 257K/110 us  5676364
[rjmcmahon@ryzen3950 iperf2-code]$
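
A similar sketch (again an assumption about the printed text format, not
iperf 2 code) can recover the 5/95/99.7% figures from one of the F8-PDF lines
above; each comma-separated entry is a 10 microsecond bin index and a count:

def percentile_bins(body, percentiles=(5.0, 95.0, 99.7)):
    bins = sorted(tuple(map(int, entry.split(":"))) for entry in body.split(","))
    total = sum(count for _, count in bins)
    result = []
    for pct in percentiles:
        target = pct / 100.0 * total
        running = 0
        for bin_index, count in bins:
            running += count
            if running >= target:
                result.append(bin_index)
                break
    return result

# the 1.00-2.00 sec interval from the histogram run above
body = "15:6,16:4304,17:287,18:99,19:36,20:21,21:10,22:3,23:1,24:1"
print(percentile_bins(body))   # -> [16, 17, 21], matching (5.00/95.00/99.7%=16/17/21)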

Bob



On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com> wrote:

> Well, nice that the folks doing the conference  are willing to consider
> that quality of user experience has little to do with signalling rate at
> the physical layer or throughput of FTP transfers.
>
>
>
> But honestly, the fact that they call the problem "network quality"
> suggests that they REALLY, REALLY don't understand the Internet isn't the
> hardware or the routers or even the routing algorithms *to its users*.
>
>
>
> By ignoring the diversity of applications now and in the future, and the
> fact that we DON'T KNOW what will be coming up, this conference will likely
> fall into the usual trap that net-heads fall into - optimizing for some
> imaginary reality that doesn't exist, and in fact will probably never be
> what users actually will do given the chance.
>
>
>
> I saw this issue in 1976 in the group developing the original Internet
> protocols - a desire to put *into the network* special tricks to optimize
> ASR33 logins to remote computers from terminal concentrators (aka remote
> login), bulk file transfers between file systems on different time-sharing
> systems, and "sessions" (virtual circuits) that required logins. And then
> trying to exploit underlying "multicast" by building it into the IP layer,
> because someone thought that TV broadcast would be the dominant application.
>
>
>
> Frankly, to think of "quality" as something that can be "provided" by "the
> network" misses the entire point of "end-to-end argument in system design".
> Quality is not a property defined or created by The Network. If you want to
> talk about Quality, you need to talk about users - all the users at all
> times, now and into the future, and that's something you can't do if you
> don't bother to include current and future users talking about what they
> might expect to experience that they don't experience.
>
>
>
> There was much fighting back in 1976 that basically involved "network
> experts" saying that the network was the place to "solve" such issues as
> quality, so applications could avoid having to solve such issues.
>
>
>
> What some of us managed to do was to argue that you can't "solve" such
> issues. All you can do is provide a framework that enables different uses
> to *cooperate* in some way.
>
>
>
> Which is why the Internet drops packets rather than queueing them, and why
> diffserv cannot work.
>
> (I know the latter is controversial, but at the moment, ALL of diffserv
> attempts to talk about end-to-end application-specific metrics, but never,
> ever explains what the diffserv control points actually do w.r.t. what the
> IP layer can actually control. So it is meaningless - another violation of
> the so-called end-to-end principle).
>
>
>
> Networks are about getting packets from here to there, multiplexing the
> underlying resources. That's it. Quality is a whole different thing.
> Quality can be improved by end-to-end approaches, if the underlying network
> provides some kind of thing that actually creates a way for end-to-end
> applications to affect queueing and routing decisions, and more importantly
> getting "telemetry" from the network regarding what is actually going on
> with the other end-to-end users sharing the infrastructure.
>
>
>
> This conference won't talk about it this way. So don't waste your time.
>
>
>
>
>
>
>
> On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <dave.taht@gmail.com>
> said:
>
> > The program committee members are *amazing*. Perhaps, finally, we can
> > move the bar for the internet's quality metrics past endless, blind
> > repetitions of speedtest.
> >
> > For complete details, please see:
> > https://www.iab.org/activities/workshops/network-quality/
> >
> > Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
> > Invitations Issued by: Monday 16th August 2021
> >
> > Workshop Date: This will be a virtual workshop, spread over three days:
> >
> > 1400-1800 UTC Tue 14th September 2021
> > 1400-1800 UTC Wed 15th September 2021
> > 1400-1800 UTC Thu 16th September 2021
> >
> > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> >
> > The Program Committee members:
> >
> > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
> > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
> > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
> > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
> > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> >
> > Send Submissions to: network-quality-workshop-pc@iab.org.
> >
> > Position papers from academia, industry, the open source community and
> > others that focus on measurements, experiences, observations and
> > advice for the future are welcome. Papers that reflect experience
> > based on deployed services are especially welcome. The organizers
> > understand that specific actions taken by operators are unlikely to be
> > discussed in detail, so papers discussing general categories of
> > actions and issues without naming specific technologies, products, or
> > other players in the ecosystem are expected. Papers should not focus
> > on specific protocol solutions.
> >
> > The workshop will be by invitation only. Those wishing to attend
> > should submit a position paper to the address above; it may take the
> > form of an Internet-Draft.
> >
> > All inputs submitted and considered relevant will be published on the
> > workshop website. The organisers will decide whom to invite based on
> > the submissions received. Sessions will be organized according to
> > content, and not every accepted submission or invited attendee will
> > have an opportunity to present as the intent is to foster discussion
> > and not simply to have a sequence of presentations.
> >
> > Position papers from those not planning to attend the virtual sessions
> > themselves are also encouraged. A workshop report will be published
> > afterwards.
> >
> > Overview:
> >
> > "We believe that one of the major factors behind this lack of progress
> > is the popular perception that throughput is the often sole measure of
> > the quality of Internet connectivity. With such narrow focus, people
> > don’t consider questions such as:
> >
> > What is the latency under typical working conditions?
> > How reliable is the connectivity across longer time periods?
> > Does the network allow the use of a broad range of protocols?
> > What services can be run by clients of the network?
> > What kind of IPv4, NAT or IPv6 connectivity is offered, and are there
> firewalls?
> > What security mechanisms are available for local services, such as DNS?
> > To what degree are the privacy, confidentiality, integrity and
> > authenticity of user communications guarded?
> >
> > Improving these aspects of network quality will likely depend on
> > measurement and exposing metrics to all involved parties, including to
> > end users in a meaningful way. Such measurements and exposure of the
> > right metrics will allow service providers and network operators to
> > focus on the aspects that impacts the users’ experience most and at
> > the same time empowers users to choose the Internet service that will
> > give them the best experience."
> >
> >
> > --
> > Latest Podcast:
> >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >
> > Dave Täht CTO, TekLibre, LLC
> > _______________________________________________
> > Cerowrt-devel mailing list
> > Cerowrt-devel@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> >
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 23695 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-02  4:04   ` [Make-wifi-fast] " Bob McMahon
@ 2021-07-02 16:11     ` Dick Roy
  0 siblings, 0 replies; 108+ messages in thread
From: Dick Roy @ 2021-07-02 16:11 UTC (permalink / raw)
  To: 'Bob McMahon', 'David P. Reed'
  Cc: starlink, 'Make-Wifi-fast', 'Cake List',
	codel, 'cerowrt-devel', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 20209 bytes --]

Some terminology if one cares:

“Segments” are “transported”     (Layer 4)

“Packets” are “networked”        (Layer 3)

“Frames” are “data linked”       (Layer 2)

and last but not least …

“Streams” flow “over the air”    (Layer 1)
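
To make the mapping concrete, a small sketch (scapy is assumed here purely as
an example library; the address and port mirror the iperf runs quoted below)
that builds one encapsulation stack:

# illustration only: one object per layer, matching the terms above;
# TCP builds the segment (L4), IP the packet (L3), Ether the frame (L2),
# and the Layer 1 stream is whatever the PHY does with the resulting bytes
from scapy.all import Ether, IP, TCP

frame = Ether() / IP(dst="192.168.1.94") / TCP(dport=5001, flags="S")
frame.show()   # prints the Ether / IP / TCP nesting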

 

  _____  

From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Bob McMahon
Sent: Thursday, July 1, 2021 9:04 PM
To: David P. Reed
Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet
Quality workshop CFP for the internet architecture board

 

I think even packets are a network construct. End/end protocols don't write
packets.  They mostly make writes() and reads and have no clue about
packets. Except for, of course, UDP which you know everything about being
the original designer.

Agreed the telemetry is most interesting and a huge void. Curious to more of
your thoughts on it, metrics, etc.

Note: iperf 2 has write-to-read latencies. It requires clock sync. My
systems sync to the GPS atomic clock as the common reference. I think end/end
queue depths can be calculated per Little's law (shown below in the inP column).
https://sourceforge.net/projects/iperf2/

[rjmcmahon@rjm-nas ~]$ iperf -s -i 1 
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.94%enp2s0 port 5001 connected with 192.168.1.100 port
59142 (MSS=1448) (trip-times) (sock=4) (peer 2.1.3-rc) on 2021-07-01
20:57:37 (PDT)
[ ID] Interval        Transfer    Bandwidth    Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  0.170/0.153/1.492/0.078 ms
(4769/131082)  104 KByte 3674521  22841=787:18657:2467:623:84:41:66:116
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  0.167/0.156/0.434/0.015 ms
(4768/131086)  102 KByte 3742630  23346=1307:18975:2171:578:105:53:56:101
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  0.168/0.157/1.337/0.033 ms
(4769/131046)  103 KByte 3710006  23263=1470:18602:2148:725:107:53:60:98
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  0.166/0.158/0.241/0.008 ms
(4768/131082)  102 KByte 3756478  23960=1452:19714:2123:449:79:32:38:73
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  0.166/0.157/0.247/0.008 ms
(4769/131061)  102 KByte 3756193  23653=1234:19529:2206:439:89:36:44:76
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  0.166/0.158/0.245/0.007 ms
(4768/131072)  101 KByte 3758826  23478=1081:19356:2284:535:73:35:39:75
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  0.168/0.158/0.283/0.009 ms
(4768/131096)  102 KByte 3728988  23477=1338:19301:1995:535:104:46:59:99
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  0.163/0.150/0.400/0.010 ms
(4769/131047) 99.7 KByte 3826119  23496=1213:19404:2101:498:83:57:43:97
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.149/0.236/0.008 ms
(4768/131082) 96.6 KByte 3951089  23652=1328:19498:2074:493:77:41:53:88
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.149/0.235/0.008
ms (4769/131061) 96.4 KByte 3958720  23725=1509:19410:2051:463:91:46:47:108
[  1] 0.00-10.00 sec  5.82 GBytes  5.00 Gbits/sec  0.165/0.149/1.492/0.028
ms (47685/131072)  101 KByte 3784172
234891=12719:192446:21620:5338:892:440:505:931

[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.94 -i 1 --trip-times
-b 5g -e
------------------------------------------------------------
Client connecting to 192.168.1.94, TCP port 5001 with pid 68866 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.100%enp4s0 port 59142 connected with 192.168.1.94 port
5001 (MSS=1448) (trip-times) (sock=3) (ct=0.33 ms) on 2021-07-01 20:57:37
(PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  4770/0          5
295K/111 us  5631373
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
295K/120 us  5207927
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
306K/110 us  5681375
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
306K/107 us  5841891
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
306K/110 us  5681375
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
306K/109 us  5733498
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
306K/115 us  5435499
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
306K/111 us  5630192
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
306K/110 us  5682567
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
306K/109 us  5733498

[rjmcmahon@rjm-nas ~]$ iperf -s -i 1 --histograms=10u
------------------------------------------------------------
Server listening on TCP port 5001 with pid 5166
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
Enabled rx-histograms bin-width=0.010 ms, bins=1000 (clients must use
--trip-times)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.94%enp2s0 port 5001 connected with 192.168.1.100 port
59146 (MSS=1448) (trip-times) (sock=4) (peer 2.1.3-rc) on 2021-07-01
21:01:42 (PDT)
[ ID] Interval        Transfer    Bandwidth    Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  0.164/0.149/1.832/0.101 ms
(4769/131072)  100 KByte 3809846  22370=435:17000:3686:1060:77:35:25:52
[  1] 0.00-1.00 sec F8-PDF:
bin(w=10us):cnt(4769)=15:3,16:4414,17:227,18:49,19:14,20:11,21:6,22:1,23:1,3
5:1,49:1,55:1,67:1,74:1,85:1,90:2,94:1,95:1,97:1,100:1,103:1,104:1,113:1,114
:1,115:2,116:1,118:1,119:2,120:1,125:2,126:1,127:1,132:1,133:1,134:1,137:2,1
38:1,140:1,142:2,143:1,144:1,149:1,153:1,157:1,159:1,184:1
(5.00/95.00/99.7%=16/17/133,Outliers=352,obl/obu=0/0) (1.832
ms/1625198502.626723)
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  0.156/0.148/0.235/0.006 ms
(4768/131094) 95.0 KByte 4018733  21762=498:16581:2918:1512:75:36:56:86
[  1] 1.00-2.00 sec F8-PDF:
bin(w=10us):cnt(4768)=15:6,16:4304,17:287,18:99,19:36,20:21,21:10,22:3,23:1,
24:1 (5.00/95.00/99.7%=16/17/21,Outliers=458,obl/obu=0/0) (0.235
ms/1625198503.810735)
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.150/0.515/0.009 ms
(4769/131049) 96.2 KByte 3966043  22863=528:18422:3099:571:78:36:47:82
[  1] 2.00-3.00 sec F8-PDF:
bin(w=10us):cnt(4769)=16:4078,17:416,18:182,19:50,20:23,21:9,22:4,23:3,24:1,
27:1,30:1,52:1 (5.00/95.00/99.7%=16/18/21,Outliers=0,obl/obu=0/0) (0.515
ms/1625198505.144479)
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.149/0.284/0.007 ms
(4768/131082) 95.9 KByte 3978135  22766=472:18044:3360:646:90:37:51:66
[  1] 3.00-4.00 sec F8-PDF:
bin(w=10us):cnt(4768)=15:1,16:4183,17:342,18:159,19:37,20:23,21:13,22:4,23:3
,25:1,27:1,29:1 (5.00/95.00/99.7%=16/18/21,Outliers=23,obl/obu=0/0) (0.284
ms/1625198505.973695)
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.149/0.381/0.008 ms
(4769/131061) 95.9 KByte 3978347  22759=451:18039:3415:632:57:16:49:100
[  1] 4.00-5.00 sec F8-PDF:
bin(w=10us):cnt(4769)=15:1,16:4253,17:287,18:150,19:31,20:11,21:15,22:6,23:4
,24:4,25:1,26:1,27:1,28:2,30:1,39:1
(5.00/95.00/99.7%=16/17/23,Outliers=36,obl/obu=0/0) (0.381
ms/1625198507.119394)
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.151/0.222/0.006 ms
(4768/131072) 96.0 KByte 3974720  22661=422:17875:3411:723:95:29:44:62
[  1] 5.00-6.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4166,17:405,18:130,19:30,20:21,21:8,22:7,23:1
(5.00/95.00/99.7%=16/17/21,Outliers=0,obl/obu=0/0) (0.222
ms/1625198508.350409)
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.150/0.302/0.008 ms
(4768/131082) 96.3 KByte 3962779  22723=453:17930:3414:699:93:24:33:77
[  1] 6.00-7.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4179,17:323,18:152,19:50,20:33,21:18,22:6,23:1,24:2
,26:1,27:1,28:1,31:1 (5.00/95.00/99.7%=16/18/21,Outliers=0,obl/obu=0/0)
(0.302 ms/1625198509.416997)
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  0.157/0.150/0.217/0.006 ms
(4769/131061) 96.0 KByte 3974060  22923=489:18132:3533:568:78:23:36:64
[  1] 7.00-8.00 sec F8-PDF:
bin(w=10us):cnt(4769)=16:4228,17:317,18:137,19:45,20:21,21:14,22:7
(5.00/95.00/99.7%=16/17/21,Outliers=0,obl/obu=0/0) (0.217
ms/1625198510.34875)
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  0.158/0.150/0.363/0.009 ms
(4768/131072) 96.3 KByte 3960477  22677=472:17988:3377:533:92:50:64:101
[  1] 8.00-9.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4194,17:253,18:173,19:62,20:32,21:27,22:12,23:8,24:
3,25:2,28:1,37:1 (5.00/95.00/99.7%=16/18/23,Outliers=0,obl/obu=0/0) (0.363
ms/1625198511.392746)
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  0.156/0.150/0.232/0.005
ms (4768/131082) 95.5 KByte 3993997  23174=396:18593:3590:461:50:13:25:46
[  1] 9.00-10.00 sec F8-PDF:
bin(w=10us):cnt(4768)=16:4378,17:234,18:113,19:21,20:10,21:6,22:4,24:2
(5.00/95.00/99.7%=16/17/20,Outliers=0,obl/obu=0/0) (0.232
ms/1625198512.528385)
[  1] 0.00-10.00 sec  5.82 GBytes  5.00 Gbits/sec  0.158/0.148/1.832/0.033
ms (47685/131072) 96.3 KByte 3961002
226681=4616:178607:33803:7405:785:299:430:736
[  1] 0.00-10.00 sec F8(f)-PDF:
bin(w=10us):cnt(47685)=15:11,16:42378,17:3091,18:1344,19:376,20:206,21:126,2
2:54,23:22,24:13,25:4,26:2,27:4,28:4,29:1,30:2,31:1,35:1,37:1,39:1,49:1,52:1
,55:1,67:1,74:1,85:1,90:2,94:1,95:1,97:1,100:1,103:1,104:1,113:1,114:1,115:2
,116:1,118:1,119:2,120:1,125:2,126:1,127:1,132:1,133:1,134:1,137:2,138:1,140
:1,142:2,143:1,144:1,149:1,153:1,157:1,159:1,184:1
(5.00/95.00/99.7%=16/17/22,Outliers=279,obl/obu=0/0) (1.832
ms/1625198502.626723)


[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.94 -i 1 --trip-times
-b 5g -e 
------------------------------------------------------------
Client connecting to 192.168.1.94, TCP port 5001 with pid 69171 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.100%enp4s0 port 59146 connected with 192.168.1.94 port
5001 (MSS=1448) (trip-times) (sock=3) (ct=0.30 ms) on 2021-07-01 21:01:42
(PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec   596 MBytes  5.00 Gbits/sec  4770/0          8
231K/111 us  5631373
[  1] 1.00-2.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
240K/120 us  5207927
[  1] 2.00-3.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
257K/114 us  5482029
[  1] 3.00-4.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
257K/110 us  5682567
[  1] 4.00-5.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
257K/108 us  5786586
[  1] 5.00-6.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
257K/136 us  4595230
[  1] 6.00-7.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
257K/111 us  5631373
[  1] 7.00-8.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
257K/131 us  4770621
[  1] 8.00-9.00 sec   596 MBytes  5.00 Gbits/sec  4769/0          0
257K/110 us  5682567
[  1] 9.00-10.00 sec   596 MBytes  5.00 Gbits/sec  4768/0          0
257K/110 us  5681375
[  1] 0.00-10.01 sec  5.82 GBytes  5.00 Gbits/sec  47687/0          8
257K/110 us  5676364
[rjmcmahon@ryzen3950 iperf2-code]$ 

Bob 



 

On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com> wrote:

Well, nice that the folks doing the conference  are willing to consider that
quality of user experience has little to do with signalling rate at the
physical layer or throughput of FTP transfers.

 

But honestly, the fact that they call the problem "network quality" suggests
that they REALLY, REALLY don't understand the Internet isn't the hardware or
the routers or even the routing algorithms *to its users*.

 

By ignoring the diversity of applications now and in the future, and the
fact that we DON'T KNOW what will be coming up, this conference will likely
fall into the usual trap that net-heads fall into - optimizing for some
imaginary reality that doesn't exist, and in fact will probably never be
what users actually will do given the chance.

 

I saw this issue in 1976 in the group developing the original Internet
protocols - a desire to put *into the network* special tricks to optimize
ASR33 logins to remote computers from terminal concentrators (aka remote
login), bulk file transfers between file systems on different time-sharing
systems, and "sessions" (virtual circuits) that required logins. And then
trying to exploit underlying "multicast" by building it into the IP layer,
because someone thought that TV broadcast would be the dominant application.

 

Frankly, to think of "quality" as something that can be "provided" by "the
network" misses the entire point of "end-to-end argument in system design".
Quality is not a property defined or created by The Network. If you want to
talk about Quality, you need to talk about users - all the users at all
times, now and into the future, and that's something you can't do if you
don't bother to include current and future users talking about what they
might expect to experience that they don't experience.

 

There was much fighting back in 1976 that basically involved "network
experts" saying that the network was the place to "solve" such issues as
quality, so applications could avoid having to solve such issues.

 

What some of us managed to do was to argue that you can't "solve" such
issues. All you can do is provide a framework that enables different uses to
*cooperate* in some way.

 

Which is why the Internet drops packets rather than queueing them, and why
diffserv cannot work.

(I know the latter is controversial, but at the moment, ALL of diffserv
attempts to talk about end-to-end application-specific metrics, but never,
ever explains what the diffserv control points actually do w.r.t. what the
IP layer can actually control. So it is meaningless - another violation of
the so-called end-to-end principle).

 

Networks are about getting packets from here to there, multiplexing the
underlying resources. That's it. Quality is a whole different thing. Quality
can be improved by end-to-end approaches, if the underlying network provides
some kind of thing that actually creates a way for end-to-end applications
to affect queueing and routing decisions, and more importantly getting
"telemetry" from the network regarding what is actually going on with the
other end-to-end users sharing the infrastructure.

 

This conference won't talk about it this way. So don't waste your time.

 

 

 

On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <dave.taht@gmail.com> said:

> The program committee members are *amazing*. Perhaps, finally, we can
> move the bar for the internet's quality metrics past endless, blind
> repetitions of speedtest.
> 
> For complete details, please see:
> https://www.iab.org/activities/workshops/network-quality/
> 
> Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
> Invitations Issued by: Monday 16th August 2021
> 
> Workshop Date: This will be a virtual workshop, spread over three days:
> 
> 1400-1800 UTC Tue 14th September 2021
> 1400-1800 UTC Wed 15th September 2021
> 1400-1800 UTC Thu 16th September 2021
> 
> Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> 
> The Program Committee members:
> 
> Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
> Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
> Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
> Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
> Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> 
> Send Submissions to: network-quality-workshop-pc@iab.org.
> 
> Position papers from academia, industry, the open source community and
> others that focus on measurements, experiences, observations and
> advice for the future are welcome. Papers that reflect experience
> based on deployed services are especially welcome. The organizers
> understand that specific actions taken by operators are unlikely to be
> discussed in detail, so papers discussing general categories of
> actions and issues without naming specific technologies, products, or
> other players in the ecosystem are expected. Papers should not focus
> on specific protocol solutions.
> 
> The workshop will be by invitation only. Those wishing to attend
> should submit a position paper to the address above; it may take the
> form of an Internet-Draft.
> 
> All inputs submitted and considered relevant will be published on the
> workshop website. The organisers will decide whom to invite based on
> the submissions received. Sessions will be organized according to
> content, and not every accepted submission or invited attendee will
> have an opportunity to present as the intent is to foster discussion
> and not simply to have a sequence of presentations.
> 
> Position papers from those not planning to attend the virtual sessions
> themselves are also encouraged. A workshop report will be published
> afterwards.
> 
> Overview:
> 
> "We believe that one of the major factors behind this lack of progress
> is the popular perception that throughput is the often sole measure of
> the quality of Internet connectivity. With such narrow focus, people
> don’t consider questions such as:
> 
> What is the latency under typical working conditions?
> How reliable is the connectivity across longer time periods?
> Does the network allow the use of a broad range of protocols?
> What services can be run by clients of the network?
> What kind of IPv4, NAT or IPv6 connectivity is offered, and are there
firewalls?
> What security mechanisms are available for local services, such as DNS?
> To what degree are the privacy, confidentiality, integrity and
> authenticity of user communications guarded?
> 
> Improving these aspects of network quality will likely depend on
> measurement and exposing metrics to all involved parties, including to
> end users in a meaningful way. Such measurements and exposure of the
> right metrics will allow service providers and network operators to
> focus on the aspects that impacts the users’ experience most and at
> the same time empowers users to choose the Internet service that will
> give them the best experience."
> 
> 
> --
> Latest Podcast:
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> 
> Dave Täht CTO, TekLibre, LLC
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 

_______________________________________________
Make-wifi-fast mailing list
Make-wifi-fast@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast


This electronic communication and the information and any files transmitted
with it, or attached to it, are confidential and are intended solely for the
use of the individual or entity to whom it is addressed and may contain
information that is confidential, legally privileged, protected by privacy
laws, or otherwise restricted from disclosure to anyone else. If you are not
the intended recipient or the person responsible for delivering the e-mail
to the intended recipient, you are hereby notified that any use, copying,
distributing, dissemination, forwarding, printing, or copying of this e-mail
is strictly prohibited. If you received this e-mail in error, please return
the e-mail to the sender, delete it from your computer, and destroy any
printed copy of it.


[-- Attachment #2: Type: text/html, Size: 33732 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-02  1:16 ` David P. Reed
  2021-07-02  4:04   ` [Make-wifi-fast] " Bob McMahon
@ 2021-07-02 17:07   ` Dave Taht
  2021-07-02 23:28     ` [Make-wifi-fast] " Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Dave Taht @ 2021-07-02 17:07 UTC (permalink / raw)
  To: David P. Reed
  Cc: bloat, Make-Wifi-fast, cerowrt-devel, codel, starlink, Cake List

In terms of trying to find "Quality" I have tried to encourage folk to
read both "zen and the art of motorcycle maintenance"[0] and Deming's
work on "total quality management".

My own slice at this network, computer and lifestyle "issue" is aiming
for "imperceptible latency" in all things. [1]. There's a lot of
fallout from that in terms of not just addressing queuing delay, but
caching, prefetching, and learning more about what a user really needs
(as opposed to wants) to know via intelligent agents.

[0] If you want to get depressed, read Pirsig's successor to "zen...",
Lila, which is in part about what happens when an engineer hits an
insoluble problem.
[1] https://www.internetsociety.org/events/latency2013/



On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com> wrote:
>
> Well, nice that the folks doing the conference  are willing to consider that quality of user experience has little to do with signalling rate at the physical layer or throughput of FTP transfers.
>
>
>
> But honestly, the fact that they call the problem "network quality" suggests that they REALLY, REALLY don't understand the Internet isn't the hardware or the routers or even the routing algorithms *to its users*.
>
>
>
> By ignoring the diversity of applications now and in the future, and the fact that we DON'T KNOW what will be coming up, this conference will likely fall into the usual trap that net-heads fall into - optimizing for some imaginary reality that doesn't exist, and in fact will probably never be what users actually will do given the chance.
>
>
>
> I saw this issue in 1976 in the group developing the original Internet protocols - a desire to put *into the network* special tricks to optimize ASR33 logins to remote computers from terminal concentrators (aka remote login), bulk file transfers between file systems on different time-sharing systems, and "sessions" (virtual circuits) that required logins. And then trying to exploit underlying "multicast" by building it into the IP layer, because someone thought that TV broadcast would be the dominant application.
>
>
>
> Frankly, to think of "quality" as something that can be "provided" by "the network" misses the entire point of "end-to-end argument in system design". Quality is not a property defined or created by The Network. If you want to talk about Quality, you need to talk about users - all the users at all times, now and into the future, and that's something you can't do if you don't bother to include current and future users talking about what they might expect to experience that they don't experience.
>
>
>
> There was much fighting back in 1976 that basically involved "network experts" saying that the network was the place to "solve" such issues as quality, so applications could avoid having to solve such issues.
>
>
>
> What some of us managed to do was to argue that you can't "solve" such issues. All you can do is provide a framework that enables different uses to *cooperate* in some way.
>
>
>
> Which is why the Internet drops packets rather than queueing them, and why diffserv cannot work.
>
> (I know the latter is controversial, but at the moment, ALL of diffserv attempts to talk about end-to-end application-specific metrics, but never, ever explains what the diffserv control points actually do w.r.t. what the IP layer can actually control. So it is meaningless - another violation of the so-called end-to-end principle).
>
>
>
> Networks are about getting packets from here to there, multiplexing the underlying resources. That's it. Quality is a whole different thing. Quality can be improved by end-to-end approaches, if the underlying network provides some kind of thing that actually creates a way for end-to-end applications to affect queueing and routing decisions, and more importantly getting "telemetry" from the network regarding what is actually going on with the other end-to-end users sharing the infrastructure.
>
>
>
> This conference won't talk about it this way. So don't waste your time.
>
>
>
>
>
>
>
> On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <dave.taht@gmail.com> said:
>
> > The program committee members are *amazing*. Perhaps, finally, we can
> > move the bar for the internet's quality metrics past endless, blind
> > repetitions of speedtest.
> >
> > For complete details, please see:
> > https://www.iab.org/activities/workshops/network-quality/
> >
> > Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
> > Invitations Issued by: Monday 16th August 2021
> >
> > Workshop Date: This will be a virtual workshop, spread over three days:
> >
> > 1400-1800 UTC Tue 14th September 2021
> > 1400-1800 UTC Wed 15th September 2021
> > 1400-1800 UTC Thu 16th September 2021
> >
> > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> >
> > The Program Committee members:
> >
> > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
> > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
> > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
> > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
> > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> >
> > Send Submissions to: network-quality-workshop-pc@iab.org.
> >
> > Position papers from academia, industry, the open source community and
> > others that focus on measurements, experiences, observations and
> > advice for the future are welcome. Papers that reflect experience
> > based on deployed services are especially welcome. The organizers
> > understand that specific actions taken by operators are unlikely to be
> > discussed in detail, so papers discussing general categories of
> > actions and issues without naming specific technologies, products, or
> > other players in the ecosystem are expected. Papers should not focus
> > on specific protocol solutions.
> >
> > The workshop will be by invitation only. Those wishing to attend
> > should submit a position paper to the address above; it may take the
> > form of an Internet-Draft.
> >
> > All inputs submitted and considered relevant will be published on the
> > workshop website. The organisers will decide whom to invite based on
> > the submissions received. Sessions will be organized according to
> > content, and not every accepted submission or invited attendee will
> > have an opportunity to present as the intent is to foster discussion
> > and not simply to have a sequence of presentations.
> >
> > Position papers from those not planning to attend the virtual sessions
> > themselves are also encouraged. A workshop report will be published
> > afterwards.
> >
> > Overview:
> >
> > "We believe that one of the major factors behind this lack of progress
> > is the popular perception that throughput is the often sole measure of
> > the quality of Internet connectivity. With such narrow focus, people
> > don’t consider questions such as:
> >
> > What is the latency under typical working conditions?
> > How reliable is the connectivity across longer time periods?
> > Does the network allow the use of a broad range of protocols?
> > What services can be run by clients of the network?
> > What kind of IPv4, NAT or IPv6 connectivity is offered, and are there firewalls?
> > What security mechanisms are available for local services, such as DNS?
> > To what degree are the privacy, confidentiality, integrity and
> > authenticity of user communications guarded?
> >
> > Improving these aspects of network quality will likely depend on
> > measurement and exposing metrics to all involved parties, including to
> > end users in a meaningful way. Such measurements and exposure of the
> > right metrics will allow service providers and network operators to
> > focus on the aspects that impacts the users’ experience most and at
> > the same time empowers users to choose the Internet service that will
> > give them the best experience."
> >
> >
> > --
> > Latest Podcast:
> > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >
> > Dave Täht CTO, TekLibre, LLC
> > _______________________________________________
> > Cerowrt-devel mailing list
> > Cerowrt-devel@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> >



--
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-02 17:07   ` [Cerowrt-devel] " Dave Taht
@ 2021-07-02 23:28     ` Bob McMahon
  2021-07-06 13:46       ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-07-02 23:28 UTC (permalink / raw)
  To: Dave Taht
  Cc: David P. Reed, Cake List, Make-Wifi-fast, starlink, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 10508 bytes --]

I think we need the language of math here. It seems like the network power
metric, introduced by Kleinrock and Jaffe in the late 70s, is something
useful. Effective end/end queue depths per Little's law also seem useful.
Both are available in iperf 2 from a test perspective. Repurposing test
techniques to actual traffic could be useful. Hence the question around
what exact telemetry is useful to apps making socket write() and read()
calls.
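
To make that concrete, here is a rough Python sketch of both metrics
(illustrative only -- this is not iperf 2's code, and the sample numbers
are made up), assuming an app can sample its own goodput and end-to-end
delay per measurement interval:

def network_power(goodput_bps, delay_s):
    # Kleinrock/Jaffe "power": throughput divided by delay.
    # It rewards throughput but penalizes queueing delay.
    return goodput_bps / delay_s

def effective_queue_depth_bytes(goodput_bps, delay_s):
    # Little's law, L = lambda * W: average bytes in flight end to end,
    # given the arrival rate (goodput) and the sojourn time (delay).
    return (goodput_bps / 8.0) * delay_s

# Example: 100 Mbit/s of goodput with 40 ms of end-to-end delay.
goodput, delay = 100e6, 0.040
print("network power:", network_power(goodput, delay))                      # 2.5e9
print("queue depth (bytes):", effective_queue_depth_bytes(goodput, delay))  # 500000.0

Telemetry exposed around write()/read() would only need to supply those two
inputs per interval.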

Bob

On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com> wrote:

> In terms of trying to find "Quality" I have tried to encourage folk to
> both read "zen and the art of motorcycle maintenance"[0], and Deming's
> work on "total quality management".
>
> My own slice at this network, computer and lifestyle "issue" is aiming
> for "imperceptible latency" in all things. [1]. There's a lot of
> fallout from that in terms of not just addressing queuing delay, but
> caching, prefetching, and learning more about what a user really needs
> (as opposed to wants) to know via intelligent agents.
>
> [0] If you want to get depressed, read Pirsig's successor to "zen...",
> lila, which is in part about what happens when an engineer hits an
> insoluble problem.
> [1] https://www.internetsociety.org/events/latency2013/
>
>
>
> On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com> wrote:
> >
> > Well, nice that the folks doing the conference  are willing to consider
> that quality of user experience has little to do with signalling rate at
> the physical layer or throughput of FTP transfers.
> >
> >
> >
> > But honestly, the fact that they call the problem "network quality"
> suggests that they REALLY, REALLY don't understand the Internet isn't the
> hardware or the routers or even the routing algorithms *to its users*.
> >
> >
> >
> > By ignoring the diversity of applications now and in the future, and the
> fact that we DON'T KNOW what will be coming up, this conference will likely
> fall into the usual trap that net-heads fall into - optimizing for some
> imaginary reality that doesn't exist, and in fact will probably never be
> what users actually will do given the chance.
> >
> >
> >
> > I saw this issue in 1976 in the group developing the original Internet
> protocols - a desire to put *into the network* special tricks to optimize
> ASR33 logins to remote computers from terminal concentrators (aka remote
> login), bulk file transfers between file systems on different time-sharing
> systems, and "sessions" (virtual circuits) that required logins. And then
> trying to exploit underlying "multicast" by building it into the IP layer,
> because someone thought that TV broadcast would be the dominant application.
> >
> >
> >
> > Frankly, to think of "quality" as something that can be "provided" by
> "the network" misses the entire point of "end-to-end argument in system
> design". Quality is not a property defined or created by The Network. If
> you want to talk about Quality, you need to talk about users - all the
> users at all times, now and into the future, and that's something you can't
> do if you don't bother to include current and future users talking about
> what they might expect to experience that they don't experience.
> >
> >
> >
> > There was much fighting back in 1976 that basically involved "network
> experts" saying that the network was the place to "solve" such issues as
> quality, so applications could avoid having to solve such issues.
> >
> >
> >
> > What some of us managed to do was to argue that you can't "solve" such
> issues. All you can do is provide a framework that enables different uses
> to *cooperate* in some way.
> >
> >
> >
> > Which is why the Internet drops packets rather than queueing them, and
> why diffserv cannot work.
> >
> > (I know the latter is controversial, but at the moment, ALL of diffserv
> attempts to talk about end-to-end application specific metrics, but never,
> ever explains what the diffserv control points actually do w.r.t. what the
> IP layer can actually control. So it is meaningless - another violation of
> the so-called end-to-end principle).
> >
> >
> >
> > Networks are about getting packets from here to there, multiplexing the
> underlying resources. That's it. Quality is a whole different thing.
> Quality can be improved by end-to-end approaches, if the underlying network
> provides some kind of thing that actually creates a way for end-to-end
> applications to affect queueing and routing decisions, and more importantly
> getting "telemetry" from the network regarding what is actually going on
> with the other end-to-end users sharing the infrastructure.
> >
> >
> >
> > This conference won't talk about it this way. So don't waste your time.
> >
> >
> >
> >
> >
> >
> >
> > On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <dave.taht@gmail.com>
> said:
> >
> > > The program committee members are *amazing*. Perhaps, finally, we can
> > > move the bar for the internet's quality metrics past endless, blind
> > > repetitions of speedtest.
> > >
> > > For complete details, please see:
> > > https://www.iab.org/activities/workshops/network-quality/
> > >
> > > Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On
> Earth)
> > > Invitations Issued by: Monday 16th August 2021
> > >
> > > Workshop Date: This will be a virtual workshop, spread over three days:
> > >
> > > 1400-1800 UTC Tue 14th September 2021
> > > 1400-1800 UTC Wed 15th September 2021
> > > 1400-1800 UTC Thu 16th September 2021
> > >
> > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> > >
> > > The Program Committee members:
> > >
> > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
> > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
> > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
> > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
> > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> > >
> > > Send Submissions to: network-quality-workshop-pc@iab.org.
> > >
> > > Position papers from academia, industry, the open source community and
> > > others that focus on measurements, experiences, observations and
> > > advice for the future are welcome. Papers that reflect experience
> > > based on deployed services are especially welcome. The organizers
> > > understand that specific actions taken by operators are unlikely to be
> > > discussed in detail, so papers discussing general categories of
> > > actions and issues without naming specific technologies, products, or
> > > other players in the ecosystem are expected. Papers should not focus
> > > on specific protocol solutions.
> > >
> > > The workshop will be by invitation only. Those wishing to attend
> > > should submit a position paper to the address above; it may take the
> > > form of an Internet-Draft.
> > >
> > > All inputs submitted and considered relevant will be published on the
> > > workshop website. The organisers will decide whom to invite based on
> > > the submissions received. Sessions will be organized according to
> > > content, and not every accepted submission or invited attendee will
> > > have an opportunity to present as the intent is to foster discussion
> > > and not simply to have a sequence of presentations.
> > >
> > > Position papers from those not planning to attend the virtual sessions
> > > themselves are also encouraged. A workshop report will be published
> > > afterwards.
> > >
> > > Overview:
> > >
> > > "We believe that one of the major factors behind this lack of progress
> > > is the popular perception that throughput is the often sole measure of
> > > the quality of Internet connectivity. With such narrow focus, people
> > > don’t consider questions such as:
> > >
> > > What is the latency under typical working conditions?
> > > How reliable is the connectivity across longer time periods?
> > > Does the network allow the use of a broad range of protocols?
> > > What services can be run by clients of the network?
> > > What kind of IPv4, NAT or IPv6 connectivity is offered, and are there
> firewalls?
> > > What security mechanisms are available for local services, such as DNS?
> > > To what degree are the privacy, confidentiality, integrity and
> > > authenticity of user communications guarded?
> > >
> > > Improving these aspects of network quality will likely depend on
> > > measurement and exposing metrics to all involved parties, including to
> > > end users in a meaningful way. Such measurements and exposure of the
> > > right metrics will allow service providers and network operators to
> > > focus on the aspects that impacts the users’ experience most and at
> > > the same time empowers users to choose the Internet service that will
> > > give them the best experience."
> > >
> > >
> > > --
> > > Latest Podcast:
> > >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > >
> > > Dave Täht CTO, TekLibre, LLC
> > > _______________________________________________
> > > Cerowrt-devel mailing list
> > > Cerowrt-devel@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> > >
>
>
>
> --
> Latest Podcast:
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>
> Dave Täht CTO, TekLibre, LLC
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast


[-- Attachment #1.2: Type: text/html, Size: 12987 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-02 23:28     ` [Make-wifi-fast] " Bob McMahon
@ 2021-07-06 13:46       ` Ben Greear
  2021-07-06 20:43         ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  2021-07-08 19:38         ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " David P. Reed
  0 siblings, 2 replies; 108+ messages in thread
From: Ben Greear @ 2021-07-06 13:46 UTC (permalink / raw)
  To: Bob McMahon, Dave Taht
  Cc: starlink, Make-Wifi-fast, David P. Reed, Cake List, codel,
	cerowrt-devel, bloat

Hello,

I am interested in hearing wish lists for network testing features.  We make test equipment, supporting lots
of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6, http, ... protocols,
and we are open to creating/improving some of our automated tests.

I know Dave has some test scripts already, so I'm not necessarily looking to reimplement that,
but more fishing for other/new ideas.

Thanks,
Ben

On 7/2/21 4:28 PM, Bob McMahon wrote:
> I think we need the language of math here. It seems like the network power metric, introduced by Kleinrock and Jaffe in the late 70s, is something useful. 
> Effective end/end queue depths per Little's law also seem useful. Both are available in iperf 2 from a test perspective. Repurposing test techniques to actual 
> traffic could be useful. Hence the question around what exact telemetry is useful to apps making socket write() and read() calls.
> 
> Bob
> 
> On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
> 
>     In terms of trying to find "Quality" I have tried to encourage folk to
>     both read "zen and the art of motorcycle maintenance"[0], and Deming's
>     work on "total quality management".
> 
>     My own slice at this network, computer and lifestyle "issue" is aiming
>     for "imperceptible latency" in all things. [1]. There's a lot of
>     fallout from that in terms of not just addressing queuing delay, but
>     caching, prefetching, and learning more about what a user really needs
>     (as opposed to wants) to know via intelligent agents.
> 
>     [0] If you want to get depressed, read Pirsig's successor to "zen...",
>     lila, which is in part about what happens when an engineer hits an
>     insoluble problem.
>     [1] https://www.internetsociety.org/events/latency2013/ <https://www.internetsociety.org/events/latency2013/>
> 
> 
> 
>     On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com <mailto:dpreed@deepplum.com>> wrote:
>      >
>      > Well, nice that the folks doing the conference  are willing to consider that quality of user experience has little to do with signalling rate at the
>     physical layer or throughput of FTP transfers.
>      >
>      >
>      >
>      > But honestly, the fact that they call the problem "network quality" suggests that they REALLY, REALLY don't understand the Internet isn't the hardware or
>     the routers or even the routing algorithms *to its users*.
>      >
>      >
>      >
>      > By ignoring the diversity of applications now and in the future, and the fact that we DON'T KNOW what will be coming up, this conference will likely fall
>     into the usual trap that net-heads fall into - optimizing for some imaginary reality that doesn't exist, and in fact will probably never be what users
>     actually will do given the chance.
>      >
>      >
>      >
>      > I saw this issue in 1976 in the group developing the original Internet protocols - a desire to put *into the network* special tricks to optimize ASR33
>     logins to remote computers from terminal concentrators (aka remote login), bulk file transfers between file systems on different time-sharing systems, and
>     "sessions" (virtual circuits) that required logins. And then trying to exploit underlying "multicast" by building it into the IP layer, because someone
>     thought that TV broadcast would be the dominant application.
>      >
>      >
>      >
>      > Frankly, to think of "quality" as something that can be "provided" by "the network" misses the entire point of "end-to-end argument in system design".
>     Quality is not a property defined or created by The Network. If you want to talk about Quality, you need to talk about users - all the users at all times,
>     now and into the future, and that's something you can't do if you don't bother to include current and future users talking about what they might expect to
>     experience that they don't experience.
>      >
>      >
>      >
>      > There was much fighting back in 1976 that basically involved "network experts" saying that the network was the place to "solve" such issues as quality,
>     so applications could avoid having to solve such issues.
>      >
>      >
>      >
>      > What some of us managed to do was to argue that you can't "solve" such issues. All you can do is provide a framework that enables different uses to
>     *cooperate* in some way.
>      >
>      >
>      >
>      > Which is why the Internet drops packets rather than queueing them, and why diffserv cannot work.
>      >
>      > (I know the latter is controversial, but at the moment, ALL of diffserv attempts to talk about end-to-end application specific metrics, but never, ever
>     explains what the diffserv control points actually do w.r.t. what the IP layer can actually control. So it is meaningless - another violation of the
>     so-called end-to-end principle).
>      >
>      >
>      >
>      > Networks are about getting packets from here to there, multiplexing the underlying resources. That's it. Quality is a whole different thing. Quality can
>     be improved by end-to-end approaches, if the underlying network provides some kind of thing that actually creates a way for end-to-end applications to
>     affect queueing and routing decisions, and more importantly getting "telemetry" from the network regarding what is actually going on with the other
>     end-to-end users sharing the infrastructure.
>      >
>      >
>      >
>      > This conference won't talk about it this way. So don't waste your time.
>      >
>      >
>      >
>      >
>      >
>      >
>      >
>      > On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> said:
>      >
>      > > The program committee members are *amazing*. Perhaps, finally, we can
>      > > move the bar for the internet's quality metrics past endless, blind
>      > > repetitions of speedtest.
>      > >
>      > > For complete details, please see:
>      > > https://www.iab.org/activities/workshops/network-quality/ <https://www.iab.org/activities/workshops/network-quality/>
>      > >
>      > > Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
>      > > Invitations Issued by: Monday 16th August 2021
>      > >
>      > > Workshop Date: This will be a virtual workshop, spread over three days:
>      > >
>      > > 1400-1800 UTC Tue 14th September 2021
>      > > 1400-1800 UTC Wed 15th September 2021
>      > > 1400-1800 UTC Thu 16th September 2021
>      > >
>      > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
>      > >
>      > > The Program Committee members:
>      > >
>      > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
>      > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
>      > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
>      > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
>      > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
>      > >
>      > > Send Submissions to: network-quality-workshop-pc@iab.org <mailto:network-quality-workshop-pc@iab.org>.
>      > >
>      > > Position papers from academia, industry, the open source community and
>      > > others that focus on measurements, experiences, observations and
>      > > advice for the future are welcome. Papers that reflect experience
>      > > based on deployed services are especially welcome. The organizers
>      > > understand that specific actions taken by operators are unlikely to be
>      > > discussed in detail, so papers discussing general categories of
>      > > actions and issues without naming specific technologies, products, or
>      > > other players in the ecosystem are expected. Papers should not focus
>      > > on specific protocol solutions.
>      > >
>      > > The workshop will be by invitation only. Those wishing to attend
>      > > should submit a position paper to the address above; it may take the
>      > > form of an Internet-Draft.
>      > >
>      > > All inputs submitted and considered relevant will be published on the
>      > > workshop website. The organisers will decide whom to invite based on
>      > > the submissions received. Sessions will be organized according to
>      > > content, and not every accepted submission or invited attendee will
>      > > have an opportunity to present as the intent is to foster discussion
>      > > and not simply to have a sequence of presentations.
>      > >
>      > > Position papers from those not planning to attend the virtual sessions
>      > > themselves are also encouraged. A workshop report will be published
>      > > afterwards.
>      > >
>      > > Overview:
>      > >
>      > > "We believe that one of the major factors behind this lack of progress
>      > > is the popular perception that throughput is the often sole measure of
>      > > the quality of Internet connectivity. With such narrow focus, people
>      > > don’t consider questions such as:
>      > >
>      > > What is the latency under typical working conditions?
>      > > How reliable is the connectivity across longer time periods?
>      > > Does the network allow the use of a broad range of protocols?
>      > > What services can be run by clients of the network?
>      > > What kind of IPv4, NAT or IPv6 connectivity is offered, and are there firewalls?
>      > > What security mechanisms are available for local services, such as DNS?
>      > > To what degree are the privacy, confidentiality, integrity and
>      > > authenticity of user communications guarded?
>      > >
>      > > Improving these aspects of network quality will likely depend on
>      > > measurement and exposing metrics to all involved parties, including to
>      > > end users in a meaningful way. Such measurements and exposure of the
>      > > right metrics will allow service providers and network operators to
>      > > focus on the aspects that impacts the users’ experience most and at
>      > > the same time empowers users to choose the Internet service that will
>      > > give them the best experience."
>      > >
>      > >
>      > > --
>      > > Latest Podcast:
>      > > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
>      > >
>      > > Dave Täht CTO, TekLibre, LLC
>      > > _______________________________________________
>      > > Cerowrt-devel mailing list
>      > > Cerowrt-devel@lists.bufferbloat.net <mailto:Cerowrt-devel@lists.bufferbloat.net>
>      > > https://lists.bufferbloat.net/listinfo/cerowrt-devel <https://lists.bufferbloat.net/listinfo/cerowrt-devel>
>      > >
> 
> 
> 
>     --
>     Latest Podcast:
>     https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> 
>     Dave Täht CTO, TekLibre, LLC
>     _______________________________________________
>     Make-wifi-fast mailing list
>     Make-wifi-fast@lists.bufferbloat.net <mailto:Make-wifi-fast@lists.bufferbloat.net>
>     https://lists.bufferbloat.net/listinfo/make-wifi-fast <https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> 
> 
> 
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 


-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-06 13:46       ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
@ 2021-07-06 20:43         ` Bob McMahon
  2021-07-06 21:24           ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
  2021-07-08 19:38         ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " David P. Reed
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-07-06 20:43 UTC (permalink / raw)
  To: Ben Greear
  Cc: Dave Taht, starlink, Make-Wifi-fast, David P. Reed, Cake List,
	codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 14576 bytes --]

The four-port attenuator part would be more interesting to me if it also
had solid-state phase shifters.  That would allow testing 2x2 MIMO by
affecting the spatial-stream eigenvectors/eigenvalues.
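
As a rough illustration of why the phase matters (a hypothetical numpy
sketch with made-up path gains, not anyone's product code): in a fully
cabled 2x2 setup each TX->RX path is its own cable, so a phase shifter in
one path rotates a single entry of the channel matrix H, and that changes
the eigenvalues of H^H H -- the per-spatial-stream gains a MIMO test would
want to sweep.

import numpy as np

# Made-up complex per-path gains for a cabled 2x2 MIMO setup.
H0 = np.array([[1.0 + 0j, 0.6 + 0j],
               [0.5 + 0j, 0.9 + 0j]])

def stream_gains(phase_deg):
    # Eigenvalues of H^H H with a phase shift inserted on the (2,2) path.
    H = H0.copy()
    H[1, 1] *= np.exp(1j * np.deg2rad(phase_deg))
    return np.linalg.eigvalsh(H.conj().T @ H)

for deg in (0, 90, 180):
    print(deg, "deg ->", stream_gains(deg))

# At 0 deg the two streams are very unbalanced (about 0.16 vs 2.26); at
# 180 deg they are nearly equal (about 1.06 vs 1.37).  Sweeping the phase
# exercises quite different spatial conditions through the same
# attenuation settings, which pure attenuators cannot do.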

Bob

PS. The price per port isn't competitive. Probably a good idea to survey
the market competition.

On Tue, Jul 6, 2021 at 6:46 AM Ben Greear <greearb@candelatech.com> wrote:

> Hello,
>
> I am interested to hear wish lists for network testing features.  We make
> test equipment, supporting lots
> of wifi stations and a distributed architecture, with built-in udp, tcp,
> ipv6, http, ... protocols,
> and open to creating/improving some of our automated tests.
>
> I know Dave has some test scripts already, so I'm not necessarily looking
> to reimplement that,
> but more fishing for other/new ideas.
>
> Thanks,
> Ben
>
> On 7/2/21 4:28 PM, Bob McMahon wrote:
> > I think we need the language of math here. It seems like the network
> power metric, introduced by Kleinrock and Jaffe in the late 70s, is
> something useful.
> > Effective end/end queue depths per Little's law also seem useful. Both
> are available in iperf 2 from a test perspective. Repurposing test
> techniques to actual
> > traffic could be useful. Hence the question around what exact telemetry
> is useful to apps making socket write() and read() calls.
> >
> > Bob
> >
> > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com <mailto:
> dave.taht@gmail.com>> wrote:
> >
> >     In terms of trying to find "Quality" I have tried to encourage folk
> to
> >     both read "zen and the art of motorcycle maintenance"[0], and
> Deming's
> >     work on "total quality management".
> >
> >     My own slice at this network, computer and lifestyle "issue" is
> aiming
> >     for "imperceptible latency" in all things. [1]. There's a lot of
> >     fallout from that in terms of not just addressing queuing delay, but
> >     caching, prefetching, and learning more about what a user really
> needs
> >     (as opposed to wants) to know via intelligent agents.
> >
> >     [0] If you want to get depressed, read Pirsig's successor to
> "zen...",
> >     lila, which is in part about what happens when an engineer hits an
> >     insoluble problem.
> >     [1] https://www.internetsociety.org/events/latency2013/ <
> https://www.internetsociety.org/events/latency2013/>
> >
> >
> >
> >     On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
> <mailto:dpreed@deepplum.com>> wrote:
> >      >
> >      > Well, nice that the folks doing the conference  are willing to
> consider that quality of user experience has little to do with signalling
> rate at the
> >     physical layer or throughput of FTP transfers.
> >      >
> >      >
> >      >
> >      > But honestly, the fact that they call the problem "network
> quality" suggests that they REALLY, REALLY don't understand the Internet
> isn't the hardware or
> >     the routers or even the routing algorithms *to its users*.
> >      >
> >      >
> >      >
> >      > By ignoring the diversity of applications now and in the future,
> and the fact that we DON'T KNOW what will be coming up, this conference
> will likely fall
> >     into the usual trap that net-heads fall into - optimizing for some
> imaginary reality that doesn't exist, and in fact will probably never be
> what users
> >     actually will do given the chance.
> >      >
> >      >
> >      >
> >      > I saw this issue in 1976 in the group developing the original
> Internet protocols - a desire to put *into the network* special tricks to
> optimize ASR33
> >     logins to remote computers from terminal concentrators (aka remote
> login), bulk file transfers between file systems on different time-sharing
> systems, and
> >     "sessions" (virtual circuits) that required logins. And then trying
> to exploit underlying "multicast" by building it into the IP layer, because
> someone
> >     thought that TV broadcast would be the dominant application.
> >      >
> >      >
> >      >
> >      > Frankly, to think of "quality" as something that can be
> "provided" by "the network" misses the entire point of "end-to-end argument
> in system design".
> >     Quality is not a property defined or created by The Network. If you
> want to talk about Quality, you need to talk about users - all the users at
> all times,
> >     now and into the future, and that's something you can't do if you
> don't bother to include current and future users talking about what they
> might expect to
> >     experience that they don't experience.
> >      >
> >      >
> >      >
> >      > There was much fighting back in 1976 that basically involved
> "network experts" saying that the network was the place to "solve" such
> issues as quality,
> >     so applications could avoid having to solve such issues.
> >      >
> >      >
> >      >
> >      > What some of us managed to do was to argue that you can't "solve"
> such issues. All you can do is provide a framework that enables different
> uses to
> >     *cooperate* in some way.
> >      >
> >      >
> >      >
> >      > Which is why the Internet drops packets rather than queueing
> them, and why diffserv cannot work.
> >      >
> >      > (I know the latter is controversial, but at the moment, ALL of
> diffserv attempts to talk about end-to-end application specific metrics,
> but never, ever
> >     explains what the diffserv control points actually do w.r.t. what
> the IP layer can actually control. So it is meaningless - another violation
> of the
> >     so-called end-to-end principle).
> >      >
> >      >
> >      >
> >      > Networks are about getting packets from here to there,
> multiplexing the underlying resources. That's it. Quality is a whole
> different thing. Quality can
> >     be improved by end-to-end approaches, if the underlying network
> provides some kind of thing that actually creates a way for end-to-end
> applications to
> >     affect queueing and routing decisions, and more importantly getting
> "telemetry" from the network regarding what is actually going on with the
> other
> >     end-to-end users sharing the infrastructure.
> >      >
> >      >
> >      >
> >      > This conference won't talk about it this way. So don't waste your
> time.
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >      > On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <
> dave.taht@gmail.com <mailto:dave.taht@gmail.com>> said:
> >      >
> >      > > The program committee members are *amazing*. Perhaps, finally,
> we can
> >      > > move the bar for the internet's quality metrics past endless,
> blind
> >      > > repetitions of speedtest.
> >      > >
> >      > > For complete details, please see:
> >      > > https://www.iab.org/activities/workshops/network-quality/ <
> https://www.iab.org/activities/workshops/network-quality/>
> >      > >
> >      > > Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere
> On Earth)
> >      > > Invitations Issued by: Monday 16th August 2021
> >      > >
> >      > > Workshop Date: This will be a virtual workshop, spread over
> three days:
> >      > >
> >      > > 1400-1800 UTC Tue 14th September 2021
> >      > > 1400-1800 UTC Wed 15th September 2021
> >      > > 1400-1800 UTC Thu 16th September 2021
> >      > >
> >      > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> >      > >
> >      > > The Program Committee members:
> >      > >
> >      > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
> >      > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
> Geoff
> >      > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
> Kuehlewind,
> >      > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
> >      > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> >      > >
> >      > > Send Submissions to: network-quality-workshop-pc@iab.org
> <mailto:network-quality-workshop-pc@iab.org>.
> >      > >
> >      > > Position papers from academia, industry, the open source
> community and
> >      > > others that focus on measurements, experiences, observations and
> >      > > advice for the future are welcome. Papers that reflect
> experience
> >      > > based on deployed services are especially welcome. The
> organizers
> >      > > understand that specific actions taken by operators are
> unlikely to be
> >      > > discussed in detail, so papers discussing general categories of
> >      > > actions and issues without naming specific technologies,
> products, or
> >      > > other players in the ecosystem are expected. Papers should not
> focus
> >      > > on specific protocol solutions.
> >      > >
> >      > > The workshop will be by invitation only. Those wishing to attend
> >      > > should submit a position paper to the address above; it may
> take the
> >      > > form of an Internet-Draft.
> >      > >
> >      > > All inputs submitted and considered relevant will be published
> on the
> >      > > workshop website. The organisers will decide whom to invite
> based on
> >      > > the submissions received. Sessions will be organized according
> to
> >      > > content, and not every accepted submission or invited attendee
> will
> >      > > have an opportunity to present as the intent is to foster
> discussion
> >      > > and not simply to have a sequence of presentations.
> >      > >
> >      > > Position papers from those not planning to attend the virtual
> sessions
> >      > > themselves are also encouraged. A workshop report will be
> published
> >      > > afterwards.
> >      > >
> >      > > Overview:
> >      > >
> >      > > "We believe that one of the major factors behind this lack of
> progress
> >      > > is the popular perception that throughput is the often sole
> measure of
> >      > > the quality of Internet connectivity. With such narrow focus,
> people
> >      > > don’t consider questions such as:
> >      > >
> >      > > What is the latency under typical working conditions?
> >      > > How reliable is the connectivity across longer time periods?
> >      > > Does the network allow the use of a broad range of protocols?
> >      > > What services can be run by clients of the network?
> >      > > What kind of IPv4, NAT or IPv6 connectivity is offered, and are
> there firewalls?
> >      > > What security mechanisms are available for local services, such
> as DNS?
> >      > > To what degree are the privacy, confidentiality, integrity and
> >      > > authenticity of user communications guarded?
> >      > >
> >      > > Improving these aspects of network quality will likely depend on
> >      > > measurement and exposing metrics to all involved parties,
> including to
> >      > > end users in a meaningful way. Such measurements and exposure
> of the
> >      > > right metrics will allow service providers and network
> operators to
> >      > > focus on the aspects that impacts the users’ experience most
> and at
> >      > > the same time empowers users to choose the Internet service
> that will
> >      > > give them the best experience."
> >      > >
> >      > >
> >      > > --
> >      > > Latest Podcast:
> >      > >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >
> >      > >
> >      > > Dave Täht CTO, TekLibre, LLC
> >      > > _______________________________________________
> >      > > Cerowrt-devel mailing list
> >      > > Cerowrt-devel@lists.bufferbloat.net <mailto:
> Cerowrt-devel@lists.bufferbloat.net>
> >      > > https://lists.bufferbloat.net/listinfo/cerowrt-devel <
> https://lists.bufferbloat.net/listinfo/cerowrt-devel>
> >      > >
> >
> >
> >
> >     --
> >     Latest Podcast:
> >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >
> >
> >     Dave Täht CTO, TekLibre, LLC
> >     _______________________________________________
> >     Make-wifi-fast mailing list
> >     Make-wifi-fast@lists.bufferbloat.net <mailto:
> Make-wifi-fast@lists.bufferbloat.net>
> >     https://lists.bufferbloat.net/listinfo/make-wifi-fast <
> https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> >
> >
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>


[-- Attachment #1.2: Type: text/html, Size: 19631 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-06 20:43         ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-07-06 21:24           ` Ben Greear
  2021-07-06 22:05             ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Ben Greear @ 2021-07-06 21:24 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Dave Taht, starlink, Make-Wifi-fast, David P. Reed, Cake List,
	codel, cerowrt-devel, bloat

We tried adding in an external Butler matrix in the past, but did not notice any useful difference.  Possibly
we didn't have the right use case.

Typically we are competitive on price for full testing solutions, but you can get stand-alone attenuators
cheaper from specialized vendors.  Happy to discuss pricing offlist if you wish.

Thanks,
Ben

On 7/6/21 1:43 PM, Bob McMahon wrote:
> The four-port attenuator part would be more interesting to me if it also had solid-state phase shifters.  That would allow testing 2x2 MIMO by 
> affecting the spatial-stream eigenvectors/eigenvalues.
> 
> Bob
> 
> PS. The price per port isn't competitive. Probably a good idea to survey the market competition.
> 
> On Tue, Jul 6, 2021 at 6:46 AM Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>> wrote:
> 
>     Hello,
> 
>     I am interested to hear wish lists for network testing features.  We make test equipment, supporting lots
>     of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6, http, ... protocols,
>     and open to creating/improving some of our automated tests.
> 
>     I know Dave has some test scripts already, so I'm not necessarily looking to reimplement that,
>     but more fishing for other/new ideas.
> 
>     Thanks,
>     Ben
> 
>     On 7/2/21 4:28 PM, Bob McMahon wrote:
>      > I think we need the language of math here. It seems like the network power metric, introduced by Kleinrock and Jaffe in the late 70s, is something useful.
>      > Effective end/end queue depths per Little's law also seem useful. Both are available in iperf 2 from a test perspective. Repurposing test techniques to
>     actual
>      > traffic could be useful. Hence the question around what exact telemetry is useful to apps making socket write() and read() calls.
>      >
>      > Bob
>      >
>      > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com> <mailto:dave.taht@gmail.com <mailto:dave.taht@gmail.com>>> wrote:
>      >
>      >     In terms of trying to find "Quality" I have tried to encourage folk to
>      >     both read "zen and the art of motorcycle maintenance"[0], and Deming's
>      >     work on "total quality management".
>      >
>      >     My own slice at this network, computer and lifestyle "issue" is aiming
>      >     for "imperceptible latency" in all things. [1]. There's a lot of
>      >     fallout from that in terms of not just addressing queuing delay, but
>      >     caching, prefetching, and learning more about what a user really needs
>      >     (as opposed to wants) to know via intelligent agents.
>      >
>      >     [0] If you want to get depressed, read Pirsig's successor to "zen...",
>      >     lila, which is in part about what happens when an engineer hits an
>      >     insoluble problem.
>      >     [1] https://www.internetsociety.org/events/latency2013/ <https://www.internetsociety.org/events/latency2013/>
>      >
>      >
>      >
>      >     On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:dpreed@deepplum.com
>     <mailto:dpreed@deepplum.com>>> wrote:
>      >      >
>      >      > Well, nice that the folks doing the conference  are willing to consider that quality of user experience has little to do with signalling rate at the
>      >     physical layer or throughput of FTP transfers.
>      >      >
>      >      >
>      >      >
>      >      > But honestly, the fact that they call the problem "network quality" suggests that they REALLY, REALLY don't understand the Internet isn't the
>     hardware or
>      >     the routers or even the routing algorithms *to its users*.
>      >      >
>      >      >
>      >      >
>      >      > By ignoring the diversity of applications now and in the future, and the fact that we DON'T KNOW what will be coming up, this conference will
>     likely fall
>      >     into the usual trap that net-heads fall into - optimizing for some imaginary reality that doesn't exist, and in fact will probably never be what users
>      >     actually will do given the chance.
>      >      >
>      >      >
>      >      >
>      >      > I saw this issue in 1976 in the group developing the original Internet protocols - a desire to put *into the network* special tricks to optimize ASR33
>      >     logins to remote computers from terminal concentrators (aka remote login), bulk file transfers between file systems on different time-sharing
>     systems, and
>      >     "sessions" (virtual circuits) that required logins. And then trying to exploit underlying "multicast" by building it into the IP layer, because someone
>      >     thought that TV broadcast would be the dominant application.
>      >      >
>      >      >
>      >      >
>      >      > Frankly, to think of "quality" as something that can be "provided" by "the network" misses the entire point of "end-to-end argument in system design".
>      >     Quality is not a property defined or created by The Network. If you want to talk about Quality, you need to talk about users - all the users at all
>     times,
>      >     now and into the future, and that's something you can't do if you don't bother to include current and future users talking about what they might
>     expect to
>      >     experience that they don't experience.
>      >      >
>      >      >
>      >      >
>      >      > There was much fighting back in 1976 that basically involved "network experts" saying that the network was the place to "solve" such issues as
>     quality,
>      >     so applications could avoid having to solve such issues.
>      >      >
>      >      >
>      >      >
>      >      > What some of us managed to do was to argue that you can't "solve" such issues. All you can do is provide a framework that enables different uses to
>      >     *cooperate* in some way.
>      >      >
>      >      >
>      >      >
>      >      > Which is why the Internet drops packets rather than queueing them, and why diffserv cannot work.
>      >      >
>      >      > (I know the latter is controversial, but at the moment, ALL of diffserv attempts to talk about end-to-end application specific metrics, but
>     never, ever
>      >     explains what the diffserv control points actually do w.r.t. what the IP layer can actually control. So it is meaningless - another violation of the
>      >     so-called end-to-end principle).
>      >      >
>      >      >
>      >      >
>      >      > Networks are about getting packets from here to there, multiplexing the underlying resources. That's it. Quality is a whole different thing.
>     Quality can
>      >     be improved by end-to-end approaches, if the underlying network provides some kind of thing that actually creates a way for end-to-end applications to
>      >     affect queueing and routing decisions, and more importantly getting "telemetry" from the network regarding what is actually going on with the other
>      >     end-to-end users sharing the infrastructure.
>      >      >
>      >      >
>      >      >
>      >      > This conference won't talk about it this way. So don't waste your time.
>      >      >
>      >      >
>      >      >
>      >      >
>      >      >
>      >      >
>      >      >
>      >      > On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <dave.taht@gmail.com <mailto:dave.taht@gmail.com> <mailto:dave.taht@gmail.com
>     <mailto:dave.taht@gmail.com>>> said:
>      >      >
>      >      > > The program committee members are *amazing*. Perhaps, finally, we can
>      >      > > move the bar for the internet's quality metrics past endless, blind
>      >      > > repetitions of speedtest.
>      >      > >
>      >      > > For complete details, please see:
>      >      > > https://www.iab.org/activities/workshops/network-quality/ <https://www.iab.org/activities/workshops/network-quality/>
>      >      > >
>      >      > > Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
>      >      > > Invitations Issued by: Monday 16th August 2021
>      >      > >
>      >      > > Workshop Date: This will be a virtual workshop, spread over three days:
>      >      > >
>      >      > > 1400-1800 UTC Tue 14th September 2021
>      >      > > 1400-1800 UTC Wed 15th September 2021
>      >      > > 1400-1800 UTC Thu 16th September 2021
>      >      > >
>      >      > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
>      >      > >
>      >      > > The Program Committee members:
>      >      > >
>      >      > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
>      >      > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
>      >      > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
>      >      > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen Nichols,
>      >      > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
>      >      > >
>      >      > > Send Submissions to: network-quality-workshop-pc@iab.org <mailto:network-quality-workshop-pc@iab.org>
>     <mailto:network-quality-workshop-pc@iab.org <mailto:network-quality-workshop-pc@iab.org>>.
>      >      > >
>      >      > > Position papers from academia, industry, the open source community and
>      >      > > others that focus on measurements, experiences, observations and
>      >      > > advice for the future are welcome. Papers that reflect experience
>      >      > > based on deployed services are especially welcome. The organizers
>      >      > > understand that specific actions taken by operators are unlikely to be
>      >      > > discussed in detail, so papers discussing general categories of
>      >      > > actions and issues without naming specific technologies, products, or
>      >      > > other players in the ecosystem are expected. Papers should not focus
>      >      > > on specific protocol solutions.
>      >      > >
>      >      > > The workshop will be by invitation only. Those wishing to attend
>      >      > > should submit a position paper to the address above; it may take the
>      >      > > form of an Internet-Draft.
>      >      > >
>      >      > > All inputs submitted and considered relevant will be published on the
>      >      > > workshop website. The organisers will decide whom to invite based on
>      >      > > the submissions received. Sessions will be organized according to
>      >      > > content, and not every accepted submission or invited attendee will
>      >      > > have an opportunity to present as the intent is to foster discussion
>      >      > > and not simply to have a sequence of presentations.
>      >      > >
>      >      > > Position papers from those not planning to attend the virtual sessions
>      >      > > themselves are also encouraged. A workshop report will be published
>      >      > > afterwards.
>      >      > >
>      >      > > Overview:
>      >      > >
>      >      > > "We believe that one of the major factors behind this lack of progress
>      >      > > is the popular perception that throughput is the often sole measure of
>      >      > > the quality of Internet connectivity. With such narrow focus, people
>      >      > > don’t consider questions such as:
>      >      > >
>      >      > > What is the latency under typical working conditions?
>      >      > > How reliable is the connectivity across longer time periods?
>      >      > > Does the network allow the use of a broad range of protocols?
>      >      > > What services can be run by clients of the network?
>      >      > > What kind of IPv4, NAT or IPv6 connectivity is offered, and are there firewalls?
>      >      > > What security mechanisms are available for local services, such as DNS?
>      >      > > To what degree are the privacy, confidentiality, integrity and
>      >      > > authenticity of user communications guarded?
>      >      > >
>      >      > > Improving these aspects of network quality will likely depend on
>      >      > > measurement and exposing metrics to all involved parties, including to
>      >      > > end users in a meaningful way. Such measurements and exposure of the
>      >      > > right metrics will allow service providers and network operators to
>      >      > > focus on the aspects that impacts the users’ experience most and at
>      >      > > the same time empowers users to choose the Internet service that will
>      >      > > give them the best experience."
>      >      > >
>      >      > >


-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-06 21:24           ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
@ 2021-07-06 22:05             ` Bob McMahon
  2021-07-07 13:34               ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-07-06 22:05 UTC (permalink / raw)
  To: Ben Greear
  Cc: Dave Taht, starlink, Make-Wifi-fast, David P. Reed, Cake List,
	codel, cerowrt-devel, bloat



Sorry, I should have been more clear. Not a fixed Butler matrix, but a
device with solid state, programmable phase shifters covering 0 to 360
degrees. It's a way to create multiple PHY channels and to vary the
off-diagonal elements of a MIMO H-matrix using conducted parts, so that
automation software can run more robust, reproducible RF MIMO test
scenarios.

https://web.stanford.edu/~dntse/Chapters_PDF/Fundamentals_Wireless_Communication_chapter7.pdf
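
To make that concrete, here is a rough, purely illustrative sketch (NumPy,
made-up coupling value, no real hardware or vendor API) of how a programmable
phase on one cross path changes the condition number of a 2x2 H-matrix:

import numpy as np

def h_matrix(coupling, phase_deg):
    # Hypothetical conducted 2x2 setup: direct paths fixed at 1, cross paths
    # with magnitude `coupling`, one cross path behind a programmable shifter.
    phi = np.deg2rad(phase_deg)
    return np.array([[1.0, coupling * np.exp(1j * phi)],
                     [coupling, 1.0]])

for phase in (0, 45, 90, 135, 180):
    H = h_matrix(coupling=0.8, phase_deg=phase)
    s = np.linalg.svd(H, compute_uv=False)  # singular values of H
    print(f"phase={phase:3d} deg  condition number = {s[0] / s[-1]:.2f}")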

Bob

On Tue, Jul 6, 2021 at 2:24 PM Ben Greear <greearb@candelatech.com> wrote:

> We tried adding in an external butler matrix in the past, but could not
> notice any useful difference.  Possibly
> we didn't have the right use case.
>
> Typically we are competitive on price for full testing solutions, but you
> can get stand-alone attenuators
> cheaper from specialized vendors.  Happy to discuss pricing offlist if you
> wish.
>
> Thanks,
> Ben
>
> On 7/6/21 1:43 PM, Bob McMahon wrote:
> > The four-port attenuator would be more interesting to me if it also had
> > solid state phase shifters.  This allows for testing 2x2 MIMO by affecting
> > the spatial stream eigenvectors/values.
> >
> > Bob
> >
> > PS. The price per port isn't competitive. Probably a good idea to survey
> the market competition.
> >
> > On Tue, Jul 6, 2021 at 6:46 AM Ben Greear <greearb@candelatech.com> wrote:
> >
> >     Hello,
> >
> >     I am interested to hear wish lists for network testing features.  We
> >     make test equipment, supporting lots of wifi stations and a distributed
> >     architecture, with built-in udp, tcp, ipv6, http, ... protocols, and open
> >     to creating/improving some of our automated tests.
> >
> >     I know Dave has some test scripts already, so I'm not necessarily looking
> >     to reimplement that, but more fishing for other/new ideas.
> >
> >     Thanks,
> >     Ben
> >
> >     On 7/2/21 4:28 PM, Bob McMahon wrote:
> >      > I think we need the language of math here. It seems like the network
> >      > power metric, introduced by Kleinrock and Jaffe in the late 70s, is
> >      > something useful. Effective end/end queue depths per Little's law also
> >      > seem useful. Both are available in iperf 2 from a test perspective.
> >      > Repurposing test techniques to actual traffic could be useful. Hence
> >      > the question around what exact telemetry is useful to apps making
> >      > socket write() and read() calls.
> >      >
> >      > Bob
> >      >
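
(As a rough aside on the two metrics quoted just above, with made-up numbers
rather than real iperf 2 output: "network power" is throughput divided by
delay, and Little's law gives the average number of packets in flight as
arrival rate times time in system.)

def network_power(throughput_bps, delay_s):
    # Kleinrock-style "power": higher throughput and lower delay both help.
    return throughput_bps / delay_s

def littles_law_occupancy(arrival_rate_pps, avg_delay_s):
    # Little's law: L = lambda * W, the average packets queued or in flight.
    return arrival_rate_pps * avg_delay_s

print(network_power(100e6, 0.020))          # 100 Mbit/s at 20 ms delay
print(littles_law_occupancy(8000, 0.020))   # 8000 pkt/s held 20 ms -> 160 pkts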
> >      >     On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com> wrote:
> >      >
> >      >     In terms of trying to find "Quality" I have tried to encourage
> >      >     folk to both read "zen and the art of motorcycle maintenance"[0],
> >      >     and Deming's work on "total quality management".
> >      >
> >      >     My own slice at this network, computer and lifestyle "issue" is
> >      >     aiming for "imperceptible latency" in all things. [1]. There's a
> >      >     lot of fallout from that in terms of not just addressing queuing
> >      >     delay, but caching, prefetching, and learning more about what a
> >      >     user really needs (as opposed to wants) to know via intelligent
> >      >     agents.
> >      >
> >      >     [0] If you want to get depressed, read Pirsig's successor to
> >      >     "zen...", lila, which is in part about what happens when an
> >      >     engineer hits an insoluble problem.
> >      >     [1] https://www.internetsociety.org/events/latency2013/
> >      >
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>
>


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-06 22:05             ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-07-07 13:34               ` Ben Greear
  2021-07-07 19:19                 ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Ben Greear @ 2021-07-07 13:34 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Dave Taht, starlink, Make-Wifi-fast, David P. Reed, Cake List,
	codel, cerowrt-devel, bloat

Thanks for the clarification.  There are vendors that make these, but we have not tried
integrating our software control with any of them.  To date, not many of our customers
have been interested in this, so it did not seem worth the effort.

Do you see this as being useful for normal AP vendors/users, or mostly just radio manufacturers such
as BCM?

In case someone has one of these that has a sane API, I'd consider adding automation support
to drive it while running throughput or RvR or whatever other types of tests seem interesting.
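
A rough sketch of what that loop could look like; the phase-shifter and
traffic-generator objects below are placeholders for whatever API such a
device ends up exposing, not an existing one:

import csv

def sweep_phase_vs_throughput(shifter, traffic, phases_deg,
                              out_csv="phase_vs_throughput.csv"):
    # For each programmable phase setting, run a fixed-length throughput test
    # and log the result so the RF MIMO scenario stays reproducible.
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["phase_deg", "throughput_mbps"])
        for phase in phases_deg:
            shifter.set_phase(phase)           # placeholder device call
            mbps = traffic.run_throughput(30)  # placeholder 30 second test
            writer.writerow([phase, mbps])

# e.g. sweep_phase_vs_throughput(my_shifter, my_traffic, range(0, 360, 45))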

Thanks,
Ben

On 7/6/21 3:05 PM, Bob McMahon wrote:
> Sorry, I should have been more clear. Not a fixed Butler matrix, but a device with solid state, programmable phase shifters covering 0 to 360 degrees. It's a
> way to create multiple PHY channels and to vary the off-diagonal elements of a MIMO H-matrix using conducted parts, so that automation software can run more
> robust, reproducible RF MIMO test scenarios.
> 
> https://web.stanford.edu/~dntse/Chapters_PDF/Fundamentals_Wireless_Communication_chapter7.pdf
> 
> Bob
> 


-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-07 13:34               ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
@ 2021-07-07 19:19                 ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-07 19:19 UTC (permalink / raw)
  To: Ben Greear
  Cc: Dave Taht, starlink, Make-Wifi-fast, David P. Reed, Cake List,
	codel, cerowrt-devel, bloat



I can't speak for others. I've been successful in early prototyping using
one for simplistic off-diagonal H-matrix testing, i.e. varying the H-matrix
condition numbers. I see this as a small step in the right direction for
conducted, automated, and reproducible testing.
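
For intuition on why the condition number matters, a toy calculation in the
spirit of the Tse/Viswanath chapter linked earlier (illustrative matrices
only): two 2x2 channels with similar per-path gain but different condition
numbers give very different capacities.

import numpy as np

def mimo_capacity_bits_per_hz(H, snr_linear):
    # Equal power per transmit stream: C = log2 det(I + (SNR/nt) * H * H^H).
    n_r, n_t = H.shape
    return float(np.real(np.log2(np.linalg.det(
        np.eye(n_r) + (snr_linear / n_t) * (H @ H.conj().T)))))

H_good = np.array([[1.0, 0.1], [0.1, 1.0]])  # condition number ~1.2
H_bad = np.array([[1.0, 0.9], [0.9, 1.0]])   # condition number ~19
for name, H in (("well-conditioned", H_good), ("ill-conditioned", H_bad)):
    print(name, round(mimo_capacity_bits_per_hz(H, snr_linear=100.0), 1), "bit/s/Hz")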

Developing a system that supports multiple RF channels to test against
seems hard. On top of that, driving the energy states per "peer RF
traffic" makes the challenge even more difficult.

Bob



On Wed, Jul 7, 2021 at 6:36 AM Ben Greear <greearb@candelatech.com> wrote:

> Thanks for the clarification.  There are vendors that make these, but we
> have not tried
> integrating our software control with any of them.  To date, not many of
> our customers
> have been interested in this, so it did not seem worth the effort.
>
> Do you see this as being useful for normal AP vendors/users, or mostly
> just radio manufacturers such
> as BCM?
>
> In case someone has one of these that has a sane API, I'd consider adding
> automation support
> to drive it while running throughput or RvR or whatever other types of
> tests seem interesting.
>
> Thanks,
> Ben
>
> >      >      >      > On Wednesday, June 30, 2021 8:12pm, "Dave Taht" <
> dave.taht@gmail.com <mailto:dave.taht@gmail.com> <mailto:
> dave.taht@gmail.com
> >     <mailto:dave.taht@gmail.com>> <mailto:dave.taht@gmail.com <mailto:
> dave.taht@gmail.com>
> >      >     <mailto:dave.taht@gmail.com <mailto:dave.taht@gmail.com>>>>
> said:
> >      >      >      >
> >      >      >      > > The program committee members are *amazing*.
> Perhaps, finally, we can
> >      >      >      > > move the bar for the internet's quality metrics
> past endless, blind
> >      >      >      > > repetitions of speedtest.
> >      >      >      > >
> >      >      >      > > For complete details, please see:
> >      >      >      > >
> https://www.iab.org/activities/workshops/network-quality/ <
> https://www.iab.org/activities/workshops/network-quality/>
> >     <https://www.iab.org/activities/workshops/network-quality/ <
> https://www.iab.org/activities/workshops/network-quality/>>
> >      >      >      > >
> >      >      >      > > Submissions Due: Monday 2nd August 2021, midnight
> AOE (Anywhere On Earth)
> >      >      >      > > Invitations Issued by: Monday 16th August 2021
> >      >      >      > >
> >      >      >      > > Workshop Date: This will be a virtual workshop,
> spread over three days:
> >      >      >      > >
> >      >      >      > > 1400-1800 UTC Tue 14th September 2021
> >      >      >      > > 1400-1800 UTC Wed 15th September 2021
> >      >      >      > > 1400-1800 UTC Thu 16th September 2021
> >      >      >      > >
> >      >      >      > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov,
> Omer Shapira
> >      >      >      > >
> >      >      >      > > The Program Committee members:
> >      >      >      > >
> >      >      >      > > Jari Arkko, Olivier Bonaventure, Vint Cerf,
> Stuart Cheshire, Sam
> >      >      >      > > Crowford, Nick Feamster, Jim Gettys, Toke
> Hoiland-Jorgensen, Geoff
> >      >      >      > > Huston, Cullen Jennings, Katarzyna Kosek-Szott,
> Mirja Kuehlewind,
> >      >      >      > > Jason Livingood, Matt Mathias, Randall Meyer,
> Kathleen Nichols,
> >      >      >      > > Christoph Paasch, Tommy Pauly, Greg White, Keith
> Winstein.
> >      >      >      > >
> >      >      >      > > Send Submissions to:
> network-quality-workshop-pc@iab.org <mailto:
> network-quality-workshop-pc@iab.org>
> >     <mailto:network-quality-workshop-pc@iab.org <mailto:
> network-quality-workshop-pc@iab.org>>
> >      >     <mailto:network-quality-workshop-pc@iab.org <mailto:
> network-quality-workshop-pc@iab.org> <mailto:
> network-quality-workshop-pc@iab.org
> >     <mailto:network-quality-workshop-pc@iab.org>>>.
> >      >      >      > >
> >      >      >      > > Position papers from academia, industry, the open
> source community and
> >      >      >      > > others that focus on measurements, experiences,
> observations and
> >      >      >      > > advice for the future are welcome. Papers that
> reflect experience
> >      >      >      > > based on deployed services are especially
> welcome. The organizers
> >      >      >      > > understand that specific actions taken by
> operators are unlikely to be
> >      >      >      > > discussed in detail, so papers discussing general
> categories of
> >      >      >      > > actions and issues without naming specific
> technologies, products, or
> >      >      >      > > other players in the ecosystem are expected.
> Papers should not focus
> >      >      >      > > on specific protocol solutions.
> >      >      >      > >
> >      >      >      > > The workshop will be by invitation only. Those
> wishing to attend
> >      >      >      > > should submit a position paper to the address
> above; it may take the
> >      >      >      > > form of an Internet-Draft.
> >      >      >      > >
> >      >      >      > > All inputs submitted and considered relevant will
> be published on the
> >      >      >      > > workshop website. The organisers will decide whom
> to invite based on
> >      >      >      > > the submissions received. Sessions will be
> organized according to
> >      >      >      > > content, and not every accepted submission or
> invited attendee will
> >      >      >      > > have an opportunity to present as the intent is
> to foster discussion
> >      >      >      > > and not simply to have a sequence of
> presentations.
> >      >      >      > >
> >      >      >      > > Position papers from those not planning to attend
> the virtual sessions
> >      >      >      > > themselves are also encouraged. A workshop report
> will be published
> >      >      >      > > afterwards.
> >      >      >      > >
> >      >      >      > > Overview:
> >      >      >      > >
> >      >      >      > > "We believe that one of the major factors behind
> this lack of progress
> >      >      >      > > is the popular perception that throughput is the
> often sole measure of
> >      >      >      > > the quality of Internet connectivity. With such
> narrow focus, people
> >      >      >      > > don’t consider questions such as:
> >      >      >      > >
> >      >      >      > > What is the latency under typical working
> conditions?
> >      >      >      > > How reliable is the connectivity across longer
> time periods?
> >      >      >      > > Does the network allow the use of a broad range
> of protocols?
> >      >      >      > > What services can be run by clients of the
> network?
> >      >      >      > > What kind of IPv4, NAT or IPv6 connectivity is
> offered, and are there firewalls?
> >      >      >      > > What security mechanisms are available for local
> services, such as DNS?
> >      >      >      > > To what degree are the privacy, confidentiality,
> integrity and
> >      >      >      > > authenticity of user communications guarded?
> >      >      >      > >
> >      >      >      > > Improving these aspects of network quality will
> likely depend on
> >      >      >      > > measurement and exposing metrics to all involved
> parties, including to
> >      >      >      > > end users in a meaningful way. Such measurements
> and exposure of the
> >      >      >      > > right metrics will allow service providers and
> network operators to
> >      >      >      > > focus on the aspects that impacts the users’
> experience most and at
> >      >      >      > > the same time empowers users to choose the
> Internet service that will
> >      >      >      > > give them the best experience."
> >      >      >      > >
> >      >      >      > >
> >      >      >      > > --
> >      >      >      > > Latest Podcast:
> >      >      >      > >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >     <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> >      >     <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >>
> >      >      >      > >
> >      >      >      > > Dave Täht CTO, TekLibre, LLC
> >      >      >      > > _______________________________________________
> >      >      >      > > Cerowrt-devel mailing list
> >      >      >      > > Cerowrt-devel@lists.bufferbloat.net <mailto:
> Cerowrt-devel@lists.bufferbloat.net> <mailto:
> Cerowrt-devel@lists.bufferbloat.net
> >     <mailto:Cerowrt-devel@lists.bufferbloat.net>> <mailto:
> Cerowrt-devel@lists.bufferbloat.net <mailto:
> Cerowrt-devel@lists.bufferbloat.net>
> >      >     <mailto:Cerowrt-devel@lists.bufferbloat.net <mailto:
> Cerowrt-devel@lists.bufferbloat.net>>>
> >      >      >      > >
> https://lists.bufferbloat.net/listinfo/cerowrt-devel <
> https://lists.bufferbloat.net/listinfo/cerowrt-devel>
> >     <https://lists.bufferbloat.net/listinfo/cerowrt-devel <
> https://lists.bufferbloat.net/listinfo/cerowrt-devel>>
> >      >      >      > >
> >      >      >
> >      >      >
> >      >      >
> >      >      >     --
> >      >      >     Latest Podcast:
> >      >      >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >     <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >     <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >>
> >      >      >
> >      >      >     Dave Täht CTO, TekLibre, LLC
> >      >      >     _______________________________________________
> >      >      >     Make-wifi-fast mailing list
> >      >      > Make-wifi-fast@lists.bufferbloat.net <mailto:
> Make-wifi-fast@lists.bufferbloat.net> <mailto:
> Make-wifi-fast@lists.bufferbloat.net
> >     <mailto:Make-wifi-fast@lists.bufferbloat.net>> <mailto:
> Make-wifi-fast@lists.bufferbloat.net <mailto:
> Make-wifi-fast@lists.bufferbloat.net>
> >      >     <mailto:Make-wifi-fast@lists.bufferbloat.net <mailto:
> Make-wifi-fast@lists.bufferbloat.net>>>
> >      >      > https://lists.bufferbloat.net/listinfo/make-wifi-fast <
> https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> >     <https://lists.bufferbloat.net/listinfo/make-wifi-fast <
> https://lists.bufferbloat.net/listinfo/make-wifi-fast>>
> >      >      >
> >      >      >
> >      >      >
> >      >      > _______________________________________________
> >      >      > Starlink mailing list
> >      >      > Starlink@lists.bufferbloat.net <mailto:
> Starlink@lists.bufferbloat.net> <mailto:Starlink@lists.bufferbloat.net
> <mailto:Starlink@lists.bufferbloat.net>>
> >      >      > https://lists.bufferbloat.net/listinfo/starlink <
> https://lists.bufferbloat.net/listinfo/starlink>
> >      >      >
> >      >
> >      >
> >      >     --
> >      >     Ben Greear <greearb@candelatech.com <mailto:
> greearb@candelatech.com> <mailto:greearb@candelatech.com <mailto:
> greearb@candelatech.com>>>
> >      >     Candela Technologies Inc http://www.candelatech.com <
> http://www.candelatech.com>
> >      >
> >      >
> >
> >
> >     --
> >     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
> >>
> >     Candela Technologies Inc http://www.candelatech.com <
> http://www.candelatech.com>
> >
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 39450 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-06 13:46       ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
  2021-07-06 20:43         ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-07-08 19:38         ` David P. Reed
  2021-07-08 22:51           ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  2021-07-09  3:08           ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Leonard Kleinrock
  1 sibling, 2 replies; 108+ messages in thread
From: David P. Reed @ 2021-07-08 19:38 UTC (permalink / raw)
  To: Ben Greear
  Cc: Bob McMahon, Dave Taht, starlink, Make-Wifi-fast, Cake List,
	codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 14486 bytes --]


I will tell you flat out that the arrival time distribution assumption made by Little's Lemma that allows "estimation of queue depth" is totally unreasonable on ANY Internet in practice.
 
The assumption is a Poisson Arrival Process. In reality, traffic arrivals in real internet applications are extremely far from Poisson, and, of course, using TCP windowing, become highly intercorrelated with crossing traffic that shares the same queue.
 
So, as I've tried to tell many, many net-heads (people who ignore applications layer behavior, like the people that think latency doesn't matter to end users, only throughput), end-to-end packet arrival times on a practical network are incredibly far from Poisson - and they are more like fractal probability distributions, very irregular at all scales of time.
 
So, the idea that iperf can estimate queue depth by Little's Lemma by just measuring saturation of capacity of a path is bogus. The less Poisson, the worse the estimate gets, by a huge factor.
 
 
Where does the Poisson assumption come from?  Well, like many theorems, it is the simplest tractable closed form solution - it creates a simplified view, by being a "single-parameter" distribution (the parameter is called lambda for a Poisson distribution).  And the analysis of a simple queue with Poisson arrival distribution and a static, fixed service time is the first interesting Queueing Theory example in most textbooks. It is suggestive of an interesting phenomenon, but it does NOT characterize any real system.
 
It's the queueing theory equivalent of "First, we assume a spherical cow..." in doing an example in a freshman physics class.
 
Unfortunately, most networking engineers understand neither queueing theory nor application networking usage in interactive applications. Which makes them arrogant. They assume all distributions are Poisson!
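
To make this concrete, here is a minimal simulation sketch (illustrative only - not iperf code; the Pareto-gap burst model is just an assumed stand-in for real traffic). It offers the same average load to a single-server queue first as Poisson arrivals and then as heavy-tailed bursts, and compares the measured time-average queue depth against the M/M/1 prediction rho/(1-rho):

import random

def mean_number_in_system(gaps, services):
    """Single-server FIFO queue: return the time-average number in system."""
    arrive = depart = area = 0.0
    for gap, svc in zip(gaps, services):
        arrive += gap
        depart = max(arrive, depart) + svc   # wait if the server is still busy
        area += depart - arrive              # each sojourn adds to the integral of N(t)
    return area / depart                     # time average over [0, last departure]

random.seed(1)
n, lam, mu = 200_000, 0.8, 1.0               # offered load rho = 0.8 in both cases
services = [random.expovariate(mu) for _ in range(n)]

poisson_gaps = [random.expovariate(lam) for _ in range(n)]
raw = [random.paretovariate(1.5) for _ in range(n)]      # heavy-tailed gaps -> bursts
scale = (1 / lam) / (sum(raw) / n)                       # rescale to the same mean gap
bursty_gaps = [g * scale for g in raw]

print("M/M/1 prediction rho/(1-rho):", (lam / mu) / (1 - lam / mu))
print("measured, Poisson arrivals:  ", mean_number_in_system(poisson_gaps, services))
print("measured, bursty arrivals:   ", mean_number_in_system(bursty_gaps, services))

With Poisson arrivals the measurement lands near the prediction; at the same offered load, the bursty run typically does not - which is the sense in which the estimate degrades as traffic departs from Poisson.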
 
 
On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com> said:



> Hello,
> 
> I am interested to hear wish lists for network testing features. We make test
> equipment, supporting lots
> of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6,
> http, ... protocols,
> and open to creating/improving some of our automated tests.
> 
> I know Dave has some test scripts already, so I'm not necessarily looking to
> reimplement that,
> but more fishing for other/new ideas.
> 
> Thanks,
> Ben
> 
> On 7/2/21 4:28 PM, Bob McMahon wrote:
> > I think we need the language of math here. It seems like the network
> power metric, introduced by Kleinrock and Jaffe in the late 70s, is something
> useful.
> > Effective end/end queue depths per Little's law also seems useful. Both are
> available in iperf 2 from a test perspective. Repurposing test techniques to
> actual
> > traffic could be useful. Hence the question around what exact telemetry
> is useful to apps making socket write() and read() calls.
> >
> > Bob
> >
> > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com
> <mailto:dave.taht@gmail.com>> wrote:
> >
> > In terms of trying to find "Quality" I have tried to encourage folk to
> > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > work on "total quality management".
> >
> > My own slice at this network, computer and lifestyle "issue" is aiming
> > for "imperceptible latency" in all things. [1]. There's a lot of
> > fallout from that in terms of not just addressing queuing delay, but
> > caching, prefetching, and learning more about what a user really needs
> > (as opposed to wants) to know via intelligent agents.
> >
> > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > lila, which is in part about what happens when an engineer hits an
> > insoluble problem.
> > [1] https://www.internetsociety.org/events/latency2013/
> <https://www.internetsociety.org/events/latency2013/>
> >
> >
> >
> > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
> <mailto:dpreed@deepplum.com>> wrote:
> > >
> > > Well, nice that the folks doing the conference  are willing to
> consider that quality of user experience has little to do with signalling rate at
> the
> > physical layer or throughput of FTP transfers.
> > >
> > >
> > >
> > > But honestly, the fact that they call the problem "network quality"
> suggests that they REALLY, REALLY don't understand the Internet isn't the hardware
> or
> > the routers or even the routing algorithms *to its users*.
> > >
> > >
> > >
> > > By ignoring the diversity of applications now and in the future,
> and the fact that we DON'T KNOW what will be coming up, this conference will
> likely fall
> > into the usual trap that net-heads fall into - optimizing for some
> imaginary reality that doesn't exist, and in fact will probably never be what
> users
> > actually will do given the chance.
> > >
> > >
> > >
> > > I saw this issue in 1976 in the group developing the original
> Internet protocols - a desire to put *into the network* special tricks to optimize
> ASR33
> > logins to remote computers from terminal concentrators (aka remote
> login), bulk file transfers between file systems on different time-sharing
> systems, and
> > "sessions" (virtual circuits) that required logins. And then trying to
> exploit underlying "multicast" by building it into the IP layer, because someone
> > thought that TV broadcast would be the dominant application.
> > >
> > >
> > >
> > > Frankly, to think of "quality" as something that can be "provided"
> by "the network" misses the entire point of "end-to-end argument in system
> design".
> > Quality is not a property defined or created by The Network. If you want
> to talk about Quality, you need to talk about users - all the users at all times,
> > now and into the future, and that's something you can't do if you don't
> bother to include current and future users talking about what they might expect
> to
> > experience that they don't experience.
> > >
> > >
> > >
> > > There was much fighting back in 1976 that basically involved
> "network experts" saying that the network was the place to "solve" such issues as
> quality,
> > so applications could avoid having to solve such issues.
> > >
> > >
> > >
> > > What some of us managed to do was to argue that you can't "solve"
> such issues. All you can do is provide a framework that enables different uses to
> > *cooperate* in some way.
> > >
> > >
> > >
> > > Which is why the Internet drops packets rather than queueing them,
> and why diffserv cannot work.
> > >
> > > (I know the latter is controversial, but at the moment, ALL of
> diffserv attempts to talk about end-to-end application specific metrics, but
> never, ever
> > explains what the diffserv control points actually do w.r.t. what the IP
> layer can actually control. So it is meaningless - another violation of the
> > so-called end-to-end principle).
> > >
> > >
> > >
> > > Networks are about getting packets from here to there, multiplexing
> the underlying resources. That's it. Quality is a whole different thing. Quality
> can
> > be improved by end-to-end approaches, if the underlying network provides
> some kind of thing that actually creates a way for end-to-end applications to
> > affect queueing and routing decisions, and more importantly getting
> "telemetry" from the network regarding what is actually going on with the other
> > end-to-end users sharing the infrastructure.
> > >
> > >
> > >
> > > This conference won't talk about it this way. So don't waste your
> time.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
> <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> said:
> > >
> > > > The program committee members are *amazing*. Perhaps, finally,
> we can
> > > > move the bar for the internet's quality metrics past endless,
> blind
> > > > repetitions of speedtest.
> > > >
> > > > For complete details, please see:
> > > > https://www.iab.org/activities/workshops/network-quality/
> <https://www.iab.org/activities/workshops/network-quality/>
> > > >
> > > > Submissions Due: Monday 2nd August 2021, midnight AOE
> (Anywhere On Earth)
> > > > Invitations Issued by: Monday 16th August 2021
> > > >
> > > > Workshop Date: This will be a virtual workshop, spread over
> three days:
> > > >
> > > > 1400-1800 UTC Tue 14th September 2021
> > > > 1400-1800 UTC Wed 15th September 2021
> > > > 1400-1800 UTC Thu 16th September 2021
> > > >
> > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> > > >
> > > > The Program Committee members:
> > > >
> > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
> Sam
> > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
> Geoff
> > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
> Kuehlewind,
> > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
> Nichols,
> > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> > > >
> > > > Send Submissions to: network-quality-workshop-pc@iab.org
> <mailto:network-quality-workshop-pc@iab.org>.
> > > >
> > > > Position papers from academia, industry, the open source
> community and
> > > > others that focus on measurements, experiences, observations
> and
> > > > advice for the future are welcome. Papers that reflect
> experience
> > > > based on deployed services are especially welcome. The
> organizers
> > > > understand that specific actions taken by operators are
> unlikely to be
> > > > discussed in detail, so papers discussing general categories
> of
> > > > actions and issues without naming specific technologies,
> products, or
> > > > other players in the ecosystem are expected. Papers should not
> focus
> > > > on specific protocol solutions.
> > > >
> > > > The workshop will be by invitation only. Those wishing to
> attend
> > > > should submit a position paper to the address above; it may
> take the
> > > > form of an Internet-Draft.
> > > >
> > > > All inputs submitted and considered relevant will be published
> on the
> > > > workshop website. The organisers will decide whom to invite
> based on
> > > > the submissions received. Sessions will be organized according
> to
> > > > content, and not every accepted submission or invited attendee
> will
> > > > have an opportunity to present as the intent is to foster
> discussion
> > > > and not simply to have a sequence of presentations.
> > > >
> > > > Position papers from those not planning to attend the virtual
> sessions
> > > > themselves are also encouraged. A workshop report will be
> published
> > > > afterwards.
> > > >
> > > > Overview:
> > > >
> > > > "We believe that one of the major factors behind this lack of
> progress
> > > > is the popular perception that throughput is the often sole
> measure of
> > > > the quality of Internet connectivity. With such narrow focus,
> people
> > > > don’t consider questions such as:
> > > >
> > > > What is the latency under typical working conditions?
> > > > How reliable is the connectivity across longer time periods?
> > > > Does the network allow the use of a broad range of protocols?
> > > > What services can be run by clients of the network?
> > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
> are there firewalls?
> > > > What security mechanisms are available for local services,
> such as DNS?
> > > > To what degree are the privacy, confidentiality, integrity
> and
> > > > authenticity of user communications guarded?
> > > >
> > > > Improving these aspects of network quality will likely depend
> on
> > > > measurement and exposing metrics to all involved parties,
> including to
> > > > end users in a meaningful way. Such measurements and exposure
> of the
> > > > right metrics will allow service providers and network
> operators to
> > > > focus on the aspects that impacts the users’ experience
> most and at
> > > > the same time empowers users to choose the Internet service
> that will
> > > > give them the best experience."
> > > >
> > > >
> > > > --
> > > > Latest Podcast:
> > > >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > > >
> > > > Dave Täht CTO, TekLibre, LLC
> > > > _______________________________________________
> > > > Cerowrt-devel mailing list
> > > > Cerowrt-devel@lists.bufferbloat.net
> <mailto:Cerowrt-devel@lists.bufferbloat.net>
> > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> <https://lists.bufferbloat.net/listinfo/cerowrt-devel>
> > > >
> >
> >
> >
> > --
> > Latest Podcast:
> > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> >
> > Dave Täht CTO, TekLibre, LLC
> > _______________________________________________
> > Make-wifi-fast mailing list
> > Make-wifi-fast@lists.bufferbloat.net
> <mailto:Make-wifi-fast@lists.bufferbloat.net>
> > https://lists.bufferbloat.net/listinfo/make-wifi-fast
> <https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> >
> >
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> >
> 
> 
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc http://www.candelatech.com
> 

[-- Attachment #2: Type: text/html, Size: 20026 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-08 19:38         ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " David P. Reed
@ 2021-07-08 22:51           ` Bob McMahon
  2021-07-09  3:08           ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Leonard Kleinrock
  1 sibling, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-08 22:51 UTC (permalink / raw)
  To: David P. Reed
  Cc: Ben Greear, Dave Taht, starlink, Make-Wifi-fast, Cake List,
	codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 16948 bytes --]

Thanks very much for this response. I need to dig in a bit more for sure.

iperf 2 will give every UDP packet's OWD (one-way delay, if the clocks are
sync'd) and will also provide TCP write-to-read latencies, both supported in
histogram form. So those are raw samples, so to speak. I'm hooking up some
units across geography, including across the Pacific (sync'd to GPS atomic
time), to see how "fractal" these distributions look, at least anecdotally.
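
For a feel of the mechanism outside iperf 2, here is a rough sketch (not iperf source; the port number, packet layout, and bin width are made up for illustration): the sender stamps each UDP packet with its send time, and the receiver - assuming its own clock is disciplined to the same reference - bins receive-time minus send-time into a histogram.

import socket, struct, time
from collections import Counter

PORT, BIN_US, COUNT = 5001, 100, 1000        # arbitrary values for this sketch

def sender(dst):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(COUNT):
        # payload: sequence number + send time in nanoseconds since the epoch
        s.sendto(struct.pack("!Iq", seq, time.time_ns()), (dst, PORT))
        time.sleep(0.001)

def receiver():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    hist = Counter()
    for _ in range(COUNT):                   # a real tool would handle loss/timeouts
        data, _addr = s.recvfrom(64)
        seq, t_send = struct.unpack("!Iq", data[:12])
        owd_us = (time.time_ns() - t_send) / 1000   # only meaningful with synced clocks
        hist[int(owd_us // BIN_US)] += 1
    for b in sorted(hist):
        print(f"{b * BIN_US:6d}-{(b + 1) * BIN_US:6d} us: {hist[b]}")

# run receiver() on one host, then sender("receiver.example.net") on the other

It is only as good as the clock sync, of course - hence the GPS discipline mentioned above.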

On top of all the "spherical cow queueing theory" (which made me laugh), we've
got Bluetooth sometimes sharing the radio. So the transport latency of TCP
writes can be all over the map, so to speak. And Bluetooth traffic is also
highly correlated.

Bob




On Thu, Jul 8, 2021 at 12:38 PM David P. Reed <dpreed@deepplum.com> wrote:

> I will tell you flat out that the arrival time distribution assumption
> made by Little's Lemma that allows "estimation of queue depth" is totally
> unreasonable on ANY Internet in practice.
>
>
>
> The assumption is a Poisson Arrival Process. In reality, traffic arrivals
> in real internet applications are extremely far from Poisson, and, of
> course, using TCP windowing, become highly intercorrelated with crossing
> traffic that shares the same queue.
>
>
>
> So, as I've tried to tell many, many net-heads (people who ignore
> applications layer behavior, like the people that think latency doesn't
> matter to end users, only throughput), end-to-end packet arrival times on a
> practical network are incredibly far from Poisson - and they are more like
> fractal probability distributions, very irregular at all scales of time.
>
>
>
> So, the idea that iperf can estimate queue depth by Little's Lemma by just
> measuring saturation of capacity of a path is bogus. The less Poisson, the
> worse the estimate gets, by a huge factor.
>
>
>
>
>
> Where does the Poisson assumption come from?  Well, like many theorems, it
> is the simplest tractable closed form solution - it creates a simplified
> view, by being a "single-parameter" distribution (the parameter is called
> lambda for a Poisson distribution).  And the analysis of a simple queue
> with Poisson arrival distribution and a static, fixed service time is the
> first interesting Queueing Theory example in most textbooks. It is
> suggestive of an interesting phenomenon, but it does NOT characterize any
> real system.
>
>
>
> It's the queueing theory equivalent of "First, we assume a spherical
> cow..." in doing an example in a freshman physics class.
>
>
>
> Unfortunately, most networking engineers understand neither queueing theory
> nor application networking usage in interactive applications. Which makes
> them arrogant. They assume all distributions are Poisson!
>
>
>
>
>
> On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com>
> said:
>
> > Hello,
> >
> > I am interested to hear wish lists for network testing features. We make
> test
> > equipment, supporting lots
> > of wifi stations and a distributed architecture, with built-in udp, tcp,
> ipv6,
> > http, ... protocols,
> > and open to creating/improving some of our automated tests.
> >
> > I know Dave has some test scripts already, so I'm not necessarily
> looking to
> > reimplement that,
> > but more fishing for other/new ideas.
> >
> > Thanks,
> > Ben
> >
> > On 7/2/21 4:28 PM, Bob McMahon wrote:
> > > I think we need the language of math here. It seems like the network
> > power metric, introduced by Kleinrock and Jaffe in the late 70s, is
> something
> > useful.
> > > Effective end/end queue depths per Little's law also seems useful.
> Both are
> > available in iperf 2 from a test perspective. Repurposing test
> techniques to
> > actual
> > > traffic could be useful. Hence the question around what exact telemetry
> > is useful to apps making socket write() and read() calls.
> > >
> > > Bob
> > >
> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com
> > <mailto:dave.taht@gmail.com>> wrote:
> > >
> > > In terms of trying to find "Quality" I have tried to encourage folk to
> > > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > > work on "total quality management".
> > >
> > > My own slice at this network, computer and lifestyle "issue" is aiming
> > > for "imperceptible latency" in all things. [1]. There's a lot of
> > > fallout from that in terms of not just addressing queuing delay, but
> > > caching, prefetching, and learning more about what a user really needs
> > > (as opposed to wants) to know via intelligent agents.
> > >
> > > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > > lila, which is in part about what happens when an engineer hits an
> > > insoluble problem.
> > > [1] https://www.internetsociety.org/events/latency2013/
> > <https://www.internetsociety.org/events/latency2013/>
> > >
> > >
> > >
> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
> > <mailto:dpreed@deepplum.com>> wrote:
> > > >
> > > > Well, nice that the folks doing the conference  are willing to
> > consider that quality of user experience has little to do with
> signalling rate at
> > the
> > > physical layer or throughput of FTP transfers.
> > > >
> > > >
> > > >
> > > > But honestly, the fact that they call the problem "network quality"
> > suggests that they REALLY, REALLY don't understand the Internet isn't
> the hardware
> > or
> > > the routers or even the routing algorithms *to its users*.
> > > >
> > > >
> > > >
> > > > By ignoring the diversity of applications now and in the future,
> > and the fact that we DON'T KNOW what will be coming up, this conference
> will
> > likely fall
> > > into the usual trap that net-heads fall into - optimizing for some
> > imaginary reality that doesn't exist, and in fact will probably never be
> what
> > users
> > > actually will do given the chance.
> > > >
> > > >
> > > >
> > > > I saw this issue in 1976 in the group developing the original
> > Internet protocols - a desire to put *into the network* special tricks
> to optimize
> > ASR33
> > > logins to remote computers from terminal concentrators (aka remote
> > login), bulk file transfers between file systems on different
> time-sharing
> > systems, and
> > > "sessions" (virtual circuits) that required logins. And then trying to
> > exploit underlying "multicast" by building it into the IP layer, because
> someone
> > > thought that TV broadcast would be the dominant application.
> > > >
> > > >
> > > >
> > > > Frankly, to think of "quality" as something that can be "provided"
> > by "the network" misses the entire point of "end-to-end argument in
> system
> > design".
> > > Quality is not a property defined or created by The Network. If you
> want
> > to talk about Quality, you need to talk about users - all the users at
> all times,
> > > now and into the future, and that's something you can't do if you don't
> > bother to include current and future users talking about what they might
> expect
> > to
> > > experience that they don't experience.
> > > >
> > > >
> > > >
> > > > There was much fighting back in 1976 that basically involved
> > "network experts" saying that the network was the place to "solve" such
> issues as
> > quality,
> > > so applications could avoid having to solve such issues.
> > > >
> > > >
> > > >
> > > > What some of us managed to do was to argue that you can't "solve"
> > such issues. All you can do is provide a framework that enables
> different uses to
> > > *cooperate* in some way.
> > > >
> > > >
> > > >
> > > > Which is why the Internet drops packets rather than queueing them,
> > and why diffserv cannot work.
> > > >
> > > > (I know the latter is controversial, but at the moment, ALL of
> > diffserv attempts to talk about end-to-end application specific metrics,
> but
> > never, ever
> > > explains what the diffserv control points actually do w.r.t. what the
> IP
> > layer can actually control. So it is meaningless - another violation of
> the
> > > so-called end-to-end principle).
> > > >
> > > >
> > > >
> > > > Networks are about getting packets from here to there, multiplexing
> > the underlying resources. That's it. Quality is a whole different thing.
> Quality
> > can
> > > be improved by end-to-end approaches, if the underlying network
> provides
> > some kind of thing that actually creates a way for end-to-end
> applications to
> > > affect queueing and routing decisions, and more importantly getting
> > "telemetry" from the network regarding what is actually going on with
> the other
> > > end-to-end users sharing the infrastructure.
> > > >
> > > >
> > > >
> > > > This conference won't talk about it this way. So don't waste your
> > time.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
> > <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> said:
> > > >
> > > > > The program committee members are *amazing*. Perhaps, finally,
> > we can
> > > > > move the bar for the internet's quality metrics past endless,
> > blind
> > > > > repetitions of speedtest.
> > > > >
> > > > > For complete details, please see:
> > > > > https://www.iab.org/activities/workshops/network-quality/
> > <https://www.iab.org/activities/workshops/network-quality/>
> > > > >
> > > > > Submissions Due: Monday 2nd August 2021, midnight AOE
> > (Anywhere On Earth)
> > > > > Invitations Issued by: Monday 16th August 2021
> > > > >
> > > > > Workshop Date: This will be a virtual workshop, spread over
> > three days:
> > > > >
> > > > > 1400-1800 UTC Tue 14th September 2021
> > > > > 1400-1800 UTC Wed 15th September 2021
> > > > > 1400-1800 UTC Thu 16th September 2021
> > > > >
> > > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> > > > >
> > > > > The Program Committee members:
> > > > >
> > > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
> > Sam
> > > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
> > Geoff
> > > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
> > Kuehlewind,
> > > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
> > Nichols,
> > > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> > > > >
> > > > > Send Submissions to: network-quality-workshop-pc@iab.org
> > <mailto:network-quality-workshop-pc@iab.org>.
> > > > >
> > > > > Position papers from academia, industry, the open source
> > community and
> > > > > others that focus on measurements, experiences, observations
> > and
> > > > > advice for the future are welcome. Papers that reflect
> > experience
> > > > > based on deployed services are especially welcome. The
> > organizers
> > > > > understand that specific actions taken by operators are
> > unlikely to be
> > > > > discussed in detail, so papers discussing general categories
> > of
> > > > > actions and issues without naming specific technologies,
> > products, or
> > > > > other players in the ecosystem are expected. Papers should not
> > focus
> > > > > on specific protocol solutions.
> > > > >
> > > > > The workshop will be by invitation only. Those wishing to
> > attend
> > > > > should submit a position paper to the address above; it may
> > take the
> > > > > form of an Internet-Draft.
> > > > >
> > > > > All inputs submitted and considered relevant will be published
> > on the
> > > > > workshop website. The organisers will decide whom to invite
> > based on
> > > > > the submissions received. Sessions will be organized according
> > to
> > > > > content, and not every accepted submission or invited attendee
> > will
> > > > > have an opportunity to present as the intent is to foster
> > discussion
> > > > > and not simply to have a sequence of presentations.
> > > > >
> > > > > Position papers from those not planning to attend the virtual
> > sessions
> > > > > themselves are also encouraged. A workshop report will be
> > published
> > > > > afterwards.
> > > > >
> > > > > Overview:
> > > > >
> > > > > "We believe that one of the major factors behind this lack of
> > progress
> > > > > is the popular perception that throughput is the often sole
> > measure of
> > > > > the quality of Internet connectivity. With such narrow focus,
> > people
> > > > > don’t consider questions such as:
> > > > >
> > > > > What is the latency under typical working conditions?
> > > > > How reliable is the connectivity across longer time periods?
> > > > > Does the network allow the use of a broad range of protocols?
> > > > > What services can be run by clients of the network?
> > > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
> > are there firewalls?
> > > > > What security mechanisms are available for local services,
> > such as DNS?
> > > > > To what degree are the privacy, confidentiality, integrity
> > and
> > > > > authenticity of user communications guarded?
> > > > >
> > > > > Improving these aspects of network quality will likely depend
> > on
> > > > > measurement and exposing metrics to all involved parties,
> > including to
> > > > > end users in a meaningful way. Such measurements and exposure
> > of the
> > > > > right metrics will allow service providers and network
> > operators to
> > > > > focus on the aspects that impacts the users’ experience
> > most and at
> > > > > the same time empowers users to choose the Internet service
> > that will
> > > > > give them the best experience."
> > > > >
> > > > >
> > > > > --
> > > > > Latest Podcast:
> > > > >
> >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > > > >
> > > > > Dave Täht CTO, TekLibre, LLC
> > > > > _______________________________________________
> > > > > Cerowrt-devel mailing list
> > > > > Cerowrt-devel@lists.bufferbloat.net
> > <mailto:Cerowrt-devel@lists.bufferbloat.net>
> > > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> > <https://lists.bufferbloat.net/listinfo/cerowrt-devel>
> > > > >
> > >
> > >
> > >
> > > --
> > > Latest Podcast:
> > >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > >
> > > Dave Täht CTO, TekLibre, LLC
> > > _______________________________________________
> > > Make-wifi-fast mailing list
> > > Make-wifi-fast@lists.bufferbloat.net
> > <mailto:Make-wifi-fast@lists.bufferbloat.net>
> > > https://lists.bufferbloat.net/listinfo/make-wifi-fast
> > <https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> > >
> > >
> > >
> > > _______________________________________________
> > > Starlink mailing list
> > > Starlink@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/starlink
> > >
> >
> >
> > --
> > Ben Greear <greearb@candelatech.com>
> > Candela Technologies Inc http://www.candelatech.com
> >
>

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 23362 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-08 19:38         ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " David P. Reed
  2021-07-08 22:51           ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-07-09  3:08           ` Leonard Kleinrock
  2021-07-09 10:05             ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] " Luca Muscariello
  1 sibling, 1 reply; 108+ messages in thread
From: Leonard Kleinrock @ 2021-07-09  3:08 UTC (permalink / raw)
  To: David P. Reed
  Cc: Leonard Kleinrock, Ben Greear, Cake List, Make-Wifi-fast,
	Bob McMahon, starlink, codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 16588 bytes --]

David,

I totally appreciate your attention to when analytical modeling works and when it does not. Let me clarify a few things from your note.

First, Little's law (also known as Little's lemma or, as I use in my book, Little's result) does not assume Poisson arrivals - it is good for any arrival process and any service process and is an equality between time averages. It states that the time average of the number in a system (for a sample path w) is equal to the average arrival rate to the system multiplied by the time-averaged time in the system for that sample path. This is often written as N_TimeAvg = λ · T_TimeAvg. Moreover, if the system is also ergodic, then the time average equals the ensemble average and we often write it as N̄ = λT̄. In any case, this requires neither Poisson arrivals nor exponential service times.
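
A quick numerical sketch of that sample-path identity - nothing from any particular textbook, just a toy single-server simulation with deliberately bursty (non-Poisson) arrivals and uniform (non-exponential) service times:

import random

random.seed(7)
n = 100_000
gaps = [random.choice([0.1, 0.1, 0.1, 5.0]) for _ in range(n)]   # bursty, far from Poisson
services = [random.uniform(0.2, 1.0) for _ in range(n)]          # uniform, not exponential

arrivals, departures = [], []
t = depart = 0.0
for gap, svc in zip(gaps, services):
    t += gap
    depart = max(t, depart) + svc            # single-server FIFO queue
    arrivals.append(t)
    departures.append(depart)
horizon = departures[-1]

# Integrate N(t) directly from the arrival/departure event sequence
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departures])
area, last, in_system = 0.0, 0.0, 0
for when, step in events:
    area += in_system * (when - last)
    in_system, last = in_system + step, when

N_time_avg = area / horizon
lam_hat = n / horizon                                            # observed arrival rate
T_avg = sum(d - a for a, d in zip(arrivals, departures)) / n     # mean time in system
print(N_time_avg, lam_hat * T_avg)     # the two agree: N_TimeAvg = lambda * T_TimeAvg

The two printed numbers agree (to floating point), with no Poisson assumption anywhere.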

Queueing theorists often do study the case of Poisson arrivals.  True, it makes the analysis easier, yet there is a better reason it is often used, and that is because the sum of a large number of independent stationary renewal processes approaches a Poisson process.  So nature often gives us Poisson arrivals.  
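
And a similarly small sketch of the superposition point, with uniform-gap renewal streams standing in for the independent sources (again just a toy illustration): merging enough of them drives the combined interarrival coefficient of variation toward 1, the Poisson value.

import random, statistics

random.seed(3)

def renewal_stream(horizon):
    """One stream with uniform(0.5, 1.5) gaps -- CV about 0.29, far from Poisson."""
    t, points = 0.0, []
    while t < horizon:
        t += random.uniform(0.5, 1.5)
        points.append(t)
    return points

for k in (1, 5, 50):                          # number of independent streams superposed
    merged = sorted(t for _ in range(k) for t in renewal_stream(10_000.0))
    igaps = [b - a for a, b in zip(merged, merged[1:])]
    cv = statistics.pstdev(igaps) / statistics.mean(igaps)
    print(f"{k:3d} streams: interarrival CV = {cv:.2f}  (exponential/Poisson: 1.00)")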

Best,
Len



> On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com> wrote:
> 
> I will tell you flat out that the arrival time distribution assumption made by Little's Lemma that allows "estimation of queue depth" is totally unreasonable on ANY Internet in practice.
>  
> The assumption is a Poisson Arrival Process. In reality, traffic arrivals in real internet applications are extremely far from Poisson, and, of course, using TCP windowing, become highly intercorrelated with crossing traffic that shares the same queue.
>  
> So, as I've tried to tell many, many net-heads (people who ignore applications layer behavior, like the people that think latency doesn't matter to end users, only throughput), end-to-end packet arrival times on a practical network are incredibly far from Poisson - and they are more like fractal probability distributions, very irregular at all scales of time.
>  
> So, the idea that iperf can estimate queue depth by Little's Lemma by just measuring saturation of capacity of a path is bogus. The less Poisson, the worse the estimate gets, by a huge factor.
>  
>  
> Where does the Poisson assumption come from?  Well, like many theorems, it is the simplest tractable closed form solution - it creates a simplified view, by being a "single-parameter" distribution (the parameter is called lambda for a Poisson distribution).  And the analysis of a simple queue with Poisson arrival distribution and a static, fixed service time is the first interesting Queueing Theory example in most textbooks. It is suggestive of an interesting phenomenon, but it does NOT characterize any real system.
>  
> It's the queueing theory equivalent of "First, we assume a spherical cow..." in doing an example in a freshman physics class.
>  
> Unfortunately, most networking engineers understand neither queueing theory nor application networking usage in interactive applications. Which makes them arrogant. They assume all distributions are Poisson!
>  
>  
> On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com> said:
> 
> > Hello,
> > 
> > I am interested to hear wish lists for network testing features. We make test
> > equipment, supporting lots
> > of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6,
> > http, ... protocols,
> > and open to creating/improving some of our automated tests.
> > 
> > I know Dave has some test scripts already, so I'm not necessarily looking to
> > reimplement that,
> > but more fishing for other/new ideas.
> > 
> > Thanks,
> > Ben
> > 
> > On 7/2/21 4:28 PM, Bob McMahon wrote:
> > > I think we need the language of math here. It seems like the network
> > power metric, introduced by Kleinrock and Jaffe in the late 70s, is something
> > useful.
> > > Effective end/end queue depths per Little's law also seems useful. Both are
> > available in iperf 2 from a test perspective. Repurposing test techniques to
> > actual
> > > traffic could be useful. Hence the question around what exact telemetry
> > is useful to apps making socket write() and read() calls.
> > >
> > > Bob
> > >
> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com
> > <mailto:dave.taht@gmail.com>> wrote:
> > >
> > > In terms of trying to find "Quality" I have tried to encourage folk to
> > > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > > work on "total quality management".
> > >
> > > My own slice at this network, computer and lifestyle "issue" is aiming
> > > for "imperceptible latency" in all things. [1]. There's a lot of
> > > fallout from that in terms of not just addressing queuing delay, but
> > > caching, prefetching, and learning more about what a user really needs
> > > (as opposed to wants) to know via intelligent agents.
> > >
> > > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > > lila, which is in part about what happens when an engineer hits an
> > > insoluble problem.
> > > [1] https://www.internetsociety.org/events/latency2013/
> > <https://www.internetsociety.org/events/latency2013/>
> > >
> > >
> > >
> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
> > <mailto:dpreed@deepplum.com>> wrote:
> > > >
> > > > Well, nice that the folks doing the conference  are willing to
> > consider that quality of user experience has little to do with signalling rate at
> > the
> > > physical layer or throughput of FTP transfers.
> > > >
> > > >
> > > >
> > > > But honestly, the fact that they call the problem "network quality"
> > suggests that they REALLY, REALLY don't understand the Internet isn't the hardware
> > or
> > > the routers or even the routing algorithms *to its users*.
> > > >
> > > >
> > > >
> > > > By ignoring the diversity of applications now and in the future,
> > and the fact that we DON'T KNOW what will be coming up, this conference will
> > likely fall
> > > into the usual trap that net-heads fall into - optimizing for some
> > imaginary reality that doesn't exist, and in fact will probably never be what
> > users
> > > actually will do given the chance.
> > > >
> > > >
> > > >
> > > > I saw this issue in 1976 in the group developing the original
> > Internet protocols - a desire to put *into the network* special tricks to optimize
> > ASR33
> > > logins to remote computers from terminal concentrators (aka remote
> > login), bulk file transfers between file systems on different time-sharing
> > systems, and
> > > "sessions" (virtual circuits) that required logins. And then trying to
> > exploit underlying "multicast" by building it into the IP layer, because someone
> > > thought that TV broadcast would be the dominant application.
> > > >
> > > >
> > > >
> > > > Frankly, to think of "quality" as something that can be "provided"
> > by "the network" misses the entire point of "end-to-end argument in system
> > design".
> > > Quality is not a property defined or created by The Network. If you want
> > to talk about Quality, you need to talk about users - all the users at all times,
> > > now and into the future, and that's something you can't do if you don't
> > bother to include current and future users talking about what they might expect
> > to
> > > experience that they don't experience.
> > > >
> > > >
> > > >
> > > > There was much fighting back in 1976 that basically involved
> > "network experts" saying that the network was the place to "solve" such issues as
> > quality,
> > > so applications could avoid having to solve such issues.
> > > >
> > > >
> > > >
> > > > What some of us managed to do was to argue that you can't "solve"
> > such issues. All you can do is provide a framework that enables different uses to
> > > *cooperate* in some way.
> > > >
> > > >
> > > >
> > > > Which is why the Internet drops packets rather than queueing them,
> > and why diffserv cannot work.
> > > >
> > > > (I know the latter is controversial, but at the moment, ALL of
> > diffserv attempts to talk about end-to-end application specific metrics, but
> > never, ever
> > > explains what the diffserv control points actually do w.r.t. what the IP
> > layer can actually control. So it is meaningless - another violation of the
> > > so-called end-to-end principle).
> > > >
> > > >
> > > >
> > > > Networks are about getting packets from here to there, multiplexing
> > the underlying resources. That's it. Quality is a whole different thing. Quality
> > can
> > > be improved by end-to-end approaches, if the underlying network provides
> > some kind of thing that actually creates a way for end-to-end applications to
> > > affect queueing and routing decisions, and more importantly getting
> > "telemetry" from the network regarding what is actually going on with the other
> > > end-to-end users sharing the infrastructure.
> > > >
> > > >
> > > >
> > > > This conference won't talk about it this way. So don't waste your
> > time.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
> > <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> said:
> > > >
> > > > > The program committee members are *amazing*. Perhaps, finally,
> > we can
> > > > > move the bar for the internet's quality metrics past endless,
> > blind
> > > > > repetitions of speedtest.
> > > > >
> > > > > For complete details, please see:
> > > > > https://www.iab.org/activities/workshops/network-quality/
> > <https://www.iab.org/activities/workshops/network-quality/>
> > > > >
> > > > > Submissions Due: Monday 2nd August 2021, midnight AOE
> > (Anywhere On Earth)
> > > > > Invitations Issued by: Monday 16th August 2021
> > > > >
> > > > > Workshop Date: This will be a virtual workshop, spread over
> > three days:
> > > > >
> > > > > 1400-1800 UTC Tue 14th September 2021
> > > > > 1400-1800 UTC Wed 15th September 2021
> > > > > 1400-1800 UTC Thu 16th September 2021
> > > > >
> > > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> > > > >
> > > > > The Program Committee members:
> > > > >
> > > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
> > Sam
> > > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
> > Geoff
> > > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
> > Kuehlewind,
> > > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
> > Nichols,
> > > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> > > > >
> > > > > Send Submissions to: network-quality-workshop-pc@iab.org
> > <mailto:network-quality-workshop-pc@iab.org>.
> > > > >
> > > > > Position papers from academia, industry, the open source
> > community and
> > > > > others that focus on measurements, experiences, observations
> > and
> > > > > advice for the future are welcome. Papers that reflect
> > experience
> > > > > based on deployed services are especially welcome. The
> > organizers
> > > > > understand that specific actions taken by operators are
> > unlikely to be
> > > > > discussed in detail, so papers discussing general categories
> > of
> > > > > actions and issues without naming specific technologies,
> > products, or
> > > > > other players in the ecosystem are expected. Papers should not
> > focus
> > > > > on specific protocol solutions.
> > > > >
> > > > > The workshop will be by invitation only. Those wishing to
> > attend
> > > > > should submit a position paper to the address above; it may
> > take the
> > > > > form of an Internet-Draft.
> > > > >
> > > > > All inputs submitted and considered relevant will be published
> > on the
> > > > > workshop website. The organisers will decide whom to invite
> > based on
> > > > > the submissions received. Sessions will be organized according
> > to
> > > > > content, and not every accepted submission or invited attendee
> > will
> > > > > have an opportunity to present as the intent is to foster
> > discussion
> > > > > and not simply to have a sequence of presentations.
> > > > >
> > > > > Position papers from those not planning to attend the virtual
> > sessions
> > > > > themselves are also encouraged. A workshop report will be
> > published
> > > > > afterwards.
> > > > >
> > > > > Overview:
> > > > >
> > > > > "We believe that one of the major factors behind this lack of
> > progress
> > > > > is the popular perception that throughput is the often sole
> > measure of
> > > > > the quality of Internet connectivity. With such narrow focus,
> > people
> > > > > don’t consider questions such as:
> > > > >
> > > > > What is the latency under typical working conditions?
> > > > > How reliable is the connectivity across longer time periods?
> > > > > Does the network allow the use of a broad range of protocols?
> > > > > What services can be run by clients of the network?
> > > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
> > are there firewalls?
> > > > > What security mechanisms are available for local services,
> > such as DNS?
> > > > > To what degree are the privacy, confidentiality, integrity
> > and
> > > > > authenticity of user communications guarded?
> > > > >
> > > > > Improving these aspects of network quality will likely depend
> > on
> > > > > measurement and exposing metrics to all involved parties,
> > including to
> > > > > end users in a meaningful way. Such measurements and exposure
> > of the
> > > > > right metrics will allow service providers and network
> > operators to
> > > > > focus on the aspects that impacts the users’ experience
> > most and at
> > > > > the same time empowers users to choose the Internet service
> > that will
> > > > > give them the best experience."
> > > > >
> > > > >
> > > > > --
> > > > > Latest Podcast:
> > > > >
> > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > > > >
> > > > > Dave Täht CTO, TekLibre, LLC
> > > > > _______________________________________________
> > > > > Cerowrt-devel mailing list
> > > > > Cerowrt-devel@lists.bufferbloat.net
> > <mailto:Cerowrt-devel@lists.bufferbloat.net>
> > > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> > <https://lists.bufferbloat.net/listinfo/cerowrt-devel>
> > > > >
> > >
> > >
> > >
> > > --
> > > Latest Podcast:
> > > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > >
> > > Dave Täht CTO, TekLibre, LLC
> > > _______________________________________________
> > > Make-wifi-fast mailing list
> > > Make-wifi-fast@lists.bufferbloat.net
> > <mailto:Make-wifi-fast@lists.bufferbloat.net>
> > > https://lists.bufferbloat.net/listinfo/make-wifi-fast
> > <https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> > >
> > >
> > >
> > > _______________________________________________
> > > Starlink mailing list
> > > Starlink@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/starlink
> > >
> > 
> > 
> > --
> > Ben Greear <greearb@candelatech.com>
> > Candela Technologies Inc http://www.candelatech.com
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


[-- Attachment #2: Type: text/html, Size: 27949 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-09  3:08           ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Leonard Kleinrock
@ 2021-07-09 10:05             ` Luca Muscariello
  2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
  2021-08-02 22:59               ` [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board Bob McMahon
  0 siblings, 2 replies; 108+ messages in thread
From: Luca Muscariello @ 2021-07-09 10:05 UTC (permalink / raw)
  To: Leonard Kleinrock
  Cc: David P. Reed, starlink, Make-Wifi-fast, Bob McMahon, Cake List,
	codel, cerowrt-devel, bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 17734 bytes --]

For those who might be interested in Little's law
there is a nice paper by John Little on the occasion
of the 50th anniversary  of the result.

https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary

https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf

Nice read.
Luca

P.S.
Who has not a copy of L. Kleinrock's books? I do have and am not ready to
lend them!

On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <lk@cs.ucla.edu> wrote:

> David,
>
> I totally appreciate  your attention to when and when not analytical
> modeling works. Let me clarify a few things from your note.
>
> First, Little's law (also known as Little’s lemma or, as I use in my book,
> Little’s result) does not assume Poisson arrivals -  it is good for *any*
> arrival process and any service process and is an equality between time
> averages.  It states that the time average of the number in a system (for a
> sample path *w)* is equal to the average arrival rate to the system
> multiplied by the time-averaged time in the system for that sample path.
> This is often written as N_TimeAvg = λ·T_TimeAvg.  Moreover, if the
> system is also ergodic, then the time average equals the ensemble average
> and we often write it as N̄ = λ·T̄.  In any case, this requires
> neither Poisson arrivals nor exponential service times.
>
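A minimal Python sketch of that point (all parameters below are made up): one FIFO server fed by deliberately bursty, non-Poisson arrivals, where the time-average number in system still comes out equal to the arrival rate times the time-average sojourn time, because the relation is a sample-path accounting identity rather than a distributional result.

# Illustrative sketch, all parameters made up: Little's result checked on a
# single FIFO server fed by bursty, decidedly non-Poisson arrivals.
import random

random.seed(1)

# Bursty arrivals: every second, a back-to-back batch of 1..8 requests.
arrivals, t = [], 0.0
for _ in range(200):
    t += 1.0
    for k in range(random.randint(1, 8)):
        arrivals.append(t + 0.001 * k)

# One FIFO server with uniform (non-exponential) service times.
departures, server_free_at = [], 0.0
for a in arrivals:
    start = max(a, server_free_at)
    server_free_at = start + random.uniform(0.05, 0.25)
    departures.append(server_free_at)

horizon = departures[-1]                 # system starts and ends empty
lam = len(arrivals) / horizon            # time-averaged arrival rate
t_avg = sum(d - a for a, d in zip(arrivals, departures)) / len(arrivals)

# Time-average number in system: integrate N(t) across arrival/departure events.
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departures])
area, n, last = 0.0, 0, 0.0
for when, step in events:
    area += n * (when - last)
    n, last = n + step, when
n_avg = area / horizon

print(f"N_avg = {n_avg:.4f}   lambda * T_avg = {lam * t_avg:.4f}")
# The two match, with no Poisson or exponential assumption anywhere: the time
# integral of N(t) is just the sum of the individual sojourn times.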
> Queueing theorists often do study the case of Poisson arrivals.  True, it
> makes the analysis easier, yet there is a better reason it is often used,
> and that is because the sum of a large number of independent stationary
> renewal processes approaches a Poisson process.  So nature often gives us
> Poisson arrivals.
>
> Best,
> Len
>
>
>
> On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com> wrote:
>
> I will tell you flat out that the arrival time distribution assumption
> made by Little's Lemma that allows "estimation of queue depth" is totally
> unreasonable on ANY Internet in practice.
>
>
> The assumption is a Poisson Arrival Process. In reality, traffic arrivals
> in real internet applications are extremely far from Poisson, and, of
> course, using TCP windowing, become highly intercorrelated with crossing
> traffic that shares the same queue.
>
>
> So, as I've tried to tell many, many net-heads (people who ignore
> applications layer behavior, like the people that think latency doesn't
> matter to end users, only throughput), end-to-end packet arrival times on a
> practical network are incredibly far from Poisson - and they are more like
> fractal probability distributions, very irregular at all scales of time.
>
>
> So, the idea that iperf can estimate queue depth by Little's Lemma by just
> measuring saturation of capacity of a path is bogus. The less Poisson, the
> worse the estimate gets, by a huge factor.
>
>
>
>
> Where does the Poisson assumption come from?  Well, like many theorems, it
> is the simplest tractable closed form solution - it creates a simplified
> view, by being a "single-parameter" distribution (the parameter is called
> lambda for a Poisson distribution).  And the analysis of a simple queue
> with poisson arrival distribution and a static, fixed service time is the
> first interesting Queueing Theory example in most textbooks. It is
> suggestive of an interesting phenomenon, but it does NOT characterize any
> real system.
>
>
> It's the queueing theory equivalent of "First, we assume a spherical
> cow...". in doing an example in a freshman physics class.
>
>
> Unfortunately, most networking engineers understand neither queuing theory
> nor application networking usage in interactive applications. Which makes
> them arrogant. They assume all distributions are poisson!
>
>
>
>
> On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com>
> said:
>
> > Hello,
> >
> > I am interested to hear wish lists for network testing features. We make
> test
> > equipment, supporting lots
> > of wifi stations and a distributed architecture, with built-in udp, tcp,
> ipv6,
> > http, ... protocols,
> > and open to creating/improving some of our automated tests.
> >
> > I know Dave has some test scripts already, so I'm not necessarily
> looking to
> > reimplement that,
> > but more fishing for other/new ideas.
> >
> > Thanks,
> > Ben
> >
> > On 7/2/21 4:28 PM, Bob McMahon wrote:
> > > I think we need the language of math here. It seems like the network
> > power metric, introduced by Kleinrock and Jaffe in the late 70s, is
> something
> > useful.
> > > Effective end/end queue depths per Little's law also seems useful.
> Both are
> > available in iperf 2 from a test perspective. Repurposing test
> techniques to
> > actual
> > > traffic could be useful. Hence the question around what exact telemetry
> > is useful to apps making socket write() and read() calls.
> > >
> > > Bob
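
For reference, the power metric mentioned above is usually taken as throughput divided by delay, so it rewards extra throughput only while delay stays low. A tiny illustrative Python sketch (hypothetical numbers, and not iperf 2's actual implementation):

# Illustrative sketch (not iperf 2's code): network "power" in the
# Kleinrock/Jaffe sense, throughput divided by delay. Higher is better,
# and it penalizes buying throughput at the cost of added delay.
def network_power(throughput_bps: float, delay_s: float) -> float:
    return throughput_bps / delay_s

# Hypothetical paths: the faster-but-bloated one loses to the low-delay one.
print(network_power(100e6, 0.500))   # 100 Mb/s at 500 ms -> 2e8
print(network_power(40e6, 0.020))    #  40 Mb/s at  20 ms -> 2e9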
> > >
> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com
> > <mailto:dave.taht@gmail.com <dave.taht@gmail.com>>> wrote:
> > >
> > > In terms of trying to find "Quality" I have tried to encourage folk to
> > > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > > work on "total quality management".
> > >
> > > My own slice at this network, computer and lifestyle "issue" is aiming
> > > for "imperceptible latency" in all things. [1]. There's a lot of
> > > fallout from that in terms of not just addressing queuing delay, but
> > > caching, prefetching, and learning more about what a user really needs
> > > (as opposed to wants) to know via intelligent agents.
> > >
> > > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > > lila, which is in part about what happens when an engineer hits an
> > > insoluble problem.
> > > [1] https://www.internetsociety.org/events/latency2013/
> > <https://www.internetsociety.org/events/latency2013/>
> > >
> > >
> > >
> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
> > <mailto:dpreed@deepplum.com <dpreed@deepplum.com>>> wrote:
> > > >
> > > > Well, nice that the folks doing the conference  are willing to
> > consider that quality of user experience has little to do with
> signalling rate at
> > the
> > > physical layer or throughput of FTP transfers.
> > > >
> > > >
> > > >
> > > > But honestly, the fact that they call the problem "network quality"
> > suggests that they REALLY, REALLY don't understand the Internet isn't
> the hardware
> > or
> > > the routers or even the routing algorithms *to its users*.
> > > >
> > > >
> > > >
> > > > By ignoring the diversity of applications now and in the future,
> > and the fact that we DON'T KNOW what will be coming up, this conference
> will
> > likely fall
> > > into the usual trap that net-heads fall into - optimizing for some
> > imaginary reality that doesn't exist, and in fact will probably never be
> what
> > users
> > > actually will do given the chance.
> > > >
> > > >
> > > >
> > > > I saw this issue in 1976 in the group developing the original
> > Internet protocols - a desire to put *into the network* special tricks
> to optimize
> > ASR33
> > > logins to remote computers from terminal concentrators (aka remote
> > login), bulk file transfers between file systems on different
> time-sharing
> > systems, and
> > > "sessions" (virtual circuits) that required logins. And then trying to
> > exploit underlying "multicast" by building it into the IP layer, because
> someone
> > > thought that TV broadcast would be the dominant application.
> > > >
> > > >
> > > >
> > > > Frankly, to think of "quality" as something that can be "provided"
> > by "the network" misses the entire point of "end-to-end argument in
> system
> > design".
> > > Quality is not a property defined or created by The Network. If you
> want
> > to talk about Quality, you need to talk about users - all the users at
> all times,
> > > now and into the future, and that's something you can't do if you don't
> > bother to include current and future users talking about what they might
> expect
> > to
> > > experience that they don't experience.
> > > >
> > > >
> > > >
> > > > There was much fighting back in 1976 that basically involved
> > "network experts" saying that the network was the place to "solve" such
> issues as
> > quality,
> > > so applications could avoid having to solve such issues.
> > > >
> > > >
> > > >
> > > > What some of us managed to do was to argue that you can't "solve"
> > such issues. All you can do is provide a framework that enables
> different uses to
> > > *cooperate* in some way.
> > > >
> > > >
> > > >
> > > > Which is why the Internet drops packets rather than queueing them,
> > and why diffserv cannot work.
> > > >
> > > > (I know the latter is controversial, but at the moment, ALL of
> > diffserv attempts to talk about end-to-end application specific metrics,
> but
> > never, ever
> > > explains what the diffserv control points actually do w.r.t. what the
> IP
> > layer can actually control. So it is meaningless - another violation of
> the
> > > so-called end-to-end principle).
> > > >
> > > >
> > > >
> > > > Networks are about getting packets from here to there, multiplexing
> > the underlying resources. That's it. Quality is a whole different thing.
> Quality
> > can
> > > be improved by end-to-end approaches, if the underlying network
> provides
> > some kind of thing that actually creates a way for end-to-end
> applications to
> > > affect queueing and routing decisions, and more importantly getting
> > "telemetry" from the network regarding what is actually going on with
> the other
> > > end-to-end users sharing the infrastructure.
> > > >
> > > >
> > > >
> > > > This conference won't talk about it this way. So don't waste your
> > time.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
> > <dave.taht@gmail.com <mailto:dave.taht@gmail.com <dave.taht@gmail.com>>>
> said:
> > > >
> > > > > The program committee members are *amazing*. Perhaps, finally,
> > we can
> > > > > move the bar for the internet's quality metrics past endless,
> > blind
> > > > > repetitions of speedtest.
> > > > >
> > > > > For complete details, please see:
> > > > > https://www.iab.org/activities/workshops/network-quality/
> > <https://www.iab.org/activities/workshops/network-quality/>
> > > > >
> > > > > Submissions Due: Monday 2nd August 2021, midnight AOE
> > (Anywhere On Earth)
> > > > > Invitations Issued by: Monday 16th August 2021
> > > > >
> > > > > Workshop Date: This will be a virtual workshop, spread over
> > three days:
> > > > >
> > > > > 1400-1800 UTC Tue 14th September 2021
> > > > > 1400-1800 UTC Wed 15th September 2021
> > > > > 1400-1800 UTC Thu 16th September 2021
> > > > >
> > > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> > > > >
> > > > > The Program Committee members:
> > > > >
> > > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
> > Sam
> > > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
> > Geoff
> > > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
> > Kuehlewind,
> > > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
> > Nichols,
> > > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> > > > >
> > > > > Send Submissions to: network-quality-workshop-pc@iab.org
> > <mailto:network-quality-workshop-pc@iab.org
> <network-quality-workshop-pc@iab.org>>.
> > > > >
> > > > > Position papers from academia, industry, the open source
> > community and
> > > > > others that focus on measurements, experiences, observations
> > and
> > > > > advice for the future are welcome. Papers that reflect
> > experience
> > > > > based on deployed services are especially welcome. The
> > organizers
> > > > > understand that specific actions taken by operators are
> > unlikely to be
> > > > > discussed in detail, so papers discussing general categories
> > of
> > > > > actions and issues without naming specific technologies,
> > products, or
> > > > > other players in the ecosystem are expected. Papers should not
> > focus
> > > > > on specific protocol solutions.
> > > > >
> > > > > The workshop will be by invitation only. Those wishing to
> > attend
> > > > > should submit a position paper to the address above; it may
> > take the
> > > > > form of an Internet-Draft.
> > > > >
> > > > > All inputs submitted and considered relevant will be published
> > on the
> > > > > workshop website. The organisers will decide whom to invite
> > based on
> > > > > the submissions received. Sessions will be organized according
> > to
> > > > > content, and not every accepted submission or invited attendee
> > will
> > > > > have an opportunity to present as the intent is to foster
> > discussion
> > > > > and not simply to have a sequence of presentations.
> > > > >
> > > > > Position papers from those not planning to attend the virtual
> > sessions
> > > > > themselves are also encouraged. A workshop report will be
> > published
> > > > > afterwards.
> > > > >
> > > > > Overview:
> > > > >
> > > > > "We believe that one of the major factors behind this lack of
> > progress
> > > > > is the popular perception that throughput is the often sole
> > measure of
> > > > > the quality of Internet connectivity. With such narrow focus,
> > people
> > > > > don’t consider questions such as:
> > > > >
> > > > > What is the latency under typical working conditions?
> > > > > How reliable is the connectivity across longer time periods?
> > > > > Does the network allow the use of a broad range of protocols?
> > > > > What services can be run by clients of the network?
> > > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
> > are there firewalls?
> > > > > What security mechanisms are available for local services,
> > such as DNS?
> > > > > To what degree are the privacy, confidentiality, integrity
> > and
> > > > > authenticity of user communications guarded?
> > > > >
> > > > > Improving these aspects of network quality will likely depend
> > on
> > > > > measurement and exposing metrics to all involved parties,
> > including to
> > > > > end users in a meaningful way. Such measurements and exposure
> > of the
> > > > > right metrics will allow service providers and network
> > operators to
> > > > > focus on the aspects that impacts the users’ experience
> > most and at
> > > > > the same time empowers users to choose the Internet service
> > that will
> > > > > give them the best experience."
> > > > >
> > > > >
> > > > > --
> > > > > Latest Podcast:
> > > > >
> >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > > > >
> > > > > Dave Täht CTO, TekLibre, LLC
> > > > > _______________________________________________
> > > > > Cerowrt-devel mailing list
> > > > > Cerowrt-devel@lists.bufferbloat.net
> > <mailto:Cerowrt-devel@lists.bufferbloat.net
> <Cerowrt-devel@lists.bufferbloat.net>>
> > > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> > <https://lists.bufferbloat.net/listinfo/cerowrt-devel>
> > > > >
> > >
> > >
> > >
> > > --
> > > Latest Podcast:
> > >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > <
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > >
> > > Dave Täht CTO, TekLibre, LLC
> > > _______________________________________________
> > > Make-wifi-fast mailing list
> > > Make-wifi-fast@lists.bufferbloat.net
> > <mailto:Make-wifi-fast@lists.bufferbloat.net
> <Make-wifi-fast@lists.bufferbloat.net>>
> > > https://lists.bufferbloat.net/listinfo/make-wifi-fast
> > <https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> > >
> > >
> > >
> > > _______________________________________________
> > > Starlink mailing list
> > > Starlink@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/starlink
> > >
> >
> >
> > --
> > Ben Greear <greearb@candelatech.com>
> > Candela Technologies Inc http://www.candelatech.com
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast

[-- Attachment #2: Type: text/html, Size: 26493 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 10:05             ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] " Luca Muscariello
@ 2021-07-09 19:31               ` David P. Reed
  2021-07-09 20:24                 ` Bob McMahon
                                   ` (4 more replies)
  2021-08-02 22:59               ` [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board Bob McMahon
  1 sibling, 5 replies; 108+ messages in thread
From: David P. Reed @ 2021-07-09 19:31 UTC (permalink / raw)
  To: Luca Muscariello
  Cc: Leonard Kleinrock, starlink, Make-Wifi-fast, Bob McMahon,
	Cake List, codel, cerowrt-devel, bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 25424 bytes --]


Len - I admit I made a mistake in challenging Little's Law as being based on Poisson processes. It is more general. But it tells you an "average" in its base form, and latency averages are not useful for end user applications.
 
However, Little's Law does assume something that is not actually valid about the kind of distributions seen in the network, and in fact, it is NOT true that networks converge on Poisson arrival times.
 
The key issue is well-described in the standard analysis of the M/M/1 queue (e.g. [ https://en.wikipedia.org/wiki/M/M/1_queue ]( https://en.wikipedia.org/wiki/M/M/1_queue )), which is done only for Poisson processes, and is also limited to "stable" systems. But networks are never stable when fully loaded. They get unstable and those instabilities persist for a long time in the network. Instability is, at its core, the underlying *requirement* of the Internet's usage.
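
Even in that idealized model the trouble is visible: the textbook M/M/1 mean time in system is W = 1/(mu - lambda), which blows up as utilization approaches 1. A quick Python illustration (the service rate below is a made-up number, purely to show the shape of the curve):

# Illustrative only: textbook M/M/1 mean time in system, W = 1/(mu - lambda).
# Even in this idealized "stable" model, delay explodes as utilization -> 1,
# which is exactly where a fully loaded network operates.
mu = 1000.0   # hypothetical service rate, packets per second

for rho in (0.5, 0.9, 0.99, 0.999):
    lam = rho * mu                 # offered load
    w = 1.0 / (mu - lam)           # mean time in system, seconds
    print(f"utilization {rho:5.3f} -> mean delay {w * 1000:8.2f} ms")
# 0.500 -> 2 ms, 0.900 -> 10 ms, 0.990 -> 100 ms, 0.999 -> 1000 ms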
 
So specifically: real networks, even large ones, and certainly the Internet today, are not asymptotic limits of sums of stationary stochastic arrival processes. Each external terminal of any real network has a real user there, running a real application, and the network is a complex graph. This makes it completely unlike a single queue. Even the links within a network carry a relatively small number of application flows. There's no ability to apply the Law of Large Numbers to the distributions, because any particular path contains only a small number of serialized flows with highly variable rates.
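
One rough way to see that a handful of bursty flows does not average out to Poisson is the index of dispersion of packet counts (variance over mean per counting interval): it sits near 1 for a Poisson stream and far above 1 for a few multiplexed on-off sources. A small Python sketch, with invented rates and on/off periods:

# Illustrative sketch, invented parameters: a few on-off flows multiplexed
# together are far burstier than a Poisson stream of comparable rate.
import random

random.seed(2)
BIN = 0.1          # counting interval, seconds
HORIZON = 2000.0   # observation window, seconds

def index_of_dispersion(arrival_times):
    # variance / mean of per-interval counts; ~1 for a Poisson stream
    bins = [0] * int(HORIZON / BIN)
    for t in arrival_times:
        if t < HORIZON:
            bins[int(t / BIN)] += 1
    mean = sum(bins) / len(bins)
    var = sum((b - mean) ** 2 for b in bins) / len(bins)
    return var / mean

# (a) One Poisson stream at 50 packets/s.
poisson, t = [], 0.0
while t < HORIZON:
    t += random.expovariate(50.0)
    poisson.append(t)

# (b) Five on-off flows, each averaging ~50 packets/s:
#     1 s bursts at 250 packets/s separated by ~4 s of silence.
onoff = []
for _ in range(5):
    t = random.uniform(0.0, 5.0)
    while t < HORIZON:
        burst_end = t + 1.0
        while t < burst_end:
            t += random.expovariate(250.0)
            onoff.append(t)
        t += random.expovariate(1.0 / 4.0)

print("Poisson stream   dispersion ~", round(index_of_dispersion(poisson), 1))  # near 1
print("5 on-off flows   dispersion ~", round(index_of_dispersion(onoff), 1))    # much larger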
 
Here's an example of what really happens in a real network (I've observed this in 5 different cities on ATT's cellular network, back when it was running Alcatel Lucent HSPA+ gear in those cities).
But you can see this on any network where transient overload occurs, creating instability.
 
 
At 7 AM, the data transmission of the network is roughly stable. That's because no links are overloaded within the network. Little's Law can tell you by observing the delay and throughput on any path that the average delay in the network is X.
 
Continue sampling delay in the network as the day wears on. At about 10 AM, ping delay starts to soar into the multiple second range. No packets are lost. The peak ping time is about 4000 milliseconds - 4 seconds in most of the networks. This is in downtown, no radio errors are reported, no link errors.
So it is all queueing delay. 
 
Now Little's law doesn't tell you much about that average delay, because clearly *some* subpiece of the network is fully saturated. But what is interesting here is what is happening and where. You can't tell what is saturated, and in fact the entire network is quite unstable, because the peak is constantly varying and you don't know where the throughput is. All the packets are now arriving 4 seconds or so later.
 
Why is the situation not worse than 4 seconds? Well, there are multiple things going on:
 
1) TCP may be doing a lot of retransmissions (not Poisson at all, not random either. The arrival process is entirely deterministic in each source, based on the retransmission timeout) or it may not be.
 
2) Users are pissed off, because they clicked on a web page, and got nothing back. They retry on their screen, or they try another site. Meanwhile, the underlying TCP connection remains there, pumping the network full of more packets on that old path, which is still backed up with packets that haven't been delivered that are sitting in queues. The real arrival process is not Poisson at all, it's a deterministic, repeated retransmission plus a new attempt to connect to a new site.
 
3) When the users get a web page back eventually, it is filled with names of other pieces needed to display that web page, which causes some number (often as many as 100) new pages to be fetched, ALL at the same time. Certainly not a stochastic process that will just obey the law of large numbers.
 
All of these things are the result of initial instability, causing queues to build up.
 
So what is the state of the system? is it stable? is it stochastic? Is it the sum of enough stochastic stable flows to average out to Poisson?
 
The answer is clearly NO. Control theory (not queuing theory) suggests that this system is completely uncontrolled and unstable.
 
So if the system is in this state, what does Little's Lemma tell us? What is the meaning of that highly variable 4 second delay on ping packets, in terms of average utilization of the network?
 
We don't even know what all the users really might need, if the system hadn't become unstable, because some users have given up, and others are trying even harder, and new users are arriving.
 
What we do know, because ATT (at my suggestion) reconfigured their system after blaming Apple Computer company for "bugs" in the original iPhone in public, is that simply *dropping* packets sitting in queues more than a couple milliseconds MADE THE USERS HAPPY. Apparently the required capacity was there all along! 
 
So I conclude that the 4 second delay was the largest delay users could barely tolerate before deciding the network was DOWN and going away. And that the backup was the accumulation of useless packets sitting in queues because none of the end systems were receiving congestion signals (which for the Internet stack begins with packet dropping).
 
I should say that most operators, and especially ATT in this case, do not measure end-to-end latency. Instead they use Little's Lemma to query routers for their current throughput in bits per second, and calculate latency as if Little's Lemma applied. This results in reports to management that literally say:
 
  The network is not dropping packets, utilization is near 100% on many of our switches and routers.
 
And management responds, Hooray! Because utilization of 100% of their hardware is their investors' metric of maximizing profits. The hardware they are operating is fully utilized. No waste! And users are happy because no packets have been dropped!
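
The arithmetic behind the disconnect is simple. Per hop, Little's law says delay is backlog divided by drain rate, so a counter that reports only throughput and utilization says nothing about the standing queue the link is hiding. A short Python sketch with made-up numbers (the 4000 ms row matches the kind of ping times described above):

# Illustrative numbers only: per-hop Little's law, delay = queued_bits / drain_rate.
# The same "fully utilized" link can hide very different standing queues.
LINK_BPS = 100e6   # a hypothetical 100 Mb/s egress link

for queued_bytes in (50e3, 5e6, 50e6):          # 50 KB, 5 MB, 50 MB of backlog
    delay_ms = queued_bytes * 8 / LINK_BPS * 1000
    print(f"backlog {queued_bytes / 1e6:6.2f} MB -> per-hop delay {delay_ms:7.1f} ms")
# -> 4 ms, 400 ms, and 4000 ms respectively, all at the same reported throughput.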
 
Hmm... what's wrong with this picture? I can see why Donovan, CTO, would accuse Apple of lousy software that was ruining iPhone user experience!  His network was operating without ANY problems.
So it must be Apple!
 
Well, no. The entire problem, as we saw when ATT just changed to shorten egress queues and drop packets when the egress queues overflowed, was that ATT's network was amplifying instability, not at the link level, but at the network level.
 
And queueing theory can help with that, but *intro queueing theory* cannot.
 
And a big part of that problem is the pervasive belief that, at the network boundary, *Poisson arrival* is a reasonable model for use in all cases.
 
 
 
 
 
 
 
 
 
 
On Friday, July 9, 2021 6:05am, "Luca Muscariello" <muscariello@ieee.org> said:







For those who might be interested in Little's law
there is a nice paper by John Little on the occasion 
of the 50th anniversary  of the result.
[ https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary ]( https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary )
[ https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf ]( https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf )
 
Nice read. 
Luca 
 
P.S. 
Who has not a copy of L. Kleinrock's books? I do have and am not ready to lend them!

On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <[ lk@cs.ucla.edu ]( mailto:lk@cs.ucla.edu )> wrote:
David,
I totally appreciate  your attention to when and when not analytical modeling works. Let me clarify a few things from your note.
First, Little's law (also known as Little’s lemma or, as I use in my book, Little’s result) does not assume Poisson arrivals -  it is good for any arrival process and any service process and is an equality between time averages.  It states that the time average of the number in a system (for a sample path w) is equal to the average arrival rate to the system multiplied by the time-averaged time in the system for that sample path.  This is often written as N_TimeAvg = λ·T_TimeAvg.  Moreover, if the system is also ergodic, then the time average equals the ensemble average and we often write it as N̄ = λ·T̄.  In any case, this requires neither Poisson arrivals nor exponential service times.
 
Queueing theorists often do study the case of Poisson arrivals.  True, it makes the analysis easier, yet there is a better reason it is often used, and that is because the sum of a large number of independent stationary renewal processes approaches a Poisson process.  So nature often gives us Poisson arrivals.  
Best,
Len


On Jul 8, 2021, at 12:38 PM, David P. Reed <[ dpreed@deepplum.com ]( mailto:dpreed@deepplum.com )> wrote:


I will tell you flat out that the arrival time distribution assumption made by Little's Lemma that allows "estimation of queue depth" is totally unreasonable on ANY Internet in practice.
 
The assumption is a Poisson Arrival Process. In reality, traffic arrivals in real internet applications are extremely far from Poisson, and, of course, using TCP windowing, become highly intercorrelated with crossing traffic that shares the same queue.
 
So, as I've tried to tell many, many net-heads (people who ignore applications layer behavior, like the people that think latency doesn't matter to end users, only throughput), end-to-end packet arrival times on a practical network are incredibly far from Poisson - and they are more like fractal probability distributions, very irregular at all scales of time.
 
So, the idea that iperf can estimate queue depth by Little's Lemma by just measuring saturation of capacity of a path is bogus. The less Poisson, the worse the estimate gets, by a huge factor.
 
 
Where does the Poisson assumption come from?  Well, like many theorems, it is the simplest tractable closed form solution - it creates a simplified view, by being a "single-parameter" distribution (the parameter is called lambda for a Poisson distribution).  And the analysis of a simple queue with poisson arrival distribution and a static, fixed service time is the first interesting Queueing Theory example in most textbooks. It is suggestive of an interesting phenomenon, but it does NOT characterize any real system.
 
It's the queueing theory equivalent of "First, we assume a spherical cow...". in doing an example in a freshman physics class.
 
Unfortunately, most networking engineers understand neither queuing theory nor application networking usage in interactive applications. Which makes them arrogant. They assume all distributions are poisson!
 
 
On Tuesday, July 6, 2021 9:46am, "Ben Greear" <[ greearb@candelatech.com ]( mailto:greearb@candelatech.com )> said:



> Hello,
> 
> I am interested to hear wish lists for network testing features. We make test
> equipment, supporting lots
> of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6,
> http, ... protocols,
> and open to creating/improving some of our automated tests.
> 
> I know Dave has some test scripts already, so I'm not necessarily looking to
> reimplement that,
> but more fishing for other/new ideas.
> 
> Thanks,
> Ben
> 
> On 7/2/21 4:28 PM, Bob McMahon wrote:
> > I think we need the language of math here. It seems like the network
> power metric, introduced by Kleinrock and Jaffe in the late 70s, is something
> useful.
> > Effective end/end queue depths per Little's law also seems useful. Both are
> available in iperf 2 from a test perspective. Repurposing test techniques to
> actual
> > traffic could be useful. Hence the question around what exact telemetry
> is useful to apps making socket write() and read() calls.
> >
> > Bob
> >
> > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <[ dave.taht@gmail.com ]( mailto:dave.taht@gmail.com )
> <[ mailto:dave.taht@gmail.com ]( mailto:dave.taht@gmail.com )>> wrote:
> >
> > In terms of trying to find "Quality" I have tried to encourage folk to
> > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > work on "total quality management".
> >
> > My own slice at this network, computer and lifestyle "issue" is aiming
> > for "imperceptible latency" in all things. [1]. There's a lot of
> > fallout from that in terms of not just addressing queuing delay, but
> > caching, prefetching, and learning more about what a user really needs
> > (as opposed to wants) to know via intelligent agents.
> >
> > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > lila, which is in part about what happens when an engineer hits an
> > insoluble problem.
> > [1] [ https://www.internetsociety.org/events/latency2013/ ]( https://www.internetsociety.org/events/latency2013/ )
> <[ https://www.internetsociety.org/events/latency2013/ ]( https://www.internetsociety.org/events/latency2013/ )>
> >
> >
> >
> > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <[ dpreed@deepplum.com ]( mailto:dpreed@deepplum.com )
> <[ mailto:dpreed@deepplum.com ]( mailto:dpreed@deepplum.com )>> wrote:
> > >
> > > Well, nice that the folks doing the conference  are willing to
> consider that quality of user experience has little to do with signalling rate at
> the
> > physical layer or throughput of FTP transfers.
> > >
> > >
> > >
> > > But honestly, the fact that they call the problem "network quality"
> suggests that they REALLY, REALLY don't understand the Internet isn't the hardware
> or
> > the routers or even the routing algorithms *to its users*.
> > >
> > >
> > >
> > > By ignoring the diversity of applications now and in the future,
> and the fact that we DON'T KNOW what will be coming up, this conference will
> likely fall
> > into the usual trap that net-heads fall into - optimizing for some
> imaginary reality that doesn't exist, and in fact will probably never be what
> users
> > actually will do given the chance.
> > >
> > >
> > >
> > > I saw this issue in 1976 in the group developing the original
> Internet protocols - a desire to put *into the network* special tricks to optimize
> ASR33
> > logins to remote computers from terminal concentrators (aka remote
> login), bulk file transfers between file systems on different time-sharing
> systems, and
> > "sessions" (virtual circuits) that required logins. And then trying to
> exploit underlying "multicast" by building it into the IP layer, because someone
> > thought that TV broadcast would be the dominant application.
> > >
> > >
> > >
> > > Frankly, to think of "quality" as something that can be "provided"
> by "the network" misses the entire point of "end-to-end argument in system
> design".
> > Quality is not a property defined or created by The Network. If you want
> to talk about Quality, you need to talk about users - all the users at all times,
> > now and into the future, and that's something you can't do if you don't
> bother to include current and future users talking about what they might expect
> to
> > experience that they don't experience.
> > >
> > >
> > >
> > > There was much fighting back in 1976 that basically involved
> "network experts" saying that the network was the place to "solve" such issues as
> quality,
> > so applications could avoid having to solve such issues.
> > >
> > >
> > >
> > > What some of us managed to do was to argue that you can't "solve"
> such issues. All you can do is provide a framework that enables different uses to
> > *cooperate* in some way.
> > >
> > >
> > >
> > > Which is why the Internet drops packets rather than queueing them,
> and why diffserv cannot work.
> > >
> > > (I know the latter is controversial, but at the moment, ALL of
> diffserv attempts to talk about end-to-end application specific metrics, but
> never, ever
> > explains what the diffserv control points actually do w.r.t. what the IP
> layer can actually control. So it is meaningless - another violation of the
> > so-called end-to-end principle).
> > >
> > >
> > >
> > > Networks are about getting packets from here to there, multiplexing
> the underlying resources. That's it. Quality is a whole different thing. Quality
> can
> > be improved by end-to-end approaches, if the underlying network provides
> some kind of thing that actually creates a way for end-to-end applications to
> > affect queueing and routing decisions, and more importantly getting
> "telemetry" from the network regarding what is actually going on with the other
> > end-to-end users sharing the infrastructure.
> > >
> > >
> > >
> > > This conference won't talk about it this way. So don't waste your
> time.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
> <[ dave.taht@gmail.com ]( mailto:dave.taht@gmail.com ) <[ mailto:dave.taht@gmail.com ]( mailto:dave.taht@gmail.com )>> said:
> > >
> > > > The program committee members are *amazing*. Perhaps, finally,
> we can
> > > > move the bar for the internet's quality metrics past endless,
> blind
> > > > repetitions of speedtest.
> > > >
> > > > For complete details, please see:
> > > > [ https://www.iab.org/activities/workshops/network-quality/ ]( https://www.iab.org/activities/workshops/network-quality/ )
> <[ https://www.iab.org/activities/workshops/network-quality/ ]( https://www.iab.org/activities/workshops/network-quality/ )>
> > > >
> > > > Submissions Due: Monday 2nd August 2021, midnight AOE
> (Anywhere On Earth)
> > > > Invitations Issued by: Monday 16th August 2021
> > > >
> > > > Workshop Date: This will be a virtual workshop, spread over
> three days:
> > > >
> > > > 1400-1800 UTC Tue 14th September 2021
> > > > 1400-1800 UTC Wed 15th September 2021
> > > > 1400-1800 UTC Thu 16th September 2021
> > > >
> > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> > > >
> > > > The Program Committee members:
> > > >
> > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
> Sam
> > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
> Geoff
> > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
> Kuehlewind,
> > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
> Nichols,
> > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> > > >
> > > > Send Submissions to: [ network-quality-workshop-pc@iab.org ]( mailto:network-quality-workshop-pc@iab.org )
> <[ mailto:network-quality-workshop-pc@iab.org ]( mailto:network-quality-workshop-pc@iab.org )>.
> > > >
> > > > Position papers from academia, industry, the open source
> community and
> > > > others that focus on measurements, experiences, observations
> and
> > > > advice for the future are welcome. Papers that reflect
> experience
> > > > based on deployed services are especially welcome. The
> organizers
> > > > understand that specific actions taken by operators are
> unlikely to be
> > > > discussed in detail, so papers discussing general categories
> of
> > > > actions and issues without naming specific technologies,
> products, or
> > > > other players in the ecosystem are expected. Papers should not
> focus
> > > > on specific protocol solutions.
> > > >
> > > > The workshop will be by invitation only. Those wishing to
> attend
> > > > should submit a position paper to the address above; it may
> take the
> > > > form of an Internet-Draft.
> > > >
> > > > All inputs submitted and considered relevant will be published
> on the
> > > > workshop website. The organisers will decide whom to invite
> based on
> > > > the submissions received. Sessions will be organized according
> to
> > > > content, and not every accepted submission or invited attendee
> will
> > > > have an opportunity to present as the intent is to foster
> discussion
> > > > and not simply to have a sequence of presentations.
> > > >
> > > > Position papers from those not planning to attend the virtual
> sessions
> > > > themselves are also encouraged. A workshop report will be
> published
> > > > afterwards.
> > > >
> > > > Overview:
> > > >
> > > > "We believe that one of the major factors behind this lack of
> progress
> > > > is the popular perception that throughput is the often sole
> measure of
> > > > the quality of Internet connectivity. With such narrow focus,
> people
> > > > don’t consider questions such as:
> > > >
> > > > What is the latency under typical working conditions?
> > > > How reliable is the connectivity across longer time periods?
> > > > Does the network allow the use of a broad range of protocols?
> > > > What services can be run by clients of the network?
> > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
> are there firewalls?
> > > > What security mechanisms are available for local services,
> such as DNS?
> > > > To what degree are the privacy, confidentiality, integrity
> and
> > > > authenticity of user communications guarded?
> > > >
> > > > Improving these aspects of network quality will likely depend
> on
> > > > measurement and exposing metrics to all involved parties,
> including to
> > > > end users in a meaningful way. Such measurements and exposure
> of the
> > > > right metrics will allow service providers and network
> operators to
> > > > focus on the aspects that impacts the users’ experience
> most and at
> > > > the same time empowers users to choose the Internet service
> that will
> > > > give them the best experience."
> > > >
> > > >
> > > > --
> > > > Latest Podcast:
> > > >
> [ https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ ]( https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ )
> <[ https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ ]( https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ )>
> > > >
> > > > Dave Täht CTO, TekLibre, LLC
> > > > _______________________________________________
> > > > Cerowrt-devel mailing list
> > > > [ Cerowrt-devel@lists.bufferbloat.net ]( mailto:Cerowrt-devel@lists.bufferbloat.net )
> <[ mailto:Cerowrt-devel@lists.bufferbloat.net ]( mailto:Cerowrt-devel@lists.bufferbloat.net )>
> > > > [ https://lists.bufferbloat.net/listinfo/cerowrt-devel ]( https://lists.bufferbloat.net/listinfo/cerowrt-devel )
> <[ https://lists.bufferbloat.net/listinfo/cerowrt-devel ]( https://lists.bufferbloat.net/listinfo/cerowrt-devel )>
> > > >
> >
> >
> >
> > --
> > Latest Podcast:
> > [ https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ ]( https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ )
> <[ https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ ]( https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ )>
> >
> > Dave Täht CTO, TekLibre, LLC
> > _______________________________________________
> > Make-wifi-fast mailing list
> > [ Make-wifi-fast@lists.bufferbloat.net ]( mailto:Make-wifi-fast@lists.bufferbloat.net )
> <[ mailto:Make-wifi-fast@lists.bufferbloat.net ]( mailto:Make-wifi-fast@lists.bufferbloat.net )>
> > [ https://lists.bufferbloat.net/listinfo/make-wifi-fast ]( https://lists.bufferbloat.net/listinfo/make-wifi-fast )
> <[ https://lists.bufferbloat.net/listinfo/make-wifi-fast ]( https://lists.bufferbloat.net/listinfo/make-wifi-fast )>
> >
> >
> >
> > _______________________________________________
> > Starlink mailing list
> > [ Starlink@lists.bufferbloat.net ]( mailto:Starlink@lists.bufferbloat.net )
> > [ https://lists.bufferbloat.net/listinfo/starlink ]( https://lists.bufferbloat.net/listinfo/starlink )
> >
> 
> 
> --
> Ben Greear <[ greearb@candelatech.com ]( mailto:greearb@candelatech.com )>
> Candela Technologies Inc [ http://www.candelatech.com ]( http://www.candelatech.com )
>_______________________________________________
Starlink mailing list
[ Starlink@lists.bufferbloat.net ]( mailto:Starlink@lists.bufferbloat.net )
[ https://lists.bufferbloat.net/listinfo/starlink ]( https://lists.bufferbloat.net/listinfo/starlink )_______________________________________________
 Make-wifi-fast mailing list
[ Make-wifi-fast@lists.bufferbloat.net ]( mailto:Make-wifi-fast@lists.bufferbloat.net )
[ https://lists.bufferbloat.net/listinfo/make-wifi-fast ]( https://lists.bufferbloat.net/listinfo/make-wifi-fast )

[-- Attachment #2: Type: text/html, Size: 40841 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: Little's Law mea culpa, but not invalidating my main point
  2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
@ 2021-07-09 20:24                 ` Bob McMahon
  2021-07-09 22:57                 ` [Bloat] " Holland, Jake
                                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-09 20:24 UTC (permalink / raw)
  To: David P. Reed
  Cc: Luca Muscariello, Leonard Kleinrock, starlink, Make-Wifi-fast,
	Cake List, codel, cerowrt-devel, bloat, Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 26828 bytes --]

A bit off topic from the control and queueing theory discussion: a four
second latency is going to fail our regression automation rigs. Way too
many WiFi users, particularly for games, require latencies below a few
hundred milliseconds and sometimes even much lower. A TCP connect() getting
stuck behind a 4 second bufferbloat queue is also a big fail for these test
rigs. Completely agree that average latency isn't what "users" complain
about - it typically requires a tail analysis.
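
To make that tail analysis concrete, here is a small Python sketch with synthetic numbers (not data from any real rig): the mean can sit around tens of milliseconds while the high percentiles, which are what a test rig or a gamer actually trips over, are in the seconds.

# Synthetic numbers only: most requests are fast, ~1% hit a bloated queue.
# The mean looks tolerable; the tail is what the user (or test rig) feels.
import random
import statistics

random.seed(3)
samples_ms = [random.uniform(5, 30) for _ in range(9900)]        # normal case
samples_ms += [random.uniform(1000, 4000) for _ in range(100)]   # bloated-queue case

def percentile(data, p):
    s = sorted(data)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

print(f"mean  {statistics.mean(samples_ms):7.1f} ms")   # ~40 ms
print(f"p50   {percentile(samples_ms, 50):7.1f} ms")    # ~17 ms
print(f"p99   {percentile(samples_ms, 99):7.1f} ms")    # about a second
print(f"p99.9 {percentile(samples_ms, 99.9):7.1f} ms")  # well into the seconds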

Bob


On Fri, Jul 9, 2021 at 12:31 PM David P. Reed <dpreed@deepplum.com> wrote:

> Len - I admit I made a mistake in challenging Little's Law as being based
> on Poisson processes. It is more general. But it tells you an "average" in
> its base form, and latency averages are not useful for end user
> applications.
>
>
>
> However, Little's Law does assume something that is not actually valid
> about the kind of distributions seen in the network, and in fact, it is NOT
> true that networks converge on Poisson arrival times.
>
>
>
> The key issue is well-described in the standard analysis of the M/M/1 queue
> (e.g. https://en.wikipedia.org/wiki/M/M/1_queue) , which is done only for
> Poisson processes, and is also limited to "stable" systems. But networks
> are never stable when fully loaded. They get unstable and those
> instabilities persist for a long time in the network. Instability is, at its
> core, the underlying *requirement* of the Internet's usage.
>
>
>
> So specifically: real networks, even large ones, and certainly the
> Internet today, are not asymptotic limits of sums of stationary stochastic
> arrival processes. Each external terminal of any real network has a real
> user there, running a real application, and the network is a complex graph.
> This makes it completely unlike a single queue. Even the links within a
> network carry a relatively small number of application flows. There's no
> ability to apply the Law of Large Numbers to the distributions, because any
> particular path contains only a small number of serialized flows with
> highly variable rates.
>
>
>
> Here's an example of what really happens in a real network (I've observed
> this in 5 different cities on ATT's cellular network, back when it was
> running Alcatel Lucent HSPA+ gear in those cities).
>
> But you can see this on any network where transient overload occurs,
> creating instability.
>
>
>
>
>
> At 7 AM, the data transmission of the network is roughly stable. That's
> because no links are overloaded within the network. Little's Law can tell
> you by observing the delay and throughput on any path that the average
> delay in the network is X.
>
>
>
> Continue sampling delay in the network as the day wears on. At about 10
> AM, ping delay starts to soar into the multiple second range. No packets
> are lost. The peak ping time is about 4000 milliseconds - 4 seconds in most
> of the networks. This is in downtown, no radio errors are reported, no link
> errors.
>
> So it is all queueing delay.
>
>
>
> Now Little's law doesn't tell you much about that average delay, because
> clearly *some* subpiece of the network is fully saturated. But what is
> interesting here is what is happening and where. You can't tell what is
> saturated, and in fact the entire network is quite unstable, because the
> peak is constantly varying and you don't know where the throughput is. All
> the packets are now arriving 4 seconds or so later.
>
>
>
> Why is the situation not worse than 4 seconds? Well, there are multiple
> things going on:
>
>
>
> 1) TCP may be doing a lot of retransmissions (not Poisson at all, not
> random either. The arrival process is entirely deterministic in each
> source, based on the retransmission timeout) or it may not be.
>
>
>
> 2) Users are pissed off, because they clicked on a web page, and got
> nothing back. They retry on their screen, or they try another site.
> Meanwhile, the underlying TCP connection remains there, pumping the network
> full of more packets on that old path, which is still backed up with
> packets that haven't been delivered that are sitting in queues. The real
> arrival process is not Poisson at all, it's a deterministic, repeated
> retransmission plus a new attempt to connect to a new site.
>
>
>
> 3) When the users get a web page back eventually, it is filled with names
> of other pieces needed to display that web page, which causes some number
> (often as many as 100) new pages to be fetched, ALL at the same time.
> Certainly not a stochastic process that will just obey the law of large
> numbers.
>
>
>
> All of these things are the result of initial instability, causing queues
> to build up.
>
>
>
> So what is the state of the system? is it stable? is it stochastic? Is it
> the sum of enough stochastic stable flows to average out to Poisson?
>
>
>
> The answer is clearly NO. Control theory (not queuing theory) suggests
> that this system is completely uncontrolled and unstable.
>
>
>
> So if the system is in this state, what does Little's Lemma tell us? What
> is the meaning of that highly variable 4 second delay on ping packets, in
> terms of average utilization of the network?
>
>
>
> We don't even know what all the users really might need, if the system
> hadn't become unstable, because some users have given up, and others are
> trying even harder, and new users are arriving.
>
>
>
> What we do know, because ATT (at my suggestion) reconfigured their system
> after blaming Apple Computer company for "bugs" in the original iPhone in
> public, is that simply *dropping* packets sitting in queues more than a
> couple milliseconds MADE THE USERS HAPPY. Apparently the required capacity
> was there all along!
>
>
>
> So I conclude that the 4 second delay was the largest delay users could
> barely tolerate before deciding the network was DOWN and going away. And
> that the backup was the accumulation of useless packets sitting in queues
> because none of the end systems were receiving congestion signals (which
> for the Internet stack begins with packet dropping).
>
>
>
> I should say that most operators, and especially ATT in this case, do not
> measure end-to-end latency. Instead they use Little's Lemma to query
> routers for their current throughput in bits per second, and calculate
> latency as if Little's Lemma applied. This results in reports to management
> that literally say:
>
>
>
>   The network is not dropping packets, utilization is near 100% on many of
> our switches and routers.
>
>
>
> And management responds, Hooray! Because utilization of 100% of their
> hardware is their investors' metric of maximizing profits. The hardware
> they are operating is fully utilized. No waste! And users are happy because
> no packets have been dropped!
>
>
>
> Hmm... what's wrong with this picture? I can see why Donovan, CTO, would
> accuse Apple of lousy software that was ruining iPhone user experience!
> His network was operating without ANY problems.
>
> So it must be Apple!
>
>
>
> Well, no. The entire problem, as we saw when ATT just changed to shorten
> egress queues and drop packets when the egress queues overflowed, was that
> ATT's network was amplifying instability, not at the link level, but at the
> network level.
>
>
>
> And queueing theory can help with that, but *intro queueing theory* cannot.
>
>
>
> And a big part of that problem is the pervasive belief that, at the
> network boundary, *Poisson arrival* is a reasonable model for use in all
> cases.
>
>
> On Friday, July 9, 2021 6:05am, "Luca Muscariello" <muscariello@ieee.org>
> said:
>
> For those who might be interested in Little's law
> there is a nice paper by John Little on the occasion
> of the 50th anniversary  of the result.
>
> https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary
>
> https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf
>
> Nice read.
> Luca
>
> P.S.
> Who doesn't have a copy of L. Kleinrock's books? I do, and I am not ready to
> lend them!
> On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <lk@cs.ucla.edu> wrote:
>
>> David,
>> I totally appreciate  your attention to when and when not analytical
>> modeling works. Let me clarify a few things from your note.
>> First, Little's law (also known as Little’s lemma or, as I use in my
>> book, Little’s result) does not assume Poisson arrivals -  it is good for
>> *any* arrival process and any service process and is an equality between
>> time averages.  It states that the time average of the number in a system
>> (for a sample path *w*) is equal to the average arrival rate to the
>> system multiplied by the time-averaged time in the system for that sample
>> path.  This is often written as N_TimeAvg = λ·T_TimeAvg.  Moreover, if
>> the system is also ergodic, then the time average equals the ensemble
>> average and we often write it as N̄ = λ·T̄.  In any case, this
>> requires neither Poisson arrivals nor exponential service times.
>>
>> Queueing theorists often do study the case of Poisson arrivals.  True, it
>> makes the analysis easier, yet there is a better reason it is often used,
>> and that is because the sum of a large number of independent stationary
>> renewal processes approaches a Poisson process.  So nature often gives us
>> Poisson arrivals.
>> Best,
>> Len
>>
>> On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com> wrote:
>>
>> I will tell you flat out that the arrival time distribution assumption
>> made by Little's Lemma that allows "estimation of queue depth" is totally
>> unreasonable on ANY Internet in practice.
>>
>>
>> The assumption is a Poisson Arrival Process. In reality, traffic arrivals
>> in real internet applications are extremely far from Poisson, and, of
>> course, using TCP windowing, become highly intercorrelated with crossing
>> traffic that shares the same queue.
>>
>>
>> So, as I've tried to tell many, many net-heads (people who ignore
>> applications layer behavior, like the people that think latency doesn't
>> matter to end users, only throughput), end-to-end packet arrival times on a
>> practical network are incredibly far from Poisson - and they are more like
>> fractal probability distributions, very irregular at all scales of time.
>>
>>
>> So, the idea that iperf can estimate queue depth by Little's Lemma by
>> just measuring saturation of capacity of a path is bogus. The less Poisson,
>> the worse the estimate gets, by a huge factor.
>>
>>
>>
>>
>> Where does the Poisson assumption come from?  Well, like many theorems,
>> it is the simplest tractable closed form solution - it creates a simplified
>> view, by being a "single-parameter" distribution (the parameter is called
>> lambda for a Poisson distribution).  And the analysis of a simple queue
>> with Poisson arrival distribution and a static, fixed service time is the
>> first interesting Queueing Theory example in most textbooks. It is
>> suggestive of an interesting phenomenon, but it does NOT characterize any
>> real system.
>>
>>
>> It's the queueing theory equivalent of "First, we assume a spherical
>> cow..." in doing an example in a freshman physics class.
>>
>>
>> Unfortunately, most networking engineers understand neither queuing
>> theory nor application networking usage in interactive applications. Which
>> makes them arrogant. They assume all distributions are Poisson!
>>
>>
>>
>>
>> On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com>
>> said:
>>
>> > Hello,
>> >
>> > I am interested to hear wish lists for network testing features. We
>> make test
>> > equipment, supporting lots
>> > of wifi stations and a distributed architecture, with built-in udp,
>> tcp, ipv6,
>> > http, ... protocols,
>> > and open to creating/improving some of our automated tests.
>> >
>> > I know Dave has some test scripts already, so I'm not necessarily
>> looking to
>> > reimplement that,
>> > but more fishing for other/new ideas.
>> >
>> > Thanks,
>> > Ben
>> >
>> > On 7/2/21 4:28 PM, Bob McMahon wrote:
>> > > I think we need the language of math here. It seems like the network
>> > power metric, introduced by Kleinrock and Jaffe in the late 70s, is
>> something
>> > useful.
>> > > Effective end/end queue depths per Little's law also seems useful.
>> Both are
>> > available in iperf 2 from a test perspective. Repurposing test
>> techniques to
>> > actual
>> > > traffic could be useful. Hence the question around what exact
>> telemetry
>> > is useful to apps making socket write() and read() calls.
>> > >
>> > > Bob
>> > >
>> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com
>> > <mailto:dave.taht@gmail.com <dave.taht@gmail.com>>> wrote:
>> > >
>> > > In terms of trying to find "Quality" I have tried to encourage folk to
>> > > both read "zen and the art of motorcycle maintenance"[0], and Deming's
>> > > work on "total quality management".
>> > >
>> > > My own slice at this network, computer and lifestyle "issue" is aiming
>> > > for "imperceptible latency" in all things. [1]. There's a lot of
>> > > fallout from that in terms of not just addressing queuing delay, but
>> > > caching, prefetching, and learning more about what a user really needs
>> > > (as opposed to wants) to know via intelligent agents.
>> > >
>> > > [0] If you want to get depressed, read Pirsig's successor to "zen...",
>> > > lila, which is in part about what happens when an engineer hits an
>> > > insoluble problem.
>> > > [1] https://www.internetsociety.org/events/latency2013/
>> > <https://www.internetsociety.org/events/latency2013/>
>> > >
>> > >
>> > >
>> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
>> > <mailto:dpreed@deepplum.com <dpreed@deepplum.com>>> wrote:
>> > > >
>> > > > Well, nice that the folks doing the conference  are willing to
>> > consider that quality of user experience has little to do with
>> signalling rate at
>> > the
>> > > physical layer or throughput of FTP transfers.
>> > > >
>> > > >
>> > > >
>> > > > But honestly, the fact that they call the problem "network quality"
>> > suggests that they REALLY, REALLY don't understand the Internet isn't
>> the hardware
>> > or
>> > > the routers or even the routing algorithms *to its users*.
>> > > >
>> > > >
>> > > >
>> > > > By ignoring the diversity of applications now and in the future,
>> > and the fact that we DON'T KNOW what will be coming up, this conference
>> will
>> > likely fall
>> > > into the usual trap that net-heads fall into - optimizing for some
>> > imaginary reality that doesn't exist, and in fact will probably never
>> be what
>> > users
>> > > actually will do given the chance.
>> > > >
>> > > >
>> > > >
>> > > > I saw this issue in 1976 in the group developing the original
>> > Internet protocols - a desire to put *into the network* special tricks
>> to optimize
>> > ASR33
>> > > logins to remote computers from terminal concentrators (aka remote
>> > login), bulk file transfers between file systems on different
>> time-sharing
>> > systems, and
>> > > "sessions" (virtual circuits) that required logins. And then trying to
>> > exploit underlying "multicast" by building it into the IP layer,
>> because someone
>> > > thought that TV broadcast would be the dominant application.
>> > > >
>> > > >
>> > > >
>> > > > Frankly, to think of "quality" as something that can be "provided"
>> > by "the network" misses the entire point of "end-to-end argument in
>> system
>> > design".
>> > > Quality is not a property defined or created by The Network. If you
>> want
>> > to talk about Quality, you need to talk about users - all the users at
>> all times,
>> > > now and into the future, and that's something you can't do if you
>> don't
>> > bother to include current and future users talking about what they
>> might expect
>> > to
>> > > experience that they don't experience.
>> > > >
>> > > >
>> > > >
>> > > > There was much fighting back in 1976 that basically involved
>> > "network experts" saying that the network was the place to "solve" such
>> issues as
>> > quality,
>> > > so applications could avoid having to solve such issues.
>> > > >
>> > > >
>> > > >
>> > > > What some of us managed to do was to argue that you can't "solve"
>> > such issues. All you can do is provide a framework that enables
>> different uses to
>> > > *cooperate* in some way.
>> > > >
>> > > >
>> > > >
>> > > > Which is why the Internet drops packets rather than queueing them,
>> > and why diffserv cannot work.
>> > > >
>> > > > (I know the latter is controversial, but at the moment, ALL of
>> > diffserv attempts to talk about end-to-end application specific
>> metrics, but
>> > never, ever
>> > > explains what the diffserv control points actually do w.r.t. what the
>> IP
>> > layer can actually control. So it is meaningless - another violation of
>> the
>> > > so-called end-to-end principle).
>> > > >
>> > > >
>> > > >
>> > > > Networks are about getting packets from here to there, multiplexing
>> > the underlying resources. That's it. Quality is a whole different
>> thing. Quality
>> > can
>> > > be improved by end-to-end approaches, if the underlying network
>> provides
>> > some kind of thing that actually creates a way for end-to-end
>> applications to
>> > > affect queueing and routing decisions, and more importantly getting
>> > "telemetry" from the network regarding what is actually going on with
>> the other
>> > > end-to-end users sharing the infrastructure.
>> > > >
>> > > >
>> > > >
>> > > > This conference won't talk about it this way. So don't waste your
>> > time.
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
>> > <dave.taht@gmail.com <mailto:dave.taht@gmail.com <dave.taht@gmail.com>>>
>> said:
>> > > >
>> > > > > The program committee members are *amazing*. Perhaps, finally,
>> > we can
>> > > > > move the bar for the internet's quality metrics past endless,
>> > blind
>> > > > > repetitions of speedtest.
>> > > > >
>> > > > > For complete details, please see:
>> > > > > https://www.iab.org/activities/workshops/network-quality/
>> > <https://www.iab.org/activities/workshops/network-quality/>
>> > > > >
>> > > > > Submissions Due: Monday 2nd August 2021, midnight AOE
>> > (Anywhere On Earth)
>> > > > > Invitations Issued by: Monday 16th August 2021
>> > > > >
>> > > > > Workshop Date: This will be a virtual workshop, spread over
>> > three days:
>> > > > >
>> > > > > 1400-1800 UTC Tue 14th September 2021
>> > > > > 1400-1800 UTC Wed 15th September 2021
>> > > > > 1400-1800 UTC Thu 16th September 2021
>> > > > >
>> > > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
>> > > > >
>> > > > > The Program Committee members:
>> > > > >
>> > > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
>> > Sam
>> > > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
>> > Geoff
>> > > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
>> > Kuehlewind,
>> > > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
>> > Nichols,
>> > > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
>> > > > >
>> > > > > Send Submissions to: network-quality-workshop-pc@iab.org
>> > <mailto:network-quality-workshop-pc@iab.org
>> <network-quality-workshop-pc@iab.org>>.
>> > > > >
>> > > > > Position papers from academia, industry, the open source
>> > community and
>> > > > > others that focus on measurements, experiences, observations
>> > and
>> > > > > advice for the future are welcome. Papers that reflect
>> > experience
>> > > > > based on deployed services are especially welcome. The
>> > organizers
>> > > > > understand that specific actions taken by operators are
>> > unlikely to be
>> > > > > discussed in detail, so papers discussing general categories
>> > of
>> > > > > actions and issues without naming specific technologies,
>> > products, or
>> > > > > other players in the ecosystem are expected. Papers should not
>> > focus
>> > > > > on specific protocol solutions.
>> > > > >
>> > > > > The workshop will be by invitation only. Those wishing to
>> > attend
>> > > > > should submit a position paper to the address above; it may
>> > take the
>> > > > > form of an Internet-Draft.
>> > > > >
>> > > > > All inputs submitted and considered relevant will be published
>> > on the
>> > > > > workshop website. The organisers will decide whom to invite
>> > based on
>> > > > > the submissions received. Sessions will be organized according
>> > to
>> > > > > content, and not every accepted submission or invited attendee
>> > will
>> > > > > have an opportunity to present as the intent is to foster
>> > discussion
>> > > > > and not simply to have a sequence of presentations.
>> > > > >
>> > > > > Position papers from those not planning to attend the virtual
>> > sessions
>> > > > > themselves are also encouraged. A workshop report will be
>> > published
>> > > > > afterwards.
>> > > > >
>> > > > > Overview:
>> > > > >
>> > > > > "We believe that one of the major factors behind this lack of
>> > progress
>> > > > > is the popular perception that throughput is the often sole
>> > measure of
>> > > > > the quality of Internet connectivity. With such narrow focus,
>> > people
>> > > > > don’t consider questions such as:
>> > > > >
>> > > > > What is the latency under typical working conditions?
>> > > > > How reliable is the connectivity across longer time periods?
>> > > > > Does the network allow the use of a broad range of protocols?
>> > > > > What services can be run by clients of the network?
>> > > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
>> > are there firewalls?
>> > > > > What security mechanisms are available for local services,
>> > such as DNS?
>> > > > > To what degree are the privacy, confidentiality, integrity
>> > and
>> > > > > authenticity of user communications guarded?
>> > > > >
>> > > > > Improving these aspects of network quality will likely depend
>> > on
>> > > > > measurement and exposing metrics to all involved parties,
>> > including to
>> > > > > end users in a meaningful way. Such measurements and exposure
>> > of the
>> > > > > right metrics will allow service providers and network
>> > operators to
>> > > > > focus on the aspects that impacts the users’ experience
>> > most and at
>> > > > > the same time empowers users to choose the Internet service
>> > that will
>> > > > > give them the best experience."
>> > > > >
>> > > > >
>> > > > > --
>> > > > > Latest Podcast:
>> > > > >
>> >
>> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> > <
>> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> >
>> > > > >
>> > > > > Dave Täht CTO, TekLibre, LLC
>> > > > > _______________________________________________
>> > > > > Cerowrt-devel mailing list
>> > > > > Cerowrt-devel@lists.bufferbloat.net
>> > <mailto:Cerowrt-devel@lists.bufferbloat.net
>> <Cerowrt-devel@lists.bufferbloat.net>>
>> > > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
>> > <https://lists.bufferbloat.net/listinfo/cerowrt-devel>
>> > > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Latest Podcast:
>> > >
>> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> > <
>> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> >
>> > >
>> > > Dave Täht CTO, TekLibre, LLC
>> > > _______________________________________________
>> > > Make-wifi-fast mailing list
>> > > Make-wifi-fast@lists.bufferbloat.net
>> > <mailto:Make-wifi-fast@lists.bufferbloat.net
>> <Make-wifi-fast@lists.bufferbloat.net>>
>> > > https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> > <https://lists.bufferbloat.net/listinfo/make-wifi-fast>
>> > >
>> > >
>> > >
>> > > _______________________________________________
>> > > Starlink mailing list
>> > > Starlink@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/starlink
>> > >
>> >
>> >
>> > --
>> > Ben Greear <greearb@candelatech.com>
>> > Candela Technologies Inc http://www.candelatech.com
>> >
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 40092 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
  2021-07-09 20:24                 ` Bob McMahon
@ 2021-07-09 22:57                 ` Holland, Jake
  2021-07-09 23:37                   ` Toke Høiland-Jørgensen
  2021-07-09 23:01                 ` [Cerowrt-devel] " Leonard Kleinrock
                                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 108+ messages in thread
From: Holland, Jake @ 2021-07-09 22:57 UTC (permalink / raw)
  To: David P. Reed, Luca Muscariello
  Cc: Cake List, Make-Wifi-fast, Leonard Kleinrock, Bob McMahon,
	starlink, codel, cerowrt-devel, bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 29971 bytes --]

Hi David,

That’s an interesting point, and I think you’re right that packet arrival is poorly modeled as a Poisson process, because in practice packet transmissions are very rarely unrelated to other packet transmissions.

But now you’ve got me wondering what the right approach is.  Do you have any advice for how to improve this kind of modeling?

I’m thinking maybe a useful adjustment is to use Poisson start times on packet bursts, with a distribution on some burst characteristics? (Maybe like duration, rate, choppiness?)  Part of the point being that burst parameters then have a chance to become significant, as well as the load from aggregate user behavior.
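A minimal sketch of the kind of two-level generator I mean (all names and numbers below are made up, just to make the knobs concrete):

    import random

    def burst_arrivals(burst_rate, mean_burst_size, intra_gap, horizon, seed=1):
        """Toy two-level arrival model: burst *start* times form a Poisson
        process; each burst carries a random number of closely spaced packets.
        Returns the list of packet arrival times in seconds."""
        rng = random.Random(seed)
        times = []
        t = 0.0
        while True:
            t += rng.expovariate(burst_rate)      # exponential gaps => Poisson burst starts
            if t >= horizon:
                break
            size = 1 + int(rng.expovariate(1.0 / mean_burst_size))
            times.extend(t + i * intra_gap for i in range(size))
        return times

    # e.g. ~5 bursts/s, ~20 packets per burst, 1 ms spacing, 60 s of traffic
    pkts = burst_arrivals(burst_rate=5.0, mean_burst_size=20, intra_gap=0.001, horizon=60.0)
    print(len(pkts), "packets generated")

The burst-size and spacing knobs would then be the natural place to hang the feedback-loop and user-synchronization effects.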

And although I think user behavior is probably often ok to model as independent (outside of a background average that changes by time of day), in some contexts maybe it needs a 2nd overlay for bursts in user activity to address user-synchronizing events...  But for some problems I expect this kind of approach might still miss important feedback loop effects, and maybe for some problems it needs a more generalized suite of patterns besides a “burst”.  But maybe it would still be a step in the right direction for examining network loading problems in the abstract?

Or maybe it’s better to ask a different question:
Are there any good exemplars to follow here?  Any network traffic analysis (or related) work you’d recommend as having useful results that apply more broadly than a specific set of simulation/testing parameters, and that you wish more people would follow their example?

Also related: any particular papers come to mind that you wish someone would re-do with a better model?


Anyway, coming back to where that can of worms opened, I gotta say I like the “language of math” idea as a goal to aim for, and it would be surprising to me if no such useful information could be extracted from iperf runs.

A Little’s Law-based average queue estimate sounds possibly useful to me (especially compared across different runs or against external stats on background cross-traffic activity), and some kind of tail analysis on latency samples also sounds relevant to user experience. Maybe there’s some other things that would be better to include?
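For instance, something as small as this, where the sample values are invented and only the two summary numbers are the point:

    import statistics

    # Per-sample RTTs from a test run (seconds) and the measured packet rate.
    rtt_samples = [0.012, 0.015, 0.011, 0.160, 0.018, 0.210, 0.013, 0.016]
    throughput_pps = 8_000.0

    mean_delay = statistics.mean(rtt_samples)
    avg_in_system = throughput_pps * mean_delay   # Little's Law: N = lambda * T

    p99 = sorted(rtt_samples)[min(len(rtt_samples) - 1, int(0.99 * len(rtt_samples)))]

    print(f"Little's-Law avg packets in system: {avg_in_system:.0f}")
    print(f"mean delay {mean_delay * 1000:.1f} ms, ~p99 delay {p99 * 1000:.1f} ms")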

Best regards,
Jake


From: "David P. Reed" <dpreed@deepplum.com>
Date: Fri,2021-07-09 at 12:31 PM
To: Luca Muscariello <muscariello@ieee.org>
Cc: Cake List <cake@lists.bufferbloat.net>, Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>, Leonard Kleinrock <lk@cs.ucla.edu>, Bob McMahon <bob.mcmahon@broadcom.com>, "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>, "codel@lists.bufferbloat.net" <codel@lists.bufferbloat.net>, cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>, Ben Greear <greearb@candelatech.com>
Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main point


Len - I admit I made a mistake in challenging Little's Law as being based on Poisson processes. It is more general. But it tells you an "average" in its base form, and latency averages are not useful for end user applications.



However, Little's Law does assume something that is not actually valid about the kind of distributions seen in the network, and in fact, it is NOT true that networks converge on Poisson arrival times.



The key issue is well-described in the standard analysis of the M/M/1 queue (e.g. https://en.wikipedia.org/wiki/M/M/1_queue), which is done only for Poisson processes, and is also limited to "stable" systems. But networks are never stable when fully loaded. They get unstable and those instabilities persist for a long time in the network. Instability is, at its core, the underlying *requirement* of the Internet's usage.

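For reference, the textbook M/M/1 mean time-in-system is W = 1/(mu - lambda); a few lines with illustrative numbers show how the delay blows up as offered load approaches capacity, even in that idealized, stable, Poisson world:

    # Standard M/M/1 mean time-in-system, W = 1 / (mu - lambda),
    # for an illustrative 1000 packet/s link as load approaches capacity.
    mu = 1000.0                                   # service rate, packets/s
    for load in (0.5, 0.9, 0.99, 0.999):
        lam = load * mu                           # arrival rate, packets/s
        W = 1.0 / (mu - lam)                      # mean delay, seconds
        print(f"utilization {load:5.3f}: mean delay {W * 1000:8.1f} ms")
    # 2 ms at 50% load, 1000 ms at 99.9% load, in the *stable* Poisson case.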


So specifically: real networks, even large ones, and certainly the Internet today, are not asymptotic limits of sums of stationary stochastic arrival processes. Each external terminal of any real network has a real user there, running a real application, and the network is a complex graph. This makes it completely unlike a single queue. Even the links within a network carry a relatively small number of application flows. There's no ability to apply the Law of Large Numbers to the distributions, because any particular path contains only a small number of serialized flows with highly variable rates.



Here's an example of what really happens in a real network (I've observed this in 5 different cities on ATT's cellular network, back when it was running Alcatel Lucent HSPA+ gear in those cities).

But you can see this on any network where transient overload occurs, creating instability.





At 7 AM, the data transmission of the network is roughly stable. That's because no links are overloaded within the network. Little's Law can tell you by observing the delay and throughput on any path that the average delay in the network is X.



Continue sampling delay in the network as the day wears on. At about 10 AM, ping delay starts to soar into the multiple second range. No packets are lost. The peak ping time is about 4000 milliseconds - 4 seconds in most of the networks. This is in downtown, no radio errors are reported, no link errors.

So it is all queueing delay.



Now Little's law doesn't tell you much about average delay, because clearly *some* subpiece of the network is fully saturated. But what is interesting here is what is happening and where. You can't tell what is saturated, and in fact the entire network is quite unstable, because the peak is constantly varying and you don't know where the throughput is. All the packets are now arriving 4 seconds or so later.



Why is the situation not worse than 4 seconds? Well, there are multiple things going on:



1) TCP may be doing a lot of retransmissions (non-Poisson at all, not random either. The arrival process is entirely deterministic in each source, based on the retransmission timeout) or it may not be.



2) Users are pissed off, because they clicked on a web page, and got nothing back. They retry on their screen, or they try another site. Meanwhile, the underlying TCP connection remains there, pumping the network full of more packets on that old path, which is still backed up with packets that haven't been delivered that are sitting in queues. The real arrival process is not Poisson at all, it's a deterministic, repeated retransmission plus a new attempt to connect to a new site.



3) When the users get a web page back eventually, it is filled with names of other pieces needed to display that web page, which causes some number (often as many as 100) new pages to be fetched, ALL at the same time. Certainly not a stochastic process that will just obey the law of large numbers.



All of these things are the result of initial instability, causing queues to build up.



So what is the state of the system? is it stable? is it stochastic? Is it the sum of enough stochastic stable flows to average out to Poisson?



The answer is clearly NO. Control theory (not queuing theory) suggests that this system is completely uncontrolled and unstable.



So if the system is in this state, what does Little's Lemma tell us? What is the meaning of that highly variable 4 second delay on ping packets, in terms of average utilization of the network?



We don't even know what all the users really might need, if the system hadn't become unstable, because some users have given up, and others are trying even harder, and new users are arriving.



What we do know, because ATT (at my suggestion) reconfigured their system after blaming Apple Computer company for "bugs" in the original iPhone in public, is that simply *dropping* packets sitting in queues more than a couple milliseconds MADE THE USERS HAPPY. Apparently the required capacity was there all along!



So I conclude that the 4 second delay was the largest delay users could barely tolerate before deciding the network was DOWN and going away. And that the backup was the accumulation of useless packets sitting in queues because none of the end systems were receiving congestion signals (which for the Internet stack begins with packet dropping).



I should say that most operators, and especially ATT in this case, do not measure end-to-end latency. Instead they use Little's Lemma to query routers for their current throughput in bits per second, and calculate latency as if Little's Lemma applied. This results in reports to management that literally say:



  The network is not dropping packets, utilization is near 100% on many of our switches and routers.



And management responds, Hooray! Because utilization of 100% of their hardware is their investors' metric of maximizing profits. The hardware they are operating is fully utilized. No waste! And users are happy because no packets have been dropped!
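The arithmetic such reports leave out is how much standing queue a "fully utilized" link can hide; with hypothetical numbers chosen only to show the scale:

    # A link can show 100% utilization and zero loss while its buffer
    # adds seconds of delay.  Hypothetical link speed and buffer occupancy:
    link_rate_bps = 15_000_000        # 15 Mbit/s of sector backhaul (illustrative)
    standing_queue_bytes = 7_500_000  # a large, mostly-full buffer (illustrative)

    queueing_delay = standing_queue_bytes * 8 / link_rate_bps
    print(f"added queueing delay: {queueing_delay:.1f} s")   # -> 4.0 s

That is the same order as the 4-second pings described above, reported alongside 100% utilization and zero loss.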



Hmm... what's wrong with this picture? I can see why Donovan, CTO, would accuse Apple of lousy software that was ruining iPhone user experience!  His network was operating without ANY problems.

So it must be Apple!



Well, no. The entire problem, as we saw when ATT just changed to shorten egress queues and drop packets when the egress queues overflowed, was that ATT's network was amplifying instability, not at the link level, but at the network level.



And queueing theory can help with that, but *intro queueing theory* cannot.



And a big part of that problem is the pervasive belief that, at the network boundary, *Poisson arrival* is a reasonable model for use in all cases.

On Friday, July 9, 2021 6:05am, "Luca Muscariello" <muscariello@ieee.org> said:
For those who might be interested in Little's law
there is a nice paper by John Little on the occasion
of the 50th anniversary  of the result.
https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary
https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf

Nice read.
Luca

P.S.
Who doesn't have a copy of L. Kleinrock's books? I do, and I am not ready to lend them!
On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <lk@cs.ucla.edu<mailto:lk@cs.ucla.edu>> wrote:
David,
I totally appreciate  your attention to when and when not analytical modeling works. Let me clarify a few things from your note.
First, Little's law (also known as Little’s lemma or, as I use in my book, Little’s result) does not assume Poisson arrivals -  it is good for any arrival process and any service process and is an equality between time averages.  It states that the time average of the number in a system (for a sample path w) is equal to the average arrival rate to the system multiplied by the time-averaged time in the system for that sample path.  This is often written as N_TimeAvg = λ·T_TimeAvg.  Moreover, if the system is also ergodic, then the time average equals the ensemble average and we often write it as N̄ = λ·T̄.  In any case, this requires neither Poisson arrivals nor exponential service times.

Queueing theorists often do study the case of Poisson arrivals.  True, it makes the analysis easier, yet there is a better reason it is often used, and that is because the sum of a large number of independent stationary renewal processes approaches a Poisson process.  So nature often gives us Poisson arrivals.
Best,
Len
On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com<mailto:dpreed@deepplum.com>> wrote:

I will tell you flat out that the arrival time distribution assumption made by Little's Lemma that allows "estimation of queue depth" is totally unreasonable on ANY Internet in practice.


The assumption is a Poisson Arrival Process. In reality, traffic arrivals in real internet applications are extremely far from Poisson, and, of course, using TCP windowing, become highly intercorrelated with crossing traffic that shares the same queue.


So, as I've tried to tell many, many net-heads (people who ignore applications layer behavior, like the people that think latency doesn't matter to end users, only throughput), end-to-end packet arrival times on a practical network are incredibly far from Poisson - and they are more like fractal probability distributions, very irregular at all scales of time.


So, the idea that iperf can estimate queue depth by Little's Lemma by just measuring saturation of capacity of a path is bogus. The less Poisson, the worse the estimate gets, by a huge factor.

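One way to see the size of the error is to push the same average load through a single FIFO queue twice, once with Poisson arrivals and once with the identical rate delivered in bursts, and compare the backlog; a rough sketch with made-up parameters:

    import random

    def mean_sojourn(arrivals, service_s):
        """FIFO queue, one server, fixed per-packet service time (Lindley recursion).
        Returns the mean time a packet spends in the system."""
        depart, total = 0.0, 0.0
        for a in arrivals:
            depart = max(a, depart) + service_s
            total += depart - a
        return total / len(arrivals)

    rng = random.Random(7)
    rate, service, horizon = 900.0, 1.0 / 1000, 200.0   # 90% utilization in both cases

    poisson, t = [], 0.0
    while t < horizon:                       # (a) Poisson arrivals
        t += rng.expovariate(rate)
        poisson.append(t)

    bursty, t = [], 0.0
    while t < horizon:                       # (b) same average rate, bursts of 30 packets
        t += rng.expovariate(rate / 30.0)
        bursty.extend([t] * 30)

    for name, arr in (("poisson", poisson), ("bursty ", bursty)):
        T = mean_sojourn(arr, service)
        print(f"{name}: mean delay {T * 1000:7.2f} ms,"
              f" implied mean queue ~ {rate * T:7.1f} packets")

At the same measured utilization, the bursty case carries a much larger average backlog, which is exactly the factor a Poisson-based estimate misses.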



Where does the Poisson assumption come from?  Well, like many theorems, it is the simplest tractable closed form solution - it creates a simplified view, by being a "single-parameter" distribution (the parameter is called lambda for a Poisson distribution).  And the analysis of a simple queue with poisson arrival distribution and a static, fixed service time is the first interesting Queueing Theory example in most textbooks. It is suggestive of an interesting phenomenon, but it does NOT characterize any real system.


It's the queueing theory equivalent of "First, we assume a spherical cow..." in doing an example in a freshman physics class.


Unfortunately, most networking engineers understand neither queuing theory nor application networking usage in interactive applications. Which makes them arrogant. They assume all distributions are poisson!




On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com<mailto:greearb@candelatech.com>> said:
> Hello,
>
> I am interested to hear wish lists for network testing features. We make test
> equipment, supporting lots
> of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6,
> http, ... protocols,
> and open to creating/improving some of our automated tests.
>
> I know Dave has some test scripts already, so I'm not necessarily looking to
> reimplement that,
> but more fishing for other/new ideas.
>
> Thanks,
> Ben
>
> On 7/2/21 4:28 PM, Bob McMahon wrote:
> > I think we need the language of math here. It seems like the network
> power metric, introduced by Kleinrock and Jaffe in the late 70s, is something
> useful.
> > Effective end/end queue depths per Little's law also seems useful. Both are
> available in iperf 2 from a test perspective. Repurposing test techniques to
> actual
> > traffic could be useful. Hence the question around what exact telemetry
> is useful to apps making socket write() and read() calls.
> >
> > Bob
> >
> > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com<mailto:dave.taht@gmail.com>
> <mailto:dave.taht@gmail.com>> wrote:
> >
> > In terms of trying to find "Quality" I have tried to encourage folk to
> > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > work on "total quality management".
> >
> > My own slice at this network, computer and lifestyle "issue" is aiming
> > for "imperceptible latency" in all things. [1]. There's a lot of
> > fallout from that in terms of not just addressing queuing delay, but
> > caching, prefetching, and learning more about what a user really needs
> > (as opposed to wants) to know via intelligent agents.
> >
> > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > lila, which is in part about what happens when an engineer hits an
> > insoluble problem.
> > [1] https://www.internetsociety.org/events/latency2013/
> >
> >
> >
> > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com<mailto:dpreed@deepplum.com>
> <mailto:dpreed@deepplum.com>> wrote:
> > >
> > > Well, nice that the folks doing the conference  are willing to
> consider that quality of user experience has little to do with signalling rate at
> the
> > physical layer or throughput of FTP transfers.
> > >
> > >
> > >
> > > But honestly, the fact that they call the problem "network quality"
> suggests that they REALLY, REALLY don't understand the Internet isn't the hardware
> or
> > the routers or even the routing algorithms *to its users*.
> > >
> > >
> > >
> > > By ignoring the diversity of applications now and in the future,
> and the fact that we DON'T KNOW what will be coming up, this conference will
> likely fall
> > into the usual trap that net-heads fall into - optimizing for some
> imaginary reality that doesn't exist, and in fact will probably never be what
> users
> > actually will do given the chance.
> > >
> > >
> > >
> > > I saw this issue in 1976 in the group developing the original
> Internet protocols - a desire to put *into the network* special tricks to optimize
> ASR33
> > logins to remote computers from terminal concentrators (aka remote
> login), bulk file transfers between file systems on different time-sharing
> systems, and
> > "sessions" (virtual circuits) that required logins. And then trying to
> exploit underlying "multicast" by building it into the IP layer, because someone
> > thought that TV broadcast would be the dominant application.
> > >
> > >
> > >
> > > Frankly, to think of "quality" as something that can be "provided"
> by "the network" misses the entire point of "end-to-end argument in system
> design".
> > Quality is not a property defined or created by The Network. If you want
> to talk about Quality, you need to talk about users - all the users at all times,
> > now and into the future, and that's something you can't do if you don't
> bother to include current and future users talking about what they might expect
> to
> > experience that they don't experience.
> > >
> > >
> > >
> > > There was much fighting back in 1976 that basically involved
> "network experts" saying that the network was the place to "solve" such issues as
> quality,
> > so applications could avoid having to solve such issues.
> > >
> > >
> > >
> > > What some of us managed to do was to argue that you can't "solve"
> such issues. All you can do is provide a framework that enables different uses to
> > *cooperate* in some way.
> > >
> > >
> > >
> > > Which is why the Internet drops packets rather than queueing them,
> and why diffserv cannot work.
> > >
> > > (I know the latter is controversial, but at the moment, ALL of
> diffserv attempts to talk about end-to-end application specific metrics, but
> never, ever
> > explains what the diffserv control points actually do w.r.t. what the IP
> layer can actually control. So it is meaningless - another violation of the
> > so-called end-to-end principle).
> > >
> > >
> > >
> > > Networks are about getting packets from here to there, multiplexing
> the underlying resources. That's it. Quality is a whole different thing. Quality
> can
> > be improved by end-to-end approaches, if the underlying network provides
> some kind of thing that actually creates a way for end-to-end applications to
> > affect queueing and routing decisions, and more importantly getting
> "telemetry" from the network regarding what is actually going on with the other
> > end-to-end users sharing the infrastructure.
> > >
> > >
> > >
> > > This conference won't talk about it this way. So don't waste your
> time.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
> <dave.taht@gmail.com<mailto:dave.taht@gmail.com> <mailto:dave.taht@gmail.com>> said:
> > >
> > > > The program committee members are *amazing*. Perhaps, finally,
> we can
> > > > move the bar for the internet's quality metrics past endless,
> blind
> > > > repetitions of speedtest.
> > > >
> > > > For complete details, please see:
> > > > https://www.iab.org/activities/workshops/network-quality/
> > > >
> > > > Submissions Due: Monday 2nd August 2021, midnight AOE
> (Anywhere On Earth)
> > > > Invitations Issued by: Monday 16th August 2021
> > > >
> > > > Workshop Date: This will be a virtual workshop, spread over
> three days:
> > > >
> > > > 1400-1800 UTC Tue 14th September 2021
> > > > 1400-1800 UTC Wed 15th September 2021
> > > > 1400-1800 UTC Thu 16th September 2021
> > > >
> > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> > > >
> > > > The Program Committee members:
> > > >
> > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
> Sam
> > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
> Geoff
> > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
> Kuehlewind,
> > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
> Nichols,
> > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> > > >
> > > > Send Submissions to: network-quality-workshop-pc@iab.org<mailto:network-quality-workshop-pc@iab.org>
> <mailto:network-quality-workshop-pc@iab.org>.
> > > >
> > > > Position papers from academia, industry, the open source
> community and
> > > > others that focus on measurements, experiences, observations
> and
> > > > advice for the future are welcome. Papers that reflect
> experience
> > > > based on deployed services are especially welcome. The
> organizers
> > > > understand that specific actions taken by operators are
> unlikely to be
> > > > discussed in detail, so papers discussing general categories
> of
> > > > actions and issues without naming specific technologies,
> products, or
> > > > other players in the ecosystem are expected. Papers should not
> focus
> > > > on specific protocol solutions.
> > > >
> > > > The workshop will be by invitation only. Those wishing to
> attend
> > > > should submit a position paper to the address above; it may
> take the
> > > > form of an Internet-Draft.
> > > >
> > > > All inputs submitted and considered relevant will be published
> on the
> > > > workshop website. The organisers will decide whom to invite
> based on
> > > > the submissions received. Sessions will be organized according
> to
> > > > content, and not every accepted submission or invited attendee
> will
> > > > have an opportunity to present as the intent is to foster
> discussion
> > > > and not simply to have a sequence of presentations.
> > > >
> > > > Position papers from those not planning to attend the virtual
> sessions
> > > > themselves are also encouraged. A workshop report will be
> published
> > > > afterwards.
> > > >
> > > > Overview:
> > > >
> > > > "We believe that one of the major factors behind this lack of
> progress
> > > > is the popular perception that throughput is the often sole
> measure of
> > > > the quality of Internet connectivity. With such narrow focus,
> people
> > > > don’t consider questions such as:
> > > >
> > > > What is the latency under typical working conditions?
> > > > How reliable is the connectivity across longer time periods?
> > > > Does the network allow the use of a broad range of protocols?
> > > > What services can be run by clients of the network?
> > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
> are there firewalls?
> > > > What security mechanisms are available for local services,
> such as DNS?
> > > > To what degree are the privacy, confidentiality, integrity
> and
> > > > authenticity of user communications guarded?
> > > >
> > > > Improving these aspects of network quality will likely depend
> on
> > > > measurement and exposing metrics to all involved parties,
> including to
> > > > end users in a meaningful way. Such measurements and exposure
> of the
> > > > right metrics will allow service providers and network
> operators to
> > > > focus on the aspects that impacts the users’ experience
> most and at
> > > > the same time empowers users to choose the Internet service
> that will
> > > > give them the best experience."
> > > >
> > > >
> > > > --
> > > > Latest Podcast:
> > > >
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> > > >
> > > > Dave Täht CTO, TekLibre, LLC
> > > > _______________________________________________
> > > > Cerowrt-devel mailing list
> > > > Cerowrt-devel@lists.bufferbloat.net<mailto:Cerowrt-devel@lists.bufferbloat.net>
> <mailto:Cerowrt-devel@lists.bufferbloat.net>
> > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
> > > >
> >
> >
> >
> > --
> > Latest Podcast:
> > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >
> > Dave Täht CTO, TekLibre, LLC
> > _______________________________________________
> > Make-wifi-fast mailing list
> > Make-wifi-fast@lists.bufferbloat.net<mailto:Make-wifi-fast@lists.bufferbloat.net>
> <mailto:Make-wifi-fast@lists.bufferbloat.net>
> > https://lists.bufferbloat.net/listinfo/make-wifi-fast
> >
> >
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net<mailto:Starlink@lists.bufferbloat.net>
> > https://lists.bufferbloat.net/listinfo/starlink
> >
>
>
> --
> Ben Greear <greearb@candelatech.com<mailto:greearb@candelatech.com>>
> Candela Technologies Inc http://www.candelatech.com
>
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net<mailto:Starlink@lists.bufferbloat.net>
https://lists.bufferbloat.net/listinfo/starlink
_______________________________________________
Make-wifi-fast mailing list
Make-wifi-fast@lists.bufferbloat.net<mailto:Make-wifi-fast@lists.bufferbloat.net>
https://lists.bufferbloat.net/listinfo/make-wifi-fast

[-- Attachment #2: Type: text/html, Size: 55994 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
  2021-07-09 20:24                 ` Bob McMahon
  2021-07-09 22:57                 ` [Bloat] " Holland, Jake
@ 2021-07-09 23:01                 ` Leonard Kleinrock
  2021-07-09 23:56                   ` [Cerowrt-devel] [Bloat] " Jonathan Morton
  2021-07-10 19:51                   ` Bob McMahon
  2021-07-12 13:46                 ` [Bloat] " Livingood, Jason
  2021-09-20  1:21                 ` [Cerowrt-devel] " Dave Taht
  4 siblings, 2 replies; 108+ messages in thread
From: Leonard Kleinrock @ 2021-07-09 23:01 UTC (permalink / raw)
  To: David P. Reed
  Cc: Leonard Kleinrock, Luca Muscariello, starlink, Make-Wifi-fast,
	Bob McMahon, Cake List, codel, cerowrt-devel, bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 27583 bytes --]

David,

No question that non-stationarity and instability are what we often see in networks.  And, non-stationarity and instability are both topics that lead to very complex analytical problems in queueing theory.  You can find some results on the transient analysis in the queueing theory literature (including the second volume of my Queueing Systems book), but they are limited and hard. Nevertheless, the literature does contain some works on transient analysis of queueing systems as applied to network congestion control - again limited. On the other hand, as you said, control theory addresses stability head on and does offer some tools as well, but again, it is hairy. 

Averages are only averages, but they can provide valuable information. For sure, latency can and does confound behavior.  But, as you point out, it is the proliferation of control protocols that are, in some cases, deployed willy-nilly in networks without proper evaluation of their behavior that can lead to the nasty cycle of large transient latency, frantic repeating of web requests, protocols sending multiple copies, lack of awareness of true capacity or queue size or throughput, etc., all of which, as you articulate so well, create the chaos and frustration in the network.  Analyzing that is really difficult, and if we don’t measure and sense, we have no hope of understanding, controlling, or ameliorating such situations.

Len

> On Jul 9, 2021, at 12:31 PM, David P. Reed <dpreed@deepplum.com> wrote:
> 
> Len - I admit I made a mistake in challenging Little's Law as being based on Poisson processes. It is more general. But it tells you an "average" in its base form, and latency averages are not useful for end user applications.
>  
> However, Little's Law does assume something that is not actually valid about the kind of distributions seen in the network, and in fact, it is NOT true that networks converge on Poisson arrival times.
>  
> The key issue is well-described in the standard analysis of the M/M/1 queue (e.g. https://en.wikipedia.org/wiki/M/M/1_queue), which is done only for Poisson processes, and is also limited to "stable" systems. But networks are never stable when fully loaded. They get unstable and those instabilities persist for a long time in the network. Instability is, at its core, the underlying *requirement* of the Internet's usage.
>  
> So specifically: real networks, even large ones, and certainly the Internet today, are not asymptotic limits of sums of stationary stochastic arrival processes. Each external terminal of any real network has a real user there, running a real application, and the network is a complex graph. This makes it completely unlike a single queue. Even the links within a network carry a relatively small number of application flows. There's no ability to apply the Law of Large Numbers to the distributions, because any particular path contains only a small number of serialized flows with highly variable rates.
>  
> Here's an example of what really happens in a real network (I've observed this in 5 different cities on ATT's cellular network, back when it was running Alcatel Lucent HSPA+ gear in those cities).
> But you can see this on any network where transient overload occurs, creating instability.
>  
>  
> At 7 AM, the data transmission of the network is roughly stable. That's because no links are overloaded within the network. Little's Law can tell you by observing the delay and throughput on any path that the average delay in the network is X.
>  
> Continue sampling delay in the network as the day wears on. At about 10 AM, ping delay starts to soar into the multiple second range. No packets are lost. The peak ping time is about 4000 milliseconds - 4 seconds in most of the networks. This is in downtown, no radio errors are reported, no link errors.
> So it is all queueing delay. 
>  
> Now Little's law doesn't tell you much about average delay here, because clearly *some* subpiece of the network is fully saturated. But what is interesting here is what is happening and where. You can't tell what is saturated, and in fact the entire network is quite unstable, because the peak is constantly varying and you don't know where the throughput is. All the packets are now arriving 4 seconds or so later.
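>  
> (For scale, with made-up numbers: a 4-second backlog on a path carrying 100 Mbit/s is about 50 megabytes of packets sitting in queues. Little's law, N = λ·T, relates the backlog to the rate and the delay, but it cannot tell you *where* those packets are sitting, or why.)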
>  
> Why is the situation not worse than 4 seconds? Well, there are multiple things going on:
>  
> 1) TCP may be doing a lot of retransmissions (non-Poisson at all, not random either. The arrival process is entirely deterministic in each source, based on the retransmission timeout) or it may not be.
>  
> 2) Users are pissed off, because they clicked on a web page, and got nothing back. They retry on their screen, or they try another site. Meanwhile, the underlying TCP connection remains there, pumping the network full of more packets on that old path, which is still backed up with packets that haven't been delivered that are sitting in queues. The real arrival process is not Poisson at all; it's a deterministic, repeated retransmission plus a new attempt to connect to a new site.
>  
> 3) When the users get a web page back eventually, it is filled with names of other pieces needed to display that web page, which causes some number (often as many as 100) new pages to be fetched, ALL at the same time. Certainly not a stochastic process that will just obey the law of large numbers.
>  
> All of these things are the result of initial instability, causing queues to build up.
>  
> So what is the state of the system? Is it stable? Is it stochastic? Is it the sum of enough stochastic stable flows to average out to Poisson?
>  
> The answer is clearly NO. Control theory (not queuing theory) suggests that this system is completely uncontrolled and unstable.
>  
> So if the system is in this state, what does Little's Lemma tell us? What is the meaning of that highly variable 4 second delay on ping packets, in terms of average utilization of the network?
>  
> We don't even know what all the users really might need, if the system hadn't become unstable, because some users have given up, and others are trying even harder, and new users are arriving.
>  
> What we do know, because ATT (at my suggestion) reconfigured their system after publicly blaming Apple Computer for "bugs" in the original iPhone, is that simply *dropping* packets that had been sitting in queues for more than a couple of milliseconds MADE THE USERS HAPPY. Apparently the required capacity was there all along! 
>  
> So I conclude that the 4 second delay was the largest delay users could barely tolerate before deciding the network was DOWN and going away. And that the backup was the accumulation of useless packets sitting in queues because none of the end systems were receiving congestion signals (which for the Internet stack begins with packet dropping).
>  
> I should say that most operators, and especially ATT in this case, do not measure end-to-end latency. Instead they query routers for their current throughput in bits per second, and calculate latency as if Little's Lemma applied. This results in reports to management that literally say:
>  
>   The network is not dropping packets, utilization is near 100% on many of our switches and routers.
>  
> And management responds, Hooray! Because utilization of 100% of their hardware is their investors' metric of maximizing profits. The hardware they are operating is fully utilized. No waste! And users are happy because no packets have been dropped!
>  
> Hmm... what's wrong with this picture? I can see why Donovan, CTO, would accuse Apple of lousy software that was ruining iPhone user experience!  His network was operating without ANY problems.
> So it must be Apple!
>  
> Well, no. The entire problem, as we saw when ATT just changed to shorten egress queues and drop packets when the egress queues overflowed, was that ATT's network was amplifying instability, not at the link level, but at the network level.
>  
> And queueing theory can help with that, but *intro queueing theory* cannot.
>  
> And a big part of that problem is the pervasive belief that, at the network boundary, *Poisson arrival* is a reasonable model for use in all cases.
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
> On Friday, July 9, 2021 6:05am, "Luca Muscariello" <muscariello@ieee.org> said:
> 
> For those who might be interested in Little's law
> there is a nice paper by John Little on the occasion 
> of the 50th anniversary  of the result.
> https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary
> https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf
>  
> Nice read. 
> Luca 
>  
> P.S. 
> Who does not have a copy of L. Kleinrock's books? I do have them, and I am not ready to lend them!
> On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <lk@cs.ucla.edu> wrote:
> David,
> I totally appreciate  your attention to when and when not analytical modeling works. Let me clarify a few things from your note.
> First, Little's law (also known as Little’s lemma or, as I use in my book, Little’s result) does not assume Poisson arrivals - it is good for any arrival process and any service process and is an equality between time averages.  It states that the time average of the number in a system (for a sample path w) is equal to the average arrival rate to the system multiplied by the time-averaged time in the system for that sample path.  This is often written as N_TimeAvg = λ·T_TimeAvg.  Moreover, if the system is also ergodic, then the time average equals the ensemble average and we often write it as N̄ = λ·T̄.  In any case, this requires neither Poisson arrivals nor exponential service times.  
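>  
> (A quick worked example with made-up numbers: if a link carries λ = 1,000 packets per second and each packet spends T̄ = 20 ms in the system on average, then N̄ = λ·T̄ = 1000 × 0.020 = 20 packets are in the system on average, with no Poisson or exponential assumption needed.)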
>  
> Queueing theorists often do study the case of Poisson arrivals.  True, it makes the analysis easier, yet there is a better reason it is often used, and that is because the sum of a large number of independent stationary renewal processes approaches a Poisson process.  So nature often gives us Poisson arrivals.  
> Best,
> Len
> On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com> wrote:
> 
> I will tell you flat out that the arrival time distribution assumption made by Little's Lemma that allows "estimation of queue depth" is totally unreasonable on ANY Internet in practice.
>  
> The assumption is a Poisson Arrival Process. In reality, traffic arrivals in real internet applications are extremely far from Poisson, and, of course, using TCP windowing, become highly intercorrelated with crossing traffic that shares the same queue.
>  
> So, as I've tried to tell many, many net-heads (people who ignore applications layer behavior, like the people that think latency doesn't matter to end users, only throughput), end-to-end packet arrival times on a practical network are incredibly far from Poisson - and they are more like fractal probability distributions, very irregular at all scales of time.
>  
> So, the idea that iperf can estimate queue depth by Little's Lemma by just measuring saturation of capacity of a path is bogus. The less Poisson, the worse the estimate gets, by a huge factor.
>  
>  
> Where does the Poisson assumption come from?  Well, like many theorems, it is the simplest tractable closed form solution - it creates a simplified view, by being a "single-parameter" distribution (the parameter is called lambda for a Poisson distribution).  And the analysis of a simple queue with Poisson arrival distribution and a static, fixed service time is the first interesting Queueing Theory example in most textbooks. It is suggestive of an interesting phenomenon, but it does NOT characterize any real system.
>  
> It's the queueing theory equivalent of "First, we assume a spherical cow..." in an example in a freshman physics class.
>  
> Unfortunately, most networking engineers understand neither queuing theory nor application networking usage in interactive applications. Which makes them arrogant. They assume all distributions are Poisson!
>  
>  
> On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com> said:
> 
> > Hello,
> > 
> > I am interested to hear wish lists for network testing features. We make test
> > equipment, supporting lots
> > of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6,
> > http, ... protocols,
> > and open to creating/improving some of our automated tests.
> > 
> > I know Dave has some test scripts already, so I'm not necessarily looking to
> > reimplement that,
> > but more fishing for other/new ideas.
> > 
> > Thanks,
> > Ben
> > 
> > On 7/2/21 4:28 PM, Bob McMahon wrote:
> > > I think we need the language of math here. It seems like the network
> > power metric, introduced by Kleinrock and Jaffe in the late 70s, is something
> > useful.
> > > Effective end/end queue depths per Little's law also seems useful. Both are
> > available in iperf 2 from a test perspective. Repurposing test techniques to
> > actual
> > > traffic could be useful. Hence the question around what exact telemetry
> > is useful to apps making socket write() and read() calls.
> > >
> > > Bob
> > >
> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>
> > <mailto:dave.taht@gmail.com <mailto:dave.taht@gmail.com>>> wrote:
> > >
> > > In terms of trying to find "Quality" I have tried to encourage folk to
> > > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > > work on "total quality management".
> > >
> > > My own slice at this network, computer and lifestyle "issue" is aiming
> > > for "imperceptible latency" in all things. [1]. There's a lot of
> > > fallout from that in terms of not just addressing queuing delay, but
> > > caching, prefetching, and learning more about what a user really needs
> > > (as opposed to wants) to know via intelligent agents.
> > >
> > > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > > lila, which is in part about what happens when an engineer hits an
> > > insoluble problem.
> > > [1] https://www.internetsociety.org/events/latency2013/ <https://www.internetsociety.org/events/latency2013/>
> > <https://www.internetsociety.org/events/latency2013/ <https://www.internetsociety.org/events/latency2013/>>
> > >
> > >
> > >
> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com <mailto:dpreed@deepplum.com>
> > <mailto:dpreed@deepplum.com <mailto:dpreed@deepplum.com>>> wrote:
> > > >
> > > > Well, nice that the folks doing the conference  are willing to
> > consider that quality of user experience has little to do with signalling rate at
> > the
> > > physical layer or throughput of FTP transfers.
> > > >
> > > >
> > > >
> > > > But honestly, the fact that they call the problem "network quality"
> > suggests that they REALLY, REALLY don't understand the Internet isn't the hardware
> > or
> > > the routers or even the routing algorithms *to its users*.
> > > >
> > > >
> > > >
> > > > By ignoring the diversity of applications now and in the future,
> > and the fact that we DON'T KNOW what will be coming up, this conference will
> > likely fall
> > > into the usual trap that net-heads fall into - optimizing for some
> > imaginary reality that doesn't exist, and in fact will probably never be what
> > users
> > > actually will do given the chance.
> > > >
> > > >
> > > >
> > > > I saw this issue in 1976 in the group developing the original
> > Internet protocols - a desire to put *into the network* special tricks to optimize
> > ASR33
> > > logins to remote computers from terminal concentrators (aka remote
> > login), bulk file transfers between file systems on different time-sharing
> > systems, and
> > > "sessions" (virtual circuits) that required logins. And then trying to
> > exploit underlying "multicast" by building it into the IP layer, because someone
> > > thought that TV broadcast would be the dominant application.
> > > >
> > > >
> > > >
> > > > Frankly, to think of "quality" as something that can be "provided"
> > by "the network" misses the entire point of "end-to-end argument in system
> > design".
> > > Quality is not a property defined or created by The Network. If you want
> > to talk about Quality, you need to talk about users - all the users at all times,
> > > now and into the future, and that's something you can't do if you don't
> > bother to include current and future users talking about what they might expect
> > to
> > > experience that they don't experience.
> > > >
> > > >
> > > >
> > > > There was much fighting back in 1976 that basically involved
> > "network experts" saying that the network was the place to "solve" such issues as
> > quality,
> > > so applications could avoid having to solve such issues.
> > > >
> > > >
> > > >
> > > > What some of us managed to do was to argue that you can't "solve"
> > such issues. All you can do is provide a framework that enables different uses to
> > > *cooperate* in some way.
> > > >
> > > >
> > > >
> > > > Which is why the Internet drops packets rather than queueing them,
> > and why diffserv cannot work.
> > > >
> > > > (I know the latter is controversial, but at the moment, ALL of
> > diffserv attempts to talk about end-to-end application specific metrics, but
> > never, ever
> > > explains what the diffserv control points actually do w.r.t. what the IP
> > layer can actually control. So it is meaningless - another violation of the
> > > so-called end-to-end principle).
> > > >
> > > >
> > > >
> > > > Networks are about getting packets from here to there, multiplexing
> > the underlying resources. That's it. Quality is a whole different thing. Quality
> > can
> > > be improved by end-to-end approaches, if the underlying network provides
> > some kind of thing that actually creates a way for end-to-end applications to
> > > affect queueing and routing decisions, and more importantly getting
> > "telemetry" from the network regarding what is actually going on with the other
> > > end-to-end users sharing the infrastructure.
> > > >
> > > >
> > > >
> > > > This conference won't talk about it this way. So don't waste your
> > time.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Latest Podcast:
> > > https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>
> > <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/ <https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/>>
> > >
> > > Dave Täht CTO, TekLibre, LLC
> > > _______________________________________________
> > > Make-wifi-fast mailing list
> > > Make-wifi-fast@lists.bufferbloat.net <mailto:Make-wifi-fast@lists.bufferbloat.net>
> > <mailto:Make-wifi-fast@lists.bufferbloat.net <mailto:Make-wifi-fast@lists.bufferbloat.net>>
> > > https://lists.bufferbloat.net/listinfo/make-wifi-fast <https://lists.bufferbloat.net/listinfo/make-wifi-fast>
> > <https://lists.bufferbloat.net/listinfo/make-wifi-fast <https://lists.bufferbloat.net/listinfo/make-wifi-fast>>
> > >
> > >
> > >
> > > _______________________________________________
> > > Starlink mailing list
> > > Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> > > https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
> > >
> > 
> > 
> > --
> > Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>>
> > Candela Technologies Inc http://www.candelatech.com <http://www.candelatech.com/>
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>_______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net <mailto:Make-wifi-fast@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/make-wifi-fast <https://lists.bufferbloat.net/listinfo/make-wifi-fast>

[-- Attachment #2: Type: text/html, Size: 47010 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 22:57                 ` [Bloat] " Holland, Jake
@ 2021-07-09 23:37                   ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 108+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-07-09 23:37 UTC (permalink / raw)
  To: Holland, Jake, David P. Reed, Luca Muscariello
  Cc: starlink, Make-Wifi-fast, Leonard Kleinrock, Bob McMahon,
	Cake List, codel, cerowrt-devel, bloat, Ben Greear

"Holland, Jake via Bloat" <bloat@lists.bufferbloat.net> writes:

> Hi David,
>
> That’s an interesting point, and I think you’re right that packet
> arrival is poorly modeled as a Poisson process, because in practice
> packet transmissions are very rarely unrelated to other packet
> transmissions.
>
> But now you’ve got me wondering what the right approach is. Do you
> have any advice for how to improve this kind of modeling?

I actually tried my hand at finding something better for my master's
thesis and came across something called a Markov-Modulated Poisson
Process (MMPP/D/1 queue)[0]. It looked promising, but unfortunately I
failed to make it produce any useful predictions. Most likely this was
as much a result of my own failings as a queueing theorist as it was the
fault of the model (I was in way over my head by the time I got to that
model); so I figured I'd mention it here in case anyone more qualified
would have any opinion on it.
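
For anyone curious what that looks like concretely, here is a minimal
two-state MMPP arrival-time generator, just as a sketch of the idea (the
function name and all parameter values are made up for illustration, not
taken from the thesis):

import random

def mmpp2_arrivals(n, lam=(50.0, 500.0), switch=(0.2, 1.0), seed=1):
    """Generate n arrival times from a 2-state Markov-Modulated Poisson
    Process: lam[i] is the arrival rate (pkts/s) in state i, switch[i] is
    the rate (1/s) at which the modulating chain leaves state i."""
    rng = random.Random(seed)
    t, state, times = 0.0, 0, []
    next_switch = rng.expovariate(switch[state])
    while len(times) < n:
        dt = rng.expovariate(lam[state])      # candidate next arrival
        if t + dt < next_switch:              # arrival before state change
            t += dt
            times.append(t)
        else:                                 # chain switches state; restart
            t = next_switch                   # the (memoryless) arrival clock
            state = 1 - state
            next_switch = t + rng.expovariate(switch[state])
    return times

arrivals = mmpp2_arrivals(10_000)
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
print("mean gap %.4f s, max gap %.4f s" % (sum(gaps) / len(gaps), max(gaps)))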

I did manage to get the Linux kernel to produce queueing behaviour that
resembled that of a standard M/M/1 queue (if you squint a bit); all you
have to do is to use a traffic generator that emits packets with the
distribution the model assumes... :)

The full thesis is still available[1] for the perusal of the morbidly curious.

-Toke

[0] https://www.sciencedirect.com/science/article/abs/pii/016653169390035S
[1] https://rucforsk.ruc.dk/ws/files/57613884/thesis-final.pdf

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 23:01                 ` [Cerowrt-devel] " Leonard Kleinrock
@ 2021-07-09 23:56                   ` Jonathan Morton
  2021-07-17 23:56                     ` [Cerowrt-devel] [Make-wifi-fast] " Aaron Wood
  2021-07-10 19:51                   ` Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Jonathan Morton @ 2021-07-09 23:56 UTC (permalink / raw)
  To: Leonard Kleinrock
  Cc: David P. Reed, Cake List, Make-Wifi-fast, Bob McMahon, starlink,
	codel, cerowrt-devel, bloat, Ben Greear

> On 10 Jul, 2021, at 2:01 am, Leonard Kleinrock <lk@cs.ucla.edu> wrote:
> 
> No question that non-stationarity and instability are what we often see in networks.  And, non-stationarity and instability are both topics that lead to very complex analytical problems in queueing theory.  You can find some results on the transient analysis in the queueing theory literature (including the second volume of my Queueing Systems book), but they are limited and hard. Nevertheless, the literature does contain some works on transient analysis of queueing systems as applied to network congestion control - again limited. On the other hand, as you said, control theory addresses stability head on and does offer some tools as well, but again, it is hairy. 

I was just about to mention control theory.

One basic characteristic of Poisson traffic is that it is inelastic, and assumes there is no control feedback whatsoever.  This means it can only be a valid model when the following are both true:

1: The offered load is *below* the link capacity, for all links, averaged over time.

2: A high degree of statistical multiplexing exists.

If 1: is not true and the traffic is truly inelastic, then the queues will inevitably fill up and congestion collapse will result, as shown from ARPANET experience in the 1980s; the solution was to introduce control feedback to the traffic, initially in the form of TCP Reno.  If 2: is not true then the traffic cannot be approximated as Poisson arrivals, regardless of load relative to capacity, because the degree of correlation is too high.
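
To make criterion 2 concrete, here is a toy single-queue simulation (a
sketch only; the 80% load, the 1 ms service time and the burst size of 30
are all invented for illustration).  At the same offered load, correlated
bursts see far larger delays than a Poisson stream:

import random

def mean_sojourn(interarrival_gaps, service=0.001):
    """Mean time in system for a FIFO queue with deterministic service."""
    t = depart = total = 0.0
    for gap in interarrival_gaps:
        t += gap                      # arrival time of this packet
        start = max(t, depart)        # wait for the server to free up
        depart = start + service
        total += depart - t           # sojourn time = wait + service
    return total / len(interarrival_gaps)

rng = random.Random(42)
n, service, load = 200_000, 0.001, 0.8
rate = load / service                 # arrivals/s giving 80% utilisation

poisson = [rng.expovariate(rate) for _ in range(n)]

# Same average rate, but packets arrive in back-to-back bursts of 30
# (a crude stand-in for correlated traffic, e.g. a page fetching 30 objects).
bursty = []
while len(bursty) < n:
    bursty.append(rng.expovariate(rate / 30.0))   # gap before each burst
    bursty.extend([0.0] * 29)                     # rest of the burst
bursty = bursty[:n]

print("Poisson arrivals: mean delay %.1f ms" % (1e3 * mean_sojourn(poisson)))
print("Bursty arrivals:  mean delay %.1f ms" % (1e3 * mean_sojourn(bursty)))

With these made-up numbers, the Poisson case comes out close to the
textbook M/D/1 prediction of about 3 ms, while the bursty case is well
over an order of magnitude worse, despite identical utilisation.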

Taking the iPhone introduction anecdote as an illustrative example, measuring utilisation as very close to 100% is a clear warning sign that the Poisson model was inappropriate, and a control-theory approach was needed instead, to capture the feedback effects of congestion control.  The high degree of statistical multiplexing inherent to a major ISP backhaul is irrelevant to that determination.

Such a model would have found that the primary source of control feedback was human users giving up in disgust.  However, different humans have different levels of tolerance and persistence, so this feedback was not enough to reduce the load sufficiently to give the majority of users a good service; instead, *all* users received a poor service and many users received no usable service.  Introducing a technological control feedback, in the form of packet loss upon overflow of correctly-sized queues, improved service for everyone.

(BTW, DNS becomes significantly unreliable around 1-2 seconds RTT, due to protocol timeouts, which is inherited by all applications that rely on DNS lookups.  Merely reducing the delays consistently below that threshold would have improved perceived reliability markedly.)

Conversely, when talking about the traffic on a single ISP subscriber's last-mile link, the Poisson model has to be discarded due to criterion 2 being false.  The number of flows going to even a family household is probably in the low dozens at best.  A control-theory approach can also work here.

 - Jonathan Morton

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: Little's Law mea culpa, but not invalidating my main point
  2021-07-09 23:01                 ` [Cerowrt-devel] " Leonard Kleinrock
  2021-07-09 23:56                   ` [Cerowrt-devel] [Bloat] " Jonathan Morton
@ 2021-07-10 19:51                   ` Bob McMahon
  2021-07-10 23:24                     ` Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-07-10 19:51 UTC (permalink / raw)
  To: Leonard Kleinrock
  Cc: David P. Reed, Luca Muscariello, starlink, Make-Wifi-fast,
	Cake List, codel, cerowrt-devel, bloat, Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 30169 bytes --]

"Analyzing that is really difficult, and if we don’t measure and sense, we
have no hope of understanding, controlling, or ameliorating such
situations."

It is truly a high honor to observe the queueing theory and control theory
discussions by the world-class experts here. We simple test guys must
measure things, and we'd like those things to be generally useful to all
who can help towards improvements. Hence, back to my original question:
what network, or other, telemetry do the experts here see as useful for
measuring active traffic to help with this?

Just some background, and my apologies for the indulgence, but we'd like
our automation rigs to be able to better emulate "real world scenarios" and
use stochastic, regression-type signals when something goes wrong, which,
for us, is typically a side effect of a driver or firmware code change and
commit. (Humans need machine-level support for this.) It's also very
frustrating that modern data centers aren't generally providing GPS atomic
time to servers. (I think part of the idea behind IP packets, etc. was to
mitigate fault domains, and the PSTN stratum clocks were a huge weak
point.) I find, today, that not having a common clock reference "accurate
and precise enough" is hindering progress towards understanding the
complexity and towards amelioration, at least in our attempts to map
real-world phenomena that are bothersome to machines and/or humans into our
automation environments, allowing us to catch things early in the
engineering life cycle.

A few of us have pushed over the last five or more years to add one-way
delay (OWD) of the test traffic (which is not the same as 1/2 RTT nor an
ICMP ping delay) into iperf 2. That code is available to anyone. The lack
of adoption of OWD has been disheartening. One common response has been,
"We don't need that because users can't get their devices sync'd to the
atomic clock anyway." (Also, 3 is a larger number than 2, so iperf3 must be
better than iperf2, so let us keep using that as our measurement tool -
though I digress ;) ;)
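
Just to be concrete about what we mean by OWD, here is a sketch of the
arithmetic only (not iperf 2's actual code), assuming both endpoints
already share a synchronized clock and the sender's timestamp rides in
each packet's payload:

import statistics

def owd_stats(send_times, recv_times):
    """One-way delay statistics from sender/receiver timestamp pairs
    taken against a common (e.g. GPS-disciplined) clock."""
    owd = sorted(r - s for s, r in zip(send_times, recv_times))
    return {
        "min":  owd[0],
        "mean": statistics.fmean(owd),
        "p99":  owd[int(0.99 * (len(owd) - 1))],
        "max":  owd[-1],
    }

# Hypothetical run: 4 ms baseline OWD with an occasional 30 ms queueing spike.
send = [i * 0.001 for i in range(1000)]
recv = [s + 0.004 + (0.030 if i % 100 == 0 else 0.0) for i, s in enumerate(send)]
print(owd_stats(send, recv))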

Bob

PS. One can get a stratum 1 clock working in a home with a Raspberry Pi for
about $200. I've got one in my home (along with a $2500 OCXO from
Spectracom), and the Pi is reasonable.
https://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html



[-- Attachment #1.2: Type: text/html, Size: 43745 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: Little's Law mea culpa, but not invalidating my main point
  2021-07-10 19:51                   ` Bob McMahon
@ 2021-07-10 23:24                     ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-10 23:24 UTC (permalink / raw)
  To: Leonard Kleinrock
  Cc: David P. Reed, Luca Muscariello, starlink, Make-Wifi-fast,
	Cake List, codel, cerowrt-devel, bloat, Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 31625 bytes --]

One example question: does it seem useful, to the control and queueing theory
experts here, to feed the non-parametric OWD distributions back into the sending
device's transport layer control loop? We find Kolmogorov-Smirnov distance
matrices useful for clustering non-parametric distributions, and chose them
because, experimentally, OWD distributions have been non-parametric and
applying the central limit theorem lost the information that actually mattered.
I'm wondering whether the KS distances have any use for real-world
traffic beyond our post-analysis techniques?
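
For concreteness, here is a minimal sketch of the kind of post-analysis
clustering we mean (synthetic samples and hypothetical flow names; it assumes
numpy and scipy are available):

import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-flow one-way-delay samples, in milliseconds (synthetic here).
owd_samples = {
    "flow_a": np.random.lognormal(mean=1.0, sigma=0.4, size=2000),
    "flow_b": np.random.lognormal(mean=1.1, sigma=0.8, size=2000),
    "flow_c": np.random.lognormal(mean=2.0, sigma=0.3, size=2000),
}

names = list(owd_samples)
n = len(names)
ks_matrix = np.zeros((n, n))

# Pairwise two-sample Kolmogorov-Smirnov statistic: the maximum distance between
# the two empirical CDFs, with no distributional (parametric) assumptions.
for i in range(n):
    for j in range(i + 1, n):
        d = ks_2samp(owd_samples[names[i]], owd_samples[names[j]]).statistic
        ks_matrix[i, j] = ks_matrix[j, i] = d

print(names)
print(np.round(ks_matrix, 3))
# ks_matrix can then be fed to any clustering routine that accepts a
# precomputed distance matrix (e.g. hierarchical/agglomerative clustering).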

Bob

On Sat, Jul 10, 2021 at 12:51 PM Bob McMahon <bob.mcmahon@broadcom.com>
wrote:

> "Analyzing that is really difficult, and if we don’t measure and sense, we
> have no hope of understanding, controlling, or ameliorating such
> situations."
>
> It is truly a high honor to observe the queueing theory and control theory
> discussions to the world class experts here. We simple test guys must
> measure things and we'd like those things to be generally useful to all who
> can help towards improvements. Hence back to my original question, what
> network, or other, telemetry do experts here see as useful towards
> measuring active traffic to help with this?
>
> Just some background, and my apologies for the indulgence, but we'd like
> our automation rigs to be able to better emulate "real world scenarios" and
> use stochastic based regression type signals when something goes wrong
> which, for us, is typically a side effect to a driver or firmware code
> change and commit. (Humans need machine level support for this.) It's also
> very frustrating that modern data centers aren't generally providing GPS
> atomic time to servers. (I think part of the idea behind IP packets, etc.
> was to mitigate fault domains and the PSTN stratum clocks were a huge weak
> point.) I find, today, not having a common clock reference "accurate and
> precise enough" is hindering progress towards understanding the complexity
> and towards the ameliorating, at least from our attempts to map "bothersome
> to machine and/or humans and relevant real world phenomenon" into our
> automation environments allowing us to catch things early in the eng life
> cycle.
>
> A few of us have pushed over the last five or more years to add one way
> delay (OWD) of the test traffic (which is not the same as 1/2 RTT nor an
> ICMP ping delay) into iperf 2. That code is available to anyone. The lack
> of adoption applied to OWD has been disheartening. One common response has
> been, "We don't need that because users can't get their devices sync'd
> to the atomic clock anyway." (Also 3 is a larger number than 2 so iperf3
> must be better than iperf2 so let us keep using that as our measurement
> tool - though I digress  ;) ;)
>
> Bob
>
> PS. One can get a stratum 1 clock with a raspberry pi working in a home
> for about $200. I've got one in my home (along with a $2500 OCXO from
> spectracom) and the Pi is reasonable.
> https://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html
>
> On Fri, Jul 9, 2021 at 4:01 PM Leonard Kleinrock <lk@cs.ucla.edu> wrote:
>
>> David,
>>
>> No question that non-stationarity and instability are what we often see
>> in networks.  And, non-stationarity and instability are both topics that
>> lead to very complex analytical problems in queueing theory.  You can find
>> some results on the transient analysis in the queueing theory literature
>> (including the second volume of my Queueing Systems book), but they are
>> limited and hard. Nevertheless, the literature does contain some works on
>> transient analysis of queueing systems as applied to network congestion
>> control - again limited. On the other hand, as you said, control theory
>> addresses stability head on and does offer some tools as well, but again,
>> it is hairy.
>>
>> Averages are only averages, but they can provide valuable information.
>> For sure, latency can and does confound behavior.  But, as you point out,
>> it is the proliferation of control protocols that are, in some cases,
>> deployed willy-nilly in networks without proper evaluation of their
>> behavior that can lead to the nasty cycle of large transient latency,
>> frantic repeating of web requests, protocols sending multiple copies, lack
>> of awareness of true capacity or queue size or throughput, etc, all of
>> which you articulate so well, create the chaos and frustration in the
>> network.  Analyzing that is really difficult, and if we don’t measure and
>> sense, we have no hope of understanding, controlling, or ameliorating such
>> situations.
>>
>> Len
>>
>> On Jul 9, 2021, at 12:31 PM, David P. Reed <dpreed@deepplum.com> wrote:
>>
>> Len - I admit I made a mistake in challenging Little's Law as being based
>> on Poisson processes. It is more general. But it tells you an "average" in
>> its base form, and latency averages are not useful for end user
>> applications.
>>
>>
>> However, Little's Law does assume something that is not actually valid
>> about the kind of distributions seen in the network, and in fact, it is NOT
>> true that networks converge on Poisson arrival times.
>>
>>
>> The key issue is well-described in the standard analysis of the M/M/1
>> queue (e.g. https://en.wikipedia.org/wiki/M/M/1_queue) , which is done
>> only for Poisson processes, and is also limited to "stable" systems. But
>> networks are never stable when fully loaded. They get unstable and those
>> instabilities persist for a long time in the network. Instability is at
>> core the underlying *requirement* of the Internet's usage.
>>
>>
>> So specifically: real networks, even large ones, and certainly the
>> Internet today, are not asymptotic limits of sums of stationary stochastic
>> arrival processes. Each external terminal of any real network has a real
>> user there, running a real application, and the network is a complex graph.
>> This makes it completely unlike a single queue. Even the links within a
>> network carry a relatively small number of application flows. There's no
>> ability to apply the Law of Large Numbers to the distributions, because any
>> particular path contains only a small number of serialized flows with
>> highly variable rates.
>>
>>
>> Here's an example of what really happens in a real network (I've observed
>> this in 5 different cities on ATT's cellular network, back when it was
>> running Alcatel Lucent HSPA+ gear in those cities).
>> But you can see this on any network where transient overload occurs,
>> creating instability.
>>
>>
>>
>>
>> At 7 AM, the data transmission of the network is roughly stable. That's
>> because no links are overloaded within the network. Little's Law can tell
>> you by observing the delay and throughput on any path that the average
>> delay in the network is X.
>>
>>
>> Continue sampling delay in the network as the day wears on. At about 10
>> AM, ping delay starts to soar into the multiple second range. No packets
>> are lost. The peak ping time is about 4000 milliseconds - 4 seconds in most
>> of the networks. This is in downtown, no radio errors are reported, no link
>> errors.
>> So it is all queueing delay.
>>
>>
>> Now Little's law doesn't tell you much about average delay here, because
>> clearly *some* subpiece of the network is fully saturated. But what is
>> interesting here is what is happening and where. You can't tell what is
>> saturated, and in fact the entire network is quite unstable, because the
>> peak is constantly varying and you don't know where the throughput is. All
>> the packets are now arriving 4 seconds or so later.
>>
>>
>> Why is the situation not worse than 4 seconds? Well, there are multiple
>> things going on:
>>
>>
>> 1) TCP may be doing a lot of retransmissions (non-Poisson at all, not
>> random either. The arrival process is entirely deterministic in each
>> source, based on the retransmission timeout) or it may not be.
>>
>>
>> 2) Users are pissed off, because they clicked on a web page, and got
>> nothing back. They retry on their screen, or they try another site.
>> Meanwhile, the underlying TCP connection remains there, pumping the network
>> full of more packets on that old path, which is still backed up with
>> packets that haven't been delivered that are sitting in queues. The real
>> arrival process is not Poisson at all, it's a deterministic, repeated
>> retransmission plus a new attempt to connect to a new site.
>>
>>
>> 3) When the users get a web page back eventually, it is filled with names
>> of other pieces needed to display that web page, which causes some number
>> (often as many as 100) new pages to be fetched, ALL at the same time.
>> Certainly not a stochastic process that will just obey the law of large
>> numbers.
>>
>>
>> All of these things are the result of initial instability, causing queues
>> to build up.
>>
>>
>> So what is the state of the system? is it stable? is it stochastic? Is it
>> the sum of enough stochastic stable flows to average out to Poisson?
>>
>>
>> The answer is clearly NO. Control theory (not queuing theory) suggests
>> that this system is completely uncontrolled and unstable.
>>
>>
>> So if the system is in this state, what does Little's Lemma tell us? What
>> is the meaning of that highly variable 4 second delay on ping packets, in
>> terms of average utilization of the network?
>>
>>
>> We don't even know what all the users really might need, if the system
>> hadn't become unstable, because some users have given up, and others are
>> trying even harder, and new users are arriving.
>>
>>
>> What we do know, because ATT (at my suggestion) reconfigured their system
>> after blaming Apple Computer company for "bugs" in the original iPhone in
>> public, is that simply *dropping* packets sitting in queues more than a
>> couple milliseconds MADE THE USERS HAPPY. Apparently the required capacity
>> was there all along!
>>
>>
>> So I conclude that the 4 second delay was the largest delay users could
>> barely tolerate before deciding the network was DOWN and going away. And
>> that the backup was the accumulation of useless packets sitting in queues
>> because none of the end systems were receiving congestion signals (which
>> for the Internet stack begins with packet dropping).
>>
>>
>> I should say that most operators, and especially ATT in this case, do not
>> measure end-to-end latency. Instead they use Little's Lemma to query
>> routers for their current throughput in bits per second, and calculate
>> latency as if Little's Lemma applied. This results in reports to management
>> that literally say:
>>
>>
>>   The network is not dropping packets, utilization is near 100% on many
>> of our switches and routers.
>>
>>
>> And management responds, Hooray! Because utilization of 100% of their
>> hardware is their investors' metric of maximizing profits. The hardware
>> they are operating is fully utilized. No waste! And users are happy because
>> no packets have been dropped!
>>
>>
>> Hmm... what's wrong with this picture? I can see why Donovan, CTO, would
>> accuse Apple of lousy software that was ruining iPhone user experience!
>> His network was operating without ANY problems.
>> So it must be Apple!
>>
>>
>> Well, no. The entire problem, as we saw when ATT just changed to shorten
>> egress queues and drop packets when the egress queues overflowed, was that
>> ATT's network was amplifying instability, not at the link level, but at the
>> network level.
>>
>>
>> And queueing theory can help with that, but *intro queueing theory*
>> cannot.
>>
>>
>> And a big part of that problem is the pervasive belief that, at the
>> network boundary, *Poisson arrival* is a reasonable model for use in all
>> cases.
>>
>>
>> On Friday, July 9, 2021 6:05am, "Luca Muscariello" <muscariello@ieee.org>
>> said:
>>
>> For those who might be interested in Little's law
>> there is a nice paper by John Little on the occasion
>> of the 50th anniversary  of the result.
>>
>> https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary
>>
>> https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf
>>
>> Nice read.
>> Luca
>>
>> P.S.
>> Who has not a copy of L. Kleinrock's books? I do have and am not ready to
>> lend them!
>> On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <lk@cs.ucla.edu> wrote:
>>
>>> David,
>>> I totally appreciate  your attention to when and when not analytical
>>> modeling works. Let me clarify a few things from your note.
>>> First, Little's law (also known as Little’s lemma or, as I use in my
>>> book, Little’s result) does not assume Poisson arrivals -  it is good for
>>> *any* arrival process and any service process and is an equality
>>> between time averages.  It states that the time average of the number in a
>>> system (for a sample path *w*) is equal to the average arrival rate to
>>> the system multiplied by the time-averaged time in the system for that
>>> sample path.  This is often written as N_TimeAvg = λ·T_TimeAvg.  Moreover,
>>> if the system is also ergodic, then the time average equals the ensemble
>>> average and we often write it as N̄ = λ·T̄.  In any case, this
>>> requires neither Poisson arrivals nor exponential service times.
>>>
>>> Queueing theorists often do study the case of Poisson arrivals.  True,
>>> it makes the analysis easier, yet there is a better reason it is often
>>> used, and that is because the sum of a large number of independent
>>> stationary renewal processes approaches a Poisson process.  So nature often
>>> gives us Poisson arrivals.
>>> Best,
>>> Len
>>>
>>> On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com> wrote:
>>>
>>> I will tell you flat out that the arrival time distribution assumption
>>> made by Little's Lemma that allows "estimation of queue depth" is totally
>>> unreasonable on ANY Internet in practice.
>>>
>>>
>>> The assumption is a Poisson Arrival Process. In reality, traffic
>>> arrivals in real internet applications are extremely far from Poisson, and,
>>> of course, using TCP windowing, become highly intercorrelated with crossing
>>> traffic that shares the same queue.
>>>
>>>
>>> So, as I've tried to tell many, many net-heads (people who ignore
>>> applications layer behavior, like the people that think latency doesn't
>>> matter to end users, only throughput), end-to-end packet arrival times on a
>>> practical network are incredibly far from Poisson - and they are more like
>>> fractal probability distributions, very irregular at all scales of time.
>>>
>>>
>>> So, the idea that iperf can estimate queue depth by Little's Lemma by
>>> just measuring saturation of capacity of a path is bogus. The less Poisson,
>>> the worse the estimate gets, by a huge factor.
>>>
>>>
>>>
>>>
>>> Where does the Poisson assumption come from?  Well, like many theorems,
>>> it is the simplest tractable closed form solution - it creates a simplified
>>> view, by being a "single-parameter" distribution (the parameter is called
>>> lambda for a Poisson distribution).  And the analysis of a simple queue
>>> with poisson arrival distribution and a static, fixed service time is the
>>> first interesting Queueing Theory example in most textbooks. It is
>>> suggestive of an interesting phenomenon, but it does NOT characterize any
>>> real system.
>>>
>>>
>>> It's the queueing theory equivalent of "First, we assume a spherical
>>> cow...". in doing an example in a freshman physics class.
>>>
>>>
>>> Unfortunately, most networking engineers understand neither queuing
>>> theory nor application networking usage in interactive applications. Which
>>> makes them arrogant. They assume all distributions are Poisson!
>>>
>>>
>>>
>>>
>>> On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com>
>>> said:
>>>
>>> > Hello,
>>> >
>>> > I am interested to hear wish lists for network testing features. We
>>> make test
>>> > equipment, supporting lots
>>> > of wifi stations and a distributed architecture, with built-in udp,
>>> tcp, ipv6,
>>> > http, ... protocols,
>>> > and open to creating/improving some of our automated tests.
>>> >
>>> > I know Dave has some test scripts already, so I'm not necessarily
>>> looking to
>>> > reimplement that,
>>> > but more fishing for other/new ideas.
>>> >
>>> > Thanks,
>>> > Ben
>>> >
>>> > On 7/2/21 4:28 PM, Bob McMahon wrote:
>>> > > I think we need the language of math here. It seems like the network
>>> > power metric, introduced by Kleinrock and Jaffe in the late 70s, is
>>> something
>>> > useful.
>>> > > Effective end/end queue depths per Little's law also seems useful.
>>> Both are
>>> > available in iperf 2 from a test perspective. Repurposing test
>>> techniques to
>>> > actual
>>> > > traffic could be useful. Hence the question around what exact
>>> telemetry
>>> > is useful to apps making socket write() and read() calls.
>>> > >
>>> > > Bob
>>> > >
>>> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com
>>> > <mailto:dave.taht@gmail.com <dave.taht@gmail.com>>> wrote:
>>> > >
>>> > > In terms of trying to find "Quality" I have tried to encourage folk
>>> to
>>> > > both read "zen and the art of motorcycle maintenance"[0], and
>>> Deming's
>>> > > work on "total quality management".
>>> > >
>>> > > My own slice at this network, computer and lifestyle "issue" is
>>> aiming
>>> > > for "imperceptible latency" in all things. [1]. There's a lot of
>>> > > fallout from that in terms of not just addressing queuing delay, but
>>> > > caching, prefetching, and learning more about what a user really
>>> needs
>>> > > (as opposed to wants) to know via intelligent agents.
>>> > >
>>> > > [0] If you want to get depressed, read Pirsig's successor to
>>> "zen...",
>>> > > lila, which is in part about what happens when an engineer hits an
>>> > > insoluble problem.
>>> > > [1] https://www.internetsociety.org/events/latency2013/
>>> > <https://www.internetsociety.org/events/latency2013/>
>>> > >
>>> > >
>>> > >
>>> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com
>>> > <mailto:dpreed@deepplum.com <dpreed@deepplum.com>>> wrote:
>>> > > >
>>> > > > Well, nice that the folks doing the conference  are willing to
>>> > consider that quality of user experience has little to do with
>>> signalling rate at
>>> > the
>>> > > physical layer or throughput of FTP transfers.
>>> > > >
>>> > > >
>>> > > >
>>> > > > But honestly, the fact that they call the problem "network quality"
>>> > suggests that they REALLY, REALLY don't understand the Internet isn't
>>> the hardware
>>> > or
>>> > > the routers or even the routing algorithms *to its users*.
>>> > > >
>>> > > >
>>> > > >
>>> > > > By ignoring the diversity of applications now and in the future,
>>> > and the fact that we DON'T KNOW what will be coming up, this
>>> conference will
>>> > likely fall
>>> > > into the usual trap that net-heads fall into - optimizing for some
>>> > imaginary reality that doesn't exist, and in fact will probably never
>>> be what
>>> > users
>>> > > actually will do given the chance.
>>> > > >
>>> > > >
>>> > > >
>>> > > > I saw this issue in 1976 in the group developing the original
>>> > Internet protocols - a desire to put *into the network* special tricks
>>> to optimize
>>> > ASR33
>>> > > logins to remote computers from terminal concentrators (aka remote
>>> > login), bulk file transfers between file systems on different
>>> time-sharing
>>> > systems, and
>>> > > "sessions" (virtual circuits) that required logins. And then trying
>>> to
>>> > exploit underlying "multicast" by building it into the IP layer,
>>> because someone
>>> > > thought that TV broadcast would be the dominant application.
>>> > > >
>>> > > >
>>> > > >
>>> > > > Frankly, to think of "quality" as something that can be "provided"
>>> > by "the network" misses the entire point of "end-to-end argument in
>>> system
>>> > design".
>>> > > Quality is not a property defined or created by The Network. If you
>>> want
>>> > to talk about Quality, you need to talk about users - all the users at
>>> all times,
>>> > > now and into the future, and that's something you can't do if you
>>> don't
>>> > bother to include current and future users talking about what they
>>> might expect
>>> > to
>>> > > experience that they don't experience.
>>> > > >
>>> > > >
>>> > > >
>>> > > > There was much fighting back in 1976 that basically involved
>>> > "network experts" saying that the network was the place to "solve"
>>> such issues as
>>> > quality,
>>> > > so applications could avoid having to solve such issues.
>>> > > >
>>> > > >
>>> > > >
>>> > > > What some of us managed to do was to argue that you can't "solve"
>>> > such issues. All you can do is provide a framework that enables
>>> different uses to
>>> > > *cooperate* in some way.
>>> > > >
>>> > > >
>>> > > >
>>> > > > Which is why the Internet drops packets rather than queueing them,
>>> > and why diffserv cannot work.
>>> > > >
>>> > > > (I know the latter is controversial, but at the moment, ALL of
>>> > diffserv attempts to talk about end-to-end application specific
>>> metrics, but
>>> > never, ever
>>> > > explains what the diffserv control points actually do w.r.t. what
>>> the IP
>>> > layer can actually control. So it is meaningless - another violation
>>> of the
>>> > > so-called end-to-end principle).
>>> > > >
>>> > > >
>>> > > >
>>> > > > Networks are about getting packets from here to there, multiplexing
>>> > the underlying resources. That's it. Quality is a whole different
>>> thing. Quality
>>> > can
>>> > > be improved by end-to-end approaches, if the underlying network
>>> provides
>>> > some kind of thing that actually creates a way for end-to-end
>>> applications to
>>> > > affect queueing and routing decisions, and more importantly getting
>>> > "telemetry" from the network regarding what is actually going on with
>>> the other
>>> > > end-to-end users sharing the infrastructure.
>>> > > >
>>> > > >
>>> > > >
>>> > > > This conference won't talk about it this way. So don't waste your
>>> > time.


[-- Attachment #1.2: Type: text/html, Size: 44744 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
                                   ` (2 preceding siblings ...)
  2021-07-09 23:01                 ` [Cerowrt-devel] " Leonard Kleinrock
@ 2021-07-12 13:46                 ` Livingood, Jason
  2021-07-12 17:40                   ` [Cerowrt-devel] " David P. Reed
  2021-09-20  1:21                 ` [Cerowrt-devel] " Dave Taht
  4 siblings, 1 reply; 108+ messages in thread
From: Livingood, Jason @ 2021-07-12 13:46 UTC (permalink / raw)
  To: David P. Reed, Luca Muscariello
  Cc: Cake List, Make-Wifi-fast, Leonard Kleinrock, Bob McMahon,
	starlink, codel, cerowrt-devel, bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 2777 bytes --]

> 2) Users are pissed off, because they clicked on a web page, and got nothing back. They retry on their screen, or they try another site. Meanwhile, the underlying TCP connection remains there, pumping the network full of more packets on that old path, which is still backed up with packets that haven't been delivered that are sitting in queues.



Agree. I’ve experienced that as utilization of a network segment or supporting network systems (e.g. DNS) increases, you may see a very small amount of delay creep in - but not much, because things are stable until they are *quite suddenly* not so. At that stability inflection point you immediately and dramatically fall off a cliff, which is then exacerbated by what you note here – user- and machine-based retries/retransmissions that drive a huge increase in traffic. The solution has typically been throwing massive new capacity at it until the storm recedes.
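
To make the shape of that inflection concrete, here is a toy sketch (my
addition, with hypothetical rates, using the textbook M/M/1 sojourn-time
formula - a "spherical cow" as noted elsewhere in this thread, but it does
show how flat the curve is until it suddenly isn't):

# Toy illustration only, not a model of any real network: mean delay through a
# single M/M/1 queue, W = 1 / (mu - lambda), stays small over a wide range of
# utilization and then explodes as the link approaches saturation.
service_rate = 1000.0                      # packets per second (hypothetical)
for utilization in (0.50, 0.80, 0.90, 0.95, 0.98, 0.995):
    arrival_rate = utilization * service_rate
    mean_delay_ms = 1000.0 / (service_rate - arrival_rate)
    print(f"rho = {utilization:.3f}   mean delay ~ {mean_delay_ms:8.1f} ms")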



> I should say that most operators, and especially ATT in this case, do not measure end-to-end latency. Instead they use Little's Lemma to query routers for their current throughput in bits per second, and calculate latency as if Little's Lemma applied.



IMO network operators' views/practices vary widely and have been evolving quite a bit in recent years. Yes, it used to be all about capacity-utilization metrics, but I think that is changing. In my day job, we run E2E latency tests (among others) to CPE, and the distribution is a lot more instructive than the mean/median for continuously improving the network experience.



> And management responds, Hooray! Because utilization of 100% of their hardware is their investors' metric of maximizing profits. The hardware they are operating is fully utilized. No waste! And users are happy because no packets have been dropped!



Well, I hope it wasn’t the case that 100% utilization meant they were ‘green’ on their network KPIs. ;-) Ha. But I think you are correct that a network engineering team would have been measured by how well they kept ahead of utilization/demand, with network capacity decisions driven in large part by utilization trends. In a lot of networks I suspect an informal rule of thumb arose that things got a little funny once p98 utilization got to around 94-95% of link capacity – so back up from there to figure out when you need to trigger automatic capacity augments to avoid that. While I do not think managing via utilization in that way is incorrect, ISTM it’s mostly being used because the measure is an indirect proxy for end user QoE. I think latency/delay is becoming seen to be as important, certainly, if not a more direct proxy for end user QoE. This is all still evolving and I have to say is a super interesting & fun thing to work on. :-)



Jason













[-- Attachment #2: Type: text/html, Size: 5498 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 13:46                 ` [Bloat] " Livingood, Jason
@ 2021-07-12 17:40                   ` David P. Reed
  2021-07-12 18:21                     ` Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: David P. Reed @ 2021-07-12 17:40 UTC (permalink / raw)
  To: Livingood, Jason
  Cc: Luca Muscariello, Cake List, Make-Wifi-fast, Leonard Kleinrock,
	Bob McMahon, starlink, codel, cerowrt-devel, bloat, Ben Greear

 
On Monday, July 12, 2021 9:46am, "Livingood, Jason" <Jason_Livingood@comcast.com> said:

> I think latency/delay is becoming seen to be as important certainly, if not a more direct proxy for end user QoE. This is all still evolving and I have to say is a super interesting & fun thing to work on. :-)
 
If I could manage to sell one idea to the management hierarchy of communications industry CEOs (operators, vendors, ...) it is this one:

"It's the end-to-end latency, stupid!"

And I mean, by end-to-end, latency to complete a task at a relevant layer of abstraction.

At the link level, it's packet send to packet receive completion.

But at the transport level including retransmission buffers, it's datagram (or message) origination until the acknowledgement arrives for that message being delivered after whatever number of retransmissions, freeing the retransmission buffer.

At the WWW level, it's mouse click to display update corresponding to completion of the request.

What should be noted is that lower level latencies don't directly predict the magnitude of higher-level latencies. But longer lower level latencies almost always amplify higher level latencies. Often non-linearly.

Throughput is very, very weakly related to these latencies, in contrast.

The amplification process has to do with the presence of queueing. Queueing is ALWAYS bad for latency, and it only helps throughput if it is in exactly the right place (the so-called input queue of the bottleneck process, which is often a link, but not always).

Can we get that slogan into Harvard Business Review? Can we get it taught in Managerial Accounting at HBS? (which does address logistics/supply chain queueing).
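
As a back-of-the-envelope sketch of that amplification (an illustrative
addition with hypothetical numbers, ignoring retransmission timeouts and user
retries, which are what make the effect non-linear): a task that needs many
sequential round trips multiplies every millisecond of per-RTT queueing delay.

# Illustrative only, hypothetical numbers: task-level latency when a task (page
# load, RPC chain, ...) needs many *sequential* round trips.  Even this linear
# view shows the amplification; timeouts and retries make it far worse.
def task_latency_s(base_rtt_ms, queueing_ms, sequential_round_trips):
    return (base_rtt_ms + queueing_ms) * sequential_round_trips / 1000.0

for queueing_ms in (0, 50, 200, 1000):
    t = task_latency_s(base_rtt_ms=20, queueing_ms=queueing_ms,
                       sequential_round_trips=30)
    print(f"{queueing_ms:5d} ms of queueing per RTT -> ~{t:6.1f} s to finish the task")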
 
 
 
 
 


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 17:40                   ` [Cerowrt-devel] " David P. Reed
@ 2021-07-12 18:21                     ` Bob McMahon
  2021-07-12 18:38                       ` Bob McMahon
  2021-07-12 19:07                       ` [Cerowrt-devel] " Ben Greear
  0 siblings, 2 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-12 18:21 UTC (permalink / raw)
  To: David P. Reed
  Cc: Livingood, Jason, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat,
	Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 3406 bytes --]

iperf 2 supports OWD and gives full histograms for TCP write to read, TCP
connect times, latency of packets (with UDP), latency of "frames" with
simulated video traffic (TCP and UDP), xfer times of bursts with low duty
cycle traffic, and TCP RTT (sampling based). It also has support for
sampling (per interval reports) down to 100 usecs if configured with
--enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
released all this as open source.

OWD only works if the end realtime clocks are synchronized using a "machine
level" protocol such as IEEE 1588 PTP. Sadly, most data centers don't provide
a sufficient level of clock accuracy, nor a GPS pulse per second, to colo and
VM customers.

https://iperf2.sourceforge.io/iperf-manpage.html
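
A minimal sketch of the kind of post-processing this enables (the file names
are hypothetical; it assumes the two clocks are already synchronized as
described above, so OWD is simply receive timestamp minus send timestamp):

import numpy as np

# Hypothetical post-processing of one-way-delay samples, e.g. collected from an
# iperf 2 run with --trip-times.  Assumes sender/receiver clocks are in sync.
tx = np.loadtxt("tx_timestamps.txt")      # send times in seconds (hypothetical file)
rx = np.loadtxt("rx_timestamps.txt")      # receive times in seconds (hypothetical file)
owd_ms = (rx - tx) * 1000.0

# Report the distribution, not just a mean: the tail is what users feel.
for q in (50, 90, 99, 99.9):
    print(f"p{q}: {np.percentile(owd_ms, q):.2f} ms")
counts, edges = np.histogram(owd_ms, bins=50)   # histogram-style summary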

Bob

On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com> wrote:

>
> On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
> Jason_Livingood@comcast.com> said:
>
> > I think latency/delay is becoming seen to be as important certainly, if
> not a more direct proxy for end user QoE. This is all still evolving and I
> have to say is a super interesting & fun thing to work on. :-)
>
> If I could manage to sell one idea to the management hierarchy of
> communications industry CEOs (operators, vendors, ...) it is this one:
>
> "It's the end-to-end latency, stupid!"
>
> And I mean, by end-to-end, latency to complete a task at a relevant layer
> of abstraction.
>
> At the link level, it's packet send to packet receive completion.
>
> But at the transport level including retransmission buffers, it's datagram
> (or message) origination until the acknowledgement arrives for that message
> being delivered after whatever number of retransmissions, freeing the
> retransmission buffer.
>
> At the WWW level, it's mouse click to display update corresponding to
> completion of the request.
>
> What should be noted is that lower level latencies don't directly predict
> the magnitude of higher-level latencies. But longer lower level latencies
> almost always amplfify higher level latencies. Often non-linearly.
>
> Throughput is very, very weakly related to these latencies, in contrast.
>
> The amplification process has to do with the presence of queueing.
> Queueing is ALWAYS bad for latency, and throughput only helps if it is in
> exactly the right place (the so-called input queue of the bottleneck
> process, which is often a link, but not always).
>
> Can we get that slogan into Harvard Business Review? Can we get it taught
> in Managerial Accounting at HBS? (which does address logistics/supply chain
> queueing).
>
>
>
>
>
>
>


[-- Attachment #1.2: Type: text/html, Size: 4077 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 18:21                     ` Bob McMahon
@ 2021-07-12 18:38                       ` Bob McMahon
  2021-07-12 19:07                       ` [Cerowrt-devel] " Ben Greear
  1 sibling, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-12 18:38 UTC (permalink / raw)
  To: David P. Reed
  Cc: Livingood, Jason, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat,
	Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 3753 bytes --]

To be clear, it's an OS write() using a socket opened with TCP, and the final
OS read() of that write. The write size is set using -l or --length. OWD
requires the --trip-times option.
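
A toy sketch of that write-to-read measurement (my illustration, not iperf 2
code; run over loopback so one clock serves both ends - across hosts this needs
synchronized clocks, as discussed earlier):

import socket, struct, threading, time

# Toy sketch: time an OS write() on a TCP socket to the final read() of that payload.
PORT, SIZE = 54321, 8 * 1024

def reader():
    with socket.create_server(("127.0.0.1", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            buf = b""
            while len(buf) < SIZE:                    # keep reading until the whole write arrives
                chunk = conn.recv(SIZE - len(buf))
                if not chunk:
                    return
                buf += chunk
            t_sent = struct.unpack("!d", buf[:8])[0]  # timestamp embedded by the writer
            print(f"write-to-read latency: {(time.time() - t_sent) * 1000:.3f} ms")

threading.Thread(target=reader, daemon=True).start()
time.sleep(0.2)                                       # crude wait for the listener to be up
with socket.create_connection(("127.0.0.1", PORT)) as c:
    payload = struct.pack("!d", time.time()) + bytes(SIZE - 8)
    c.sendall(payload)                                # the write() being timed
time.sleep(0.2)                                       # give the reader time to print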

Bob

On Mon, Jul 12, 2021 at 11:21 AM Bob McMahon <bob.mcmahon@broadcom.com>
wrote:

> iperf 2 supports OWD and gives full histograms for TCP write to read, TCP
> connect times, latency of packets (with UDP), latency of "frames" with
> simulated video traffic (TCP and UDP), xfer times of bursts with low duty
> cycle traffic, and TCP RTT (sampling based.) It also has support for
> sampling (per interval reports) down to 100 usecs if configured with
> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> released all this as open source.
>
> OWD only works if the end realtime clocks are synchronized using a
> "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> centers don't provide sufficient level of clock accuracy and the GPS pulse
> per second * to colo and vm customers.
>
> https://iperf2.sourceforge.io/iperf-manpage.html
>
> Bob
>
> On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com>
> wrote:
>
>>
>> On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
>> Jason_Livingood@comcast.com> said:
>>
>> > I think latency/delay is becoming seen to be as important certainly, if
>> not a more direct proxy for end user QoE. This is all still evolving and I
>> have to say is a super interesting & fun thing to work on. :-)
>>
>> If I could manage to sell one idea to the management hierarchy of
>> communications industry CEOs (operators, vendors, ...) it is this one:
>>
>> "It's the end-to-end latency, stupid!"
>>
>> And I mean, by end-to-end, latency to complete a task at a relevant layer
>> of abstraction.
>>
>> At the link level, it's packet send to packet receive completion.
>>
>> But at the transport level including retransmission buffers, it's
>> datagram (or message) origination until the acknowledgement arrives for
>> that message being delivered after whatever number of retransmissions,
>> freeing the retransmission buffer.
>>
>> At the WWW level, it's mouse click to display update corresponding to
>> completion of the request.
>>
>> What should be noted is that lower level latencies don't directly predict
>> the magnitude of higher-level latencies. But longer lower level latencies
>> almost always amplfify higher level latencies. Often non-linearly.
>>
>> Throughput is very, very weakly related to these latencies, in contrast.
>>
>> The amplification process has to do with the presence of queueing.
>> Queueing is ALWAYS bad for latency, and throughput only helps if it is in
>> exactly the right place (the so-called input queue of the bottleneck
>> process, which is often a link, but not always).
>>
>> Can we get that slogan into Harvard Business Review? Can we get it taught
>> in Managerial Accounting at HBS? (which does address logistics/supply chain
>> queueing).
>>
>>
>>
>>
>>
>>
>>


[-- Attachment #1.2: Type: text/html, Size: 4669 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 18:21                     ` Bob McMahon
  2021-07-12 18:38                       ` Bob McMahon
@ 2021-07-12 19:07                       ` Ben Greear
  2021-07-12 20:04                         ` Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Ben Greear @ 2021-07-12 19:07 UTC (permalink / raw)
  To: Bob McMahon, David P. Reed
  Cc: Livingood, Jason, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat

Measuring one or a few links provides a bit of data, but seems like if someone is trying to understand
a large and real network, then the OWD between point A and B needs to just be input into something much
more grand.  Assuming real-time OWD data exists between 100 to 1000 endpoint pairs, has anyone found a way
to visualize this in a useful manner?
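
One obvious starting point is just a heatmap of the pair-wise OWD matrix (a
sketch with synthetic data; whether it stays useful at 1000x1000 is exactly the
open question):

import numpy as np
import matplotlib.pyplot as plt

# Synthetic data: an endpoint-pair OWD matrix rendered as a heatmap.  Structure
# such as one congested last mile or an asymmetric path shows up as a bright
# row or column.
n = 30                                        # number of endpoints (hypothetical)
owd_ms = np.random.lognormal(mean=2.5, sigma=0.4, size=(n, n))
owd_ms[:, 7] += 80                            # pretend endpoint 7 has a congested last mile

plt.imshow(owd_ms, cmap="viridis")
plt.colorbar(label="one-way delay (ms)")
plt.xlabel("destination endpoint")
plt.ylabel("source endpoint")
plt.title("OWD across endpoint pairs (synthetic)")
plt.show()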

Also, considering that something better than NTP may not really scale to 1000+ endpoints, maybe round-trip
time is the only viable way to get this type of data.  In that case, maybe clever logic could use things
like trace-route to get some idea of how long it takes to get 'onto' the internet proper, and so estimate
the last-mile latency.  My assumption is that the last-mile latency is where most of the pervasive
asymmetric network latencies would exist (or just ping 8.8.8.8, which is 20ms from everywhere due to
$magic).

Endpoints could also triangulate a bit if needed, using some anchor points in the network
under test.

Thanks,
Ben

On 7/12/21 11:21 AM, Bob McMahon wrote:
> iperf 2 supports OWD and gives full histograms for TCP write to read, TCP connect times, latency of packets (with UDP), latency of "frames" with 
> simulated video traffic (TCP and UDP), xfer times of bursts with low duty cycle traffic, and TCP RTT (sampling based.) It also has support for sampling (per 
> interval reports) down to 100 usecs if configured with --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've released all this as open source.
> 
> OWD only works if the end realtime clocks are synchronized using a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data centers don't provide 
> sufficient level of clock accuracy and the GPS pulse per second * to colo and vm customers.
> 
> https://iperf2.sourceforge.io/iperf-manpage.html
> 
> Bob
> 
> On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com <mailto:dpreed@deepplum.com>> wrote:
> 
> 
>     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> said:
> 
>      > I think latency/delay is becoming seen to be as important certainly, if not a more direct proxy for end user QoE. This is all still evolving and I have
>     to say is a super interesting & fun thing to work on. :-)
> 
>     If I could manage to sell one idea to the management hierarchy of communications industry CEOs (operators, vendors, ...) it is this one:
> 
>     "It's the end-to-end latency, stupid!"
> 
>     And I mean, by end-to-end, latency to complete a task at a relevant layer of abstraction.
> 
>     At the link level, it's packet send to packet receive completion.
> 
>     But at the transport level including retransmission buffers, it's datagram (or message) origination until the acknowledgement arrives for that message being
>     delivered after whatever number of retransmissions, freeing the retransmission buffer.
> 
>     At the WWW level, it's mouse click to display update corresponding to completion of the request.
> 
>     What should be noted is that lower level latencies don't directly predict the magnitude of higher-level latencies. But longer lower level latencies almost
>     always amplfify higher level latencies. Often non-linearly.
> 
>     Throughput is very, very weakly related to these latencies, in contrast.
> 
>     The amplification process has to do with the presence of queueing. Queueing is ALWAYS bad for latency, and throughput only helps if it is in exactly the
>     right place (the so-called input queue of the bottleneck process, which is often a link, but not always).
> 
>     Can we get that slogan into Harvard Business Review? Can we get it taught in Managerial Accounting at HBS? (which does address logistics/supply chain queueing).
> 
> 
> 
> 
> 
> 
> 


-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 19:07                       ` [Cerowrt-devel] " Ben Greear
@ 2021-07-12 20:04                         ` Bob McMahon
  2021-07-12 20:32                           ` [Cerowrt-devel] " Ben Greear
  2021-07-12 21:54                           ` [Cerowrt-devel] [Make-wifi-fast] " Jonathan Morton
  0 siblings, 2 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-12 20:04 UTC (permalink / raw)
  To: Ben Greear
  Cc: David P. Reed, Livingood, Jason, Luca Muscariello, Cake List,
	Make-Wifi-fast, Leonard Kleinrock, starlink, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 7095 bytes --]

I believe end hosts' TCP stats are insufficient, as evidenced by the "failed"
congestion control mechanisms over the last decades. I think Jaffe pointed
this out in 1979, though he was using what's been deemed on this thread as
"spherical cow queueing theory."

"Flow control in store-and-forward computer networks is appropriate for
decentralized execution. A formal description of a class of "decentralized
flow control algorithms" is given. The feasibility of maximizing power with
such algorithms is investigated. On the assumption that communication links
behave like M/M/1 servers it is shown that no "decentralized flow control
algorithm" can maximize network power. Power has been suggested in the
literature as a network performance objective. It is also shown that no
objective based only on the users' throughputs and average delay is
decentralizable. Finally, a restricted class of algorithms cannot even
approximate power."

https://ieeexplore.ieee.org/document/1095152

Did Jaffe make a mistake?

Also, it's been observed that latency is non-parametric in its
distributions, and computing Gaussians per the central limit theorem for OWD
feedback loops isn't effective. How does one design a control loop around
things that are non-parametric? It also raises the question, what are the
feed-forward knobs that can actually help?
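
A quick illustration of why the Gaussian summary misleads (synthetic,
heavy-tailed data standing in for measured OWD):

import numpy as np

# Synthetic heavy-tailed "OWD" sample: the mean/std (Gaussian) summary badly
# misstates the tail that a control loop would actually have to react to.
owd_ms = np.random.lognormal(mean=2.0, sigma=1.0, size=100_000)

mu, sigma = owd_ms.mean(), owd_ms.std()
gaussian_p99 = mu + 2.326 * sigma             # where p99 would sit if OWD were normal
empirical_p99 = np.percentile(owd_ms, 99)     # where p99 actually sits

print(f"mean = {mu:.1f} ms, std = {sigma:.1f} ms")
print(f"p99 assuming a Gaussian: {gaussian_p99:.1f} ms")
print(f"p99 actually observed:   {empirical_p99:.1f} ms")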

Bob

On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com> wrote:

> Measuring one or a few links provides a bit of data, but seems like if
> someone is trying to understand
> a large and real network, then the OWD between point A and B needs to just
> be input into something much
> more grand.  Assuming real-time OWD data exists between 100 to 1000
> endpoint pairs, has anyone found a way
> to visualize this in a useful manner?
>
> Also, considering something better than ntp may not really scale to 1000+
> endpoints, maybe round-trip
> time is only viable way to get this type of data.  In that case, maybe
> clever logic could use things
> like trace-route to get some idea of how long it takes to get 'onto' the
> internet proper, and so estimate
> the last-mile latency.  My assumption is that the last-mile latency is
> where most of the pervasive
> assymetric network latencies would exist (or just ping 8.8.8.8 which is
> 20ms from everywhere due to
> $magic).
>
> Endpoints could also triangulate a bit if needed, using some anchor points
> in the network
> under test.
>
> Thanks,
> Ben
>
> On 7/12/21 11:21 AM, Bob McMahon wrote:
> > iperf 2 supports OWD and gives full histograms for TCP write to read,
> TCP connect times, latency of packets (with UDP), latency of "frames" with
> > simulated video traffic (TCP and UDP), xfer times of bursts with low
> duty cycle traffic, and TCP RTT (sampling based.) It also has support for
> sampling (per
> > interval reports) down to 100 usecs if configured with
> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> released all this as open source.
> >
> > OWD only works if the end realtime clocks are synchronized using a
> "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> centers don't provide
> > sufficient level of clock accuracy and the GPS pulse per second * to
> colo and vm customers.
> >
> > https://iperf2.sourceforge.io/iperf-manpage.html
> >
> > Bob
> >
> > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com
> <mailto:dpreed@deepplum.com>> wrote:
> >
> >
> >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> said:
> >
> >      > I think latency/delay is becoming seen to be as important
> certainly, if not a more direct proxy for end user QoE. This is all still
> evolving and I have
> >     to say is a super interesting & fun thing to work on. :-)
> >
> >     If I could manage to sell one idea to the management hierarchy of
> communications industry CEOs (operators, vendors, ...) it is this one:
> >
> >     "It's the end-to-end latency, stupid!"
> >
> >     And I mean, by end-to-end, latency to complete a task at a relevant
> layer of abstraction.
> >
> >     At the link level, it's packet send to packet receive completion.
> >
> >     But at the transport level including retransmission buffers, it's
> datagram (or message) origination until the acknowledgement arrives for
> that message being
> >     delivered after whatever number of retransmissions, freeing the
> retransmission buffer.
> >
> >     At the WWW level, it's mouse click to display update corresponding
> to completion of the request.
> >
> >     What should be noted is that lower level latencies don't directly
> predict the magnitude of higher-level latencies. But longer lower level
> latencies almost
> >     always amplfify higher level latencies. Often non-linearly.
> >
> >     Throughput is very, very weakly related to these latencies, in
> contrast.
> >
> >     The amplification process has to do with the presence of queueing.
> Queueing is ALWAYS bad for latency, and throughput only helps if it is in
> exactly the
> >     right place (the so-called input queue of the bottleneck process,
> which is often a link, but not always).
> >
> >     Can we get that slogan into Harvard Business Review? Can we get it
> taught in Managerial Accounting at HBS? (which does address
> logistics/supply chain queueing).
> >
> >
> >
> >
> >
> >
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>
>


[-- Attachment #1.2: Type: text/html, Size: 8678 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 20:04                         ` Bob McMahon
@ 2021-07-12 20:32                           ` Ben Greear
  2021-07-12 20:36                             ` [Cerowrt-devel] [Cake] " David Lang
                                               ` (3 more replies)
  2021-07-12 21:54                           ` [Cerowrt-devel] [Make-wifi-fast] " Jonathan Morton
  1 sibling, 4 replies; 108+ messages in thread
From: Ben Greear @ 2021-07-12 20:32 UTC (permalink / raw)
  To: Bob McMahon
  Cc: David P. Reed, Livingood, Jason, Luca Muscariello, Cake List,
	Make-Wifi-fast, Leonard Kleinrock, starlink, codel,
	cerowrt-devel, bloat

UDP is better for getting actual packet latency, for sure.  TCP reflects typical user-experience latency, though,
so it is also useful.

I'm interested in the test and visualization side of this.  If there were a way to give engineers
a good real-time look at a complex real-world network, then they would have something to go on while
trying to tune various knobs in their network to improve it.

I'll let others try to figure out how to build and tune the knobs, but the data acquisition and
visualization are something we might try to accomplish.  I have a feeling I'm not the
first person to think of this, however... probably someone has already done such
a thing.
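
For what it's worth, the kind of first-cut view I have in mind is something
like this (a rough sketch only; it assumes an all-pairs OWD matrix is already
being collected, and the data here is synthetic):

# Render an all-pairs one-way-delay matrix as a heatmap.
import random
import matplotlib.pyplot as plt

n = 100
owd_ms = [[random.uniform(2, 60) for _ in range(n)] for _ in range(n)]  # replace with real data

plt.imshow(owd_ms, cmap="viridis", vmin=0, vmax=60)
plt.colorbar(label="one-way delay (ms)")
plt.xlabel("destination endpoint")
plt.ylabel("source endpoint")
plt.title("All-pairs OWD (synthetic data)")
plt.show()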

Thanks,
Ben

On 7/12/21 1:04 PM, Bob McMahon wrote:
> I believe end host's TCP stats are insufficient as seen per the "failed" congested control mechanisms over the last decades. I think Jaffe pointed this out in 
> 1979 though he was using what's been deemed on this thread as "spherical cow queueing theory."
> 
> "Flow control in store-and-forward computer networks is appropriate for decentralized execution. A formal description of a class of "decentralized flow control 
> algorithms" is given. The feasibility of maximizing power with such algorithms is investigated. On the assumption that communication links behave like M/M/1 
> servers it is shown that no "decentralized flow control algorithm" can maximize network power. Power has been suggested in the literature as a network 
> performance objective. It is also shown that no objective based only on the users' throughputs and average delay is decentralizable. Finally, a restricted class 
> of algorithms cannot even approximate power."
> 
> https://ieeexplore.ieee.org/document/1095152
> 
> Did Jaffe make a mistake?
> 
> Also, it's been observed that latency is non-parametric in it's distributions and computing gaussians per the central limit theorem for OWD feedback loops 
> aren't effective. How does one design a control loop around things that are non-parametric? It also begs the question, what are the feed forward knobs that can 
> actually help?
> 
> Bob
> 
> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>> wrote:
> 
>     Measuring one or a few links provides a bit of data, but seems like if someone is trying to understand
>     a large and real network, then the OWD between point A and B needs to just be input into something much
>     more grand.  Assuming real-time OWD data exists between 100 to 1000 endpoint pairs, has anyone found a way
>     to visualize this in a useful manner?
> 
>     Also, considering something better than ntp may not really scale to 1000+ endpoints, maybe round-trip
>     time is only viable way to get this type of data.  In that case, maybe clever logic could use things
>     like trace-route to get some idea of how long it takes to get 'onto' the internet proper, and so estimate
>     the last-mile latency.  My assumption is that the last-mile latency is where most of the pervasive
>     assymetric network latencies would exist (or just ping 8.8.8.8 which is 20ms from everywhere due to
>     $magic).
> 
>     Endpoints could also triangulate a bit if needed, using some anchor points in the network
>     under test.
> 
>     Thanks,
>     Ben
> 
>     On 7/12/21 11:21 AM, Bob McMahon wrote:
>      > iperf 2 supports OWD and gives full histograms for TCP write to read, TCP connect times, latency of packets (with UDP), latency of "frames" with
>      > simulated video traffic (TCP and UDP), xfer times of bursts with low duty cycle traffic, and TCP RTT (sampling based.) It also has support for sampling (per
>      > interval reports) down to 100 usecs if configured with --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've released all this as open source.
>      >
>      > OWD only works if the end realtime clocks are synchronized using a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data centers don't
>     provide
>      > sufficient level of clock accuracy and the GPS pulse per second * to colo and vm customers.
>      >
>      > https://iperf2.sourceforge.io/iperf-manpage.html
>      >
>      > Bob
>      >
>      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:dpreed@deepplum.com
>     <mailto:dpreed@deepplum.com>>> wrote:
>      >
>      >
>      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
>     <mailto:Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>>> said:
>      >
>      >      > I think latency/delay is becoming seen to be as important certainly, if not a more direct proxy for end user QoE. This is all still evolving and I
>     have
>      >     to say is a super interesting & fun thing to work on. :-)
>      >
>      >     If I could manage to sell one idea to the management hierarchy of communications industry CEOs (operators, vendors, ...) it is this one:
>      >
>      >     "It's the end-to-end latency, stupid!"
>      >
>      >     And I mean, by end-to-end, latency to complete a task at a relevant layer of abstraction.
>      >
>      >     At the link level, it's packet send to packet receive completion.
>      >
>      >     But at the transport level including retransmission buffers, it's datagram (or message) origination until the acknowledgement arrives for that
>     message being
>      >     delivered after whatever number of retransmissions, freeing the retransmission buffer.
>      >
>      >     At the WWW level, it's mouse click to display update corresponding to completion of the request.
>      >
>      >     What should be noted is that lower level latencies don't directly predict the magnitude of higher-level latencies. But longer lower level latencies
>     almost
>      >     always amplfify higher level latencies. Often non-linearly.
>      >
>      >     Throughput is very, very weakly related to these latencies, in contrast.
>      >
>      >     The amplification process has to do with the presence of queueing. Queueing is ALWAYS bad for latency, and throughput only helps if it is in exactly the
>      >     right place (the so-called input queue of the bottleneck process, which is often a link, but not always).
>      >
>      >     Can we get that slogan into Harvard Business Review? Can we get it taught in Managerial Accounting at HBS? (which does address logistics/supply chain
>     queueing).
>      >
>      >
>      >
>      >
>      >
>      >
>      >
> 
> 
>     -- 
>     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>>
>     Candela Technologies Inc http://www.candelatech.com
> 
> 


-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 20:32                           ` [Cerowrt-devel] " Ben Greear
@ 2021-07-12 20:36                             ` David Lang
  2021-07-12 20:50                               ` Bob McMahon
  2021-07-12 20:42                             ` Bob McMahon
                                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 108+ messages in thread
From: David Lang @ 2021-07-12 20:36 UTC (permalink / raw)
  To: Ben Greear
  Cc: Bob McMahon, starlink, Make-Wifi-fast, Leonard Kleinrock,
	Cake List, Livingood, Jason, codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 9126 bytes --]

I have seen some performance tests that do explicit DNS timing tests separate 
from other throughput/latency tests.

Since DNS uses UDP (even if it then falls back to TCP in some cases), UDP 
performance (and especially probability of loss at congested links) is very 
important.
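
A quick way to time just the resolver step on its own (standard library only;
the host names are arbitrary, and OS-level caching can make repeat lookups
unrepresentative):

# Time DNS resolution separately from any throughput test.
import socket
import time

def dns_lookup_ms(name):
    t0 = time.perf_counter()
    socket.getaddrinfo(name, None)   # resolver round trip(s)
    return (time.perf_counter() - t0) * 1000.0

for name in ("example.com", "example.org"):
    print(f"{name}: {dns_lookup_ms(name):.1f} ms")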

David Lang

On Mon, 12 Jul 2021, Ben Greear wrote:

> UDP is better for getting actual packet latency, for sure.  TCP is 
> typical-user-experience-latency though,
> so it is also useful.
>
> I'm interested in the test and visualization side of this.  If there were a 
> way to give engineers
> a good real-time look at a complex real-world network, then they have 
> something to go on while trying
> to tune various knobs in their network to improve it.
>
> I'll let others try to figure out how build and tune the knobs, but the data 
> acquisition and
> visualization is something we might try to accomplish.  I have a feeling I'm 
> not the
> first person to think of this, however....probably someone already has done 
> such
> a thing.
>
> Thanks,
> Ben
>
> On 7/12/21 1:04 PM, Bob McMahon wrote:
>> I believe end host's TCP stats are insufficient as seen per the "failed" 
> congested control mechanisms over the last decades. I think Jaffe pointed 
> this out in 
>> 1979 though he was using what's been deemed on this thread as "spherical 
> cow queueing theory."
>> 
>> "Flow control in store-and-forward computer networks is appropriate for 
> decentralized execution. A formal description of a class of "decentralized 
> flow control 
>> algorithms" is given. The feasibility of maximizing power with such 
> algorithms is investigated. On the assumption that communication links behave 
> like M/M/1 
>> servers it is shown that no "decentralized flow control algorithm" can 
> maximize network power. Power has been suggested in the literature as a 
> network 
>> performance objective. It is also shown that no objective based only on the 
> users' throughputs and average delay is decentralizable. Finally, a 
> restricted class 
>> of algorithms cannot even approximate power."
>> 
>> https://ieeexplore.ieee.org/document/1095152
>> 
>> Did Jaffe make a mistake?
>> 
>> Also, it's been observed that latency is non-parametric in it's 
> distributions and computing gaussians per the central limit theorem for OWD 
> feedback loops 
>> aren't effective. How does one design a control loop around things that are 
> non-parametric? It also begs the question, what are the feed forward knobs 
> that can 
>> actually help?
>> 
>> Bob
>> 
>> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com 
> <mailto:greearb@candelatech.com>> wrote:
>>
>>     Measuring one or a few links provides a bit of data, but seems like if 
> someone is trying to understand
>>     a large and real network, then the OWD between point A and B needs to 
> just be input into something much
>>     more grand.  Assuming real-time OWD data exists between 100 to 1000 
> endpoint pairs, has anyone found a way
>>     to visualize this in a useful manner?
>>
>>     Also, considering something better than ntp may not really scale to 
> 1000+ endpoints, maybe round-trip
>>     time is only viable way to get this type of data.  In that case, maybe 
> clever logic could use things
>>     like trace-route to get some idea of how long it takes to get 'onto' 
> the internet proper, and so estimate
>>     the last-mile latency.  My assumption is that the last-mile latency is 
> where most of the pervasive
>>     assymetric network latencies would exist (or just ping 8.8.8.8 which is 
> 20ms from everywhere due to
>>     $magic).
>>
>>     Endpoints could also triangulate a bit if needed, using some anchor 
> points in the network
>>     under test.
>>
>>     Thanks,
>>     Ben
>>
>>     On 7/12/21 11:21 AM, Bob McMahon wrote:
>>      > iperf 2 supports OWD and gives full histograms for TCP write to 
> read, TCP connect times, latency of packets (with UDP), latency of "frames" 
> with
>>      > simulated video traffic (TCP and UDP), xfer times of bursts with low 
> duty cycle traffic, and TCP RTT (sampling based.) It also has support for 
> sampling (per
>>      > interval reports) down to 100 usecs if configured with 
> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've released 
> all this as open source.
>>      >
>>      > OWD only works if the end realtime clocks are synchronized using a 
> "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data centers 
> don't
>>     provide
>>      > sufficient level of clock accuracy and the GPS pulse per second * to 
> colo and vm customers.
>>      >
>>      > https://iperf2.sourceforge.io/iperf-manpage.html
>>      >
>>      > Bob
>>      >
>>      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com 
> <mailto:dpreed@deepplum.com> <mailto:dpreed@deepplum.com
>>     <mailto:dpreed@deepplum.com>>> wrote:
>>      >
>>      >
>>      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" 
> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
>>     <mailto:Jason_Livingood@comcast.com 
> <mailto:Jason_Livingood@comcast.com>>> said:
>>      >
>>      >      > I think latency/delay is becoming seen to be as important 
> certainly, if not a more direct proxy for end user QoE. This is all still 
> evolving and I
>>     have
>>      >     to say is a super interesting & fun thing to work on. :-)
>>      >
>>      >     If I could manage to sell one idea to the management hierarchy 
> of communications industry CEOs (operators, vendors, ...) it is this one:
>>      >
>>      >     "It's the end-to-end latency, stupid!"
>>      >
>>      >     And I mean, by end-to-end, latency to complete a task at a 
> relevant layer of abstraction.
>>      >
>>      >     At the link level, it's packet send to packet receive 
> completion.
>>      >
>>      >     But at the transport level including retransmission buffers, 
> it's datagram (or message) origination until the acknowledgement arrives for 
> that
>>     message being
>>      >     delivered after whatever number of retransmissions, freeing the 
> retransmission buffer.
>>      >
>>      >     At the WWW level, it's mouse click to display update 
> corresponding to completion of the request.
>>      >
>>      >     What should be noted is that lower level latencies don't 
> directly predict the magnitude of higher-level latencies. But longer lower 
> level latencies
>>     almost
>>      >     always amplfify higher level latencies. Often non-linearly.
>>      >
>>      >     Throughput is very, very weakly related to these latencies, in 
> contrast.
>>      >
>>      >     The amplification process has to do with the presence of 
> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if it 
> is in exactly the
>>      >     right place (the so-called input queue of the bottleneck 
> process, which is often a link, but not always).
>>      >
>>      >     Can we get that slogan into Harvard Business Review? Can we get 
> it taught in Managerial Accounting at HBS? (which does address 
> logistics/supply chain
>>     queueing).
>>      >
>>      >
>>      >
>>      >
>>      >
>>      >
>>      >
>> 
>>
>>     --
>>     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>>
>>     Candela Technologies Inc http://www.candelatech.com
>> 
>> 
>
>
>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 20:32                           ` [Cerowrt-devel] " Ben Greear
  2021-07-12 20:36                             ` [Cerowrt-devel] [Cake] " David Lang
@ 2021-07-12 20:42                             ` Bob McMahon
  2021-07-13  7:14                             ` [Cerowrt-devel] " Amr Rizk
  2021-07-17 23:29                             ` [Cerowrt-devel] " Aaron Wood
  3 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-12 20:42 UTC (permalink / raw)
  To: Ben Greear
  Cc: David P. Reed, Livingood, Jason, Luca Muscariello, Cake List,
	Make-Wifi-fast, Leonard Kleinrock, starlink, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 10168 bytes --]

We in WiFi find that UDP, while useful, also has severe limitations. The
impact on the TCP control loop matters a lot for things like aggregation.

Visualizations can be useful but also a bit limiting. We use statistical
techniques such as PCA, which is more mathematical and less visual.

We find syscall connect() times a bit more relevant to user experience
than ICMP pings, which are typically originated and terminated in kernel
space.
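
As a rough illustration of what I mean by connect() time, a user-space sketch
(host and port are arbitrary; our real numbers come from the traffic tools
rather than a snippet like this):

# Measure TCP three-way-handshake latency as seen by a user-space connect().
import socket
import time

def connect_ms(host, port=443, timeout=3.0):
    t0 = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass   # close right after the handshake completes
    return (time.perf_counter() - t0) * 1000.0

print(f"connect to example.com:443 took {connect_ms('example.com'):.1f} ms")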

Bob

On Mon, Jul 12, 2021 at 1:32 PM Ben Greear <greearb@candelatech.com> wrote:

> UDP is better for getting actual packet latency, for sure.  TCP is
> typical-user-experience-latency though,
> so it is also useful.
>
> I'm interested in the test and visualization side of this.  If there were
> a way to give engineers
> a good real-time look at a complex real-world network, then they have
> something to go on while trying
> to tune various knobs in their network to improve it.
>
> I'll let others try to figure out how build and tune the knobs, but the
> data acquisition and
> visualization is something we might try to accomplish.  I have a feeling
> I'm not the
> first person to think of this, however....probably someone already has
> done such
> a thing.
>
> Thanks,
> Ben
>
> On 7/12/21 1:04 PM, Bob McMahon wrote:
> > I believe end host's TCP stats are insufficient as seen per the "failed"
> congested control mechanisms over the last decades. I think Jaffe pointed
> this out in
> > 1979 though he was using what's been deemed on this thread as "spherical
> cow queueing theory."
> >
> > "Flow control in store-and-forward computer networks is appropriate for
> decentralized execution. A formal description of a class of "decentralized
> flow control
> > algorithms" is given. The feasibility of maximizing power with such
> algorithms is investigated. On the assumption that communication links
> behave like M/M/1
> > servers it is shown that no "decentralized flow control algorithm" can
> maximize network power. Power has been suggested in the literature as a
> network
> > performance objective. It is also shown that no objective based only on
> the users' throughputs and average delay is decentralizable. Finally, a
> restricted class
> > of algorithms cannot even approximate power."
> >
> > https://ieeexplore.ieee.org/document/1095152
> >
> > Did Jaffe make a mistake?
> >
> > Also, it's been observed that latency is non-parametric in it's
> distributions and computing gaussians per the central limit theorem for OWD
> feedback loops
> > aren't effective. How does one design a control loop around things that
> are non-parametric? It also begs the question, what are the feed forward
> knobs that can
> > actually help?
> >
> > Bob
> >
> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
> <mailto:greearb@candelatech.com>> wrote:
> >
> >     Measuring one or a few links provides a bit of data, but seems like
> if someone is trying to understand
> >     a large and real network, then the OWD between point A and B needs
> to just be input into something much
> >     more grand.  Assuming real-time OWD data exists between 100 to 1000
> endpoint pairs, has anyone found a way
> >     to visualize this in a useful manner?
> >
> >     Also, considering something better than ntp may not really scale to
> 1000+ endpoints, maybe round-trip
> >     time is only viable way to get this type of data.  In that case,
> maybe clever logic could use things
> >     like trace-route to get some idea of how long it takes to get 'onto'
> the internet proper, and so estimate
> >     the last-mile latency.  My assumption is that the last-mile latency
> is where most of the pervasive
> >     assymetric network latencies would exist (or just ping 8.8.8.8 which
> is 20ms from everywhere due to
> >     $magic).
> >
> >     Endpoints could also triangulate a bit if needed, using some anchor
> points in the network
> >     under test.
> >
> >     Thanks,
> >     Ben
> >
> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
> >      > iperf 2 supports OWD and gives full histograms for TCP write to
> read, TCP connect times, latency of packets (with UDP), latency of "frames"
> with
> >      > simulated video traffic (TCP and UDP), xfer times of bursts with
> low duty cycle traffic, and TCP RTT (sampling based.) It also has support
> for sampling (per
> >      > interval reports) down to 100 usecs if configured with
> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> released all this as open source.
> >      >
> >      > OWD only works if the end realtime clocks are synchronized using
> a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> centers don't
> >     provide
> >      > sufficient level of clock accuracy and the GPS pulse per second *
> to colo and vm customers.
> >      >
> >      > https://iperf2.sourceforge.io/iperf-manpage.html
> >      >
> >      > Bob
> >      >
> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
> dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:
> dpreed@deepplum.com
> >     <mailto:dpreed@deepplum.com>>> wrote:
> >      >
> >      >
> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
> >     <mailto:Jason_Livingood@comcast.com <mailto:
> Jason_Livingood@comcast.com>>> said:
> >      >
> >      >      > I think latency/delay is becoming seen to be as important
> certainly, if not a more direct proxy for end user QoE. This is all still
> evolving and I
> >     have
> >      >     to say is a super interesting & fun thing to work on. :-)
> >      >
> >      >     If I could manage to sell one idea to the management
> hierarchy of communications industry CEOs (operators, vendors, ...) it is
> this one:
> >      >
> >      >     "It's the end-to-end latency, stupid!"
> >      >
> >      >     And I mean, by end-to-end, latency to complete a task at a
> relevant layer of abstraction.
> >      >
> >      >     At the link level, it's packet send to packet receive
> completion.
> >      >
> >      >     But at the transport level including retransmission buffers,
> it's datagram (or message) origination until the acknowledgement arrives
> for that
> >     message being
> >      >     delivered after whatever number of retransmissions, freeing
> the retransmission buffer.
> >      >
> >      >     At the WWW level, it's mouse click to display update
> corresponding to completion of the request.
> >      >
> >      >     What should be noted is that lower level latencies don't
> directly predict the magnitude of higher-level latencies. But longer lower
> level latencies
> >     almost
> >      >     always amplfify higher level latencies. Often non-linearly.
> >      >
> >      >     Throughput is very, very weakly related to these latencies,
> in contrast.
> >      >
> >      >     The amplification process has to do with the presence of
> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if
> it is in exactly the
> >      >     right place (the so-called input queue of the bottleneck
> process, which is often a link, but not always).
> >      >
> >      >     Can we get that slogan into Harvard Business Review? Can we
> get it taught in Managerial Accounting at HBS? (which does address
> logistics/supply chain
> >     queueing).
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >
> >
> >     --
> >     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
> >>
> >     Candela Technologies Inc http://www.candelatech.com
> >
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>
>


[-- Attachment #1.2: Type: text/html, Size: 13004 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cake] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 20:36                             ` [Cerowrt-devel] [Cake] " David Lang
@ 2021-07-12 20:50                               ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-12 20:50 UTC (permalink / raw)
  To: David Lang
  Cc: Ben Greear, starlink, Make-Wifi-fast, Leonard Kleinrock,
	Cake List, Livingood, Jason, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 10728 bytes --]

Agreed that UDP is important, but it's also much easier to test and debug
for WiFi coders. We find it's the connect() and TCP control loop that
challenge WiFi logic on end hosts. APs are a whole different story, per
things like OFDMA.

Nothing is simple anymore, it seems. Reminds me of the Standard Model
developed over time at CERN. It ain't E=mc^2.

Bob

On Mon, Jul 12, 2021 at 1:36 PM David Lang <david@lang.hm> wrote:

> I have seen some performance tests that do explicit DNS timing tests
> separate
> from other throughput/latency tests.
>
> Since DNS uses UDP (even if it then falls back to TCP in some cases), UDP
> performance (and especially probability of loss at congested links) is
> very
> important.
>
> David Lang
>
> On Mon, 12 Jul 2021, Ben Greear wrote:
>
> > UDP is better for getting actual packet latency, for sure.  TCP is
> > typical-user-experience-latency though,
> > so it is also useful.
> >
> > I'm interested in the test and visualization side of this.  If there
> were a
> > way to give engineers
> > a good real-time look at a complex real-world network, then they have
> > something to go on while trying
> > to tune various knobs in their network to improve it.
> >
> > I'll let others try to figure out how build and tune the knobs, but the
> data
> > acquisition and
> > visualization is something we might try to accomplish.  I have a feeling
> I'm
> > not the
> > first person to think of this, however....probably someone already has
> done
> > such
> > a thing.
> >
> > Thanks,
> > Ben
> >
> > On 7/12/21 1:04 PM, Bob McMahon wrote:
> >> I believe end host's TCP stats are insufficient as seen per the
> "failed"
> > congested control mechanisms over the last decades. I think Jaffe
> pointed
> > this out in
> >> 1979 though he was using what's been deemed on this thread as
> "spherical
> > cow queueing theory."
> >>
> >> "Flow control in store-and-forward computer networks is appropriate for
> > decentralized execution. A formal description of a class of
> "decentralized
> > flow control
> >> algorithms" is given. The feasibility of maximizing power with such
> > algorithms is investigated. On the assumption that communication links
> behave
> > like M/M/1
> >> servers it is shown that no "decentralized flow control algorithm" can
> > maximize network power. Power has been suggested in the literature as a
> > network
> >> performance objective. It is also shown that no objective based only on
> the
> > users' throughputs and average delay is decentralizable. Finally, a
> > restricted class
> >> of algorithms cannot even approximate power."
> >>
> >> https://ieeexplore.ieee.org/document/1095152
> >>
> >> Did Jaffe make a mistake?
> >>
> >> Also, it's been observed that latency is non-parametric in it's
> > distributions and computing gaussians per the central limit theorem for
> OWD
> > feedback loops
> >> aren't effective. How does one design a control loop around things that
> are
> > non-parametric? It also begs the question, what are the feed forward
> knobs
> > that can
> >> actually help?
> >>
> >> Bob
> >>
> >> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
> > <mailto:greearb@candelatech.com>> wrote:
> >>
> >>     Measuring one or a few links provides a bit of data, but seems like
> if
> > someone is trying to understand
> >>     a large and real network, then the OWD between point A and B needs
> to
> > just be input into something much
> >>     more grand.  Assuming real-time OWD data exists between 100 to 1000
> > endpoint pairs, has anyone found a way
> >>     to visualize this in a useful manner?
> >>
> >>     Also, considering something better than ntp may not really scale to
> > 1000+ endpoints, maybe round-trip
> >>     time is only viable way to get this type of data.  In that case,
> maybe
> > clever logic could use things
> >>     like trace-route to get some idea of how long it takes to get
> 'onto'
> > the internet proper, and so estimate
> >>     the last-mile latency.  My assumption is that the last-mile latency
> is
> > where most of the pervasive
> >>     assymetric network latencies would exist (or just ping 8.8.8.8
> which is
> > 20ms from everywhere due to
> >>     $magic).
> >>
> >>     Endpoints could also triangulate a bit if needed, using some anchor
> > points in the network
> >>     under test.
> >>
> >>     Thanks,
> >>     Ben
> >>
> >>     On 7/12/21 11:21 AM, Bob McMahon wrote:
> >>      > iperf 2 supports OWD and gives full histograms for TCP write to
> > read, TCP connect times, latency of packets (with UDP), latency of
> "frames"
> > with
> >>      > simulated video traffic (TCP and UDP), xfer times of bursts with
> low
> > duty cycle traffic, and TCP RTT (sampling based.) It also has support
> for
> > sampling (per
> >>      > interval reports) down to 100 usecs if configured with
> > --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> released
> > all this as open source.
> >>      >
> >>      > OWD only works if the end realtime clocks are synchronized using
> a
> > "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> centers
> > don't
> >>     provide
> >>      > sufficient level of clock accuracy and the GPS pulse per second
> * to
> > colo and vm customers.
> >>      >
> >>      > https://iperf2.sourceforge.io/iperf-manpage.html
> >>      >
> >>      > Bob
> >>      >
> >>      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
> dpreed@deepplum.com
> > <mailto:dpreed@deepplum.com> <mailto:dpreed@deepplum.com
> >>     <mailto:dpreed@deepplum.com>>> wrote:
> >>      >
> >>      >
> >>      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason"
> > <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
> >>     <mailto:Jason_Livingood@comcast.com
> > <mailto:Jason_Livingood@comcast.com>>> said:
> >>      >
> >>      >      > I think latency/delay is becoming seen to be as important
> > certainly, if not a more direct proxy for end user QoE. This is all
> still
> > evolving and I
> >>     have
> >>      >     to say is a super interesting & fun thing to work on. :-)
> >>      >
> >>      >     If I could manage to sell one idea to the management
> hierarchy
> > of communications industry CEOs (operators, vendors, ...) it is this one:
> >>      >
> >>      >     "It's the end-to-end latency, stupid!"
> >>      >
> >>      >     And I mean, by end-to-end, latency to complete a task at a
> > relevant layer of abstraction.
> >>      >
> >>      >     At the link level, it's packet send to packet receive
> > completion.
> >>      >
> >>      >     But at the transport level including retransmission buffers,
> > it's datagram (or message) origination until the acknowledgement arrives
> for
> > that
> >>     message being
> >>      >     delivered after whatever number of retransmissions, freeing
> the
> > retransmission buffer.
> >>      >
> >>      >     At the WWW level, it's mouse click to display update
> > corresponding to completion of the request.
> >>      >
> >>      >     What should be noted is that lower level latencies don't
> > directly predict the magnitude of higher-level latencies. But longer
> lower
> > level latencies
> >>     almost
> >>      >     always amplfify higher level latencies. Often non-linearly.
> >>      >
> >>      >     Throughput is very, very weakly related to these latencies,
> in
> > contrast.
> >>      >
> >>      >     The amplification process has to do with the presence of
> > queueing. Queueing is ALWAYS bad for latency, and throughput only helps
> if it
> > is in exactly the
> >>      >     right place (the so-called input queue of the bottleneck
> > process, which is often a link, but not always).
> >>      >
> >>      >     Can we get that slogan into Harvard Business Review? Can we
> get
> > it taught in Managerial Accounting at HBS? (which does address
> > logistics/supply chain
> >>     queueing).
> >>      >
> >>      >
> >>      >
> >>      >
> >>      >
> >>      >
> >>      >
> >>
> >>
> >>     --
> >>     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
> >>
> >>     Candela Technologies Inc http://www.candelatech.com
> >>
> >>
> >
> >
> >


[-- Attachment #1.2: Type: text/html, Size: 14442 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Make-wifi-fast] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 20:04                         ` Bob McMahon
  2021-07-12 20:32                           ` [Cerowrt-devel] " Ben Greear
@ 2021-07-12 21:54                           ` Jonathan Morton
  1 sibling, 0 replies; 108+ messages in thread
From: Jonathan Morton @ 2021-07-12 21:54 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Ben Greear, starlink, Make-Wifi-fast, Leonard Kleinrock,
	David P. Reed, Cake List, Livingood, Jason, codel, cerowrt-devel,
	bloat

> On 12 Jul, 2021, at 11:04 pm, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
> 
> "Flow control in store-and-forward computer networks is appropriate for decentralized execution. A formal description of a class of "decentralized flow control algorithms" is given. The feasibility of maximizing power with such algorithms is investigated. On the assumption that communication links behave like M/M/1 servers it is shown that no "decentralized flow control algorithm" can maximize network power. Power has been suggested in the literature as a network performance objective. It is also shown that no objective based only on the users' throughputs and average delay is decentralizable. Finally, a restricted class of algorithms cannot even approximate power."
> 
> https://ieeexplore.ieee.org/document/1095152
> 
> Did Jaffe make a mistake?

I would suggest that if you model traffic as having no control feedback, you will inevitably find that no control occurs.  But real Internet traffic *does* have control feedback - though it was introduced some time *after* Jaffe's paper, so we can forgive him for a degree of ignorance on that point.  Perhaps Jaffe effectively predicted the ARPANET congestion collapse events with his analysis.

> Also, it's been observed that latency is non-parametric in it's distributions and computing gaussians per the central limit theorem for OWD feedback loops aren't effective. How does one design a control loop around things that are non-parametric? It also begs the question, what are the feed forward knobs that can actually help?

Control at endpoints benefits greatly from even small amounts of information supplied by the network about the degree of congestion present on the path.  This is the role played first by packets lost at queue overflow, then deliberately dropped by AQMs, then marked using the ECN mechanism rather than dropped.

AQM algorithms can be exceedingly simple, or they can be rather sophisticated.  Increased levels of sophistication in both the AQM and the endpoint's congestion control algorithm may be used to increase the "network power" actually obtained.  The required level of complexity for each, achieving reasonably good results, is however quite low.
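
To illustrate how simple the simple end can be, here is a toy sketch (loosely CoDel-flavoured, but emphatically not the real CoDel algorithm): signal congestion once queueing delay has stayed above a small target for a sustained interval, preferring an ECN mark over a drop when the packet allows it.

# Toy AQM sketch: mark or drop once standing queue delay persists.
TARGET_S = 0.005      # 5 ms of acceptable standing delay
INTERVAL_S = 0.100    # 100 ms grace period

class ToyAqm:
    def __init__(self):
        self.first_above = None   # when delay first exceeded TARGET_S

    def on_dequeue(self, enqueue_time, now, ecn_capable):
        delay = now - enqueue_time
        if delay < TARGET_S:
            self.first_above = None
            return "forward"
        if self.first_above is None:
            self.first_above = now
            return "forward"
        if now - self.first_above >= INTERVAL_S:
            return "mark" if ecn_capable else "drop"
        return "forward"

The production algorithms add control-law spacing between marks and hysteresis on re-entry, but the core congestion signal really is about this small.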

 - Jonathan Morton

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 20:32                           ` [Cerowrt-devel] " Ben Greear
  2021-07-12 20:36                             ` [Cerowrt-devel] [Cake] " David Lang
  2021-07-12 20:42                             ` Bob McMahon
@ 2021-07-13  7:14                             ` Amr Rizk
  2021-07-13 17:07                               ` Bob McMahon
  2021-07-13 17:22                               ` Bob McMahon
  2021-07-17 23:29                             ` [Cerowrt-devel] " Aaron Wood
  3 siblings, 2 replies; 108+ messages in thread
From: Amr Rizk @ 2021-07-13  7:14 UTC (permalink / raw)
  To: 'Ben Greear', 'Bob McMahon'
  Cc: starlink, 'Make-Wifi-fast', 'Leonard Kleinrock',
	'David P. Reed', 'Cake List',
	codel, 'cerowrt-devel', 'bloat'

Ben, 

It depends on what one tries to measure. Doing a rate scan using UDP (to measure latency distributions under load) is the best thing we have, but without actually knowing how resources are shared (fair share as in WiFi, FIFO nearly everywhere else) it becomes very difficult to interpret the results or make a proper argument about latency. You are right - TCP stats are a proxy for user experience, but I believe they are difficult to reproduce (we are always talking about very short TCP flows - the infinite TCP flow that converges to a steady behavior is purely academic).

By the way, Little's law is a strong tool when it comes to averages. To be able to say more (e.g. that 1% of the delays are larger than x) one requires more information (e.g. the traffic's ON-OFF pattern); see [1].  I am not sure such information readily exists.
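
As a numeric sanity check of the averages case (the numbers below are made up):

# Little's law: L = lambda * W, averages only.
lam = 1000.0   # arrival rate, packets per second (made up)
W = 0.020      # mean time in system, seconds (made up)
print(f"average occupancy L = {lam * W:.0f} packets")   # -> 20 packets

It says nothing about whether the 99th-percentile delay is 25 ms or 250 ms, which is exactly where the extra traffic information in [1] comes in.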

Best
Amr 

[1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a paywall https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf

--------------------------------
Amr Rizk (amr.rizk@uni-due.de)
University of Duisburg-Essen

-----Original Message-----
From: Bloat <bloat-bounces@lists.bufferbloat.net> On Behalf Of Ben Greear
Sent: Monday, 12 July 2021 22:32
To: Bob McMahon <bob.mcmahon@broadcom.com>
Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <lk@cs.ucla.edu>; David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net>; codel@lists.bufferbloat.net; cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main point

UDP is better for getting actual packet latency, for sure.  TCP is typical-user-experience-latency though, so it is also useful.

I'm interested in the test and visualization side of this.  If there were a way to give engineers a good real-time look at a complex real-world network, then they have something to go on while trying to tune various knobs in their network to improve it.

I'll let others try to figure out how build and tune the knobs, but the data acquisition and visualization is something we might try to accomplish.  I have a feeling I'm not the first person to think of this, however....probably someone already has done such a thing.

Thanks,
Ben

On 7/12/21 1:04 PM, Bob McMahon wrote:
> I believe end host's TCP stats are insufficient as seen per the 
> "failed" congested control mechanisms over the last decades. I think 
> Jaffe pointed this out in
> 1979 though he was using what's been deemed on this thread as "spherical cow queueing theory."
> 
> "Flow control in store-and-forward computer networks is appropriate 
> for decentralized execution. A formal description of a class of 
> "decentralized flow control algorithms" is given. The feasibility of 
> maximizing power with such algorithms is investigated. On the 
> assumption that communication links behave like M/M/1 servers it is shown that no "decentralized flow control algorithm" can maximize network power. Power has been suggested in the literature as a network performance objective. It is also shown that no objective based only on the users' throughputs and average delay is decentralizable. Finally, a restricted class of algorithms cannot even approximate power."
> 
> https://ieeexplore.ieee.org/document/1095152
> 
> Did Jaffe make a mistake?
> 
> Also, it's been observed that latency is non-parametric in it's 
> distributions and computing gaussians per the central limit theorem 
> for OWD feedback loops aren't effective. How does one design a control loop around things that are non-parametric? It also begs the question, what are the feed forward knobs that can actually help?
> 
> Bob
> 
> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>> wrote:
> 
>     Measuring one or a few links provides a bit of data, but seems like if someone is trying to understand
>     a large and real network, then the OWD between point A and B needs to just be input into something much
>     more grand.  Assuming real-time OWD data exists between 100 to 1000 endpoint pairs, has anyone found a way
>     to visualize this in a useful manner?
> 
>     Also, considering something better than ntp may not really scale to 1000+ endpoints, maybe round-trip
>     time is only viable way to get this type of data.  In that case, maybe clever logic could use things
>     like trace-route to get some idea of how long it takes to get 'onto' the internet proper, and so estimate
>     the last-mile latency.  My assumption is that the last-mile latency is where most of the pervasive
>     assymetric network latencies would exist (or just ping 8.8.8.8 which is 20ms from everywhere due to
>     $magic).
> 
>     Endpoints could also triangulate a bit if needed, using some anchor points in the network
>     under test.
> 
>     Thanks,
>     Ben
> 
>     On 7/12/21 11:21 AM, Bob McMahon wrote:
>      > iperf 2 supports OWD and gives full histograms for TCP write to read, TCP connect times, latency of packets (with UDP), latency of "frames" with
>      > simulated video traffic (TCP and UDP), xfer times of bursts with low duty cycle traffic, and TCP RTT (sampling based.) It also has support for sampling (per
>      > interval reports) down to 100 usecs if configured with --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've released all this as open source.
>      >
>      > OWD only works if the end realtime clocks are synchronized using a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data centers don't
>     provide
>      > sufficient level of clock accuracy and the GPS pulse per second * to colo and vm customers.
>      >
>      > https://iperf2.sourceforge.io/iperf-manpage.html
>      >
>      > Bob
>      >
>      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:dpreed@deepplum.com
>     <mailto:dpreed@deepplum.com>>> wrote:
>      >
>      >
>      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
>     <mailto:Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>>> said:
>      >
>      >      > I think latency/delay is becoming seen to be as important certainly, if not a more direct proxy for end user QoE. This is all still evolving and I
>     have
>      >     to say is a super interesting & fun thing to work on. :-)
>      >
>      >     If I could manage to sell one idea to the management hierarchy of communications industry CEOs (operators, vendors, ...) it is this one:
>      >
>      >     "It's the end-to-end latency, stupid!"
>      >
>      >     And I mean, by end-to-end, latency to complete a task at a relevant layer of abstraction.
>      >
>      >     At the link level, it's packet send to packet receive completion.
>      >
>      >     But at the transport level including retransmission buffers, it's datagram (or message) origination until the acknowledgement arrives for that
>     message being
>      >     delivered after whatever number of retransmissions, freeing the retransmission buffer.
>      >
>      >     At the WWW level, it's mouse click to display update corresponding to completion of the request.
>      >
>      >     What should be noted is that lower level latencies don't directly predict the magnitude of higher-level latencies. But longer lower level latencies
>     almost
>      >     always amplify higher-level latencies. Often non-linearly.
>      >
>      >     Throughput is very, very weakly related to these latencies, in contrast.
>      >
>      >     The amplification process has to do with the presence of queueing. Queueing is ALWAYS bad for latency, and throughput only helps if it is in exactly the
>      >     right place (the so-called input queue of the bottleneck process, which is often a link, but not always).
>      >
>      >     Can we get that slogan into Harvard Business Review? Can we get it taught in Managerial Accounting at HBS? (which does address logistics/supply chain
>     queueing).
>      >
>      >
>      >
>      >
>      >
>      >
>      >
> 
> 
>     -- 
>     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com>>
>     Candela Technologies Inc http://www.candelatech.com
> 
> 


--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-13  7:14                             ` [Cerowrt-devel] " Amr Rizk
@ 2021-07-13 17:07                               ` Bob McMahon
  2021-07-13 17:49                                 ` [Cerowrt-devel] " David P. Reed
  2021-07-13 17:22                               ` Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-07-13 17:07 UTC (permalink / raw)
  To: Amr Rizk
  Cc: Ben Greear, starlink, Make-Wifi-fast, Leonard Kleinrock,
	David P. Reed, Cake List, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 12806 bytes --]

"Control at endpoints benefits greatly from even small amounts of
information supplied by the network about the degree of congestion present
on the path."

Agreed. The ECN mechanism seems like a shared thermostat in a building.
It's basically an on/off signal that everyone uses to try to set the
temperature. It does have an effect, albeit a non-linear one, but an
effect nonetheless. Better than a thermostat set at infinity or 0 Kelvin,
for sure.

I find that the assumption that congestion occurs "in network" is not
always true. Taking OWD measurements with read-side rate limiting suggests
that making sure apps read "fast enough", whatever that means, is just as
important to mitigating bufferbloat-driven latency as congestion signals
are. I rarely hear about how important it is for apps to prioritize reads
on open sockets. Not sure why that's overlooked while bufferbloat gets all
the attention. I'm probably missing something.
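
As a rough illustration of the read-side effect (a minimal Python sketch,
not iperf 2 code; the port, sizes, and rates are invented), a receiver that
deliberately throttles its reads lets the TCP window fill, so the sender
sees backpressure even though the "network" (here, loopback) is idle:

    # Sketch only: a slow reader causes sender-side blocking on an
    # otherwise uncongested path. Numbers are arbitrary.
    import socket, threading, time

    HOST, PORT = "127.0.0.1", 5001          # hypothetical test port
    READ_CHUNK, READ_INTERVAL = 4096, 0.01  # throttle reads to ~400 KB/s

    def slow_reader():
        srv = socket.create_server((HOST, PORT))
        conn, _ = srv.accept()
        while conn.recv(READ_CHUNK):        # read "too slowly" on purpose
            time.sleep(READ_INTERVAL)

    threading.Thread(target=slow_reader, daemon=True).start()
    time.sleep(0.2)

    snd = socket.create_connection((HOST, PORT))
    payload = b"x" * 65536
    start = time.time()
    for _ in range(32):
        snd.sendall(payload)                # blocks once buffers/window fill
    print("sender saw %.2f s of backpressure" % (time.time() - start))

The bytes are "late" either way; the only question is whether they are
queued in the network, in kernel buffers, or in the application.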

Bob

On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <amr@rizk.com.de> wrote:

> Ben,
>
> it depends on what one tries to measure. Doing a rate scan using UDP (to
> measure latency distributions under load) is the best thing that we have
> but without actually knowing how resources are shared (fair share as in
> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
> interpret the results or provide a proper argument on latency. You are
> right - TCP stats are a proxy for user experience but I believe they are
> difficult to reproduce (we are always talking about very short TCP flows -
> the infinite TCP flow that converges to a steady behavior is purely
> academic).
>
> By the way, Little's law is a strong tool when it comes to averages. To be
> able to say more (e.g. 1% of the delays is larger than x) one requires more
> information (e.g. the traffic - On-OFF pattern) see [1].  I am not sure
> when does such information readily exist.
>
> Best
> Amr
>
> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a paywall
> https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
>
> --------------------------------
> Amr Rizk (amr.rizk@uni-due.de)
> University of Duisburg-Essen
>
> -----Ursprüngliche Nachricht-----
> Von: Bloat <bloat-bounces@lists.bufferbloat.net> Im Auftrag von Ben Greear
> Gesendet: Montag, 12. Juli 2021 22:32
> An: Bob McMahon <bob.mcmahon@broadcom.com>
> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast <
> make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <lk@cs.ucla.edu>;
> David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net>;
> codel@lists.bufferbloat.net; cerowrt-devel <
> cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
> Betreff: Re: [Bloat] Little's Law mea culpa, but not invalidating my main
> point
>
> UDP is better for getting actual packet latency, for sure.  TCP is
> typical-user-experience-latency though, so it is also useful.
>
> I'm interested in the test and visualization side of this.  If there were
> a way to give engineers a good real-time look at a complex real-world
> network, then they have something to go on while trying to tune various
> knobs in their network to improve it.
>
> I'll let others try to figure out how build and tune the knobs, but the
> data acquisition and visualization is something we might try to
> accomplish.  I have a feeling I'm not the first person to think of this,
> however....probably someone already has done such a thing.
>
> Thanks,
> Ben
>
> On 7/12/21 1:04 PM, Bob McMahon wrote:
> > I believe end host's TCP stats are insufficient as seen per the
> > "failed" congested control mechanisms over the last decades. I think
> > Jaffe pointed this out in
> > 1979 though he was using what's been deemed on this thread as "spherical
> cow queueing theory."
> >
> > "Flow control in store-and-forward computer networks is appropriate
> > for decentralized execution. A formal description of a class of
> > "decentralized flow control algorithms" is given. The feasibility of
> > maximizing power with such algorithms is investigated. On the
> > assumption that communication links behave like M/M/1 servers it is
> shown that no "decentralized flow control algorithm" can maximize network
> power. Power has been suggested in the literature as a network performance
> objective. It is also shown that no objective based only on the users'
> throughputs and average delay is decentralizable. Finally, a restricted
> class of algorithms cannot even approximate power."
> >
> > https://ieeexplore.ieee.org/document/1095152
> >
> > Did Jaffe make a mistake?
> >
> > Also, it's been observed that latency is non-parametric in it's
> > distributions and computing gaussians per the central limit theorem
> > for OWD feedback loops aren't effective. How does one design a control
> loop around things that are non-parametric? It also begs the question, what
> are the feed forward knobs that can actually help?
> >
> > Bob
> >
> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
> <mailto:greearb@candelatech.com>> wrote:
> >
> >     Measuring one or a few links provides a bit of data, but seems like
> if someone is trying to understand
> >     a large and real network, then the OWD between point A and B needs
> to just be input into something much
> >     more grand.  Assuming real-time OWD data exists between 100 to 1000
> endpoint pairs, has anyone found a way
> >     to visualize this in a useful manner?
> >
> >     Also, considering something better than ntp may not really scale to
> 1000+ endpoints, maybe round-trip
> >     time is only viable way to get this type of data.  In that case,
> maybe clever logic could use things
> >     like trace-route to get some idea of how long it takes to get 'onto'
> the internet proper, and so estimate
> >     the last-mile latency.  My assumption is that the last-mile latency
> is where most of the pervasive
> >     assymetric network latencies would exist (or just ping 8.8.8.8 which
> is 20ms from everywhere due to
> >     $magic).
> >
> >     Endpoints could also triangulate a bit if needed, using some anchor
> points in the network
> >     under test.
> >
> >     Thanks,
> >     Ben
> >
> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
> >      > iperf 2 supports OWD and gives full histograms for TCP write to
> read, TCP connect times, latency of packets (with UDP), latency of "frames"
> with
> >      > simulated video traffic (TCP and UDP), xfer times of bursts with
> low duty cycle traffic, and TCP RTT (sampling based.) It also has support
> for sampling (per
> >      > interval reports) down to 100 usecs if configured with
> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> released all this as open source.
> >      >
> >      > OWD only works if the end realtime clocks are synchronized using
> a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> centers don't
> >     provide
> >      > sufficient level of clock accuracy and the GPS pulse per second *
> to colo and vm customers.
> >      >
> >      > https://iperf2.sourceforge.io/iperf-manpage.html
> >      >
> >      > Bob
> >      >
> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
> dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:
> dpreed@deepplum.com
> >     <mailto:dpreed@deepplum.com>>> wrote:
> >      >
> >      >
> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
> >     <mailto:Jason_Livingood@comcast.com <mailto:
> Jason_Livingood@comcast.com>>> said:
> >      >
> >      >      > I think latency/delay is becoming seen to be as important
> certainly, if not a more direct proxy for end user QoE. This is all still
> evolving and I
> >     have
> >      >     to say is a super interesting & fun thing to work on. :-)
> >      >
> >      >     If I could manage to sell one idea to the management
> hierarchy of communications industry CEOs (operators, vendors, ...) it is
> this one:
> >      >
> >      >     "It's the end-to-end latency, stupid!"
> >      >
> >      >     And I mean, by end-to-end, latency to complete a task at a
> relevant layer of abstraction.
> >      >
> >      >     At the link level, it's packet send to packet receive
> completion.
> >      >
> >      >     But at the transport level including retransmission buffers,
> it's datagram (or message) origination until the acknowledgement arrives
> for that
> >     message being
> >      >     delivered after whatever number of retransmissions, freeing
> the retransmission buffer.
> >      >
> >      >     At the WWW level, it's mouse click to display update
> corresponding to completion of the request.
> >      >
> >      >     What should be noted is that lower level latencies don't
> directly predict the magnitude of higher-level latencies. But longer lower
> level latencies
> >     almost
> >      >     always amplfify higher level latencies. Often non-linearly.
> >      >
> >      >     Throughput is very, very weakly related to these latencies,
> in contrast.
> >      >
> >      >     The amplification process has to do with the presence of
> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if
> it is in exactly the
> >      >     right place (the so-called input queue of the bottleneck
> process, which is often a link, but not always).
> >      >
> >      >     Can we get that slogan into Harvard Business Review? Can we
> get it taught in Managerial Accounting at HBS? (which does address
> logistics/supply chain
> >     queueing).
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >
> >
> >     --
> >     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
> >>
> >     Candela Technologies Inc http://www.candelatech.com
> >
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>


[-- Attachment #1.2: Type: text/html, Size: 16571 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-13  7:14                             ` [Cerowrt-devel] " Amr Rizk
  2021-07-13 17:07                               ` Bob McMahon
@ 2021-07-13 17:22                               ` Bob McMahon
  1 sibling, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-13 17:22 UTC (permalink / raw)
  To: Amr Rizk
  Cc: Ben Greear, starlink, Make-Wifi-fast, Leonard Kleinrock,
	David P. Reed, Cake List, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 12565 bytes --]

"the infinite TCP flow that converges to a steady behavior is purely
academic"

We find this to be mostly true. Sadly, tools such as iperf drove testing
toward this condition. While still useful, it's not realistic.

We added, in iperf 2, the ability to test TCP bursts (--burst-size and
--burst-period) over low duty cycles and get completely different sets of
phenomena with TCP.

It seems (past) time to reconsider having peak average throughput drive
the control loop, particularly if TCP_NODELAY is set on a socket. This is
especially challenging for WiFi, because a "request for low latency"
usually triggers no aggregation, which doesn't always reduce tail
latencies.
Bob

On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <amr@rizk.com.de> wrote:

> Ben,
>
> it depends on what one tries to measure. Doing a rate scan using UDP (to
> measure latency distributions under load) is the best thing that we have
> but without actually knowing how resources are shared (fair share as in
> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
> interpret the results or provide a proper argument on latency. You are
> right - TCP stats are a proxy for user experience but I believe they are
> difficult to reproduce (we are always talking about very short TCP flows -
> the infinite TCP flow that converges to a steady behavior is purely
> academic).
>
> By the way, Little's law is a strong tool when it comes to averages. To be
> able to say more (e.g. 1% of the delays is larger than x) one requires more
> information (e.g. the traffic - On-OFF pattern) see [1].  I am not sure
> when does such information readily exist.
>
> Best
> Amr
>
> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a paywall
> https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
>
> --------------------------------
> Amr Rizk (amr.rizk@uni-due.de)
> University of Duisburg-Essen
>
> -----Ursprüngliche Nachricht-----
> Von: Bloat <bloat-bounces@lists.bufferbloat.net> Im Auftrag von Ben Greear
> Gesendet: Montag, 12. Juli 2021 22:32
> An: Bob McMahon <bob.mcmahon@broadcom.com>
> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast <
> make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <lk@cs.ucla.edu>;
> David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net>;
> codel@lists.bufferbloat.net; cerowrt-devel <
> cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
> Betreff: Re: [Bloat] Little's Law mea culpa, but not invalidating my main
> point
>
> UDP is better for getting actual packet latency, for sure.  TCP is
> typical-user-experience-latency though, so it is also useful.
>
> I'm interested in the test and visualization side of this.  If there were
> a way to give engineers a good real-time look at a complex real-world
> network, then they have something to go on while trying to tune various
> knobs in their network to improve it.
>
> I'll let others try to figure out how build and tune the knobs, but the
> data acquisition and visualization is something we might try to
> accomplish.  I have a feeling I'm not the first person to think of this,
> however....probably someone already has done such a thing.
>
> Thanks,
> Ben
>
> On 7/12/21 1:04 PM, Bob McMahon wrote:
> > I believe end host's TCP stats are insufficient as seen per the
> > "failed" congested control mechanisms over the last decades. I think
> > Jaffe pointed this out in
> > 1979 though he was using what's been deemed on this thread as "spherical
> cow queueing theory."
> >
> > "Flow control in store-and-forward computer networks is appropriate
> > for decentralized execution. A formal description of a class of
> > "decentralized flow control algorithms" is given. The feasibility of
> > maximizing power with such algorithms is investigated. On the
> > assumption that communication links behave like M/M/1 servers it is
> shown that no "decentralized flow control algorithm" can maximize network
> power. Power has been suggested in the literature as a network performance
> objective. It is also shown that no objective based only on the users'
> throughputs and average delay is decentralizable. Finally, a restricted
> class of algorithms cannot even approximate power."
> >
> > https://ieeexplore.ieee.org/document/1095152
> >
> > Did Jaffe make a mistake?
> >
> > Also, it's been observed that latency is non-parametric in it's
> > distributions and computing gaussians per the central limit theorem
> > for OWD feedback loops aren't effective. How does one design a control
> loop around things that are non-parametric? It also begs the question, what
> are the feed forward knobs that can actually help?
> >
> > Bob
> >
> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
> <mailto:greearb@candelatech.com>> wrote:
> >
> >     Measuring one or a few links provides a bit of data, but seems like
> if someone is trying to understand
> >     a large and real network, then the OWD between point A and B needs
> to just be input into something much
> >     more grand.  Assuming real-time OWD data exists between 100 to 1000
> endpoint pairs, has anyone found a way
> >     to visualize this in a useful manner?
> >
> >     Also, considering something better than ntp may not really scale to
> 1000+ endpoints, maybe round-trip
> >     time is only viable way to get this type of data.  In that case,
> maybe clever logic could use things
> >     like trace-route to get some idea of how long it takes to get 'onto'
> the internet proper, and so estimate
> >     the last-mile latency.  My assumption is that the last-mile latency
> is where most of the pervasive
> >     assymetric network latencies would exist (or just ping 8.8.8.8 which
> is 20ms from everywhere due to
> >     $magic).
> >
> >     Endpoints could also triangulate a bit if needed, using some anchor
> points in the network
> >     under test.
> >
> >     Thanks,
> >     Ben
> >
> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
> >      > iperf 2 supports OWD and gives full histograms for TCP write to
> read, TCP connect times, latency of packets (with UDP), latency of "frames"
> with
> >      > simulated video traffic (TCP and UDP), xfer times of bursts with
> low duty cycle traffic, and TCP RTT (sampling based.) It also has support
> for sampling (per
> >      > interval reports) down to 100 usecs if configured with
> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> released all this as open source.
> >      >
> >      > OWD only works if the end realtime clocks are synchronized using
> a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> centers don't
> >     provide
> >      > sufficient level of clock accuracy and the GPS pulse per second *
> to colo and vm customers.
> >      >
> >      > https://iperf2.sourceforge.io/iperf-manpage.html
> >      >
> >      > Bob
> >      >
> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
> dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:
> dpreed@deepplum.com
> >     <mailto:dpreed@deepplum.com>>> wrote:
> >      >
> >      >
> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
> >     <mailto:Jason_Livingood@comcast.com <mailto:
> Jason_Livingood@comcast.com>>> said:
> >      >
> >      >      > I think latency/delay is becoming seen to be as important
> certainly, if not a more direct proxy for end user QoE. This is all still
> evolving and I
> >     have
> >      >     to say is a super interesting & fun thing to work on. :-)
> >      >
> >      >     If I could manage to sell one idea to the management
> hierarchy of communications industry CEOs (operators, vendors, ...) it is
> this one:
> >      >
> >      >     "It's the end-to-end latency, stupid!"
> >      >
> >      >     And I mean, by end-to-end, latency to complete a task at a
> relevant layer of abstraction.
> >      >
> >      >     At the link level, it's packet send to packet receive
> completion.
> >      >
> >      >     But at the transport level including retransmission buffers,
> it's datagram (or message) origination until the acknowledgement arrives
> for that
> >     message being
> >      >     delivered after whatever number of retransmissions, freeing
> the retransmission buffer.
> >      >
> >      >     At the WWW level, it's mouse click to display update
> corresponding to completion of the request.
> >      >
> >      >     What should be noted is that lower level latencies don't
> directly predict the magnitude of higher-level latencies. But longer lower
> level latencies
> >     almost
> >      >     always amplfify higher level latencies. Often non-linearly.
> >      >
> >      >     Throughput is very, very weakly related to these latencies,
> in contrast.
> >      >
> >      >     The amplification process has to do with the presence of
> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if
> it is in exactly the
> >      >     right place (the so-called input queue of the bottleneck
> process, which is often a link, but not always).
> >      >
> >      >     Can we get that slogan into Harvard Business Review? Can we
> get it taught in Managerial Accounting at HBS? (which does address
> logistics/supply chain
> >     queueing).
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >
> >
> >     --
> >     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
> >>
> >     Candela Technologies Inc http://www.candelatech.com
> >
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>


[-- Attachment #1.2: Type: text/html, Size: 16321 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-13 17:07                               ` Bob McMahon
@ 2021-07-13 17:49                                 ` David P. Reed
  2021-07-14 18:37                                   ` Bob McMahon
       [not found]                                   ` <A5E35F34-A4D5-45B1-8E2D-E2F6DE988A1E@cs.ucla.edu>
  0 siblings, 2 replies; 108+ messages in thread
From: David P. Reed @ 2021-07-13 17:49 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Amr Rizk, Ben Greear, starlink, Make-Wifi-fast,
	Leonard Kleinrock, Cake List, codel, cerowrt-devel, bloat

Bob -

On Tuesday, July 13, 2021 1:07pm, "Bob McMahon" <bob.mcmahon@broadcom.com> said:

> "Control at endpoints benefits greatly from even small amounts of
> information supplied by the network about the degree of congestion present
> on the path."
> 
> Agreed. The ECN mechanism seems like a shared thermostat in a building.
> It's basically an on/off where everyone is trying to set the temperature.
> It does affect, in a non-linear manner, but still an effect. Better than a
> thermostat set at infinity or 0 Kelvin for sure.
> 
> I find the assumption that congestion occurs "in network" as not always
> true. Taking OWD measurements with read side rate limiting suggests that
> equally important to mitigating bufferbloat driven latency using congestion
> signals is to make sure apps read "fast enough" whatever that means. I
> rarely hear about how important it is for apps to prioritize reads over
> open sockets. Not sure why that's overlooked and bufferbloat gets all the
> attention. I'm probably missing something.

In the early days of the Internet protocol and also even ARPANET Host-Host protocol there were those who conflated host-level "flow control" (matching production rate of data into the network to the destination *process* consumption rate of data on a virtual circuit with a source capable of variable and unbounded bit rate) with "congestion control" in the network. The term "congestion control" wasn't even used in the Internetworking project when it was discussing design in the late 1970's. I tried to use it in our working group meetings, and every time I said "congestion" the response would be phrased as "flow".

The classic example was printing a file's contents from disk to an ASR33 terminal on a TIP (Terminal IMP). There was flow control in the end-to-end protocol to avoid overflowing the TTY's limited buffer. But those who grew up with ARPANET knew that there was no way to accumulate queueing in the IMP network, because of RFNM's that required permission for each new packet to be sent. RFNM's implicitly prevented congestion from being caused by a virtual circuit. But a flow control problem remained, because at the higher-level protocol, buffering would overflow at the TIP.

TCP adopted a different end-to-end *flow* control, so it solved the flow control problem by creating a Windowing mechanism. But it did not by itself solve the *congestion* control problem, even for congestion built up inside the network by a wide-open window and a lazy operating system at the receiving end that just said, "I've got a lot of virtual memory, so I'll open the window to maximum size."

There was a lot of confusion, because the guys who came from the ARPANET environment, with all links being the same speed and RFNM limits on rate, couldn't see why the Internet stack was so collapse-prone. I think Multics, for example, as a giant virtual memory system, caused congestion by opening up its window too much.

This is where Van Jacobson discovered that dropped packets were a "good enough" congestion signal because of "fate sharing" among the packets that flowed on a bottleneck path, and that windowing (invented for flow control by the receiver to protect itself from overflow if the receiver couldn't receive fast enough) could be used to slow the sender down, matching its rate to the capacity of the internal bottleneck link. An elegant "hack" that actually worked really well in practice.
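
A compressed sketch of that "hack" in its classic AIMD form (illustrative
only; real TCP adds slow start, timeouts, selective acknowledgement, and
more, and units here are segments):

    # Simplified additive-increase/multiplicative-decrease window update,
    # the shape of Jacobson's repurposing of the flow-control window for
    # congestion control. cwnd is in segments.
    def on_ack(cwnd):
        return cwnd + 1.0 / cwnd      # roughly +1 segment per RTT

    def on_loss(cwnd):
        return max(1.0, cwnd / 2.0)   # dropped packet taken as congestion signal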

Now we view it as a bug if the receiver opens its window too much, or otherwise doesn't translate dropped packets (or other incipient-congestion signals) to shut down the source transmission rate as quickly as possible. Fortunately, the proper state of the internet - the one it should seek as its ideal state - is that there is at most one packet waiting for each egress link in the bottleneck path. This stable state ensures that the window-reduction or slow-down signal encounters no congestion, with high probability. [Excursions from one-packet queue occur, but since only one-packet waiting is sufficient to fill the bottleneck link to capacity, they can't achieve higher throughput in steady state. In practice, noisy arrival distributions can reduce throughput, so allowing a small number of packets to be waiting on a bottleneck link's queue can slightly increase throughput. That's not asymptotically relevant, but as mentioned, the Internet is never near asymptotic behavior.]
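
As a back-of-the-envelope example of that ideal operating point (the
numbers below are invented purely for illustration): the window only needs
to cover the bandwidth-delay product to keep the bottleneck busy;
everything beyond roughly one extra packet just waits in a queue.

    # Rough arithmetic for the "at most ~one packet waiting" ideal state.
    link_rate_bps = 100e6        # assumed bottleneck link, bits per second
    rtt_s = 0.020                # assumed round-trip time, seconds
    mss_bytes = 1460

    bdp_bytes = link_rate_bps / 8 * rtt_s        # 250,000 bytes in flight
    ideal_window_pkts = bdp_bytes / mss_bytes    # ~171 packets keeps the link full
    # Each additional packet of window beyond the BDP adds queueing delay
    # (about 0.12 ms per MSS here) without adding steady-state throughput.
    delay_per_queued_pkt_ms = mss_bytes * 8 / link_rate_bps * 1e3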


> 
> Bob
> 
> On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <amr@rizk.com.de> wrote:
> 
>> Ben,
>>
>> it depends on what one tries to measure. Doing a rate scan using UDP (to
>> measure latency distributions under load) is the best thing that we have
>> but without actually knowing how resources are shared (fair share as in
>> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
>> interpret the results or provide a proper argument on latency. You are
>> right - TCP stats are a proxy for user experience but I believe they are
>> difficult to reproduce (we are always talking about very short TCP flows -
>> the infinite TCP flow that converges to a steady behavior is purely
>> academic).
>>
>> By the way, Little's law is a strong tool when it comes to averages. To be
>> able to say more (e.g. 1% of the delays is larger than x) one requires more
>> information (e.g. the traffic - On-OFF pattern) see [1].  I am not sure
>> when does such information readily exist.
>>
>> Best
>> Amr
>>
>> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a paywall
>> https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
>>
>> --------------------------------
>> Amr Rizk (amr.rizk@uni-due.de)
>> University of Duisburg-Essen
>>
>> -----Ursprüngliche Nachricht-----
>> Von: Bloat <bloat-bounces@lists.bufferbloat.net> Im Auftrag von Ben Greear
>> Gesendet: Montag, 12. Juli 2021 22:32
>> An: Bob McMahon <bob.mcmahon@broadcom.com>
>> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast <
>> make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <lk@cs.ucla.edu>;
>> David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net>;
>> codel@lists.bufferbloat.net; cerowrt-devel <
>> cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
>> Betreff: Re: [Bloat] Little's Law mea culpa, but not invalidating my main
>> point
>>
>> UDP is better for getting actual packet latency, for sure.  TCP is
>> typical-user-experience-latency though, so it is also useful.
>>
>> I'm interested in the test and visualization side of this.  If there were
>> a way to give engineers a good real-time look at a complex real-world
>> network, then they have something to go on while trying to tune various
>> knobs in their network to improve it.
>>
>> I'll let others try to figure out how build and tune the knobs, but the
>> data acquisition and visualization is something we might try to
>> accomplish.  I have a feeling I'm not the first person to think of this,
>> however....probably someone already has done such a thing.
>>
>> Thanks,
>> Ben
>>
>> On 7/12/21 1:04 PM, Bob McMahon wrote:
>> > I believe end host's TCP stats are insufficient as seen per the
>> > "failed" congested control mechanisms over the last decades. I think
>> > Jaffe pointed this out in
>> > 1979 though he was using what's been deemed on this thread as "spherical
>> cow queueing theory."
>> >
>> > "Flow control in store-and-forward computer networks is appropriate
>> > for decentralized execution. A formal description of a class of
>> > "decentralized flow control algorithms" is given. The feasibility of
>> > maximizing power with such algorithms is investigated. On the
>> > assumption that communication links behave like M/M/1 servers it is
>> shown that no "decentralized flow control algorithm" can maximize network
>> power. Power has been suggested in the literature as a network performance
>> objective. It is also shown that no objective based only on the users'
>> throughputs and average delay is decentralizable. Finally, a restricted
>> class of algorithms cannot even approximate power."
>> >
>> > https://ieeexplore.ieee.org/document/1095152
>> >
>> > Did Jaffe make a mistake?
>> >
>> > Also, it's been observed that latency is non-parametric in it's
>> > distributions and computing gaussians per the central limit theorem
>> > for OWD feedback loops aren't effective. How does one design a control
>> loop around things that are non-parametric? It also begs the question, what
>> are the feed forward knobs that can actually help?
>> >
>> > Bob
>> >
>> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
>> <mailto:greearb@candelatech.com>> wrote:
>> >
>> >     Measuring one or a few links provides a bit of data, but seems like
>> if someone is trying to understand
>> >     a large and real network, then the OWD between point A and B needs
>> to just be input into something much
>> >     more grand.  Assuming real-time OWD data exists between 100 to 1000
>> endpoint pairs, has anyone found a way
>> >     to visualize this in a useful manner?
>> >
>> >     Also, considering something better than ntp may not really scale to
>> 1000+ endpoints, maybe round-trip
>> >     time is only viable way to get this type of data.  In that case,
>> maybe clever logic could use things
>> >     like trace-route to get some idea of how long it takes to get 'onto'
>> the internet proper, and so estimate
>> >     the last-mile latency.  My assumption is that the last-mile latency
>> is where most of the pervasive
>> >     assymetric network latencies would exist (or just ping 8.8.8.8 which
>> is 20ms from everywhere due to
>> >     $magic).
>> >
>> >     Endpoints could also triangulate a bit if needed, using some anchor
>> points in the network
>> >     under test.
>> >
>> >     Thanks,
>> >     Ben
>> >
>> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
>> >      > iperf 2 supports OWD and gives full histograms for TCP write to
>> read, TCP connect times, latency of packets (with UDP), latency of "frames"
>> with
>> >      > simulated video traffic (TCP and UDP), xfer times of bursts with
>> low duty cycle traffic, and TCP RTT (sampling based.) It also has support
>> for sampling (per
>> >      > interval reports) down to 100 usecs if configured with
>> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
>> released all this as open source.
>> >      >
>> >      > OWD only works if the end realtime clocks are synchronized using
>> a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
>> centers don't
>> >     provide
>> >      > sufficient level of clock accuracy and the GPS pulse per second *
>> to colo and vm customers.
>> >      >
>> >      > https://iperf2.sourceforge.io/iperf-manpage.html
>> >      >
>> >      > Bob
>> >      >
>> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
>> dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:
>> dpreed@deepplum.com
>> >     <mailto:dpreed@deepplum.com>>> wrote:
>> >      >
>> >      >
>> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
>> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
>> >     <mailto:Jason_Livingood@comcast.com <mailto:
>> Jason_Livingood@comcast.com>>> said:
>> >      >
>> >      >      > I think latency/delay is becoming seen to be as important
>> certainly, if not a more direct proxy for end user QoE. This is all still
>> evolving and I
>> >     have
>> >      >     to say is a super interesting & fun thing to work on. :-)
>> >      >
>> >      >     If I could manage to sell one idea to the management
>> hierarchy of communications industry CEOs (operators, vendors, ...) it is
>> this one:
>> >      >
>> >      >     "It's the end-to-end latency, stupid!"
>> >      >
>> >      >     And I mean, by end-to-end, latency to complete a task at a
>> relevant layer of abstraction.
>> >      >
>> >      >     At the link level, it's packet send to packet receive
>> completion.
>> >      >
>> >      >     But at the transport level including retransmission buffers,
>> it's datagram (or message) origination until the acknowledgement arrives
>> for that
>> >     message being
>> >      >     delivered after whatever number of retransmissions, freeing
>> the retransmission buffer.
>> >      >
>> >      >     At the WWW level, it's mouse click to display update
>> corresponding to completion of the request.
>> >      >
>> >      >     What should be noted is that lower level latencies don't
>> directly predict the magnitude of higher-level latencies. But longer lower
>> level latencies
>> >     almost
>> >      >     always amplfify higher level latencies. Often non-linearly.
>> >      >
>> >      >     Throughput is very, very weakly related to these latencies,
>> in contrast.
>> >      >
>> >      >     The amplification process has to do with the presence of
>> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if
>> it is in exactly the
>> >      >     right place (the so-called input queue of the bottleneck
>> process, which is often a link, but not always).
>> >      >
>> >      >     Can we get that slogan into Harvard Business Review? Can we
>> get it taught in Managerial Accounting at HBS? (which does address
>> logistics/supply chain
>> >     queueing).
>> >      >
>> >      >
>> >      >
>> >      >
>> >      >
>> >      >
>> >      >
>> >
>> >
>> >     --
>> >     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
>> >>
>> >     Candela Technologies Inc http://www.candelatech.com
>> >
>> >
>>
>>
>> --
>> Ben Greear <greearb@candelatech.com>
>> Candela Technologies Inc  http://www.candelatech.com
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>
> 
> 



^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-13 17:49                                 ` [Cerowrt-devel] " David P. Reed
@ 2021-07-14 18:37                                   ` Bob McMahon
  2021-07-15  1:27                                     ` Holland, Jake
       [not found]                                   ` <A5E35F34-A4D5-45B1-8E2D-E2F6DE988A1E@cs.ucla.edu>
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-07-14 18:37 UTC (permalink / raw)
  To: David P. Reed
  Cc: Amr Rizk, Ben Greear, starlink, Make-Wifi-fast,
	Leonard Kleinrock, Cake List, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 19471 bytes --]

Thanks for this. I find it both interesting and useful. Learning from those
who came before me reminds me of "standing on the shoulders of giants." I
try to teach my kids that it's not so much us as the giants we choose - so
choose judiciously and, more importantly, be grateful when they provide
their shoulders from which to see.

One challenge I faced with iperf 2 was around flow control's effects on
latency. I find that if iperf 2 rate-limits on writes, the end-to-end
latencies and RTT look good because the pipe is basically empty, while
rate-limiting reads to the same value fills the window and drives the RTT
up. One might conclude, from a network perspective, that the write side is
better. But in reality, write-side rate limiting is just pushing the delay
into the application's logic; i.e., the relevant bytes may not be in the
pipe, but they aren't at the receiver either. They're stuck somewhere in
the "tx application space."

It wasn't obvious to me how to address this. We added burst measurements
(burst xfer time and bursts/sec), which, I think, help.
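
One way to see the difference (a conceptual sketch, not iperf 2's actual
measurement code; the framing and names are invented): stamp the burst
start time into the payload and let the receiver compute the burst
transfer time when the last byte arrives. That captures the delay whether
it sits in the network, in kernel buffers, or in "tx application space",
which write-side pacing alone would hide. It assumes the two clocks are
synchronized (e.g. via PTP) well enough for one-way measurements, as
discussed earlier in the thread.

    # Conceptual burst-transfer-time measurement (not iperf 2 source code).
    import socket, struct, time

    BURST_SIZE = 64 * 1024
    HDR = struct.Struct("!dI")   # burst start timestamp (float64) + length

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-burst")
            buf += chunk
        return buf

    def send_burst(sock):
        start = time.time()
        sock.sendall(HDR.pack(start, BURST_SIZE) + b"\0" * BURST_SIZE)

    def recv_burst(sock):
        start, length = HDR.unpack(recv_exact(sock, HDR.size))
        recv_exact(sock, length)
        return time.time() - start   # sender's first write to last byte read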

Bob

On Tue, Jul 13, 2021 at 10:49 AM David P. Reed <dpreed@deepplum.com> wrote:

> Bob -
>
> On Tuesday, July 13, 2021 1:07pm, "Bob McMahon" <bob.mcmahon@broadcom.com>
> said:
>
> > "Control at endpoints benefits greatly from even small amounts of
> > information supplied by the network about the degree of congestion
> present
> > on the path."
> >
> > Agreed. The ECN mechanism seems like a shared thermostat in a building.
> > It's basically an on/off where everyone is trying to set the temperature.
> > It does affect, in a non-linear manner, but still an effect. Better than
> a
> > thermostat set at infinity or 0 Kelvin for sure.
> >
> > I find the assumption that congestion occurs "in network" as not always
> > true. Taking OWD measurements with read side rate limiting suggests that
> > equally important to mitigating bufferbloat driven latency using
> congestion
> > signals is to make sure apps read "fast enough" whatever that means. I
> > rarely hear about how important it is for apps to prioritize reads over
> > open sockets. Not sure why that's overlooked and bufferbloat gets all the
> > attention. I'm probably missing something.
>
> In the early days of the Internet protocol and also even ARPANET Host-Host
> protocol there were those who conflated host-level "flow control" (matching
> production rate of data into the network to the destination *process*
> consumption rate of data on a virtual circuit with a source capable of
> variable and unbounded bit rate) with "congestion control" in the network.
> The term "congestion control" wasn't even used in the Internetworking
> project when it was discussing design in the late 1970's. I tried to use it
> in our working group meetings, and every time I said "congestion" the
> response would be phrased as "flow".
>
> The classic example was printing a file's contents from disk to an ASR33
> terminal on an TIP (Terminal IMP). There was flow control in the end-to-end
> protocol to avoid overflowing the TTY's limited buffer. But those who grew
> up with ARPANET knew that thare was no way to accumulate queueing in the
> IMP network, because of RFNM's that required permission for each new packet
> to be sent. RFNM's implicitly prevented congestion from being caused by a
> virtual circuit. But a flow control problem remained, because at the higher
> level protocol, buffering would overflow at the TIP.
>
> TCP adopted a different end-to-end *flow* control, so it solved the flow
> control problem by creating a Windowing mechanism. But it did not by itself
> solve the *congestion* control problem, even congestion built up inside the
> network by a wide-open window and a lazy operating system at the receiving
> end that just said, I've got a lot of virtual memory so I'll open the
> window to maximum size.
>
> There was a lot of confusion, because the guys who came from the ARPANET
> environment, with all links being the same speed and RFNM limits on rate,
> couldn't see why the Internet stack was so collapse-prone. I think Multics,
> for example, as a giant virtual memory system caused congestion by opening
> up its window too much.
>
> This is where Van Jacobson discovered that dropped packets were a "good
> enough" congestion signal because of "fate sharing" among the packets that
> flowed on a bottleneck path, and that windowing (invented for flow control
> by the receiver to protect itself from overflow if the receiver couldn't
> receive fast enough) could be used to slow down the sender to match the
> rate of senders to the capacity of the internal bottleneck link. An elegant
> "hack" that actually worked really well in practice.
>
> Now we view it as a bug if the receiver opens its window too much, or
> otherwise doesn't translate dropped packets (or other incipient-congestion
> signals) to shut down the source transmission rate as quickly as possible.
> Fortunately, the proper state of the internet - the one it should seek as
> its ideal state - is that there is at most one packet waiting for each
> egress link in the bottleneck path. This stable state ensures that the
> window-reduction or slow-down signal encounters no congestion, with high
> probability. [Excursions from one-packet queue occur, but since only
> one-packet waiting is sufficient to fill the bottleneck link to capacity,
> they can't achieve higher throughput in steady state. In practice, noisy
> arrival distributions can reduce throughput, so allowing a small number of
> packets to be waiting on a bottleneck link's queue can slightly increase
> throughput. That's not asymptotically relevant, but as mentioned, the
> Internet is never near asymptotic behavior.]
>
>
> >
> > Bob
> >
> > On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <amr@rizk.com.de> wrote:
> >
> >> Ben,
> >>
> >> it depends on what one tries to measure. Doing a rate scan using UDP (to
> >> measure latency distributions under load) is the best thing that we have
> >> but without actually knowing how resources are shared (fair share as in
> >> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
> >> interpret the results or provide a proper argument on latency. You are
> >> right - TCP stats are a proxy for user experience but I believe they are
> >> difficult to reproduce (we are always talking about very short TCP
> flows -
> >> the infinite TCP flow that converges to a steady behavior is purely
> >> academic).
> >>
> >> By the way, Little's law is a strong tool when it comes to averages. To
> be
> >> able to say more (e.g. 1% of the delays is larger than x) one requires
> more
> >> information (e.g. the traffic - On-OFF pattern) see [1].  I am not sure
> >> when does such information readily exist.
> >>
> >> Best
> >> Amr
> >>
> >> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a
> paywall
> >> https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
> >>
> >> --------------------------------
> >> Amr Rizk (amr.rizk@uni-due.de)
> >> University of Duisburg-Essen
> >>
> >> -----Ursprüngliche Nachricht-----
> >> Von: Bloat <bloat-bounces@lists.bufferbloat.net> Im Auftrag von Ben
> Greear
> >> Gesendet: Montag, 12. Juli 2021 22:32
> >> An: Bob McMahon <bob.mcmahon@broadcom.com>
> >> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast <
> >> make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <
> lk@cs.ucla.edu>;
> >> David P. Reed <dpreed@deepplum.com>; Cake List <
> cake@lists.bufferbloat.net>;
> >> codel@lists.bufferbloat.net; cerowrt-devel <
> >> cerowrt-devel@lists.bufferbloat.net>; bloat <
> bloat@lists.bufferbloat.net>
> >> Betreff: Re: [Bloat] Little's Law mea culpa, but not invalidating my
> main
> >> point
> >>
> >> UDP is better for getting actual packet latency, for sure.  TCP is
> >> typical-user-experience-latency though, so it is also useful.
> >>
> >> I'm interested in the test and visualization side of this.  If there
> were
> >> a way to give engineers a good real-time look at a complex real-world
> >> network, then they have something to go on while trying to tune various
> >> knobs in their network to improve it.
> >>
> >> I'll let others try to figure out how build and tune the knobs, but the
> >> data acquisition and visualization is something we might try to
> >> accomplish.  I have a feeling I'm not the first person to think of this,
> >> however....probably someone already has done such a thing.
> >>
> >> Thanks,
> >> Ben
> >>
> >> On 7/12/21 1:04 PM, Bob McMahon wrote:
> >> > I believe end host's TCP stats are insufficient as seen per the
> >> > "failed" congested control mechanisms over the last decades. I think
> >> > Jaffe pointed this out in
> >> > 1979 though he was using what's been deemed on this thread as
> "spherical
> >> cow queueing theory."
> >> >
> >> > "Flow control in store-and-forward computer networks is appropriate
> >> > for decentralized execution. A formal description of a class of
> >> > "decentralized flow control algorithms" is given. The feasibility of
> >> > maximizing power with such algorithms is investigated. On the
> >> > assumption that communication links behave like M/M/1 servers it is
> >> shown that no "decentralized flow control algorithm" can maximize
> network
> >> power. Power has been suggested in the literature as a network
> performance
> >> objective. It is also shown that no objective based only on the users'
> >> throughputs and average delay is decentralizable. Finally, a restricted
> >> class of algorithms cannot even approximate power."
> >> >
> >> > https://ieeexplore.ieee.org/document/1095152
> >> >
> >> > Did Jaffe make a mistake?
> >> >
> >> > Also, it's been observed that latency is non-parametric in it's
> >> > distributions and computing gaussians per the central limit theorem
> >> > for OWD feedback loops aren't effective. How does one design a control
> >> loop around things that are non-parametric? It also begs the question,
> what
> >> are the feed forward knobs that can actually help?
> >> >
> >> > Bob
> >> >
> >> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
> >> <mailto:greearb@candelatech.com>> wrote:
> >> >
> >> >     Measuring one or a few links provides a bit of data, but seems
> like
> >> if someone is trying to understand
> >> >     a large and real network, then the OWD between point A and B needs
> >> to just be input into something much
> >> >     more grand.  Assuming real-time OWD data exists between 100 to
> 1000
> >> endpoint pairs, has anyone found a way
> >> >     to visualize this in a useful manner?
> >> >
> >> >     Also, considering something better than ntp may not really scale
> to
> >> 1000+ endpoints, maybe round-trip
> >> >     time is only viable way to get this type of data.  In that case,
> >> maybe clever logic could use things
> >> >     like trace-route to get some idea of how long it takes to get
> 'onto'
> >> the internet proper, and so estimate
> >> >     the last-mile latency.  My assumption is that the last-mile
> latency
> >> is where most of the pervasive
> >> >     assymetric network latencies would exist (or just ping 8.8.8.8
> which
> >> is 20ms from everywhere due to
> >> >     $magic).
> >> >
> >> >     Endpoints could also triangulate a bit if needed, using some
> anchor
> >> points in the network
> >> >     under test.
> >> >
> >> >     Thanks,
> >> >     Ben
> >> >
> >> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
> >> >      > iperf 2 supports OWD and gives full histograms for TCP write to
> >> read, TCP connect times, latency of packets (with UDP), latency of
> "frames"
> >> with
> >> >      > simulated video traffic (TCP and UDP), xfer times of bursts
> with
> >> low duty cycle traffic, and TCP RTT (sampling based.) It also has
> support
> >> for sampling (per
> >> >      > interval reports) down to 100 usecs if configured with
> >> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> >> released all this as open source.
> >> >      >
> >> >      > OWD only works if the end realtime clocks are synchronized
> using
> >> a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> >> centers don't
> >> >     provide
> >> >      > sufficient level of clock accuracy and the GPS pulse per
> second *
> >> to colo and vm customers.
> >> >      >
> >> >      > https://iperf2.sourceforge.io/iperf-manpage.html
> >> >      >
> >> >      > Bob
> >> >      >
> >> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
> >> dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:
> >> dpreed@deepplum.com
> >> >     <mailto:dpreed@deepplum.com>>> wrote:
> >> >      >
> >> >      >
> >> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
> >> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
> >> >     <mailto:Jason_Livingood@comcast.com <mailto:
> >> Jason_Livingood@comcast.com>>> said:
> >> >      >
> >> >      >      > I think latency/delay is becoming seen to be as
> important
> >> certainly, if not a more direct proxy for end user QoE. This is all
> still
> >> evolving and I
> >> >     have
> >> >      >     to say is a super interesting & fun thing to work on. :-)
> >> >      >
> >> >      >     If I could manage to sell one idea to the management
> >> hierarchy of communications industry CEOs (operators, vendors, ...) it
> is
> >> this one:
> >> >      >
> >> >      >     "It's the end-to-end latency, stupid!"
> >> >      >
> >> >      >     And I mean, by end-to-end, latency to complete a task at a
> >> relevant layer of abstraction.
> >> >      >
> >> >      >     At the link level, it's packet send to packet receive
> >> completion.
> >> >      >
> >> >      >     But at the transport level including retransmission
> buffers,
> >> it's datagram (or message) origination until the acknowledgement arrives
> >> for that
> >> >     message being
> >> >      >     delivered after whatever number of retransmissions, freeing
> >> the retransmission buffer.
> >> >      >
> >> >      >     At the WWW level, it's mouse click to display update
> >> corresponding to completion of the request.
> >> >      >
> >> >      >     What should be noted is that lower level latencies don't
> >> directly predict the magnitude of higher-level latencies. But longer
> lower
> >> level latencies
> >> >     almost
> >> >      >     always amplfify higher level latencies. Often non-linearly.
> >> >      >
> >> >      >     Throughput is very, very weakly related to these latencies,
> >> in contrast.
> >> >      >
> >> >      >     The amplification process has to do with the presence of
> >> queueing. Queueing is ALWAYS bad for latency, and throughput only helps
> if
> >> it is in exactly the
> >> >      >     right place (the so-called input queue of the bottleneck
> >> process, which is often a link, but not always).
> >> >      >
> >> >      >     Can we get that slogan into Harvard Business Review? Can we
> >> get it taught in Managerial Accounting at HBS? (which does address
> >> logistics/supply chain
> >> >     queueing).
> >> >      >
> >> >      >
> >> >      >
> >> >      >
> >> >      >
> >> >      >
> >> >      >
> >> >
> >> >
> >> >     --
> >> >     Ben Greear <greearb@candelatech.com <mailto:
> greearb@candelatech.com
> >> >>
> >> >     Candela Technologies Inc http://www.candelatech.com
> >> >
> >> >
> >>
> >>
> >> --
> >> Ben Greear <greearb@candelatech.com>
> >> Candela Technologies Inc  http://www.candelatech.com
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> >>
> >
>
>
>


[-- Attachment #1.2: Type: text/html, Size: 25427 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-14 18:37                                   ` Bob McMahon
@ 2021-07-15  1:27                                     ` Holland, Jake
  2021-07-16  0:34                                       ` Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Holland, Jake @ 2021-07-15  1:27 UTC (permalink / raw)
  To: Bob McMahon, David P. Reed
  Cc: Cake List, Make-Wifi-fast, Leonard Kleinrock, starlink, codel,
	cerowrt-devel, bloat, Ben Greear

From: Bob McMahon via Bloat <bloat@lists.bufferbloat.net>
> Date: Wed,2021-07-14 at 11:38 AM
> One challenge I faced with iperf 2 was around flow control's effects on
> latency. I find if iperf 2 rate limits on writes then the end/end
> latencies, RTT look good because the pipe is basically empty, while rate
> limiting reads to the same value fills the window and drives the RTT up.
> One might conclude, from a network perspective, the write side is
> better.  But in reality, the write rate limiting is just pushing the
> delay into the application's logic, i.e. the relevant bytes may not be
> in the pipe but they aren't at the receiver either, they're stuck
> somewhere in the "tx application space."
>
> It wasn't obvious to me how to address this. We added burst measurements
> (burst xfer time, and bursts/sec) which, I think, helps.
...
>>> I find the assumption that congestion occurs "in network" as not always
>>> true. Taking OWD measurements with read side rate limiting suggests that
>>> equally important to mitigating bufferbloat driven latency using congestion
>>> signals is to make sure apps read "fast enough" whatever that means. I
>>> rarely hear about how important it is for apps to prioritize reads over
>>> open sockets. Not sure why that's overlooked and bufferbloat gets all the
>>> attention. I'm probably missing something.

Hi Bob,

You're right that the sender generally also has to avoid sending
more than the receiver can handle to avoid delays in a message-
reply cycle on the same TCP flow.

In general, I think of failures here as application faults rather
than network faults.  While important for user experience, it's
something that an app developer can solve.  That's importantly
different from network buffering.

It's also somewhat possible to avoid getting excessively backed up
in the network because of your own traffic.  Here bbr usually does
a decent job of keeping the queues decently low.  (And you'll maybe
find that some of the bufferbloat measurement efforts are relying
on the self-congestion you get out of cubic, so if you switch them
to bbr you might not get a good answer on how big the network buffers
are.)

In general, anything along these lines has to give backpressure to
the sender somehow.  What I'm guessing you saw when you did receiver-
side rate limiting was that the backpressure had to fill bytes all
the way back to a full receive kernel buffer (making a 0 rwnd for
TCP) and a full send kernel buffer before the send writes start
failing (I think with ENOBUFS iirc?), and that's the first hint the
sender has that it can't send more data right now.  The assumption
that the receiver can receive as fast as the sender can send is so
common that it often goes unstated.

(If you love to suffer, you can maybe get the backpressure to start
earlier, and with maybe a lower impact to your app-level RTT, if
you try hard enough from the receive side with TCP_WINDOW_CLAMP:
https://man7.org/linux/man-pages/man7/tcp.7.html#:~:text=tcp_window_clamp
But you'll still be living with a full send buffer ahead of the
message-response.)
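
For anyone who wants to try that clamp experiment, a minimal sketch of the
receive-side call in C, assuming Linux (the helper name and the 64 KB value
are illustrative only):

    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Clamp the advertised TCP receive window on an accepted socket so the
     * sender sees backpressure as a shrinking rwnd rather than only as a
     * completely full receive buffer. */
    static int clamp_rx_window(int fd)
    {
        int clamp = 64 * 1024;
        if (setsockopt(fd, IPPROTO_TCP, TCP_WINDOW_CLAMP,
                       &clamp, sizeof(clamp)) < 0) {
            perror("setsockopt(TCP_WINDOW_CLAMP)");
            return -1;
        }
        return 0;
    }

The interaction with receive-buffer autotuning is subtle, so it's worth
confirming the advertised window in a packet capture rather than trusting
the setsockopt() alone.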

But usually the right thing to do if you want receiver-driven rate
control is to send back some kind of explicit "slow down, it's too
fast for me" feedback at the app layer that will make the sender send
slower.  For instance most ABR players will shift down their bitrate
if they're failing to render video fast enough just as well as if the
network isn't feeding the video segments fast enough, like if they're
CPU-bound from something else churning on the machine.  (RTP-based
video players are supposed to send feedback with this same kind of
"slow down" capability, and sometimes they do.)

But what you can't fix from the endpoints no matter how hard you
try is the buffers in the network that get filled by other people's
traffic.

Getting other people's traffic to avoid breaking my latency when
we're sharing a bottleneck requires deploying something in the network
and it's not something I can fix myself except inside my own network.

While the app-specific fixes would make for very fine blog posts or
stack overflow questions that could help someone who managed to search
the right terms, there's a lot of different approaches for different
apps that can solve it more or less, and anyone who tries hard enough
will land on something that works well enough for them, and you don't
need a whole movement to get people to make it so their own app works
ok for them and their users.  The problems can be subtle and maybe
there will be some late and frustrating nights involved, but anyone
who gets it reproducible and keeps digging will solve it eventually.

But getting stuff deployed in networks to stop people's traffic
breaking each other's latency is harder, especially when it's a
major challenge for people to even grasp the problem and understand
its causes.  The only possible paths to getting a solution widely
deployed (assuming you have one that works) start with things like
"start an advocacy movement" or "get a controlling interest in Cisco".

Best,
Jake



^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-15  1:27                                     ` Holland, Jake
@ 2021-07-16  0:34                                       ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-16  0:34 UTC (permalink / raw)
  To: Holland, Jake
  Cc: David P. Reed, Cake List, Make-Wifi-fast, Leonard Kleinrock,
	starlink, codel, cerowrt-devel, bloat, Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 7155 bytes --]

Ok, adding support for TCP_WINDOW_CLAMP and TCP_NOTSENT_LOWAT into iperf 2
seems useful for TCP WiFi latency related testing.  These option names are
quite obscure, though; I can't see anyone beyond the most dedicated networking
geeks knowing what they actually do.

Here are some proposed command/option names which wouldn't pass any "parser
police" (Parser police was the internal discussion list we used at Cisco to
review router commands. The Cisco cli is a disaster even with pp, but less
so than what could have been.)

{"tcp-rx-window-clamp", required_argument, &rxwinclamp, 1},
{"tcp-not-sent-low-watermark", required_argument, &txnotsendlowwater, 1},

I'd for sure like to rename "tcp-not-sent-low-watermark" to something more
intuitive. (My daughter, trained in linguistics, is having a field day
laughing at this "nerd language" that is beyond human comprehension.) This
cli option sets the socket option and causes the use of select for writes
(vs a write spin loop.)
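
For readers following along, a minimal sketch of what the watermark option
plus select()-gated writes amounts to at the socket level, assuming Linux's
TCP_NOTSENT_LOWAT semantics (names and parameters are illustrative, not
iperf 2 internals):

    #include <unistd.h>
    #include <sys/select.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Mark the socket unwritable once more than "lowat" bytes sit unsent
     * in the send buffer, then gate each write() on select() instead of
     * spinning on write(). */
    static int lowat_write(int fd, const char *buf, size_t len, int lowat)
    {
        if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                       &lowat, sizeof(lowat)) < 0)
            return -1;

        size_t off = 0;
        while (off < len) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0) /* block until writable */
                return -1;
            ssize_t n = write(fd, buf + off, len - off);
            if (n < 0)
                return -1;
            off += (size_t)n;
        }
        return 0;
    }

With the option set, select()/poll()/epoll report the socket writable only
while the unsent backlog is below the watermark, which is what replaces the
write spin loop.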

Thanks in advance for any suggestions here,
Bob

On Wed, Jul 14, 2021 at 6:27 PM Holland, Jake <jholland@akamai.com> wrote:

> From: Bob McMahon via Bloat <bloat@lists.bufferbloat.net>
> > Date: Wed,2021-07-14 at 11:38 AM
> > One challenge I faced with iperf 2 was around flow control's effects on
> > latency. I find if iperf 2 rate limits on writes then the end/end
> > latencies, RTT look good because the pipe is basically empty, while rate
> > limiting reads to the same value fills the window and drives the RTT up.
> > One might conclude, from a network perspective, the write side is
> > better.  But in reality, the write rate limiting is just pushing the
> > delay into the application's logic, i.e. the relevant bytes may not be
> > in the pipe but they aren't at the receiver either, they're stuck
> > somewhere in the "tx application space."
> >
> > It wasn't obvious to me how to address this. We added burst measurements
> > (burst xfer time, and bursts/sec) which, I think, helps.
> ...
> >>> I find the assumption that congestion occurs "in network" as not always
> >>> true. Taking OWD measurements with read side rate limiting suggests
> that
> >>> equally important to mitigating bufferbloat driven latency using
> congestion
> >>> signals is to make sure apps read "fast enough" whatever that means. I
> >>> rarely hear about how important it is for apps to prioritize reads over
> >>> open sockets. Not sure why that's overlooked and bufferbloat gets all
> the
> >>> attention. I'm probably missing something.
>
> Hi Bob,
>
> You're right that the sender generally also has to avoid sending
> more than the receiver can handle to avoid delays in a message-
> reply cycle on the same TCP flow.
>
> In general, I think of failures here as application faults rather
> than network faults.  While important for user experience, it's
> something that an app developer can solve.  That's importantly
> different from network buffering.
>
> It's also somewhat possible to avoid getting excessively backed up
> in the network because of your own traffic.  Here bbr usually does
> a decent job of keeping the queues decently low.  (And you'll maybe
> find that some of the bufferbloat measurement efforts are relying
> on the self-congestion you get out of cubic, so if you switch them
> to bbr you might not get a good answer on how big the network buffers
> are.)
>
> In general, anything along these lines has to give backpressure to
> the sender somehow.  What I'm guessing you saw when you did receiver-
> side rate limiting was that the backpressure had to fill bytes all
> the way back to a full receive kernel buffer (making a 0 rwnd for
> TCP) and a full send kernel buffer before the send writes start
> failing (I think with ENOBUFS iirc?), and that's the first hint the
> sender has that it can't send more data right now.  The assumption
> that the receiver can receive as fast as the sender can send is so
> common that it often goes unstated.
>
> (If you love to suffer, you can maybe get the backpressure to start
> earlier, and with maybe a lower impact to your app-level RTT, if
> you try hard enough from the receive side with TCP_WINDOW_CLAMP:
> https://man7.org/linux/man-pages/man7/tcp.7.html#:~:text=tcp_window_clamp
> But you'll still be living with a full send buffer ahead of the
> message-response.)
>
> But usually the right thing to do if you want receiver-driven rate
> control is to send back some kind of explicit "slow down, it's too
> fast for me" feedback at the app layer that will make the sender send
> slower.  For instance most ABR players will shift down their bitrate
> if they're failing to render video fast enough just as well as if the
> network isn't feeding the video segments fast enough, like if they're
> CPU-bound from something else churning on the machine.  (RTP-based
> video players are supposed to send feedback with this same kind of
> "slow down" capability, and sometimes they do.)
>
> But what you can't fix from the endpoints no matter how hard you
> try is the buffers in the network that get filled by other people's
> traffic.
>
> Getting other people's traffic to avoid breaking my latency when
> we're sharing a bottleneck requires deploying something in the network
> and it's not something I can fix myself except inside my own network.
>
> While the app-specific fixes would make for very fine blog posts or
> stack overflow questions that could help someone who managed to search
> the right terms, there's a lot of different approaches for different
> apps that can solve it more or less, and anyone who tries hard enough
> will land on something that works well enough for them, and you don't
> need a whole movement to get people to make it so their own app works
> ok for them and their users.  The problems can be subtle and maybe
> there will be some late and frustrating nights involved, but anyone
> who gets it reproducible and keeps digging will solve it eventually.
>
> But getting stuff deployed in networks to stop people's traffic
> breaking each other's latency is harder, especially when it's a
> major challenge for people to even grasp the problem and understand
> its causes.  The only possible paths to getting a solution widely
> deployed (assuming you have one that works) start with things like
> "start an advocacy movement" or "get a controlling interest in Cisco".
>
> Best,
> Jake
>
>
>


[-- Attachment #1.2: Type: text/html, Size: 8452 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-12 20:32                           ` [Cerowrt-devel] " Ben Greear
                                               ` (2 preceding siblings ...)
  2021-07-13  7:14                             ` [Cerowrt-devel] " Amr Rizk
@ 2021-07-17 23:29                             ` Aaron Wood
  2021-07-18 19:06                               ` Bob McMahon
  3 siblings, 1 reply; 108+ messages in thread
From: Aaron Wood @ 2021-07-17 23:29 UTC (permalink / raw)
  To: Ben Greear
  Cc: Bob McMahon, starlink, Make-Wifi-fast, Leonard Kleinrock,
	David P. Reed, Cake List, codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 11249 bytes --]

On Mon, Jul 12, 2021 at 1:32 PM Ben Greear <greearb@candelatech.com> wrote:

> UDP is better for getting actual packet latency, for sure.  TCP is
> typical-user-experience-latency though,
> so it is also useful.
>
> I'm interested in the test and visualization side of this.  If there were
> a way to give engineers
> a good real-time look at a complex real-world network, then they have
> something to go on while trying
> to tune various knobs in their network to improve it.
>

I've always liked the smoke-ping visualization, although a single graph is
only really useful for a single pair of endpoints (or a single segment,
maybe).  But I can see using a repeated set of graphs (Tufte has some
examples), that can represent an overview of pairwise collections of
latency+loss:
https://www.edwardtufte.com/bboard/images/0003Cs-8047.GIF
https://www.edwardtufte.com/tufte/psysvcs_p2

These work for understanding because the tiled graphs are all identically
constructed, and the reader first learns how to read a single tile, and
then learns the pattern of which tiles represent which measurements.

Further, they are opinionated.  In the second link above, the y axis is not
based on the measured data but on standardized expected values, which (I
think) is key to quick readability.  You never need to read the axes.  Much
like setting up gauges such that "nominal" is always at the same indicator
position for all graphs (e.g. straight up).  At a glance, you can see if
things are "correct" or not.

That tiling arrangement wouldn't be great for showing interrelationships
(although it may give you a good historical view of correlated behavior).
One thought is to overlay a network graph diagram (graph of all network
links) with small "sparkline" type graphs.

For a more physically based network graph, I could see visualizing the queue
depth for each egress port (max value over a time of X, or percentage of
time at max depth).

Taken together, the timewise correlation could be useful (which peers are
having problems communicating, and which ports between them are impacted?).

I think getting good data about queue depth may be the hard part,
especially catching transients and the duty cycle / pulse-width of the load
(and then converting that to a number).  Back when I uncovered that the iperf
application-level pacing granularity was too high, about 5 years ago, I called
them "millibursts", and maybe dtaht pointed out that link utilization is
always 0% or 100%, and it's just a matter of the PWM of the packet rate
that makes it look like something in between.
https://burntchrome.blogspot.com/2016/09/iperf3-and-microbursts.html
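
To make the 0%/100% duty-cycle framing concrete, a small hypothetical helper
(not from iperf or any tool mentioned here) that turns the packet sizes seen
in a measurement window into a utilization figure:

    #include <stddef.h>

    /* The link is either serializing a packet (100%) or idle (0%), so
     * "utilization" over a window is just total serialization time divided
     * by the window length, i.e. the PWM duty cycle. */
    static double link_duty_cycle(const double *pkt_bits, size_t n,
                                  double link_bps, double window_s)
    {
        double busy_s = 0.0;
        for (size_t i = 0; i < n; i++)
            busy_s += pkt_bits[i] / link_bps; /* time to serialize packet i */
        return busy_s / window_s;             /* fraction of window spent sending */
    }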



> I'll let others try to figure out how build and tune the knobs, but the
> data acquisition and
> visualization is something we might try to accomplish.  I have a feeling
> I'm not the
> first person to think of this, however....probably someone already has
> done such
> a thing.
>
> Thanks,
> Ben
>
> On 7/12/21 1:04 PM, Bob McMahon wrote:
> > I believe end host's TCP stats are insufficient as seen per the "failed"
> congested control mechanisms over the last decades. I think Jaffe pointed
> this out in
> > 1979 though he was using what's been deemed on this thread as "spherical
> cow queueing theory."
> >
> > "Flow control in store-and-forward computer networks is appropriate for
> decentralized execution. A formal description of a class of "decentralized
> flow control
> > algorithms" is given. The feasibility of maximizing power with such
> algorithms is investigated. On the assumption that communication links
> behave like M/M/1
> > servers it is shown that no "decentralized flow control algorithm" can
> maximize network power. Power has been suggested in the literature as a
> network
> > performance objective. It is also shown that no objective based only on
> the users' throughputs and average delay is decentralizable. Finally, a
> restricted class
> > of algorithms cannot even approximate power."
> >
> > https://ieeexplore.ieee.org/document/1095152
> >
> > Did Jaffe make a mistake?
> >
> > Also, it's been observed that latency is non-parametric in it's
> distributions and computing gaussians per the central limit theorem for OWD
> feedback loops
> > aren't effective. How does one design a control loop around things that
> are non-parametric? It also begs the question, what are the feed forward
> knobs that can
> > actually help?
> >
> > Bob
> >
> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
> <mailto:greearb@candelatech.com>> wrote:
> >
> >     Measuring one or a few links provides a bit of data, but seems like
> if someone is trying to understand
> >     a large and real network, then the OWD between point A and B needs
> to just be input into something much
> >     more grand.  Assuming real-time OWD data exists between 100 to 1000
> endpoint pairs, has anyone found a way
> >     to visualize this in a useful manner?
> >
> >     Also, considering something better than ntp may not really scale to
> 1000+ endpoints, maybe round-trip
> >     time is only viable way to get this type of data.  In that case,
> maybe clever logic could use things
> >     like trace-route to get some idea of how long it takes to get 'onto'
> the internet proper, and so estimate
> >     the last-mile latency.  My assumption is that the last-mile latency
> is where most of the pervasive
> >     assymetric network latencies would exist (or just ping 8.8.8.8 which
> is 20ms from everywhere due to
> >     $magic).
> >
> >     Endpoints could also triangulate a bit if needed, using some anchor
> points in the network
> >     under test.
> >
> >     Thanks,
> >     Ben
> >
> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
> >      > iperf 2 supports OWD and gives full histograms for TCP write to
> read, TCP connect times, latency of packets (with UDP), latency of "frames"
> with
> >      > simulated video traffic (TCP and UDP), xfer times of bursts with
> low duty cycle traffic, and TCP RTT (sampling based.) It also has support
> for sampling (per
> >      > interval reports) down to 100 usecs if configured with
> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
> released all this as open source.
> >      >
> >      > OWD only works if the end realtime clocks are synchronized using
> a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
> centers don't
> >     provide
> >      > sufficient level of clock accuracy and the GPS pulse per second *
> to colo and vm customers.
> >      >
> >      > https://iperf2.sourceforge.io/iperf-manpage.html
> >      >
> >      > Bob
> >      >
> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
> dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:
> dpreed@deepplum.com
> >     <mailto:dpreed@deepplum.com>>> wrote:
> >      >
> >      >
> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
> >     <mailto:Jason_Livingood@comcast.com <mailto:
> Jason_Livingood@comcast.com>>> said:
> >      >
> >      >      > I think latency/delay is becoming seen to be as important
> certainly, if not a more direct proxy for end user QoE. This is all still
> evolving and I
> >     have
> >      >     to say is a super interesting & fun thing to work on. :-)
> >      >
> >      >     If I could manage to sell one idea to the management
> hierarchy of communications industry CEOs (operators, vendors, ...) it is
> this one:
> >      >
> >      >     "It's the end-to-end latency, stupid!"
> >      >
> >      >     And I mean, by end-to-end, latency to complete a task at a
> relevant layer of abstraction.
> >      >
> >      >     At the link level, it's packet send to packet receive
> completion.
> >      >
> >      >     But at the transport level including retransmission buffers,
> it's datagram (or message) origination until the acknowledgement arrives
> for that
> >     message being
> >      >     delivered after whatever number of retransmissions, freeing
> the retransmission buffer.
> >      >
> >      >     At the WWW level, it's mouse click to display update
> corresponding to completion of the request.
> >      >
> >      >     What should be noted is that lower level latencies don't
> directly predict the magnitude of higher-level latencies. But longer lower
> level latencies
> >     almost
> >      >     always amplfify higher level latencies. Often non-linearly.
> >      >
> >      >     Throughput is very, very weakly related to these latencies,
> in contrast.
> >      >
> >      >     The amplification process has to do with the presence of
> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if
> it is in exactly the
> >      >     right place (the so-called input queue of the bottleneck
> process, which is often a link, but not always).
> >      >
> >      >     Can we get that slogan into Harvard Business Review? Can we
> get it taught in Managerial Accounting at HBS? (which does address
> logistics/supply chain
> >     queueing).
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >      >
> >
> >
> >     --
> >     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
> >>
> >     Candela Technologies Inc http://www.candelatech.com
> >
> >
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 14879 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Make-wifi-fast] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 23:56                   ` [Cerowrt-devel] [Bloat] " Jonathan Morton
@ 2021-07-17 23:56                     ` Aaron Wood
  0 siblings, 0 replies; 108+ messages in thread
From: Aaron Wood @ 2021-07-17 23:56 UTC (permalink / raw)
  To: Jonathan Morton
  Cc: Leonard Kleinrock, starlink, Make-Wifi-fast, Bob McMahon,
	David P. Reed, Cake List, codel, cerowrt-devel, bloat,
	Ben Greear

[-- Attachment #1: Type: text/plain, Size: 4788 bytes --]

With the disclaimer that I'm not as strong in statistics and modelling as
I'd like to be....

I think it's not useful to attempt to stochastically model the behavior of
what are actually active (well, reactive) components.  The responses of
each piece are deterministic, but the inputs (users) are not.  So while you
could maybe measure the behavior of a network, and then build a hidden
markov model that can produce the same results, I don't see how it would be
useful for testing the behavior of either the reactive components (TCP CC
algs) or the layers below the reactive components (queues and links),
because the model needs to react to the behavior of the pieces it's sitting
on top of, not due to a stochastic process that's independent (in the
statistical sense) of the underlying queues and links.

Probably a "well duh..." thought for many here.  But I was _amazed_ when
working with very senior engineers at network hardware companies, who said
all testing was done with a static blend of "i-mix" traffic (in both
directions), even though they were looking at last-mile network usage which
was going to be primarily TCP download, just like a home, and nothing like
i-mix.  Nor did they account for the fact that the applications running on top
of that gear were actually reacting to their (mis-)management of queues and loads.

On Fri, Jul 9, 2021 at 4:56 PM Jonathan Morton <chromatix99@gmail.com>
wrote:

> > On 10 Jul, 2021, at 2:01 am, Leonard Kleinrock <lk@cs.ucla.edu> wrote:
> >
> > No question that non-stationarity and instability are what we often see
> in networks.  And, non-stationarity and instability are both topics that
> lead to very complex analytical problems in queueing theory.  You can find
> some results on the transient analysis in the queueing theory literature
> (including the second volume of my Queueing Systems book), but they are
> limited and hard. Nevertheless, the literature does contain some works on
> transient analysis of queueing systems as applied to network congestion
> control - again limited. On the other hand, as you said, control theory
> addresses stability head on and does offer some tools as well, but again,
> it is hairy.
>
> I was just about to mention control theory.
>
> One basic characteristic of Poisson traffic is that it is inelastic, and
> assumes there is no control feedback whatsoever.  This means it can only be
> a valid model when the following are both true:
>
> 1: The offered load is *below* the link capacity, for all links, averaged
> over time.
>
> 2: A high degree of statistical multiplexing exists.
>
> If 1: is not true and the traffic is truly inelastic, then the queues will
> inevitably fill up and congestion collapse will result, as shown from
> ARPANET experience in the 1980s; the solution was to introduce control
> feedback to the traffic, initially in the form of TCP Reno.  If 2: is not
> true then the traffic cannot be approximated as Poisson arrivals,
> regardless of load relative to capacity, because the degree of correlation
> is too high.
>
> Taking the iPhone introduction anecdote as an illustrative example,
> measuring utilisation as very close to 100% is a clear warning sign that
> the Poisson model was inappropriate, and a control-theory approach was
> needed instead, to capture the feedback effects of congestion control.  The
> high degree of statistical multiplexing inherent to a major ISP backhaul is
> irrelevant to that determination.
>
> Such a model would have found that the primary source of control feedback
> was human users giving up in disgust.  However, different humans have
> different levels of tolerance and persistence, so this feedback was not
> sufficient to reduce the load sufficiently to give the majority of users a
> good service; instead, *all* users received a poor service and many users
> received no usable service.  Introducing a technological control feedback,
> in the form of packet loss upon overflow of correctly-sized queues,
> improved service for everyone.
>
> (BTW, DNS becomes significantly unreliable around 1-2 seconds RTT, due to
> protocol timeouts, which is inherited by all applications that rely on DNS
> lookups.  Merely reducing the delays consistently below that threshold
> would have improved perceived reliability markedly.)
>
> Conversely, when talking about the traffic on a single ISP subscriber's
> last-mile link, the Poisson model has to be discarded due to criterion 2
> being false.  The number of flows going to even a family household is
> probably in the low dozens at best.  A control-theory approach can also
> work here.
>
>  - Jonathan Morton
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast

[-- Attachment #2: Type: text/html, Size: 5480 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-17 23:29                             ` [Cerowrt-devel] " Aaron Wood
@ 2021-07-18 19:06                               ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-18 19:06 UTC (permalink / raw)
  To: Aaron Wood
  Cc: Ben Greear, starlink, Make-Wifi-fast, Leonard Kleinrock,
	David P. Reed, Cake List, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 14259 bytes --]

Just an FYI,

iperf 2 uses a 4 usec delay for TCP and 100 usec delay for UDP to fill the
token bucket. We thought about providing a knob for this but decided not
to. We figured a busy wait CPU thread wasn't a big deal because of the
trend of many CPU cores. The threaded design works well for this. We also
support fq-pacing and isochronous traffic using clock_nanosleep() to
schedule the writes. We'll probably add Markov chain support but that's not
critical and may not affect actionable engineering. We found isoch as a
useful traffic profile, at least for our WiFi testing. I'm going to add
support for TCP_NOTSENT_LOWAT for select()/write() based transmissions. I'm
doubtful this is very useful as event based scheduling based on times seems
better. We'll probably use it for unit testing WiFi aggregation and see if
it helps there or not. I'll see if it aligns with the OWD measurements.
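
For anyone unfamiliar with the pattern, paced sends with clock_nanosleep()
look roughly like the sketch below; this is illustrative only, not iperf 2's
actual scheduler:

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    /* Pace transmissions at a fixed rate using absolute deadlines so an
     * oversleep on one interval doesn't accumulate into drift.  "send_one"
     * stands in for whatever writes the next burst or frame. */
    static void paced_send(int fd, long per_sec, int count, void (*send_one)(int))
    {
        struct timespec next;
        long period_ns = 1000000000L / per_sec;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < count; i++) {
            send_one(fd);
            next.tv_nsec += period_ns;
            while (next.tv_nsec >= 1000000000L) { /* normalize the timespec */
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            /* sleep until the absolute deadline of the next transmission */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }

TIMER_ABSTIME keeps the schedule anchored to absolute points on
CLOCK_MONOTONIC, which is what makes an isochronous profile hold up under
scheduling jitter.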

On queue depth, we use two techniques. The most obvious is to measure the
end to end delay and use rx histograms, getting all the samples without
averaging. The second, internal for us only, is using network telemetry and
mapping all the clock domains to the GPS domain. At any moment in time the
end/end path can be inspected to see where every packet is.
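
As a rough illustration of "all the samples without averaging", a minimal
fixed-bin histogram sketch (bin width and count are arbitrary choices here,
not iperf 2's actual layout):

    #include <stdint.h>

    #define BIN_US 100   /* 100 microseconds per bin */
    #define NBINS  1000  /* covers 0..100 ms; last bin holds the tail */

    static uint64_t owd_hist[NBINS];

    /* Record one one-way-delay sample; nothing is averaged away, so the
     * full distribution (including the tail) stays visible. */
    static void record_owd(uint64_t owd_us)
    {
        uint64_t bin = owd_us / BIN_US;
        if (bin >= NBINS)
            bin = NBINS - 1; /* clamp extreme samples into the tail bin */
        owd_hist[bin]++;
    }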

Our automated testing is focused around unit tests and used to
statistically monitor code changes (which come at a high rate and apply to
a broad range of chips), so the requirements can be very different from those
of a network or service provider.

Agreed that the number of knobs and reactive components is a challenge.
And one must assume non-linearity, which becomes obvious after a few direct
measurements (i.e. no averaging.) The challenge of statistical
reproducibility is always there. We find Monte Carlo techniques can be useful
only when they are proven to be statistically reproducible.

Bob


On Sat, Jul 17, 2021 at 4:29 PM Aaron Wood <woody77@gmail.com> wrote:

> On Mon, Jul 12, 2021 at 1:32 PM Ben Greear <greearb@candelatech.com>
> wrote:
>
>> UDP is better for getting actual packet latency, for sure.  TCP is
>> typical-user-experience-latency though,
>> so it is also useful.
>>
>> I'm interested in the test and visualization side of this.  If there were
>> a way to give engineers
>> a good real-time look at a complex real-world network, then they have
>> something to go on while trying
>> to tune various knobs in their network to improve it.
>>
>
> I've always liked the smoke-ping visualization, although a single graph is
> only really useful for a single pair of endpoints (or a single segment,
> maybe).  But I can see using a repeated set of graphs (Tufte has some
> examples), that can represent an overview of pairwise collections of
> latency+loss:
> https://www.edwardtufte.com/bboard/images/0003Cs-8047.GIF
> https://www.edwardtufte.com/tufte/psysvcs_p2
>
> These work for understanding because the tiled graphs are all identically
> constructed, and the reader first learns how to read a single tile, and
> then learns the pattern of which tiles represent which measurements.
>
> Further, they are opinionated.  In the second link above, the y axis is
> not based on the measured data, but standardized expected values, which (I
> think) is key to quick readability.  You never need to read the axes.  Much
> like setting up gauges such that "nominal" is always at the same indicator
> position for all graphs (e.g. straight up).  At a glance, you can see if
> things are "correct" or not.
>
> That tiling arrangement wouldn't be great for showing interrelationships
> (although it may give you a good historical view of correlated behavior).
> One thought is to overlay a network graph diagram (graph of all network
> links) with small "sparkline" type graphs.
>
> For a more physical-based network graph, I could see visualizing the queue
> depth for each egress port (max value over a time of X, or percentage of
> time at max depth).
>
> Taken together, the timewise correlation could be useful (which peers are
> having problems communicating, and which ports between them are impacted?).
>
> I think getting good data about queue depth may be the hard part,
> especially catching transients and the duty cycle / pulse-width of the load
> (and then converting that to a number).  Back when I uncovered the iperf
> application-level pacing granularity was too high 5 years ago, I called it
> them "millibursts", and maybe dtaht pointed out that link utilization is
> always 0% or 100%, and it's just a matter of the PWM of the packet rate
> that makes it look like something in between.
> https://burntchrome.blogspot.com/2016/09/iperf3-and-microbursts.html
>
>
>
> I'll let others try to figure out how build and tune the knobs, but the
>> data acquisition and
>> visualization is something we might try to accomplish.  I have a feeling
>> I'm not the
>> first person to think of this, however....probably someone already has
>> done such
>> a thing.
>>
>> Thanks,
>> Ben
>>
>> On 7/12/21 1:04 PM, Bob McMahon wrote:
>> > I believe end host's TCP stats are insufficient as seen per the
>> "failed" congested control mechanisms over the last decades. I think Jaffe
>> pointed this out in
>> > 1979 though he was using what's been deemed on this thread as
>> "spherical cow queueing theory."
>> >
>> > "Flow control in store-and-forward computer networks is appropriate for
>> decentralized execution. A formal description of a class of "decentralized
>> flow control
>> > algorithms" is given. The feasibility of maximizing power with such
>> algorithms is investigated. On the assumption that communication links
>> behave like M/M/1
>> > servers it is shown that no "decentralized flow control algorithm" can
>> maximize network power. Power has been suggested in the literature as a
>> network
>> > performance objective. It is also shown that no objective based only on
>> the users' throughputs and average delay is decentralizable. Finally, a
>> restricted class
>> > of algorithms cannot even approximate power."
>> >
>> > https://ieeexplore.ieee.org/document/1095152
>> >
>> > Did Jaffe make a mistake?
>> >
>> > Also, it's been observed that latency is non-parametric in it's
>> distributions and computing gaussians per the central limit theorem for OWD
>> feedback loops
>> > aren't effective. How does one design a control loop around things that
>> are non-parametric? It also begs the question, what are the feed forward
>> knobs that can
>> > actually help?
>> >
>> > Bob
>> >
>> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com
>> <mailto:greearb@candelatech.com>> wrote:
>> >
>> >     Measuring one or a few links provides a bit of data, but seems like
>> if someone is trying to understand
>> >     a large and real network, then the OWD between point A and B needs
>> to just be input into something much
>> >     more grand.  Assuming real-time OWD data exists between 100 to 1000
>> endpoint pairs, has anyone found a way
>> >     to visualize this in a useful manner?
>> >
>> >     Also, considering something better than ntp may not really scale to
>> 1000+ endpoints, maybe round-trip
>> >     time is only viable way to get this type of data.  In that case,
>> maybe clever logic could use things
>> >     like trace-route to get some idea of how long it takes to get
>> 'onto' the internet proper, and so estimate
>> >     the last-mile latency.  My assumption is that the last-mile latency
>> is where most of the pervasive
>> >     assymetric network latencies would exist (or just ping 8.8.8.8
>> which is 20ms from everywhere due to
>> >     $magic).
>> >
>> >     Endpoints could also triangulate a bit if needed, using some anchor
>> points in the network
>> >     under test.
>> >
>> >     Thanks,
>> >     Ben
>> >
>> >     On 7/12/21 11:21 AM, Bob McMahon wrote:
>> >      > iperf 2 supports OWD and gives full histograms for TCP write to
>> read, TCP connect times, latency of packets (with UDP), latency of "frames"
>> with
>> >      > simulated video traffic (TCP and UDP), xfer times of bursts with
>> low duty cycle traffic, and TCP RTT (sampling based.) It also has support
>> for sampling (per
>> >      > interval reports) down to 100 usecs if configured with
>> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
>> released all this as open source.
>> >      >
>> >      > OWD only works if the end realtime clocks are synchronized using
>> a "machine level" protocol such as IEEE 1588 or PTP. Sadly, *most data
>> centers don't
>> >     provide
>> >      > sufficient level of clock accuracy and the GPS pulse per second
>> * to colo and vm customers.
>> >      >
>> >      > https://iperf2.sourceforge.io/iperf-manpage.html
>> >      >
>> >      > Bob
>> >      >
>> >      > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <
>> dpreed@deepplum.com <mailto:dpreed@deepplum.com> <mailto:
>> dpreed@deepplum.com
>> >     <mailto:dpreed@deepplum.com>>> wrote:
>> >      >
>> >      >
>> >      >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" <
>> Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>
>> >     <mailto:Jason_Livingood@comcast.com <mailto:
>> Jason_Livingood@comcast.com>>> said:
>> >      >
>> >      >      > I think latency/delay is becoming seen to be as important
>> certainly, if not a more direct proxy for end user QoE. This is all still
>> evolving and I
>> >     have
>> >      >     to say is a super interesting & fun thing to work on. :-)
>> >      >
>> >      >     If I could manage to sell one idea to the management
>> hierarchy of communications industry CEOs (operators, vendors, ...) it is
>> this one:
>> >      >
>> >      >     "It's the end-to-end latency, stupid!"
>> >      >
>> >      >     And I mean, by end-to-end, latency to complete a task at a
>> relevant layer of abstraction.
>> >      >
>> >      >     At the link level, it's packet send to packet receive
>> completion.
>> >      >
>> >      >     But at the transport level including retransmission buffers,
>> it's datagram (or message) origination until the acknowledgement arrives
>> for that
>> >     message being
>> >      >     delivered after whatever number of retransmissions, freeing
>> the retransmission buffer.
>> >      >
>> >      >     At the WWW level, it's mouse click to display update
>> corresponding to completion of the request.
>> >      >
>> >      >     What should be noted is that lower level latencies don't
>> directly predict the magnitude of higher-level latencies. But longer lower
>> level latencies
>> >     almost
>> >      >     always amplfify higher level latencies. Often non-linearly.
>> >      >
>> >      >     Throughput is very, very weakly related to these latencies,
>> in contrast.
>> >      >
>> >      >     The amplification process has to do with the presence of
>> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if
>> it is in exactly the
>> >      >     right place (the so-called input queue of the bottleneck
>> process, which is often a link, but not always).
>> >      >
>> >      >     Can we get that slogan into Harvard Business Review? Can we
>> get it taught in Managerial Accounting at HBS? (which does address
>> logistics/supply chain
>> >     queueing).
>> >      >
>> >      >
>> >      >
>> >      >
>> >      >
>> >      >
>> >      >
>> >
>> >
>> >     --
>> >     Ben Greear <greearb@candelatech.com <mailto:greearb@candelatech.com
>> >>
>> >     Candela Technologies Inc http://www.candelatech.com
>> >
>> >
>>
>>
>> --
>> Ben Greear <greearb@candelatech.com>
>> Candela Technologies Inc  http://www.candelatech.com
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>


[-- Attachment #1.2: Type: text/html, Size: 17981 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] Little's Law mea culpa, but not invalidating my main point
       [not found]                                   ` <A5E35F34-A4D5-45B1-8E2D-E2F6DE988A1E@cs.ucla.edu>
@ 2021-07-22 16:30                                     ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-07-22 16:30 UTC (permalink / raw)
  To: Leonard Kleinrock
  Cc: David P. Reed, Amr Rizk, Ben Greear, starlink, Make-Wifi-fast,
	Cake List, codel, cerowrt-devel, bloat, Dave Taht


[-- Attachment #1.1: Type: text/plain, Size: 21572 bytes --]

Thanks for this. I plan to purchase the second volume to go with my copy of
volume 1. There is (always) more to learn and your expertise is very
helpful.

Bob

PS.  As a side note, I've added support for TCP_NOTSENT_LOWAT in iperf 2.1.4
<https://iperf2.sourceforge.io/iperf-manpage.html> and it's proving useful
for WiFi/BT latency testing, including helping to mitigate sender-side bloat.

  --tcp-write-prefetch n[kmKM]
      Set TCP_NOTSENT_LOWAT on the socket and use event-based writes per
      select() on the socket.

I'll probably add measurement of the select() delays to see if that
correlates to things like RF arbitrations, etc.
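For anyone who wants to poke at the same mechanism outside of iperf 2, here is
a minimal Python sketch of "set TCP_NOTSENT_LOWAT and write on select()". It is
not taken from the iperf 2 sources; the 128 KB low-water mark is an arbitrary
illustration value, and the fallback option number 25 is the Linux value, used
only if the running Python lacks the constant.

    import select, socket

    TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)  # 25 on Linux

    def send_paced(sock, payload, lowat=128 * 1024):
        # Cap the unsent bytes held in the kernel, then write only when select()
        # reports the socket writable (i.e. the unsent backlog is below lowat).
        sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, lowat)
        sock.setblocking(False)
        view = memoryview(payload)
        while view:
            _, writable, _ = select.select([], [sock], [])
            if writable:
                sent = sock.send(view)
                view = view[sent:]
            # The time spent blocked in select() above is the sender-side signal
            # mentioned in the PS: it grows when the path or the receiver backs up.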


On Wed, Jul 21, 2021 at 4:20 PM Leonard Kleinrock <lk@cs.ucla.edu> wrote:

> Just a few comments following David Reed's insightful comments re the
> history of the ARPANET and its approach to flow control.  I have attached
> some pages from my Volume II which provide an understanding of how we
> addressed flow control and its implementation in the ARPANET.
>
> The early days of the ARPANET design and evaluation involved detailed
> design of what we did call “Flow Control”.  In my "Queueing Systems, Volume
> II: Computer Applications”, John Wiley, 1976, I documented much of what we
> designed and evaluated for the ARPANET, and focused on performance,
> deadlocks, lockups and degradations due to flow control design.  Aspects of
> congestion control were considered, but this 2-volume book was mostly about
> understanding congestion.    Of interest are the many deadlocks that we
> discovered in those early days as we evaluated and measured the network
> behavior.  Flow control was designed into that early network, but it had a
> certain ad-hoc flavor and I point out the danger of requiring flows to
> depend upon the acquisition of multiple tokens that were allocated from
> different portions of the network at the same time in a distributed
> fashion.  The attached relevant sections of the book address these issues;
>  I thought it would be of value to see what we were looking at back then.
>
> On a related topic regarding flow and congestion control (as triggered by
> David’s comment* "**at most one packet waiting for each egress link in
> the bottleneck path.”*), in 1978, I published a paper
> <https://www.lk.cs.ucla.edu/data/files/Kleinrock/On%20Flow%20Control%20in%20Computer%20Networks.pdf> in
> which I extended the notion of Power (the ratio of throughput to response
> time) that had been introduced by Giessler, et al.
> <https://www.sciencedirect.com/science/article/abs/pii/0376507578900284>
> and I pointed out the amazing properties that emerged when Power is
> optimized, e.g., that one should keep each hop in the pipe “just full”,
> i.e., one message per hop.  As it turns out, and as has been discussed in
> this email chain, Jaffe
> <https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1095152> showed
> in 1981 that this optimization was not decentralizable and so no one
> pursued this optimal operating point (notwithstanding the fact that I
> published other papers on this issue, for example in 1979
> <https://www.lk.cs.ucla.edu/data/files/Kleinrock/Power%20and%20Deterministic%20Rules%20of%20Thumb%20for%20Probabilistic.pdf> and
> in 1981 <https://www.lk.cs.ucla.edu/data/files/Gail/power.pdf>).  So this
> issue of Power lay dormant for decades until Van Jacobson, et al.,
> resurrected the idea with their BBR flow control design in 2016
> <https://queue.acm.org/detail.cfm?id=3022184> when they showed that
> indeed one could decentralize power.  Considerable research has since
> followed their paper including another by me in 2018
> <https://www.lk.cs.ucla.edu/data/files/Kleinrock/Internet%20congestion%20control%20using%20the%20power%20metric%20LK%20Mod%20aug%202%202018.pdf>.
> (This was not the first time that a publication challenging the merits of a
> new idea negatively impacted that idea for decades - for example, the 1969
> book “Perceptrons”
> <https://www.amazon.com/Perceptrons-Introduction-Computational-Geometry-Expanded/dp/0262631113/ref=sr_1_2?dchild=1&keywords=perceptrons&qid=1626846378&sr=8-2> by
> Minsky and Papert discouraged research into neural networks for many years
> until that idea was proven to have merit.)  But the story is not over as
> much  work has yet to be done to develop the algorithms that can properly
> deal with congestion in the sense that this email chain continues to
> discuss it.
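As a concrete, if idealized, illustration of the "just full" operating point:
under textbook M/M/1 assumptions (which, as discussed elsewhere in this thread,
real traffic does not satisfy), with arrival rate \lambda and service rate \mu,

    P(\lambda) \;=\; \frac{\text{throughput}}{\text{response time}}
              \;=\; \frac{\lambda}{1/(\mu-\lambda)} \;=\; \lambda(\mu-\lambda),
    \qquad
    \frac{dP}{d\lambda} = 0 \;\Rightarrow\; \lambda^{*} = \frac{\mu}{2},
    \qquad
    \bar N\Big|_{\rho=1/2} \;=\; \frac{\rho}{1-\rho} \;=\; 1,

i.e. the power-optimal load keeps, on average, exactly one message in the hop.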
>
> Best,
> Len
>
>
>
>
>
>
>
> On Jul 13, 2021, at 10:49 AM, David P. Reed <dpreed@deepplum.com> wrote:
>
> Bob -
>
> On Tuesday, July 13, 2021 1:07pm, "Bob McMahon" <bob.mcmahon@broadcom.com>
> said:
>
> "Control at endpoints benefits greatly from even small amounts of
> information supplied by the network about the degree of congestion present
> on the path."
>
> Agreed. The ECN mechanism seems like a shared thermostat in a building.
> It's basically an on/off where everyone is trying to set the temperature.
> It does have an effect, in a non-linear manner, but an effect nonetheless. Better than a
> thermostat set at infinity or 0 Kelvin for sure.
>
> I find the assumption that congestion occurs "in the network" to be not always
> true. Taking OWD measurements with read-side rate limiting suggests that,
> equally important to mitigating bufferbloat-driven latency using congestion
> signals, is making sure apps read "fast enough," whatever that means. I
> rarely hear about how important it is for apps to prioritize reads over
> open sockets. Not sure why that's overlooked while bufferbloat gets all the
> attention. I'm probably missing something.
>
>
> In the early days of the Internet protocol and also even ARPANET Host-Host
> protocol there were those who conflated host-level "flow control" (matching
> production rate of data into the network to the destination *process*
> consumption rate of data on a virtual circuit with a source capable of
> variable and unbounded bit rate) with "congestion control" in the network.
> The term "congestion control" wasn't even used in the Internetworking
> project when it was discussing design in the late 1970's. I tried to use it
> in our working group meetings, and every time I said "congestion" the
> response would be phrased as "flow".
>
> The classic example was printing a file's contents from disk to an ASR33
> terminal on an TIP (Terminal IMP). There was flow control in the end-to-end
> protocol to avoid overflowing the TTY's limited buffer. But those who grew
> up with ARPANET knew that there was no way to accumulate queueing in the
> IMP network, because of RFNM's that required permission for each new packet
> to be sent. RFNM's implicitly prevented congestion from being caused by a
> virtual circuit. But a flow control problem remained, because at the higher
> level protocol, buffering would overflow at the TIP.
>
> TCP adopted a different end-to-end *flow* control, so it solved the flow
> control problem by creating a Windowing mechanism. But it did not by itself
> solve the *congestion* control problem, even when congestion built up inside the
> network due to a wide-open window and a lazy operating system at the receiving
> end that just said, I've got a lot of virtual memory so I'll open the
> window to maximum size.
>
> There was a lot of confusion, because the guys who came from the ARPANET
> environment, with all links being the same speed and RFNM limits on rate,
> couldn't see why the Internet stack was so collapse-prone. I think Multics,
> for example, as a giant virtual memory system caused congestion by opening
> up its window too much.
>
> This is where Van Jacobson discovered that dropped packets were a "good
> enough" congestion signal because of "fate sharing" among the packets that
> flowed on a bottleneck path, and that windowing (invented for flow control
> by the receiver to protect itself from overflow if the receiver couldn't
> receive fast enough) could be used to slow down the sender, matching the
> senders' rate to the capacity of the internal bottleneck link. An elegant
> "hack" that actually worked really well in practice.
>
> Now we view it as a bug if the receiver opens its window too much, or
> otherwise doesn't translate dropped packets (or other incipient-congestion
> signals) to shut down the source transmission rate as quickly as possible.
> Fortunately, the proper state of the internet - the one it should seek as
> its ideal state - is that there is at most one packet waiting for each
> egress link in the bottleneck path. This stable state ensures that the
> window-reduction or slow-down signal encounters no congestion, with high
> probability. [Excursions from one-packet queue occur, but since only
> one-packet waiting is sufficient to fill the bottleneck link to capacity,
> they can't achieve higher throughput in steady state. In practice, noisy
> arrival distributions can reduce throughput, so allowing a small number of
> packets to be waiting on a bottleneck link's queue can slightly increase
> throughput. That's not asymptotically relevant, but as mentioned, the
> Internet is never near asymptotic behavior.]
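A toy numerical check of that bracketed point, my own sketch rather than
anything from the thread: keep a standing backlog of either 1 or 10 packets in
front of a 100 packet/s link and count deliveries. Both cases deliver the same
~100 packets/s; by Little's law the deeper queue only adds waiting time (10 ms
vs 100 ms at this one link).

    def delivered(backlog_target, capacity=100.0, seconds=10.0, dt=0.001):
        # The source always keeps `backlog_target` packets queued at a link
        # that serves `capacity` packets per second.
        served, credit = 0, 0.0
        for _ in range(int(seconds / dt)):
            credit += capacity * dt
            backlog = backlog_target          # refill the standing queue each tick
            while credit >= 1.0 and backlog > 0:
                credit -= 1.0
                backlog -= 1
                served += 1
        return served / seconds               # delivered packets per second

    # Both print ~100.0: one waiting packet already keeps the bottleneck busy.
    # The standing queue only adds delay (N/throughput = 10 ms vs 100 ms here).
    print(delivered(1), delivered(10))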
>
>
>
> Bob
>
> On Tue, Jul 13, 2021 at 12:15 AM Amr Rizk <amr@rizk.com.de> wrote:
>
> Ben,
>
> it depends on what one tries to measure. Doing a rate scan using UDP (to
> measure latency distributions under load) is the best thing that we have
> but without actually knowing how resources are shared (fair share as in
> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
> interpret the results or provide a proper argument on latency. You are
> right - TCP stats are a proxy for user experience but I believe they are
> difficult to reproduce (we are always talking about very short TCP flows -
> the infinite TCP flow that converges to a steady behavior is purely
> academic).
>
> By the way, Little's law is a strong tool when it comes to averages. To be
> able to say more (e.g. 1% of the delays is larger than x) one requires more
> information (e.g. the traffic ON-OFF pattern); see [1].  I am not sure
> when such information readily exists.
>
> Best
> Amr
>
> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a paywall
> https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
>
> --------------------------------
> Amr Rizk (amr.rizk@uni-due.de)
> University of Duisburg-Essen
>
> -----Original Message-----
> From: Bloat <bloat-bounces@lists.bufferbloat.net> On Behalf Of Ben Greear
> Sent: Monday, 12 July 2021 22:32
> To: Bob McMahon <bob.mcmahon@broadcom.com>
> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast <
> make-wifi-fast@lists.bufferbloat.net>; Leonard Kleinrock <lk@cs.ucla.edu>;
> David P. Reed <dpreed@deepplum.com>; Cake List <cake@lists.bufferbloat.net
> >;
> codel@lists.bufferbloat.net; cerowrt-devel <
> cerowrt-devel@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main
> point
>
> UDP is better for getting actual packet latency, for sure.  TCP is
> typical-user-experience-latency though, so it is also useful.
>
> I'm interested in the test and visualization side of this.  If there were
> a way to give engineers a good real-time look at a complex real-world
> network, then they have something to go on while trying to tune various
> knobs in their network to improve it.
>
> I'll let others try to figure out how to build and tune the knobs, but the
> data acquisition and visualization is something we might try to
> accomplish.  I have a feeling I'm not the first person to think of this,
> however... probably someone has already done such a thing.
>
> Thanks,
> Ben
>
> On 7/12/21 1:04 PM, Bob McMahon wrote:
>
> I believe end hosts' TCP stats are insufficient, as seen per the
> "failed" congestion control mechanisms over the last decades. I think
> Jaffe pointed this out in
> 1979 though he was using what's been deemed on this thread as "spherical
>
> cow queueing theory."
>
>
> "Flow control in store-and-forward computer networks is appropriate
> for decentralized execution. A formal description of a class of
> "decentralized flow control algorithms" is given. The feasibility of
> maximizing power with such algorithms is investigated. On the
> assumption that communication links behave like M/M/1 servers it is
>
> shown that no "decentralized flow control algorithm" can maximize network
> power. Power has been suggested in the literature as a network performance
> objective. It is also shown that no objective based only on the users'
> throughputs and average delay is decentralizable. Finally, a restricted
> class of algorithms cannot even approximate power."
>
>
> https://ieeexplore.ieee.org/document/1095152
>
> Did Jaffe make a mistake?
>
> Also, it's been observed that latency is non-parametric in its
> distributions, and computing Gaussians per the central limit theorem
> for OWD feedback loops isn't effective. How does one design a control
> loop around things that are non-parametric? It also raises the question: what
> are the feed-forward knobs that can actually help?
>
>
> Bob
>
> On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb@candelatech.com> wrote:
>
>
>    Measuring one or a few links provides a bit of data, but seems like
>    if someone is trying to understand a large and real network, then the
>    OWD between point A and B needs to just be input into something much
>    more grand.  Assuming real-time OWD data exists between 100 to 1000
>    endpoint pairs, has anyone found a way to visualize this in a useful
>    manner?
>
>    Also, considering something better than ntp may not really scale to
>    1000+ endpoints, maybe round-trip time is the only viable way to get
>    this type of data.  In that case, maybe clever logic could use things
>    like trace-route to get some idea of how long it takes to get 'onto'
>    the internet proper, and so estimate the last-mile latency.  My
>    assumption is that the last-mile latency is where most of the pervasive
>    asymmetric network latencies would exist (or just ping 8.8.8.8, which
>    is 20ms from everywhere due to $magic).
>
>    Endpoints could also triangulate a bit if needed, using some anchor
>    points in the network under test.
>
>    Thanks,
>    Ben
>
>    On 7/12/21 11:21 AM, Bob McMahon wrote:
>
> iperf 2 supports OWD and gives full histograms for TCP write to read, TCP
> connect times, latency of packets (with UDP), latency of "frames" with
> simulated video traffic (TCP and UDP), xfer times of bursts with low duty
> cycle traffic, and TCP RTT (sampling based). It also has support for
> sampling (per interval reports) down to 100 usecs if configured with
> --enable-fastsampling; otherwise the fastest sampling is 5 ms. We've
> released all this as open source.
>
> OWD only works if the end realtime clocks are synchronized using a
> "machine level" protocol such as IEEE 1588 or PTP. Sadly, most data
> centers don't provide a sufficient level of clock accuracy and the GPS
> pulse per second to colo and VM customers.
>
> https://iperf2.sourceforge.io/iperf-manpage.html
>
> Bob
>
> On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed@deepplum.com> wrote:
>
>
>
>    On Monday, July 12, 2021 9:46am, "Livingood, Jason"
>    <Jason_Livingood@comcast.com> said:
>
>
>    I think latency/delay is becoming seen to be as important certainly,
>    if not a more direct proxy for end user QoE. This is all still evolving
>    and I have to say is a super interesting & fun thing to work on. :-)
>
>    If I could manage to sell one idea to the management hierarchy of
>    communications industry CEOs (operators, vendors, ...) it is this one:
>
>    "It's the end-to-end latency, stupid!"
>
>    And I mean, by end-to-end, latency to complete a task at a relevant
>    layer of abstraction.
>
>    At the link level, it's packet send to packet receive completion.
>
>    But at the transport level including retransmission buffers, it's
>    datagram (or message) origination until the acknowledgement arrives for
>    that message being delivered after whatever number of retransmissions,
>    freeing the retransmission buffer.
>
>    At the WWW level, it's mouse click to display update corresponding to
>    completion of the request.
>
>    What should be noted is that lower level latencies don't directly
>    predict the magnitude of higher-level latencies. But longer lower level
>    latencies almost always amplify higher level latencies. Often
>    non-linearly.
>
>    Throughput is very, very weakly related to these latencies, in
>    contrast.
>
>    The amplification process has to do with the presence of queueing.
>    Queueing is ALWAYS bad for latency, and throughput only helps if it is
>    in exactly the right place (the so-called input queue of the bottleneck
>    process, which is often a link, but not always).
>
>    Can we get that slogan into Harvard Business Review? Can we get it
>    taught in Managerial Accounting at HBS? (which does address
>    logistics/supply chain queueing).
>
>
>
>
>
>
>
>
>
>
>
>    --
>    Ben Greear <greearb@candelatech.com>
>    Candela Technologies Inc http://www.candelatech.com
>
>
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
>
>


[-- Attachment #1.2: Type: text/html, Size: 35382 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-07-09 10:05             ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] " Luca Muscariello
  2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
@ 2021-08-02 22:59               ` Bob McMahon
  2021-08-02 23:16                 ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-08-02 22:59 UTC (permalink / raw)
  To: Luca Muscariello
  Cc: Leonard Kleinrock, David P. Reed, starlink, Make-Wifi-fast,
	Cake List, codel, cerowrt-devel, bloat, Ben Greear


[-- Attachment #1.1.1: Type: text/plain, Size: 19818 bytes --]

Hi All,

Broadcom has an interest in helping engineers produce a multipath RF test
system which hopefully would be at a reasonable cost and support both RF
mixing and ranges. Both the hardware and software can be open source too.
We can also provide chip support for things like the channel estimates
(each transmission produces one), i.e. output the h-matrices.

A rough set of slides is attached.  Do contact me if this is interesting to
you or someone you know.

Thanks,
Bob

PS. I know folks want open source drivers for our WiFi and switch chips.
That's not something I can support - too much work, both technical and
human interactions, for one guy. Sorry about that.


On Fri, Jul 9, 2021 at 3:05 AM Luca Muscariello <muscariello@ieee.org>
wrote:

> For those who might be interested in Little's law
> there is a nice paper by John Little on the occasion
> of the 50th anniversary  of the result.
>
>
> https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary
>
>
> https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf
>
> Nice read.
> Luca
>
> P.S.
> Who has not a copy of L. Kleinrock's books? I do have and am not ready to
> lend them!
>
> On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <lk@cs.ucla.edu> wrote:
>
>> David,
>>
>> I totally appreciate  your attention to when and when not analytical
>> modeling works. Let me clarify a few things from your note.
>>
>> First, Little's law (also known as Little’s lemma or, as I use in my
>> book, Little’s result) does not assume Poisson arrivals -  it is good for
>> *any* arrival process and any service process and is an equality between
>> time averages.  It states that the time average of the number in a system
>> (for a sample path *w*) is equal to the average arrival rate to the
>> system multiplied by the time-averaged time in the system for that sample
>> path.  This is often written as N_TimeAvg = λ·T_TimeAvg.  Moreover, if
>> the system is also ergodic, then the time average equals the ensemble
>> average and we often write it as N̄ = λ·T̄.  In any case, this
>> requires neither Poisson arrivals nor exponential service times.
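One way to see this concretely is to measure all three time averages on a
simulated sample path with deliberately non-Poisson arrivals and
non-exponential service. The sketch below is my own illustration (the Pareto
and uniform parameters are arbitrary), not anything from iperf or the thread;
it finds N̄ ≈ λ·T̄ up to end effects.

    import random

    def littles_check(n=200_000, seed=1):
        random.seed(seed)
        # Bursty, non-Poisson interarrival times and non-exponential service times.
        t, arrivals = 0.0, []
        for _ in range(n):
            t += 0.4 * random.paretovariate(2.5)
            arrivals.append(t)
        service = [random.uniform(0.1, 0.9) for _ in range(n)]

        departures, busy_until = [], 0.0
        for a, s in zip(arrivals, service):     # a single FIFO server
            busy_until = max(a, busy_until) + s
            departures.append(busy_until)

        horizon = departures[-1]
        sojourn = [d - a for a, d in zip(arrivals, departures)]
        lam = n / horizon                 # average arrival rate on this sample path
        T = sum(sojourn) / n              # time-average time in system
        N = sum(sojourn) / horizon        # time-average number in system (area under N(t))
        return N, lam * T

    print(littles_check())                # the two values agree, with no Poisson assumption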
>>
>> Queueing theorists often do study the case of Poisson arrivals.  True, it
>> makes the analysis easier, yet there is a better reason it is often used,
>> and that is because the sum of a large number of independent stationary
>> renewal processes approaches a Poisson process.  So nature often gives us
>> Poisson arrivals.
>>
>> Best,
>> Len
>>
>>
>>
>> On Jul 8, 2021, at 12:38 PM, David P. Reed <dpreed@deepplum.com> wrote:
>>
>> I will tell you flat out that the arrival time distribution assumption
>> made by Little's Lemma that allows "estimation of queue depth" is totally
>> unreasonable on ANY Internet in practice.
>>
>>
>> The assumption is a Poisson Arrival Process. In reality, traffic arrivals
>> in real internet applications are extremely far from Poisson, and, of
>> course, using TCP windowing, become highly intercorrelated with crossing
>> traffic that shares the same queue.
>>
>>
>> So, as I've tried to tell many, many net-heads (people who ignore
>> applications layer behavior, like the people that think latency doesn't
>> matter to end users, only throughput), end-to-end packet arrival times on a
>> practical network are incredibly far from Poisson - and they are more like
>> fractal probability distributions, very irregular at all scales of time.
>>
>>
>> So, the idea that iperf can estimate queue depth by Little's Lemma by
>> just measuring saturation of capacity of a path is bogus. The less Poisson,
>> the worse the estimate gets, by a huge factor.
>>
>>
>>
>>
>> Where does the Poisson assumption come from?  Well, like many theorems,
>> it is the simplest tractable closed form solution - it creates a simplified
>> view, by being a "single-parameter" distribution (the parameter is called
>> lambda for a Poisson distribution).  And the analysis of a simple queue
>> with poisson arrival distribution and a static, fixed service time is the
>> first interesting Queueing Theory example in most textbooks. It is
>> suggestive of an interesting phenomenon, but it does NOT characterize any
>> real system.
>>
>>
>> It's the queueing theory equivalent of "First, we assume a spherical
>> cow...". in doing an example in a freshman physics class.
>>
>>
>> Unfortunately, most networking engineers understand neither queuing
>> theory nor application networking usage in interactive applications. Which
>> makes them arrogant. They assume all distributions are Poisson!
>>
>>
>>
>>
>> On Tuesday, July 6, 2021 9:46am, "Ben Greear" <greearb@candelatech.com>
>> said:
>>
>> > Hello,
>> >
>> > I am interested to hear wish lists for network testing features. We
>> make test
>> > equipment, supporting lots
>> > of wifi stations and a distributed architecture, with built-in udp,
>> tcp, ipv6,
>> > http, ... protocols,
>> > and open to creating/improving some of our automated tests.
>> >
>> > I know Dave has some test scripts already, so I'm not necessarily
>> looking to
>> > reimplement that,
>> > but more fishing for other/new ideas.
>> >
>> > Thanks,
>> > Ben
>> >
>> > On 7/2/21 4:28 PM, Bob McMahon wrote:
>> > > I think we need the language of math here. It seems like the network
>> > power metric, introduced by Kleinrock and Jaffe in the late 70s, is
>> something
>> > useful.
>> > > Effective end/end queue depths per Little's law also seems useful.
>> Both are
>> > available in iperf 2 from a test perspective. Repurposing test
>> techniques to
>> > actual
>> > > traffic could be useful. Hence the question around what exact
>> telemetry
>> > is useful to apps making socket write() and read() calls.
>> > >
>> > > Bob
>> > >
>> > > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.taht@gmail.com> wrote:
>> > >
>> > > In terms of trying to find "Quality" I have tried to encourage folk to
>> > > both read "zen and the art of motorcycle maintenance"[0], and Deming's
>> > > work on "total quality management".
>> > >
>> > > My own slice at this network, computer and lifestyle "issue" is aiming
>> > > for "imperceptible latency" in all things. [1]. There's a lot of
>> > > fallout from that in terms of not just addressing queuing delay, but
>> > > caching, prefetching, and learning more about what a user really needs
>> > > (as opposed to wants) to know via intelligent agents.
>> > >
>> > > [0] If you want to get depressed, read Pirsig's successor to "zen...",
>> > > lila, which is in part about what happens when an engineer hits an
>> > > insoluble problem.
>> > > [1] https://www.internetsociety.org/events/latency2013/
>> > >
>> > >
>> > >
>> > > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpreed@deepplum.com> wrote:
>> > > >
>> > > > Well, nice that the folks doing the conference  are willing to
>> > consider that quality of user experience has little to do with
>> signalling rate at
>> > the
>> > > physical layer or throughput of FTP transfers.
>> > > >
>> > > >
>> > > >
>> > > > But honestly, the fact that they call the problem "network quality"
>> > suggests that they REALLY, REALLY don't understand the Internet isn't
>> the hardware
>> > or
>> > > the routers or even the routing algorithms *to its users*.
>> > > >
>> > > >
>> > > >
>> > > > By ignoring the diversity of applications now and in the future,
>> > and the fact that we DON'T KNOW what will be coming up, this conference
>> will
>> > likely fall
>> > > into the usual trap that net-heads fall into - optimizing for some
>> > imaginary reality that doesn't exist, and in fact will probably never
>> be what
>> > users
>> > > actually will do given the chance.
>> > > >
>> > > >
>> > > >
>> > > > I saw this issue in 1976 in the group developing the original
>> > Internet protocols - a desire to put *into the network* special tricks
>> to optimize
>> > ASR33
>> > > logins to remote computers from terminal concentrators (aka remote
>> > login), bulk file transfers between file systems on different
>> time-sharing
>> > systems, and
>> > > "sessions" (virtual circuits) that required logins. And then trying to
>> > exploit underlying "multicast" by building it into the IP layer,
>> because someone
>> > > thought that TV broadcast would be the dominant application.
>> > > >
>> > > >
>> > > >
>> > > > Frankly, to think of "quality" as something that can be "provided"
>> > by "the network" misses the entire point of "end-to-end argument in
>> system
>> > design".
>> > > Quality is not a property defined or created by The Network. If you
>> want
>> > to talk about Quality, you need to talk about users - all the users at
>> all times,
>> > > now and into the future, and that's something you can't do if you
>> don't
>> > bother to include current and future users talking about what they
>> might expect
>> > to
>> > > experience that they don't experience.
>> > > >
>> > > >
>> > > >
>> > > > There was much fighting back in 1976 that basically involved
>> > "network experts" saying that the network was the place to "solve" such
>> issues as
>> > quality,
>> > > so applications could avoid having to solve such issues.
>> > > >
>> > > >
>> > > >
>> > > > What some of us managed to do was to argue that you can't "solve"
>> > such issues. All you can do is provide a framework that enables
>> different uses to
>> > > *cooperate* in some way.
>> > > >
>> > > >
>> > > >
>> > > > Which is why the Internet drops packets rather than queueing them,
>> > and why diffserv cannot work.
>> > > >
>> > > > (I know the latter is controversial, but at the moment, ALL of
>> > diffserv attempts to talk about end-to-end application specific
>> metrics, but
>> > never, ever
>> > > explains what the diffserv control points actually do w.r.t. what the
>> IP
>> > layer can actually control. So it is meaningless - another violation of
>> the
>> > > so-called end-to-end principle).
>> > > >
>> > > >
>> > > >
>> > > > Networks are about getting packets from here to there, multiplexing
>> > the underlying resources. That's it. Quality is a whole different
>> thing. Quality
>> > can
>> > > be improved by end-to-end approaches, if the underlying network
>> provides
>> > some kind of thing that actually creates a way for end-to-end
>> applications to
>> > > affect queueing and routing decisions, and more importantly getting
>> > "telemetry" from the network regarding what is actually going on with
>> the other
>> > > end-to-end users sharing the infrastructure.
>> > > >
>> > > >
>> > > >
>> > > > This conference won't talk about it this way. So don't waste your
>> > time.
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > On Wednesday, June 30, 2021 8:12pm, "Dave Taht"
>> > <dave.taht@gmail.com> said:
>> > > >
>> > > > > The program committee members are *amazing*. Perhaps, finally,
>> > we can
>> > > > > move the bar for the internet's quality metrics past endless,
>> > blind
>> > > > > repetitions of speedtest.
>> > > > >
>> > > > > For complete details, please see:
>> > > > > https://www.iab.org/activities/workshops/network-quality/
>> > > > >
>> > > > > Submissions Due: Monday 2nd August 2021, midnight AOE
>> > (Anywhere On Earth)
>> > > > > Invitations Issued by: Monday 16th August 2021
>> > > > >
>> > > > > Workshop Date: This will be a virtual workshop, spread over
>> > three days:
>> > > > >
>> > > > > 1400-1800 UTC Tue 14th September 2021
>> > > > > 1400-1800 UTC Wed 15th September 2021
>> > > > > 1400-1800 UTC Thu 16th September 2021
>> > > > >
>> > > > > Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
>> > > > >
>> > > > > The Program Committee members:
>> > > > >
>> > > > > Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire,
>> > Sam
>> > > > > Crowford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen,
>> > Geoff
>> > > > > Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja
>> > Kuehlewind,
>> > > > > Jason Livingood, Matt Mathias, Randall Meyer, Kathleen
>> > Nichols,
>> > > > > Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
>> > > > >
>> > > > > Send Submissions to: network-quality-workshop-pc@iab.org.
>> > > > >
>> > > > > Position papers from academia, industry, the open source
>> > community and
>> > > > > others that focus on measurements, experiences, observations
>> > and
>> > > > > advice for the future are welcome. Papers that reflect
>> > experience
>> > > > > based on deployed services are especially welcome. The
>> > organizers
>> > > > > understand that specific actions taken by operators are
>> > unlikely to be
>> > > > > discussed in detail, so papers discussing general categories
>> > of
>> > > > > actions and issues without naming specific technologies,
>> > products, or
>> > > > > other players in the ecosystem are expected. Papers should not
>> > focus
>> > > > > on specific protocol solutions.
>> > > > >
>> > > > > The workshop will be by invitation only. Those wishing to
>> > attend
>> > > > > should submit a position paper to the address above; it may
>> > take the
>> > > > > form of an Internet-Draft.
>> > > > >
>> > > > > All inputs submitted and considered relevant will be published
>> > on the
>> > > > > workshop website. The organisers will decide whom to invite
>> > based on
>> > > > > the submissions received. Sessions will be organized according
>> > to
>> > > > > content, and not every accepted submission or invited attendee
>> > will
>> > > > > have an opportunity to present as the intent is to foster
>> > discussion
>> > > > > and not simply to have a sequence of presentations.
>> > > > >
>> > > > > Position papers from those not planning to attend the virtual
>> > sessions
>> > > > > themselves are also encouraged. A workshop report will be
>> > published
>> > > > > afterwards.
>> > > > >
>> > > > > Overview:
>> > > > >
>> > > > > "We believe that one of the major factors behind this lack of
>> > progress
>> > > > > is the popular perception that throughput is the often sole
>> > measure of
>> > > > > the quality of Internet connectivity. With such narrow focus,
>> > people
>> > > > > don’t consider questions such as:
>> > > > >
>> > > > > What is the latency under typical working conditions?
>> > > > > How reliable is the connectivity across longer time periods?
>> > > > > Does the network allow the use of a broad range of protocols?
>> > > > > What services can be run by clients of the network?
>> > > > > What kind of IPv4, NAT or IPv6 connectivity is offered, and
>> > are there firewalls?
>> > > > > What security mechanisms are available for local services,
>> > such as DNS?
>> > > > > To what degree are the privacy, confidentiality, integrity
>> > and
>> > > > > authenticity of user communications guarded?
>> > > > >
>> > > > > Improving these aspects of network quality will likely depend
>> > on
>> > > > > measurement and exposing metrics to all involved parties,
>> > including to
>> > > > > end users in a meaningful way. Such measurements and exposure
>> > of the
>> > > > > right metrics will allow service providers and network
>> > operators to
>> > > > > focus on the aspects that impacts the users’ experience
>> > most and at
>> > > > > the same time empowers users to choose the Internet service
>> > that will
>> > > > > give them the best experience."
>> > > > >
>> > > > >
>> > > > > --
>> > > > > Latest Podcast:
>> > > > >
>> >
>> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> > > > >
>> > > > > Dave Täht CTO, TekLibre, LLC
>> > > > > _______________________________________________
>> > > > > Cerowrt-devel mailing list
>> > > > > Cerowrt-devel@lists.bufferbloat.net
>> > > > > https://lists.bufferbloat.net/listinfo/cerowrt-devel
>> > > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Latest Podcast:
>> > >
>> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>> > >
>> > > Dave Täht CTO, TekLibre, LLC
>> > > _______________________________________________
>> > > Make-wifi-fast mailing list
>> > > Make-wifi-fast@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> > >
>> > >
>> > >
>> > > _______________________________________________
>> > > Starlink mailing list
>> > > Starlink@lists.bufferbloat.net
>> > > https://lists.bufferbloat.net/listinfo/starlink
>> > >
>> >
>> >
>> > --
>> > Ben Greear <greearb@candelatech.com>
>> > Candela Technologies Inc http://www.candelatech.com
>> >
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>


[-- Attachment #1.1.2: Type: text/html, Size: 28482 bytes --]

[-- Attachment #1.2: RF-Topologies.pdf --]
[-- Type: application/pdf, Size: 1716168 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-02 22:59               ` [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board Bob McMahon
@ 2021-08-02 23:16                 ` David Lang
  2021-08-02 23:50                   ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
                                     ` (2 more replies)
  0 siblings, 3 replies; 108+ messages in thread
From: David Lang @ 2021-08-02 23:16 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Luca Muscariello, Cake List, Make-Wifi-fast, Leonard Kleinrock,
	starlink, codel, cerowrt-devel, bloat, Ben Greear

If you are going to set up a test environment for wifi, you need to include the 
ability to make a few cases that only happen with RF, not with wired networks, and 
are commonly overlooked (a small sketch expressing these as a table follows the list):

1. station A can hear station B and C but they cannot hear each other
2. station A can hear station B but station B cannot hear station A 
3. station A can hear that station B is transmitting, but not with a strong 
enough signal to decode the signal (yes in theory you can work around 
interference, but in practice interference is still a real thing)
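A minimal way to write those cases down, using hypothetical station names and
values of my own choosing rather than anything from an existing tool, is a
directed link table where each ordered pair is "decode", "detect" (energy only,
case 3), or "none"; a single symmetric connectivity matrix cannot express any
of the three.

    # Hedged sketch: directed "hearability" between stations (names are illustrative).
    hear = {
        # case 1: hidden nodes -- A decodes B and C, but B and C cannot hear each other
        ("A", "B"): "decode", ("B", "A"): "decode",
        ("A", "C"): "decode", ("C", "A"): "decode",
        ("B", "C"): "none",   ("C", "B"): "none",
        # case 2: asymmetric link -- D decodes E, but E cannot hear D at all
        ("D", "E"): "decode", ("E", "D"): "none",
        # case 3: F detects that G is transmitting but cannot decode the signal
        ("F", "G"): "detect", ("G", "F"): "decode",
    }

    # Any ordered pair whose reverse direction differs is something a symmetric
    # attenuation matrix alone cannot reproduce in a test environment.
    asymmetric = {tuple(sorted(p)) for p in hear if hear.get((p[1], p[0])) != hear[p]}
    print(sorted(asymmetric))             # [('D', 'E'), ('F', 'G')]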

David Lang


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-02 23:16                 ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
@ 2021-08-02 23:50                   ` Bob McMahon
  2021-08-03  3:06                     ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  2021-08-02 23:55                   ` Ben Greear
  2021-08-03  0:37                   ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " Leonard Kleinrock
  2 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-08-02 23:50 UTC (permalink / raw)
  To: David Lang
  Cc: Luca Muscariello, Cake List, Make-Wifi-fast, Leonard Kleinrock,
	starlink, codel, cerowrt-devel, bloat, Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 1909 bytes --]

The distance matrices manage energy between nodes.  The slides show a 5
branch tree to realize 4 nodes (and that distance matrix) and a diagram for
11 degrees of freedom for 6 nodes (3 BSS).  The python code will compute the
branch attenuations based on a supplied distance matrix. There will of
course be some distance matrices that cannot be achieved exactly, given the
reduced degrees of freedom available when realizing this with
physical devices in a cost-effective manner.
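To make that concrete, here is a small sketch of such a computation. It is not
the actual tooling referenced above; it simply assumes the node-to-node
attenuation in dB is the sum of the branch attenuations along the tree path,
and least-squares fits the 5 branch values of a 4-leaf tree to a supplied 4x4
distance matrix. A non-zero residual flags a matrix the tree cannot realize
exactly.

    import numpy as np

    def fit_branches(D):
        # D: symmetric 4x4 matrix of desired node-to-node attenuations in dB.
        # Tree: leaves 0,1 hang off internal node u; leaves 2,3 off v; branch 4 joins u-v.
        paths = {
            (0, 1): [0, 1],
            (0, 2): [0, 4, 2], (0, 3): [0, 4, 3],
            (1, 2): [1, 4, 2], (1, 3): [1, 4, 3],
            (2, 3): [2, 3],
        }
        A = np.zeros((len(paths), 5))
        b = np.empty(len(paths))
        for row, ((i, j), branches) in enumerate(paths.items()):
            A[row, branches] = 1.0
            b[row] = D[i][j]
        x, residual, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x, residual      # branch attenuations; residual > 0 => not exactly realizable

    D = [[0, 40, 60, 60],
         [40, 0, 60, 60],
         [60, 60, 0, 40],
         [60, 60, 40, 0]]
    print(fit_branches(D))      # five 20 dB branches, ~zero residual for this example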

Bob

On Mon, Aug 2, 2021 at 4:16 PM David Lang <david@lang.hm> wrote:

> If you are going to setup a test environment for wifi, you need to include
> the
> ability to make a fe cases that only happen with RF, not with wired
> networks and
> are commonly overlooked
>
> 1. station A can hear station B and C but they cannot hear each other
> 2. station A can hear station B but station B cannot hear station A
> 3. station A can hear that station B is transmitting, but not with a
> strong
> enough signal to decode the signal (yes in theory you can work around
> interference, but in practice interference is still a real thing)
>
> David Lang
>
>


[-- Attachment #1.2: Type: text/html, Size: 2302 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-02 23:16                 ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  2021-08-02 23:50                   ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-02 23:55                   ` Ben Greear
  2021-08-03  0:01                     ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  2021-08-03  0:37                   ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " Leonard Kleinrock
  2 siblings, 1 reply; 108+ messages in thread
From: Ben Greear @ 2021-08-02 23:55 UTC (permalink / raw)
  To: David Lang, Bob McMahon
  Cc: Luca Muscariello, Cake List, Make-Wifi-fast, Leonard Kleinrock,
	starlink, codel, cerowrt-devel, bloat

On 8/2/21 4:16 PM, David Lang wrote:
> If you are going to setup a test environment for wifi, you need to include the ability to make a fe cases that only happen with RF, not with wired networks and 
> are commonly overlooked
> 
> 1. station A can hear station B and C but they cannot hear each other
> 2. station A can hear station B but station B cannot hear station A 3. station A can hear that station B is transmitting, but not with a strong enough signal to 
> decode the signal (yes in theory you can work around interference, but in practice interference is still a real thing)
> 
> David Lang
> 

To add to this, I think you need lots of different station devices: different capabilities (/n, /ac, /ax, etc.),
different numbers of spatial streams, and different distances from the AP.  From a download queueing perspective, changing
the capabilities may be sufficient while keeping all stations at the same distance.  This assumes you are not
actually testing the wifi rate-ctrl alg. itself, so different throughput levels for different stations would be enough.

So, a good station emulator setup (and/or a pile of real stations) plus a few RF chambers and
programmable attenuators, and you can test that setup...

From an upload perspective, I guess the same setup would do the job.  Queueing/fairness might depend a bit more on the
station devices, emulated or otherwise, but I guess a clever AP could enforce fairness in the upstream direction
too by implementing per-sta queues.

Thanks,
Ben

-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-02 23:55                   ` Ben Greear
@ 2021-08-03  0:01                     ` Bob McMahon
  2021-08-03  3:12                       ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-08-03  0:01 UTC (permalink / raw)
  To: Ben Greear
  Cc: David Lang, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 3131 bytes --]

We find four nodes, a primary BSS and an adjunct one, quite good for lots of
testing.  Six nodes allow for a primary BSS and two adjacent ones. We
want to minimize complexity to what is necessary and sufficient.

The challenge we find is having variability (e.g. Monte Carlo runs) that's
reproducible and carries relevant information. Basically, the distance matrices
have h-matrices as their elements. Our chips can provide these h-matrices.

The parts for solid-state programmable attenuators and phase shifters
aren't very expensive. A device that supports a five branch tree and 2x2
MIMO seems a very good starting point.

Bob

On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com> wrote:

> On 8/2/21 4:16 PM, David Lang wrote:
> > If you are going to setup a test environment for wifi, you need to
> include the ability to make a fe cases that only happen with RF, not with
> wired networks and
> > are commonly overlooked
> >
> > 1. station A can hear station B and C but they cannot hear each other
> > 2. station A can hear station B but station B cannot hear station A 3.
> station A can hear that station B is transmitting, but not with a strong
> enough signal to
> > decode the signal (yes in theory you can work around interference, but
> in practice interference is still a real thing)
> >
> > David Lang
> >
>
> To add to this, I think you need lots of different station devices,
> different capabilities (/n, /ac, /ax, etc)
> different numbers of spatial streams, and different distances from the
> AP.  From download queueing perspective, changing
> the capabilities may be sufficient while keeping all stations at same
> distance.  This assumes you are not
> actually testing the wifi rate-ctrl alg. itself, so different throughput
> levels for different stations would be enough.
>
> So, a good station emulator setup (and/or pile of real stations) and a few
> RF chambers and
> programmable attenuators and you can test that setup...
>
>  From upload perspective, I guess same setup would do the job.
> Queuing/fairness might depend a bit more on the
> station devices, emulated or otherwise, but I guess a clever AP could
> enforce fairness in upstream direction
> too by implementing per-sta queues.
>
> Thanks,
> Ben
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
>


[-- Attachment #1.2: Type: text/html, Size: 3765 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-02 23:16                 ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  2021-08-02 23:50                   ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  2021-08-02 23:55                   ` Ben Greear
@ 2021-08-03  0:37                   ` Leonard Kleinrock
  2021-08-03  1:24                     ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  2021-08-08  4:20                     ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
  2 siblings, 2 replies; 108+ messages in thread
From: Leonard Kleinrock @ 2021-08-03  0:37 UTC (permalink / raw)
  To: David Lang
  Cc: Leonard Kleinrock, Bob McMahon, Luca Muscariello, Cake List,
	Make-Wifi-fast, starlink, codel, cerowrt-devel, bloat,
	Ben Greear

These cases are what my student, Fouad Tobagi, and I called the Hidden Terminal Problem (with the Busy Tone solution) back in 1975.

Len 


> On Aug 2, 2021, at 4:16 PM, David Lang <david@lang.hm> wrote:
> 
> If you are going to setup a test environment for wifi, you need to include the ability to make a fe cases that only happen with RF, not with wired networks and are commonly overlooked
> 
> 1. station A can hear station B and C but they cannot hear each other
> 2. station A can hear station B but station B cannot hear station A 3. station A can hear that station B is transmitting, but not with a strong enough signal to decode the signal (yes in theory you can work around interference, but in practice interference is still a real thing)
> 
> David Lang
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  0:37                   ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " Leonard Kleinrock
@ 2021-08-03  1:24                     ` Bob McMahon
  2021-08-08  5:07                       ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
  2021-08-08  4:20                     ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-08-03  1:24 UTC (permalink / raw)
  To: Leonard Kleinrock
  Cc: David Lang, Luca Muscariello, Cake List, Make-Wifi-fast,
	starlink, codel, cerowrt-devel, bloat, Ben Greear


[-- Attachment #1.1: Type: text/plain, Size: 2019 bytes --]

I found the following talk relevant to distances between all the nodes.
https://www.youtube.com/watch?v=PNoUcQTCxiM

Distance is an abstract idea but applies to energy into a node as well as
phylogenetic trees. It's the same problem, i.e. fitting a distance matrix
using some sort of tree. I've found the five branch tree works well for
four nodes.

Bob

On Mon, Aug 2, 2021 at 5:37 PM Leonard Kleinrock <lk@cs.ucla.edu> wrote:

> These cases are what my student, Fouad Tobagi and I called the Hidden
> Terminal Problem (with the Busy Tone solution) back in 1975.
>
> Len
>
>
> > On Aug 2, 2021, at 4:16 PM, David Lang <david@lang.hm> wrote:
> >
> > If you are going to setup a test environment for wifi, you need to
> include the ability to make a fe cases that only happen with RF, not with
> wired networks and are commonly overlooked
> >
> > 1. station A can hear station B and C but they cannot hear each other
> > 2. station A can hear station B but station B cannot hear station A 3.
> station A can hear that station B is transmitting, but not with a strong
> enough signal to decode the signal (yes in theory you can work around
> interference, but in practice interference is still a real thing)
> >
> > David Lang
> >
>
>


[-- Attachment #1.2: Type: text/html, Size: 2570 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-02 23:50                   ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-03  3:06                     ` David Lang
  0 siblings, 0 replies; 108+ messages in thread
From: David Lang @ 2021-08-03  3:06 UTC (permalink / raw)
  To: Bob McMahon
  Cc: David Lang, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat,
	Ben Greear

That matrix cannot create asymmetric paths (at least, not unless you are also
tinkering with power settings on the nodes), and it will have trouble making
hidden transmitters (station A can hear stations B and C but B and C cannot
tell that the other exists), as a node can hear that something is transmitting
at much lower power levels than it can decode the signal.
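
A rough sketch of that gap, with purely illustrative thresholds and powers
(the decode sensitivity, energy-detect level, node powers, and losses below
are assumptions, not numbers from any chip or standard):

DECODE_SENS_DBM = -82.0   # assumed decode sensitivity
ED_THRESH_DBM   = -92.0   # assumed energy-detect threshold

def classify_links(tx_dbm, loss_db):
    """tx_dbm: per-node transmit power (dBm); loss_db[i][j]: loss i->j (dB).
    Labels each directed link 'decodes', 'detects only', or 'hidden'."""
    states = {}
    for i in range(len(tx_dbm)):
        for j in range(len(tx_dbm)):
            if i == j:
                continue
            rx = tx_dbm[i] - loss_db[i][j]          # power seen at j from i
            if rx >= DECODE_SENS_DBM:
                states[(i, j)] = "decodes"
            elif rx >= ED_THRESH_DBM:
                states[(i, j)] = "detects only"     # senses energy, can't decode
            else:
                states[(i, j)] = "hidden"
    return states

# A 23 dBm AP and a 15 dBm battery station over the same 100 dB path:
print(classify_links([23.0, 15.0], [[0, 100], [100, 0]]))
# -> {(0, 1): 'decodes', (1, 0): 'detects only'}  (asymmetric, same path loss)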

David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:

> On Mon, Aug 2, 2021 at 4:16 PM David Lang <david@lang.hm> wrote:
>
>> If you are going to setup a test environment for wifi, you need to include
>> the
>> ability to make a fe cases that only happen with RF, not with wired
>> networks and
>> are commonly overlooked
>>
>> 1. station A can hear station B and C but they cannot hear each other
>> 2. station A can hear station B but station B cannot hear station A
>> 3. station A can hear that station B is transmitting, but not with a
>> strong
>> enough signal to decode the signal (yes in theory you can work around
>> interference, but in practice interference is still a real thing)
>>
>> David Lang
>>
>>
>
>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  0:01                     ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-03  3:12                       ` David Lang
  2021-08-03  3:23                         ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: David Lang @ 2021-08-03  3:12 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Ben Greear, David Lang, Luca Muscariello, Cake List,
	Make-Wifi-fast, Leonard Kleinrock, starlink, codel,
	cerowrt-devel, bloat

I guess it depends on what you are intending to test. If you are not going to
tinker with any of the over-the-air settings (including the number of packets
transmitted in one aggregate), the details of what happens over the air don't
matter much.

But if you are going to be doing any tinkering with what is getting sent, and
you ignore the hidden-transmitter type of problems, you will create a solution
that seems to work really well in the lab and falls on its face out in the
wild, where spectrum overload and hidden transmitters are the norm (at least
in urban areas), not rare corner cases.

You don't need to include them in every test, but you need to have a way to
configure your lab to include them before you consider any settings/algorithm
ready to try in the wild.

David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:

> We find four nodes, a primary BSS and an adjunct one quite good for lots of
> testing.  The six nodes allows for a primary BSS and two adjacent ones. We
> want to minimize complexity to necessary and sufficient.
>
> The challenge we find is having variability (e.g. montecarlos) that's
> reproducible and has relevant information. Basically, the distance matrices
> have h-matrices as their elements. Our chips can provide these h-matrices.
>
> The parts for solid state programmable attenuators and phase shifters
> aren't very expensive. A device that supports a five branch tree and 2x2
> MIMO seems a very good starting point.
>
> Bob
>
> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com> wrote:
>
>> On 8/2/21 4:16 PM, David Lang wrote:
>>> If you are going to setup a test environment for wifi, you need to
>> include the ability to make a fe cases that only happen with RF, not with
>> wired networks and
>>> are commonly overlooked
>>>
>>> 1. station A can hear station B and C but they cannot hear each other
>>> 2. station A can hear station B but station B cannot hear station A 3.
>> station A can hear that station B is transmitting, but not with a strong
>> enough signal to
>>> decode the signal (yes in theory you can work around interference, but
>> in practice interference is still a real thing)
>>>
>>> David Lang
>>>
>>
>> To add to this, I think you need lots of different station devices,
>> different capabilities (/n, /ac, /ax, etc)
>> different numbers of spatial streams, and different distances from the
>> AP.  From download queueing perspective, changing
>> the capabilities may be sufficient while keeping all stations at same
>> distance.  This assumes you are not
>> actually testing the wifi rate-ctrl alg. itself, so different throughput
>> levels for different stations would be enough.
>>
>> So, a good station emulator setup (and/or pile of real stations) and a few
>> RF chambers and
>> programmable attenuators and you can test that setup...
>>
>>  From upload perspective, I guess same setup would do the job.
>> Queuing/fairness might depend a bit more on the
>> station devices, emulated or otherwise, but I guess a clever AP could
>> enforce fairness in upstream direction
>> too by implementing per-sta queues.
>>
>> Thanks,
>> Ben
>>
>> --
>> Ben Greear <greearb@candelatech.com>
>> Candela Technologies Inc  http://www.candelatech.com
>>
>
>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  3:12                       ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
@ 2021-08-03  3:23                         ` Bob McMahon
  2021-08-03  4:30                           ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
                                             ` (2 more replies)
  0 siblings, 3 replies; 108+ messages in thread
From: Bob McMahon @ 2021-08-03  3:23 UTC (permalink / raw)
  To: David Lang
  Cc: Ben Greear, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 5208 bytes --]

The distance matrix defines the signal attenuation/loss between pairs. It's
straightforward to create a distance matrix that has hidden nodes because
all "signal loss" between pairs is defined. Let's say, as an example, that a
120 dB attenuation path will cause a node to be hidden.

      A     B     C     D
A     -    35   120    65
B           -    65    65
C                 -    65
D                       -

So in the above, A and C are hidden from each other but nobody else is. It
does assume symmetry between pairs, but that's typically true.

The RF device takes these distance matrices as settings and calculates the
five branch tree values (as demonstrated in the video). There are
limitations to the solutions, but I've found those not to be an issue to
date. I've been able to produce hidden nodes quite readily. Add the phase
shifters, and spatial-stream powers can also be affected, but that isn't
shown in this simple example.
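
A tiny sketch of that rule (120 dB is just the illustrative threshold used
above, not a universal constant):

HIDDEN_THRESH_DB = 120.0

def hidden_pairs(names, loss_db):
    """names: station labels; loss_db[i][j]: pairwise path loss in dB."""
    found = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if loss_db[i][j] >= HIDDEN_THRESH_DB:
                found.append((names[i], names[j]))
    return found

loss = [[0, 35, 120, 65],
        [35, 0, 65, 65],
        [120, 65, 0, 65],
        [65, 65, 65, 0]]
print(hidden_pairs("ABCD", loss))   # -> [('A', 'C')]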

Bob

On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:

> I guess it depends on what you are intending to test. If you are not going
> to
> tinker with any of the over-the-air settings (including the number of
> packets
> transmitted in one aggregate), the details of what happen over the air
> don't
> matter much.
>
> But if you are going to be doing any tinkering with what is getting sent,
> and
> you ignore the hidden transmitter type problems, you will create a
> solution that
> seems to work really well in the lab and falls on it's face out in the
> wild
> where spectrum overload and hidden transmitters are the norm (at least in
> urban
> areas), not rare corner cases.
>
> you don't need to include them in every test, but you need to have a way
> to
> configure your lab to include them before you consider any
> settings/algorithm
> ready to try in the wild.
>
> David Lang
>
> On Mon, 2 Aug 2021, Bob McMahon wrote:
>
> > We find four nodes, a primary BSS and an adjunct one quite good for lots
> of
> > testing.  The six nodes allows for a primary BSS and two adjacent ones.
> We
> > want to minimize complexity to necessary and sufficient.
> >
> > The challenge we find is having variability (e.g. montecarlos) that's
> > reproducible and has relevant information. Basically, the distance
> matrices
> > have h-matrices as their elements. Our chips can provide these
> h-matrices.
> >
> > The parts for solid state programmable attenuators and phase shifters
> > aren't very expensive. A device that supports a five branch tree and 2x2
> > MIMO seems a very good starting point.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> wrote:
> >
> >> On 8/2/21 4:16 PM, David Lang wrote:
> >>> If you are going to setup a test environment for wifi, you need to
> >> include the ability to make a fe cases that only happen with RF, not
> with
> >> wired networks and
> >>> are commonly overlooked
> >>>
> >>> 1. station A can hear station B and C but they cannot hear each other
> >>> 2. station A can hear station B but station B cannot hear station A 3.
> >> station A can hear that station B is transmitting, but not with a strong
> >> enough signal to
> >>> decode the signal (yes in theory you can work around interference, but
> >> in practice interference is still a real thing)
> >>>
> >>> David Lang
> >>>
> >>
> >> To add to this, I think you need lots of different station devices,
> >> different capabilities (/n, /ac, /ax, etc)
> >> different numbers of spatial streams, and different distances from the
> >> AP.  From download queueing perspective, changing
> >> the capabilities may be sufficient while keeping all stations at same
> >> distance.  This assumes you are not
> >> actually testing the wifi rate-ctrl alg. itself, so different throughput
> >> levels for different stations would be enough.
> >>
> >> So, a good station emulator setup (and/or pile of real stations) and a
> few
> >> RF chambers and
> >> programmable attenuators and you can test that setup...
> >>
> >>  From upload perspective, I guess same setup would do the job.
> >> Queuing/fairness might depend a bit more on the
> >> station devices, emulated or otherwise, but I guess a clever AP could
> >> enforce fairness in upstream direction
> >> too by implementing per-sta queues.
> >>
> >> Thanks,
> >> Ben
> >>
> >> --
> >> Ben Greear <greearb@candelatech.com>
> >> Candela Technologies Inc  http://www.candelatech.com
> >>
> >
> >
>


[-- Attachment #1.2: Type: text/html, Size: 6508 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  3:23                         ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-03  4:30                           ` David Lang
  2021-08-03  4:38                             ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  2021-08-08  4:35                             ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
  2021-08-08  5:04                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
  2021-08-10 14:10                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Rodney W. Grimes
  2 siblings, 2 replies; 108+ messages in thread
From: David Lang @ 2021-08-03  4:30 UTC (permalink / raw)
  To: Bob McMahon
  Cc: David Lang, Ben Greear, Luca Muscariello, Cake List,
	Make-Wifi-fast, Leonard Kleinrock, starlink, codel,
	cerowrt-devel, bloat

Symmetry is not always (or even usually) true. Stations are commonly heard at
much larger distances than they can talk; mobile devices have much less
transmit power (because they are operating on batteries) than fixed stations;
and when you adjust the transmit power on a station, you don't adjust its
receive sensitivity.

David Lang

  On Mon, 2 Aug 2021, Bob McMahon wrote:

> Date: Mon, 2 Aug 2021 20:23:06 -0700
> From: Bob McMahon <bob.mcmahon@broadcom.com>
> To: David Lang <david@lang.hm>
> Cc: Ben Greear <greearb@candelatech.com>,
>     Luca Muscariello <muscariello@ieee.org>,
>     Cake List <cake@lists.bufferbloat.net>,
>     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
>     Leonard Kleinrock <lk@cs.ucla.edu>, starlink@lists.bufferbloat.net,
>     codel@lists.bufferbloat.net,
>     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
>     bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2:
>     Internet Quality workshop CFP for the internet architecture board
> 
> The distance matrix defines signal attenuations/loss between pairs.  It's
> straightforward to create a distance matrix that has hidden nodes because
> all "signal  loss" between pairs is defined.  Let's say a 120dB attenuation
> path will cause a node to be hidden as an example.
>
>     A    B     C    D
> A   -   35   120   65
> B         -      65   65
> C               -       65
> D                         -
>
> So in the above, AC are hidden from each other but nobody else is. It does
> assume symmetry between pairs but that's typically true.
>
> The RF device takes these distance matrices as settings and calculates the
> five branch tree values (as demonstrated in the video). There are
> limitations to solutions though but I've found those not to be an issue to
> date. I've been able to produce hidden nodes quite readily. Add the phase
> shifters and spatial stream powers can also be affected, but this isn't
> shown in this simple example.
>
> Bob
>
> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
>
>> I guess it depends on what you are intending to test. If you are not going
>> to
>> tinker with any of the over-the-air settings (including the number of
>> packets
>> transmitted in one aggregate), the details of what happen over the air
>> don't
>> matter much.
>>
>> But if you are going to be doing any tinkering with what is getting sent,
>> and
>> you ignore the hidden transmitter type problems, you will create a
>> solution that
>> seems to work really well in the lab and falls on it's face out in the
>> wild
>> where spectrum overload and hidden transmitters are the norm (at least in
>> urban
>> areas), not rare corner cases.
>>
>> you don't need to include them in every test, but you need to have a way
>> to
>> configure your lab to include them before you consider any
>> settings/algorithm
>> ready to try in the wild.
>>
>> David Lang
>>
>> On Mon, 2 Aug 2021, Bob McMahon wrote:
>>
>>> We find four nodes, a primary BSS and an adjunct one quite good for lots
>> of
>>> testing.  The six nodes allows for a primary BSS and two adjacent ones.
>> We
>>> want to minimize complexity to necessary and sufficient.
>>>
>>> The challenge we find is having variability (e.g. montecarlos) that's
>>> reproducible and has relevant information. Basically, the distance
>> matrices
>>> have h-matrices as their elements. Our chips can provide these
>> h-matrices.
>>>
>>> The parts for solid state programmable attenuators and phase shifters
>>> aren't very expensive. A device that supports a five branch tree and 2x2
>>> MIMO seems a very good starting point.
>>>
>>> Bob
>>>
>>> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
>> wrote:
>>>
>>>> On 8/2/21 4:16 PM, David Lang wrote:
>>>>> If you are going to setup a test environment for wifi, you need to
>>>> include the ability to make a fe cases that only happen with RF, not
>> with
>>>> wired networks and
>>>>> are commonly overlooked
>>>>>
>>>>> 1. station A can hear station B and C but they cannot hear each other
>>>>> 2. station A can hear station B but station B cannot hear station A 3.
>>>> station A can hear that station B is transmitting, but not with a strong
>>>> enough signal to
>>>>> decode the signal (yes in theory you can work around interference, but
>>>> in practice interference is still a real thing)
>>>>>
>>>>> David Lang
>>>>>
>>>>
>>>> To add to this, I think you need lots of different station devices,
>>>> different capabilities (/n, /ac, /ax, etc)
>>>> different numbers of spatial streams, and different distances from the
>>>> AP.  From download queueing perspective, changing
>>>> the capabilities may be sufficient while keeping all stations at same
>>>> distance.  This assumes you are not
>>>> actually testing the wifi rate-ctrl alg. itself, so different throughput
>>>> levels for different stations would be enough.
>>>>
>>>> So, a good station emulator setup (and/or pile of real stations) and a
>> few
>>>> RF chambers and
>>>> programmable attenuators and you can test that setup...
>>>>
>>>>  From upload perspective, I guess same setup would do the job.
>>>> Queuing/fairness might depend a bit more on the
>>>> station devices, emulated or otherwise, but I guess a clever AP could
>>>> enforce fairness in upstream direction
>>>> too by implementing per-sta queues.
>>>>
>>>> Thanks,
>>>> Ben
>>>>
>>>> --
>>>> Ben Greear <greearb@candelatech.com>
>>>> Candela Technologies Inc  http://www.candelatech.com
>>>>
>>>
>>>
>>
>
>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  4:30                           ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
@ 2021-08-03  4:38                             ` Bob McMahon
  2021-08-03  4:44                               ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  2021-08-08  4:35                             ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-08-03  4:38 UTC (permalink / raw)
  To: David Lang
  Cc: Ben Greear, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 7219 bytes --]

Fair enough, but for this "RF emulator device," being able to support
distance matrices, even hollow symmetric ones, is much better than what's
typically done. The variable solid-state phase shifters are 0-360 degrees,
so they don't provide true time delays either.

This is another "something is better than nothing" type of proposal. I think
it can be deployed at a relatively low cost, which allows for more
standardized, automated test rigs and much less human interaction and
human error.

Bob

On Mon, Aug 2, 2021 at 9:30 PM David Lang <david@lang.hm> wrote:

> symmetry is not always (or usually) true. stations are commonly heard at
> much
> larger distances than they can talk, mobile devices have much less
> transmit
> power (becuase they are operating on batteries) than fixed stations, and
> when
> you adjust the transmit power on a station, you don't adjust it's receive
> sensitivity.
>
> David Lang
>
>   On Mon, 2 Aug 2021, Bob McMahon wrote:
>
> > Date: Mon, 2 Aug 2021 20:23:06 -0700
> > From: Bob McMahon <bob.mcmahon@broadcom.com>
> > To: David Lang <david@lang.hm>
> > Cc: Ben Greear <greearb@candelatech.com>,
> >     Luca Muscariello <muscariello@ieee.org>,
> >     Cake List <cake@lists.bufferbloat.net>,
> >     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
> >     Leonard Kleinrock <lk@cs.ucla.edu>, starlink@lists.bufferbloat.net,
> >     codel@lists.bufferbloat.net,
> >     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
> >     bloat <bloat@lists.bufferbloat.net>
> > Subject: Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug
> 2:
> >     Internet Quality workshop CFP for the internet architecture board
> >
> > The distance matrix defines signal attenuations/loss between pairs.  It's
> > straightforward to create a distance matrix that has hidden nodes because
> > all "signal  loss" between pairs is defined.  Let's say a 120dB
> attenuation
> > path will cause a node to be hidden as an example.
> >
> >     A    B     C    D
> > A   -   35   120   65
> > B         -      65   65
> > C               -       65
> > D                         -
> >
> > So in the above, AC are hidden from each other but nobody else is. It
> does
> > assume symmetry between pairs but that's typically true.
> >
> > The RF device takes these distance matrices as settings and calculates
> the
> > five branch tree values (as demonstrated in the video). There are
> > limitations to solutions though but I've found those not to be an issue
> to
> > date. I've been able to produce hidden nodes quite readily. Add the phase
> > shifters and spatial stream powers can also be affected, but this isn't
> > shown in this simple example.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> >
> >> I guess it depends on what you are intending to test. If you are not
> going
> >> to
> >> tinker with any of the over-the-air settings (including the number of
> >> packets
> >> transmitted in one aggregate), the details of what happen over the air
> >> don't
> >> matter much.
> >>
> >> But if you are going to be doing any tinkering with what is getting
> sent,
> >> and
> >> you ignore the hidden transmitter type problems, you will create a
> >> solution that
> >> seems to work really well in the lab and falls on it's face out in the
> >> wild
> >> where spectrum overload and hidden transmitters are the norm (at least
> in
> >> urban
> >> areas), not rare corner cases.
> >>
> >> you don't need to include them in every test, but you need to have a way
> >> to
> >> configure your lab to include them before you consider any
> >> settings/algorithm
> >> ready to try in the wild.
> >>
> >> David Lang
> >>
> >> On Mon, 2 Aug 2021, Bob McMahon wrote:
> >>
> >>> We find four nodes, a primary BSS and an adjunct one quite good for
> lots
> >> of
> >>> testing.  The six nodes allows for a primary BSS and two adjacent ones.
> >> We
> >>> want to minimize complexity to necessary and sufficient.
> >>>
> >>> The challenge we find is having variability (e.g. montecarlos) that's
> >>> reproducible and has relevant information. Basically, the distance
> >> matrices
> >>> have h-matrices as their elements. Our chips can provide these
> >> h-matrices.
> >>>
> >>> The parts for solid state programmable attenuators and phase shifters
> >>> aren't very expensive. A device that supports a five branch tree and
> 2x2
> >>> MIMO seems a very good starting point.
> >>>
> >>> Bob
> >>>
> >>> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> >> wrote:
> >>>
> >>>> On 8/2/21 4:16 PM, David Lang wrote:
> >>>>> If you are going to setup a test environment for wifi, you need to
> >>>> include the ability to make a fe cases that only happen with RF, not
> >> with
> >>>> wired networks and
> >>>>> are commonly overlooked
> >>>>>
> >>>>> 1. station A can hear station B and C but they cannot hear each other
> >>>>> 2. station A can hear station B but station B cannot hear station A
> 3.
> >>>> station A can hear that station B is transmitting, but not with a
> strong
> >>>> enough signal to
> >>>>> decode the signal (yes in theory you can work around interference,
> but
> >>>> in practice interference is still a real thing)
> >>>>>
> >>>>> David Lang
> >>>>>
> >>>>
> >>>> To add to this, I think you need lots of different station devices,
> >>>> different capabilities (/n, /ac, /ax, etc)
> >>>> different numbers of spatial streams, and different distances from the
> >>>> AP.  From download queueing perspective, changing
> >>>> the capabilities may be sufficient while keeping all stations at same
> >>>> distance.  This assumes you are not
> >>>> actually testing the wifi rate-ctrl alg. itself, so different
> throughput
> >>>> levels for different stations would be enough.
> >>>>
> >>>> So, a good station emulator setup (and/or pile of real stations) and a
> >> few
> >>>> RF chambers and
> >>>> programmable attenuators and you can test that setup...
> >>>>
> >>>>  From upload perspective, I guess same setup would do the job.
> >>>> Queuing/fairness might depend a bit more on the
> >>>> station devices, emulated or otherwise, but I guess a clever AP could
> >>>> enforce fairness in upstream direction
> >>>> too by implementing per-sta queues.
> >>>>
> >>>> Thanks,
> >>>> Ben
> >>>>
> >>>> --
> >>>> Ben Greear <greearb@candelatech.com>
> >>>> Candela Technologies Inc  http://www.candelatech.com
> >>>>
> >>>
> >>>
> >>
> >
> >
>


[-- Attachment #1.2: Type: text/html, Size: 10210 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  4:38                             ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-03  4:44                               ` David Lang
  2021-08-03 16:01                                 ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: David Lang @ 2021-08-03  4:44 UTC (permalink / raw)
  To: Bob McMahon
  Cc: David Lang, Ben Greear, Luca Muscariello, Cake List,
	Make-Wifi-fast, Leonard Kleinrock, starlink, codel,
	cerowrt-devel, bloat

I agree that we don't want to make perfect the enemy of better.

A lot of the issues I'm calling out can be simulated/enhanced with different 
power levels.

Over wifi distances, I don't think time delays are going to be noticeable
(we're talking tens to low hundreds of feet, not miles).

David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:

> fair enough, but for this "RF emulator device" being able to support
> distance matrices, even hollow symmetric ones, is much better than what's
> typically done. The variable solid state phase shifters are 0-360 so don't
> provide real time delays either.
>
> This is another "something is better than nothing" type proposal. I think
> it can be deployed at a relatively low cost which allows for more
> standardized, automated test rigs and much less human interactions and
> human errors.
>
> Bob
>
> On Mon, Aug 2, 2021 at 9:30 PM David Lang <david@lang.hm> wrote:
>
>> symmetry is not always (or usually) true. stations are commonly heard at
>> much
>> larger distances than they can talk, mobile devices have much less
>> transmit
>> power (becuase they are operating on batteries) than fixed stations, and
>> when
>> you adjust the transmit power on a station, you don't adjust it's receive
>> sensitivity.
>>
>> David Lang
>>
>>   On Mon, 2 Aug 2021, Bob McMahon wrote:
>>
>>> Date: Mon, 2 Aug 2021 20:23:06 -0700
>>> From: Bob McMahon <bob.mcmahon@broadcom.com>
>>> To: David Lang <david@lang.hm>
>>> Cc: Ben Greear <greearb@candelatech.com>,
>>>     Luca Muscariello <muscariello@ieee.org>,
>>>     Cake List <cake@lists.bufferbloat.net>,
>>>     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
>>>     Leonard Kleinrock <lk@cs.ucla.edu>, starlink@lists.bufferbloat.net,
>>>     codel@lists.bufferbloat.net,
>>>     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
>>>     bloat <bloat@lists.bufferbloat.net>
>>> Subject: Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug
>> 2:
>>>     Internet Quality workshop CFP for the internet architecture board
>>>
>>> The distance matrix defines signal attenuations/loss between pairs.  It's
>>> straightforward to create a distance matrix that has hidden nodes because
>>> all "signal  loss" between pairs is defined.  Let's say a 120dB
>> attenuation
>>> path will cause a node to be hidden as an example.
>>>
>>>     A    B     C    D
>>> A   -   35   120   65
>>> B         -      65   65
>>> C               -       65
>>> D                         -
>>>
>>> So in the above, AC are hidden from each other but nobody else is. It
>> does
>>> assume symmetry between pairs but that's typically true.
>>>
>>> The RF device takes these distance matrices as settings and calculates
>> the
>>> five branch tree values (as demonstrated in the video). There are
>>> limitations to solutions though but I've found those not to be an issue
>> to
>>> date. I've been able to produce hidden nodes quite readily. Add the phase
>>> shifters and spatial stream powers can also be affected, but this isn't
>>> shown in this simple example.
>>>
>>> Bob
>>>
>>> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
>>>
>>>> I guess it depends on what you are intending to test. If you are not
>> going
>>>> to
>>>> tinker with any of the over-the-air settings (including the number of
>>>> packets
>>>> transmitted in one aggregate), the details of what happen over the air
>>>> don't
>>>> matter much.
>>>>
>>>> But if you are going to be doing any tinkering with what is getting
>> sent,
>>>> and
>>>> you ignore the hidden transmitter type problems, you will create a
>>>> solution that
>>>> seems to work really well in the lab and falls on it's face out in the
>>>> wild
>>>> where spectrum overload and hidden transmitters are the norm (at least
>> in
>>>> urban
>>>> areas), not rare corner cases.
>>>>
>>>> you don't need to include them in every test, but you need to have a way
>>>> to
>>>> configure your lab to include them before you consider any
>>>> settings/algorithm
>>>> ready to try in the wild.
>>>>
>>>> David Lang
>>>>
>>>> On Mon, 2 Aug 2021, Bob McMahon wrote:
>>>>
>>>>> We find four nodes, a primary BSS and an adjunct one quite good for
>> lots
>>>> of
>>>>> testing.  The six nodes allows for a primary BSS and two adjacent ones.
>>>> We
>>>>> want to minimize complexity to necessary and sufficient.
>>>>>
>>>>> The challenge we find is having variability (e.g. montecarlos) that's
>>>>> reproducible and has relevant information. Basically, the distance
>>>> matrices
>>>>> have h-matrices as their elements. Our chips can provide these
>>>> h-matrices.
>>>>>
>>>>> The parts for solid state programmable attenuators and phase shifters
>>>>> aren't very expensive. A device that supports a five branch tree and
>> 2x2
>>>>> MIMO seems a very good starting point.
>>>>>
>>>>> Bob
>>>>>
>>>>> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
>>>> wrote:
>>>>>
>>>>>> On 8/2/21 4:16 PM, David Lang wrote:
>>>>>>> If you are going to setup a test environment for wifi, you need to
>>>>>> include the ability to make a fe cases that only happen with RF, not
>>>> with
>>>>>> wired networks and
>>>>>>> are commonly overlooked
>>>>>>>
>>>>>>> 1. station A can hear station B and C but they cannot hear each other
>>>>>>> 2. station A can hear station B but station B cannot hear station A
>> 3.
>>>>>> station A can hear that station B is transmitting, but not with a
>> strong
>>>>>> enough signal to
>>>>>>> decode the signal (yes in theory you can work around interference,
>> but
>>>>>> in practice interference is still a real thing)
>>>>>>>
>>>>>>> David Lang
>>>>>>>
>>>>>>
>>>>>> To add to this, I think you need lots of different station devices,
>>>>>> different capabilities (/n, /ac, /ax, etc)
>>>>>> different numbers of spatial streams, and different distances from the
>>>>>> AP.  From download queueing perspective, changing
>>>>>> the capabilities may be sufficient while keeping all stations at same
>>>>>> distance.  This assumes you are not
>>>>>> actually testing the wifi rate-ctrl alg. itself, so different
>> throughput
>>>>>> levels for different stations would be enough.
>>>>>>
>>>>>> So, a good station emulator setup (and/or pile of real stations) and a
>>>> few
>>>>>> RF chambers and
>>>>>> programmable attenuators and you can test that setup...
>>>>>>
>>>>>>  From upload perspective, I guess same setup would do the job.
>>>>>> Queuing/fairness might depend a bit more on the
>>>>>> station devices, emulated or otherwise, but I guess a clever AP could
>>>>>> enforce fairness in upstream direction
>>>>>> too by implementing per-sta queues.
>>>>>>
>>>>>> Thanks,
>>>>>> Ben
>>>>>>
>>>>>> --
>>>>>> Ben Greear <greearb@candelatech.com>
>>>>>> Candela Technologies Inc  http://www.candelatech.com
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  4:44                               ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
@ 2021-08-03 16:01                                 ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-08-03 16:01 UTC (permalink / raw)
  To: David Lang
  Cc: Ben Greear, Luca Muscariello, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, starlink, codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 8592 bytes --]

Another thing to keep in mind is that we're using a poor man's version of
emulating "passive channels," so the device transmit powers can provide the
power asymmetry. The distance matrix is about the h-matrices (as shown early
in the slides). Even with that, though, the h-matrix elements aren't likely
to be symmetric, but it's a reasonable starting point to assume they are.
Also, being able to switch them in near real time allows for some forms of
transmit and receive asymmetry in the emulated channels as well.
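
For concreteness, a minimal sketch of the bookkeeping (assuming a 2x2 MIMO
link with one programmable attenuator and one 0-360 degree phase shifter per
emulated path, and flat fading; real chips report far richer h-matrices):

import numpy as np

def h_matrix(atten_db, phase_deg):
    """atten_db, phase_deg: 2x2 per-path attenuation (dB) and phase (degrees)
    between the two tx and two rx antennas; returns the complex H matrix."""
    atten_db = np.asarray(atten_db, dtype=float)
    phase = np.deg2rad(np.asarray(phase_deg, dtype=float))
    gain = 10.0 ** (-atten_db / 20.0)       # voltage gain from dB of loss
    return gain * np.exp(1j * phase)

# One 65 dB entry of the pairwise matrix, with arbitrary phase settings:
print(np.round(h_matrix([[65, 68], [70, 65]], [[0, 90], [180, 270]]), 6))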

Bob

On Mon, Aug 2, 2021 at 9:44 PM David Lang <david@lang.hm> wrote:

> I agree that we don't want to make perfect the enemy of better.
>
> A lot of the issues I'm calling out can be simulated/enhanced with
> different
> power levels.
>
> over wifi distances, I don't think time delays are going to be noticable
> (we're
> talking 10s to low 100s of feet, not miles)
>
> David Lang
>
> On Mon, 2 Aug 2021, Bob McMahon wrote:
>
> > fair enough, but for this "RF emulator device" being able to support
> > distance matrices, even hollow symmetric ones, is much better than what's
> > typically done. The variable solid state phase shifters are 0-360 so
> don't
> > provide real time delays either.
> >
> > This is another "something is better than nothing" type proposal. I think
> > it can be deployed at a relatively low cost which allows for more
> > standardized, automated test rigs and much less human interactions and
> > human errors.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 9:30 PM David Lang <david@lang.hm> wrote:
> >
> >> symmetry is not always (or usually) true. stations are commonly heard at
> >> much
> >> larger distances than they can talk, mobile devices have much less
> >> transmit
> >> power (becuase they are operating on batteries) than fixed stations, and
> >> when
> >> you adjust the transmit power on a station, you don't adjust it's
> receive
> >> sensitivity.
> >>
> >> David Lang
> >>
> >>   On Mon, 2 Aug 2021, Bob McMahon wrote:
> >>
> >>> Date: Mon, 2 Aug 2021 20:23:06 -0700
> >>> From: Bob McMahon <bob.mcmahon@broadcom.com>
> >>> To: David Lang <david@lang.hm>
> >>> Cc: Ben Greear <greearb@candelatech.com>,
> >>>     Luca Muscariello <muscariello@ieee.org>,
> >>>     Cake List <cake@lists.bufferbloat.net>,
> >>>     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
> >>>     Leonard Kleinrock <lk@cs.ucla.edu>, starlink@lists.bufferbloat.net
> ,
> >>>     codel@lists.bufferbloat.net,
> >>>     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
> >>>     bloat <bloat@lists.bufferbloat.net>
> >>> Subject: Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug
> >> 2:
> >>>     Internet Quality workshop CFP for the internet architecture board
> >>>
> >>> The distance matrix defines signal attenuations/loss between pairs.
> It's
> >>> straightforward to create a distance matrix that has hidden nodes
> because
> >>> all "signal  loss" between pairs is defined.  Let's say a 120dB
> >> attenuation
> >>> path will cause a node to be hidden as an example.
> >>>
> >>>     A    B     C    D
> >>> A   -   35   120   65
> >>> B         -      65   65
> >>> C               -       65
> >>> D                         -
> >>>
> >>> So in the above, AC are hidden from each other but nobody else is. It
> >> does
> >>> assume symmetry between pairs but that's typically true.
> >>>
> >>> The RF device takes these distance matrices as settings and calculates
> >> the
> >>> five branch tree values (as demonstrated in the video). There are
> >>> limitations to solutions though but I've found those not to be an issue
> >> to
> >>> date. I've been able to produce hidden nodes quite readily. Add the
> phase
> >>> shifters and spatial stream powers can also be affected, but this isn't
> >>> shown in this simple example.
> >>>
> >>> Bob
> >>>
> >>> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> >>>
> >>>> I guess it depends on what you are intending to test. If you are not
> >> going
> >>>> to
> >>>> tinker with any of the over-the-air settings (including the number of
> >>>> packets
> >>>> transmitted in one aggregate), the details of what happen over the air
> >>>> don't
> >>>> matter much.
> >>>>
> >>>> But if you are going to be doing any tinkering with what is getting
> >> sent,
> >>>> and
> >>>> you ignore the hidden transmitter type problems, you will create a
> >>>> solution that
> >>>> seems to work really well in the lab and falls on it's face out in the
> >>>> wild
> >>>> where spectrum overload and hidden transmitters are the norm (at least
> >> in
> >>>> urban
> >>>> areas), not rare corner cases.
> >>>>
> >>>> you don't need to include them in every test, but you need to have a
> way
> >>>> to
> >>>> configure your lab to include them before you consider any
> >>>> settings/algorithm
> >>>> ready to try in the wild.
> >>>>
> >>>> David Lang
> >>>>
> >>>> On Mon, 2 Aug 2021, Bob McMahon wrote:
> >>>>
> >>>>> We find four nodes, a primary BSS and an adjunct one quite good for
> >> lots
> >>>> of
> >>>>> testing.  The six nodes allows for a primary BSS and two adjacent
> ones.
> >>>> We
> >>>>> want to minimize complexity to necessary and sufficient.
> >>>>>
> >>>>> The challenge we find is having variability (e.g. montecarlos) that's
> >>>>> reproducible and has relevant information. Basically, the distance
> >>>> matrices
> >>>>> have h-matrices as their elements. Our chips can provide these
> >>>> h-matrices.
> >>>>>
> >>>>> The parts for solid state programmable attenuators and phase shifters
> >>>>> aren't very expensive. A device that supports a five branch tree and
> >> 2x2
> >>>>> MIMO seems a very good starting point.
> >>>>>
> >>>>> Bob
> >>>>>
> >>>>> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> >>>> wrote:
> >>>>>
> >>>>>> On 8/2/21 4:16 PM, David Lang wrote:
> >>>>>>> If you are going to setup a test environment for wifi, you need to
> >>>>>> include the ability to make a fe cases that only happen with RF, not
> >>>> with
> >>>>>> wired networks and
> >>>>>>> are commonly overlooked
> >>>>>>>
> >>>>>>> 1. station A can hear station B and C but they cannot hear each
> other
> >>>>>>> 2. station A can hear station B but station B cannot hear station A
> >> 3.
> >>>>>> station A can hear that station B is transmitting, but not with a
> >> strong
> >>>>>> enough signal to
> >>>>>>> decode the signal (yes in theory you can work around interference,
> >> but
> >>>>>> in practice interference is still a real thing)
> >>>>>>>
> >>>>>>> David Lang
> >>>>>>>
> >>>>>>
> >>>>>> To add to this, I think you need lots of different station devices,
> >>>>>> different capabilities (/n, /ac, /ax, etc)
> >>>>>> different numbers of spatial streams, and different distances from
> the
> >>>>>> AP.  From download queueing perspective, changing
> >>>>>> the capabilities may be sufficient while keeping all stations at
> same
> >>>>>> distance.  This assumes you are not
> >>>>>> actually testing the wifi rate-ctrl alg. itself, so different
> >> throughput
> >>>>>> levels for different stations would be enough.
> >>>>>>
> >>>>>> So, a good station emulator setup (and/or pile of real stations)
> and a
> >>>> few
> >>>>>> RF chambers and
> >>>>>> programmable attenuators and you can test that setup...
> >>>>>>
> >>>>>>  From upload perspective, I guess same setup would do the job.
> >>>>>> Queuing/fairness might depend a bit more on the
> >>>>>> station devices, emulated or otherwise, but I guess a clever AP
> could
> >>>>>> enforce fairness in upstream direction
> >>>>>> too by implementing per-sta queues.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Ben
> >>>>>>
> >>>>>> --
> >>>>>> Ben Greear <greearb@candelatech.com>
> >>>>>> Candela Technologies Inc  http://www.candelatech.com
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
> >
> >
>


[-- Attachment #1.2: Type: text/html, Size: 12855 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  0:37                   ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " Leonard Kleinrock
  2021-08-03  1:24                     ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-08  4:20                     ` Dick Roy
  1 sibling, 0 replies; 108+ messages in thread
From: Dick Roy @ 2021-08-08  4:20 UTC (permalink / raw)
  To: 'Leonard Kleinrock', 'David Lang'
  Cc: starlink, 'Make-Wifi-fast', 'Bob McMahon',
	'Cake List', codel, 'cerowrt-devel',
	'bloat'


-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Leonard Kleinrock
Sent: Monday, August 2, 2021 5:38 PM
To: David Lang
Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast; Bob McMahon; Cake List;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

These cases are what my student, Fouad Tobagi and I called the Hidden
Terminal Problem (with the Busy Tone solution) back in 1975.

[RR] Also known as the "hidden node" problem!

Len 


> On Aug 2, 2021, at 4:16 PM, David Lang <david@lang.hm> wrote:
> 
> If you are going to setup a test environment for wifi, you need to include
the ability to make a fe cases that only happen with RF, not with wired
networks and are commonly overlooked
> 
> 1. station A can hear station B and C but they cannot hear each other
[RR] Lots of reasons for that in the wireless world of mobility.

> 2. station A can hear station B but station B cannot hear station A 

[RR] This is largely due to link imbalance. In TDD systems like Wi-Fi, time
variability of the RF channels is also an issue; it is dealt with by having
turn-around times that are much less than the "(de-)coherence time" of the
channel. Link imbalance is largely due to differences in tx power and rx
front-end noise figure. Smart antenna technology makes this a bit more
complicated, but the essentials are still the same.
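
A back-of-the-envelope sketch of that timing margin (the coherence-time
approximation T_c ~ 9/(16*pi*f_D) is a common textbook rule of thumb; the
speeds, carrier frequency, and turnaround value below are assumptions):

import math

def coherence_time_s(speed_m_s, carrier_hz=5.5e9):
    f_doppler = speed_m_s * carrier_hz / 3e8        # max Doppler shift (Hz)
    return 9.0 / (16.0 * math.pi * f_doppler)

TURNAROUND_S = 16e-6                                # assumed SIFS-scale turnaround
for v in (1.0, 3.0, 30.0):                          # walking, jogging, vehicular (m/s)
    tc = coherence_time_s(v)
    print(f"{v:5.1f} m/s: T_c = {tc*1e3:6.2f} ms, "
          f"turnaround/T_c = {TURNAROUND_S/tc:.4f}")

Even at vehicular speeds the turnaround is a small fraction of the
coherence time, which is the point of the comment above.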

> 3. station A can hear that station B is transmitting, but not with a strong
> enough signal to decode the signal (yes in theory you can work around
> interference, but in practice interference is still a real thing)

[RR] Yes, energy can be detected at levels that are insufficient for
decoding.  That said, if you can't decode the signal, you generally cannot
know who sent it :^)))
> 
> David Lang
> 

_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  4:30                           ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  2021-08-03  4:38                             ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-08  4:35                             ` Dick Roy
  2021-08-08  5:04                               ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Dick Roy @ 2021-08-08  4:35 UTC (permalink / raw)
  To: 'David Lang', 'Bob McMahon'
  Cc: starlink, 'Make-Wifi-fast', 'Cake List',
	codel, 'cerowrt-devel', 'bloat'



-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
David Lang
Sent: Monday, August 2, 2021 9:31 PM
To: Bob McMahon
Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

symmetry is not always (or usually) true. 
[RR] There is a big difference between "symmetric RF channels" and "balanced
RF links".  Be careful not to confuse the two.

stations are commonly heard at much 
larger distances than they can talk, mobile devices have much less transmit 
power (becuase they are operating on batteries) than fixed stations, and
when 
you adjust the transmit power on a station, you don't adjust it's receive 
sensitivity.

[RR] Not quite true. Rx sensitivity is a function of MCS (the modulation and
coding scheme) and those levels can be adjusted, both up and down, by
changing the MCS.  This is in fact one of the major tools that needs to be
integrated into wireless systems today.  It's generally overlooked, though
not always! Starlink should be doing this if they are not already BTW!
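
A small sketch of that interplay (the MCS sensitivity table, noise figures,
and powers below are illustrative assumptions, not values from 802.11 or any
product): the channel is reciprocal, yet the two directions settle on
different MCS because the two ends differ in tx power and front-end noise
figure.

MCS_SENS_DBM = {7: -75.0, 5: -80.0, 3: -85.0, 0: -92.0}   # assumed table

def best_mcs(tx_dbm, path_loss_db, rx_noise_figure_db):
    """Highest MCS whose assumed sensitivity, degraded by the receiver's
    noise figure, is still met by the received power; None if none works."""
    rx_power = tx_dbm - path_loss_db
    for mcs in sorted(MCS_SENS_DBM, reverse=True):
        if rx_power >= MCS_SENS_DBM[mcs] + rx_noise_figure_db:
            return mcs
    return None

# Reciprocal 95 dB path; AP at 23 dBm / NF 4 dB, battery STA at 15 dBm / NF 7 dB:
print(best_mcs(23.0, 95.0, rx_noise_figure_db=7.0))   # AP -> STA direction
print(best_mcs(15.0, 95.0, rx_noise_figure_db=4.0))   # STA -> AP direction
# The two directions land on different MCS: reciprocal channel, unbalanced link.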

David Lang

  On Mon, 2 Aug 2021, Bob McMahon wrote:

> Date: Mon, 2 Aug 2021 20:23:06 -0700
> From: Bob McMahon <bob.mcmahon@broadcom.com>
> To: David Lang <david@lang.hm>
> Cc: Ben Greear <greearb@candelatech.com>,
>     Luca Muscariello <muscariello@ieee.org>,
>     Cake List <cake@lists.bufferbloat.net>,
>     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
>     Leonard Kleinrock <lk@cs.ucla.edu>, starlink@lists.bufferbloat.net,
>     codel@lists.bufferbloat.net,
>     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
>     bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2:
>     Internet Quality workshop CFP for the internet architecture board
> 
> The distance matrix defines signal attenuations/loss between pairs.  It's
> straightforward to create a distance matrix that has hidden nodes because
> all "signal  loss" between pairs is defined.  Let's say a 120dB
attenuation
> path will cause a node to be hidden as an example.
>
>     A    B     C    D
> A   -   35   120   65
> B         -      65   65
> C               -       65
> D                         -
>
> So in the above, AC are hidden from each other but nobody else is. It does
> assume symmetry between pairs but that's typically true.
>
> The RF device takes these distance matrices as settings and calculates the
> five branch tree values (as demonstrated in the video). There are
> limitations to solutions though but I've found those not to be an issue to
> date. I've been able to produce hidden nodes quite readily. Add the phase
> shifters and spatial stream powers can also be affected, but this isn't
> shown in this simple example.
>
> Bob
>
> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
>
>> I guess it depends on what you are intending to test. If you are not
going
>> to
>> tinker with any of the over-the-air settings (including the number of
>> packets
>> transmitted in one aggregate), the details of what happen over the air
>> don't
>> matter much.
>>
>> But if you are going to be doing any tinkering with what is getting sent,
>> and
>> you ignore the hidden transmitter type problems, you will create a
>> solution that
>> seems to work really well in the lab and falls on it's face out in the
>> wild
>> where spectrum overload and hidden transmitters are the norm (at least in
>> urban
>> areas), not rare corner cases.
>>
>> you don't need to include them in every test, but you need to have a way
>> to
>> configure your lab to include them before you consider any
>> settings/algorithm
>> ready to try in the wild.
>>
>> David Lang
>>
>> On Mon, 2 Aug 2021, Bob McMahon wrote:
>>
>>> We find four nodes, a primary BSS and an adjunct one quite good for lots
>> of
>>> testing.  The six nodes allows for a primary BSS and two adjacent ones.
>> We
>>> want to minimize complexity to necessary and sufficient.
>>>
>>> The challenge we find is having variability (e.g. montecarlos) that's
>>> reproducible and has relevant information. Basically, the distance
>> matrices
>>> have h-matrices as their elements. Our chips can provide these
>> h-matrices.
>>>
>>> The parts for solid state programmable attenuators and phase shifters
>>> aren't very expensive. A device that supports a five branch tree and 2x2
>>> MIMO seems a very good starting point.
>>>
>>> Bob
>>>
>>> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
>> wrote:
>>>
>>>> On 8/2/21 4:16 PM, David Lang wrote:
>>>>> If you are going to setup a test environment for wifi, you need to
>>>> include the ability to make a fe cases that only happen with RF, not
>> with
>>>> wired networks and
>>>>> are commonly overlooked
>>>>>
>>>>> 1. station A can hear station B and C but they cannot hear each other
>>>>> 2. station A can hear station B but station B cannot hear station A 3.
>>>> station A can hear that station B is transmitting, but not with a
strong
>>>> enough signal to
>>>>> decode the signal (yes in theory you can work around interference, but
>>>> in practice interference is still a real thing)
>>>>>
>>>>> David Lang
>>>>>
>>>>
>>>> To add to this, I think you need lots of different station devices,
>>>> different capabilities (/n, /ac, /ax, etc)
>>>> different numbers of spatial streams, and different distances from the
>>>> AP.  From download queueing perspective, changing
>>>> the capabilities may be sufficient while keeping all stations at same
>>>> distance.  This assumes you are not
>>>> actually testing the wifi rate-ctrl alg. itself, so different
throughput
>>>> levels for different stations would be enough.
>>>>
>>>> So, a good station emulator setup (and/or pile of real stations) and a
>> few
>>>> RF chambers and
>>>> programmable attenuators and you can test that setup...
>>>>
>>>>  From upload perspective, I guess same setup would do the job.
>>>> Queuing/fairness might depend a bit more on the
>>>> station devices, emulated or otherwise, but I guess a clever AP could
>>>> enforce fairness in upstream direction
>>>> too by implementing per-sta queues.
>>>>
>>>> Thanks,
>>>> Ben
>>>>
>>>> --
>>>> Ben Greear <greearb@candelatech.com>
>>>> Candela Technologies Inc  http://www.candelatech.com
>>>>
>>>
>>>
>>
>
>
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  3:23                         ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  2021-08-03  4:30                           ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
@ 2021-08-08  5:04                           ` Dick Roy
  2021-08-08  5:07                             ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  2021-08-10 14:10                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Rodney W. Grimes
  2 siblings, 1 reply; 108+ messages in thread
From: Dick Roy @ 2021-08-08  5:04 UTC (permalink / raw)
  To: 'Bob McMahon', 'David Lang'
  Cc: starlink, 'Make-Wifi-fast', 'Cake List',
	codel, 'cerowrt-devel', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 5917 bytes --]

 

 

  _____  

From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Bob McMahon
Sent: Monday, August 2, 2021 8:23 PM
To: David Lang
Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

 

The distance matrix defines signal attenuations/loss between pairs.  

[RR] Which makes it a path loss matrix rather than a distance matrix
actually.

It's straightforward to create a distance matrix that has hidden nodes
because all "signal  loss" between pairs is defined.  Let's say a 120dB
attenuation path will cause a node to be hidden as an example.

     A     B     C     D
A    -    35   120    65
B          -    65    65
C                -    65
D                      -

So in the above, A and C are hidden from each other but nobody else is. It
does assume symmetry between pairs, but that's typically true.

[RR] I'm guessing you really mean reciprocal rather than symmetric. An RF
channel is reciprocal if the loss when A is transmitting to B is the same as
that when B is transmitting to A. When the tx powers and rx sensitivities,
combined with the path loss(es), make the "link budget" the same in both
directions, the links are balanced and therefore have the same capacity.
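
A minimal sketch, in Python, of how such a path loss matrix maps to hidden-node
pairs; the node names, losses, and the 120 dB threshold are just the example
figures above, not values from any real tool:

# Sketch: find hidden-node pairs in a (reciprocal) path loss matrix.
# Entries are attenuation in dB; 120 dB is only the example "hidden"
# threshold used above, not a standard figure.
HIDDEN_DB = 120

loss_db = {
    ("A", "B"): 35, ("A", "C"): 120, ("A", "D"): 65,
    ("B", "C"): 65, ("B", "D"): 65, ("C", "D"): 65,
}

hidden = [pair for pair, db in loss_db.items() if db >= HIDDEN_DB]
print(hidden)   # [('A', 'C')] -> A and C cannot hear each other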



The RF device takes these distance matrices as settings and calculates the
five branch tree values (as demonstrated in the video). 

There are limitations to the solutions, but I've found those not to be an
issue to date. I've been able to produce hidden nodes quite readily. Add the
phase shifters and spatial stream powers can also be affected, though this
isn't shown in this simple example.
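
One way to picture "calculating the five branch tree values", and where the
limitations on solutions come from, is a least-squares fit of five branch
attenuations to the desired pairwise losses. The tree topology and solver
below are assumptions for illustration only, not the actual device's method:

import numpy as np

# Assumed topology: leaves A and B hang off hub X, leaves C and D off hub Y,
# plus a fifth branch X-Y.  Unknowns (dB): [aA, aB, aC, aD, aXY].
# Each pairwise loss is the sum of the branch attenuations on its path.
pairs  = [("A","B"), ("A","C"), ("A","D"), ("B","C"), ("B","D"), ("C","D")]
target = np.array([35, 120, 65, 65, 65, 65], dtype=float)  # example matrix above

paths = {
    ("A","B"): [1, 1, 0, 0, 0],
    ("A","C"): [1, 0, 1, 0, 1],
    ("A","D"): [1, 0, 0, 1, 1],
    ("B","C"): [0, 1, 1, 0, 1],
    ("B","D"): [0, 1, 0, 1, 1],
    ("C","D"): [0, 0, 1, 1, 0],
}
M = np.array([paths[p] for p in pairs], dtype=float)

att, *_ = np.linalg.lstsq(M, target, rcond=None)
print(dict(zip(["aA", "aB", "aC", "aD", "aXY"], att.round(1))))
print("per-pair fit error (dB):", (M @ att - target).round(1))
# A nonzero fit error is exactly the "limitations to solutions" above: not
# every loss matrix is realizable by a five-branch additive tree (real
# hardware would also require each attenuation to be >= 0).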

Bob

 

On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:

I guess it depends on what you are intending to test. If you are not going
to 
tinker with any of the over-the-air settings (including the number of
packets 
transmitted in one aggregate), the details of what happens over the air don't

matter much.

But if you are going to be doing any tinkering with what is getting sent,
and 
you ignore the hidden transmitter type problems, you will create a solution
that 
seems to work really well in the lab and falls on its face out in the wild 
where spectrum overload and hidden transmitters are the norm (at least in
urban 
areas), not rare corner cases.

you don't need to include them in every test, but you need to have a way to 
configure your lab to include them before you consider any
settings/algorithm 
ready to try in the wild.

David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:

> We find four nodes, a primary BSS and an adjunct one, quite good for lots of
> testing.  Six nodes allow for a primary BSS and two adjacent ones. We
> want to minimize complexity to necessary and sufficient.
>
> The challenge we find is having variability (e.g. montecarlos) that's
> reproducible and has relevant information. Basically, the distance
matrices
> have h-matrices as their elements. Our chips can provide these h-matrices.
>
> The parts for solid state programmable attenuators and phase shifters
> aren't very expensive. A device that supports a five branch tree and 2x2
> MIMO seems a very good starting point.
>
> Bob
>
> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com> wrote:
>
>> On 8/2/21 4:16 PM, David Lang wrote:
>>> If you are going to setup a test environment for wifi, you need to
>> include the ability to make a few cases that only happen with RF, not with
>> wired networks and
>>> are commonly overlooked
>>>
>>> 1. station A can hear station B and C but they cannot hear each other
>>> 2. station A can hear station B but station B cannot hear station A 3.
>> station A can hear that station B is transmitting, but not with a strong
>> enough signal to
>>> decode the signal (yes in theory you can work around interference, but
>> in practice interference is still a real thing)
>>>
>>> David Lang
>>>
>>
>> To add to this, I think you need lots of different station devices,
>> different capabilities (/n, /ac, /ax, etc)
>> different numbers of spatial streams, and different distances from the
>> AP.  From download queueing perspective, changing
>> the capabilities may be sufficient while keeping all stations at same
>> distance.  This assumes you are not
>> actually testing the wifi rate-ctrl alg. itself, so different throughput
>> levels for different stations would be enough.
>>
>> So, a good station emulator setup (and/or pile of real stations) and a
few
>> RF chambers and
>> programmable attenuators and you can test that setup...
>>
>>  From upload perspective, I guess same setup would do the job.
>> Queuing/fairness might depend a bit more on the
>> station devices, emulated or otherwise, but I guess a clever AP could
>> enforce fairness in upstream direction
>> too by implementing per-sta queues.
>>
>> Thanks,
>> Ben
>>
>> --
>> Ben Greear <greearb@candelatech.com>
>> Candela Technologies Inc  http://www.candelatech.com
>>
>
>


This electronic communication and the information and any files transmitted
with it, or attached to it, are confidential and are intended solely for the
use of the individual or entity to whom it is addressed and may contain
information that is confidential, legally privileged, protected by privacy
laws, or otherwise restricted from disclosure to anyone else. If you are not
the intended recipient or the person responsible for delivering the e-mail
to the intended recipient, you are hereby notified that any use, copying,
distributing, dissemination, forwarding, printing, or copying of this e-mail
is strictly prohibited. If you received this e-mail in error, please return
the e-mail to the sender, delete it from your computer, and destroy any
printed copy of it.


[-- Attachment #2: Type: text/html, Size: 12479 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-08  4:35                             ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
@ 2021-08-08  5:04                               ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-08-08  5:04 UTC (permalink / raw)
  To: dickroy
  Cc: David Lang, starlink, Make-Wifi-fast, Cake List, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 8613 bytes --]

Connecting the four nodes through a five branch tree, with variable
attenuators on the branches, can produce hidden nodes quite readily. Solving
for the attenuations is straightforward once the desired distance matrices
are given.

The challenging part is supporting distance matrices that have h-matrices
as their elements. Solid state variable phase shifters, combined with chips
that can dump the h-matrix, work fairly well for running Monte Carlos and
grabbing such a distance matrix. Mapping these distance matrices, i.e.
matrices with h-matrices as their elements, to "real world" conditions is
not so easy. But having the ability to do so at a reasonable price and in a
reproducible way is very worthwhile for automated testing systems.
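
For the reproducibility aspect, the gist is that the random draws can be
seeded so the same "distance matrix with h-matrix elements" is regenerated
on every run. The 2x2 size and the simple Rayleigh-style fading model below
are illustrative assumptions, not how the chips actually report H:

import numpy as np

def link_h(rng, loss_db, n=2):
    """One n x n complex H for a link: bulk path loss applied to a
    unit-variance complex fading matrix (illustrative model only)."""
    gain = 10 ** (-loss_db / 20)
    fading = (rng.standard_normal((n, n)) +
              1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return gain * fading

# A seeded RNG makes the Monte Carlo draw reproducible across test runs.
rng = np.random.default_rng(seed=42)
loss_db = {("A", "B"): 35, ("A", "C"): 120, ("A", "D"): 65,
           ("B", "C"): 65, ("B", "D"): 65, ("C", "D"): 65}
H = {pair: link_h(rng, db) for pair, db in loss_db.items()}
print(H[("A", "B")])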

Bob

On Sat, Aug 7, 2021 at 9:35 PM Dick Roy <dickroy@alum.mit.edu> wrote:

>
>
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf
> Of
> David Lang
> Sent: Monday, August 2, 2021 9:31 PM
> To: Bob McMahon
> Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
> codel@lists.bufferbloat.net; cerowrt-devel; bloat
> Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
> Internet Quality workshop CFP for the internet architecture board
>
> symmetry is not always (or usually) true.
> [RR] There is a big difference between "symmetric RF channels" and
> "balanced
> RF links".  Be careful not to confuse the two.
>
> stations are commonly heard at much
> larger distances than they can talk, mobile devices have much less
> transmit
> power (because they are operating on batteries) than fixed stations, and
> when
> you adjust the transmit power on a station, you don't adjust its receive
> sensitivity.
>
> [RR] Not quite true. Rx sensitivity is a function of MCS (the modulation
> and
> coding scheme) and those levels can be adjusted, both up and down, by
> changing the MCS.  This is in fact one of the major tools that needs to be
> integrated into wireless systems today.  It's generally overlooked, though
> not always! Starlink should be doing this if they are not already BTW!
>
> David Lang
>
>   On Mon, 2 Aug 2021, Bob McMahon wrote:
>
> > Date: Mon, 2 Aug 2021 20:23:06 -0700
> > From: Bob McMahon <bob.mcmahon@broadcom.com>
> > To: David Lang <david@lang.hm>
> > Cc: Ben Greear <greearb@candelatech.com>,
> >     Luca Muscariello <muscariello@ieee.org>,
> >     Cake List <cake@lists.bufferbloat.net>,
> >     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
> >     Leonard Kleinrock <lk@cs.ucla.edu>, starlink@lists.bufferbloat.net,
> >     codel@lists.bufferbloat.net,
> >     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
> >     bloat <bloat@lists.bufferbloat.net>
> > Subject: Re: [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug
> 2:
> >     Internet Quality workshop CFP for the internet architecture board
> >
> > The distance matrix defines signal attenuations/loss between pairs.  It's
> > straightforward to create a distance matrix that has hidden nodes because
> > all "signal  loss" between pairs is defined.  Let's say a 120dB
> attenuation
> > path will cause a node to be hidden as an example.
> >
> >     A    B     C    D
> > A   -   35   120   65
> > B         -      65   65
> > C               -       65
> > D                         -
> >
> > So in the above, AC are hidden from each other but nobody else is. It
> does
> > assume symmetry between pairs but that's typically true.
> >
> > The RF device takes these distance matrices as settings and calculates
> the
> > five branch tree values (as demonstrated in the video). There are
> > limitations to solutions though but I've found those not to be an issue
> to
> > date. I've been able to produce hidden nodes quite readily. Add the phase
> > shifters and spatial stream powers can also be affected, but this isn't
> > shown in this simple example.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> >
> >> I guess it depends on what you are intending to test. If you are not
> going
> >> to
> >> tinker with any of the over-the-air settings (including the number of
> >> packets
> >> transmitted in one aggregate), the details of what happen over the air
> >> don't
> >> matter much.
> >>
> >> But if you are going to be doing any tinkering with what is getting
> sent,
> >> and
> >> you ignore the hidden transmitter type problems, you will create a
> >> solution that
> >> seems to work really well in the lab and falls on it's face out in the
> >> wild
> >> where spectrum overload and hidden transmitters are the norm (at least
> in
> >> urban
> >> areas), not rare corner cases.
> >>
> >> you don't need to include them in every test, but you need to have a way
> >> to
> >> configure your lab to include them before you consider any
> >> settings/algorithm
> >> ready to try in the wild.
> >>
> >> David Lang
> >>
> >> On Mon, 2 Aug 2021, Bob McMahon wrote:
> >>
> >>> We find four nodes, a primary BSS and an adjunct one quite good for
> lots
> >> of
> >>> testing.  The six nodes allows for a primary BSS and two adjacent ones.
> >> We
> >>> want to minimize complexity to necessary and sufficient.
> >>>
> >>> The challenge we find is having variability (e.g. montecarlos) that's
> >>> reproducible and has relevant information. Basically, the distance
> >> matrices
> >>> have h-matrices as their elements. Our chips can provide these
> >> h-matrices.
> >>>
> >>> The parts for solid state programmable attenuators and phase shifters
> >>> aren't very expensive. A device that supports a five branch tree and
> 2x2
> >>> MIMO seems a very good starting point.
> >>>
> >>> Bob
> >>>
> >>> On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> >> wrote:
> >>>
> >>>> On 8/2/21 4:16 PM, David Lang wrote:
> >>>>> If you are going to setup a test environment for wifi, you need to
> >>>> include the ability to make a few cases that only happen with RF, not
> >> with
> >>>> wired networks and
> >>>>> are commonly overlooked
> >>>>>
> >>>>> 1. station A can hear station B and C but they cannot hear each other
> >>>>> 2. station A can hear station B but station B cannot hear station A
> 3.
> >>>> station A can hear that station B is transmitting, but not with a
> strong
> >>>> enough signal to
> >>>>> decode the signal (yes in theory you can work around interference,
> but
> >>>> in practice interference is still a real thing)
> >>>>>
> >>>>> David Lang
> >>>>>
> >>>>
> >>>> To add to this, I think you need lots of different station devices,
> >>>> different capabilities (/n, /ac, /ax, etc)
> >>>> different numbers of spatial streams, and different distances from the
> >>>> AP.  From download queueing perspective, changing
> >>>> the capabilities may be sufficient while keeping all stations at same
> >>>> distance.  This assumes you are not
> >>>> actually testing the wifi rate-ctrl alg. itself, so different
> throughput
> >>>> levels for different stations would be enough.
> >>>>
> >>>> So, a good station emulator setup (and/or pile of real stations) and a
> >> few
> >>>> RF chambers and
> >>>> programmable attenuators and you can test that setup...
> >>>>
> >>>>  From upload perspective, I guess same setup would do the job.
> >>>> Queuing/fairness might depend a bit more on the
> >>>> station devices, emulated or otherwise, but I guess a clever AP could
> >>>> enforce fairness in upstream direction
> >>>> too by implementing per-sta queues.
> >>>>
> >>>> Thanks,
> >>>> Ben
> >>>>
> >>>> --
> >>>> Ben Greear <greearb@candelatech.com>
> >>>> Candela Technologies Inc  http://www.candelatech.com
> >>>>
> >>>
> >>>
> >>
> >
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 12105 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  1:24                     ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
@ 2021-08-08  5:07                       ` Dick Roy
  2021-08-08  5:15                         ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Dick Roy @ 2021-08-08  5:07 UTC (permalink / raw)
  To: 'Bob McMahon', 'Leonard Kleinrock'
  Cc: starlink, 'Make-Wifi-fast', 'Cake List',
	codel, 'cerowrt-devel', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 2723 bytes --]

 

 

  _____  

From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Bob McMahon
Sent: Monday, August 2, 2021 6:24 PM
To: Leonard Kleinrock
Cc: starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

 

I found the following talk relevant to distances between all the nodes.
https://www.youtube.com/watch?v=PNoUcQTCxiM 

Distance is an abstract idea but applies to energy into a node as well as
phylogenetic trees. It's the same problem, i.e. fitting a distance matrix
using some sort of tree. I've found the five branch tree works well for four
nodes.

[RR] These trees are means for approximating a higher dimensional real-world
problem with a lower dimensional structure.  You may be doing this to save
hardware when trying to cable up some complex test scenarios, however I'm
wondering why?  Why not just put the STAs in the lab and turn them on rather
than cabling them?



Bob 

 

On Mon, Aug 2, 2021 at 5:37 PM Leonard Kleinrock <lk@cs.ucla.edu> wrote:

These cases are what my student, Fouad Tobagi and I called the Hidden
Terminal Problem (with the Busy Tone solution) back in 1975.

Len 


> On Aug 2, 2021, at 4:16 PM, David Lang <david@lang.hm> wrote:
> 
> If you are going to setup a test environment for wifi, you need to include
the ability to make a few cases that only happen with RF, not with wired
networks and are commonly overlooked
> 
> 1. station A can hear station B and C but they cannot hear each other
> 2. station A can hear station B but station B cannot hear station A 3.
station A can hear that station B is transmitting, but not with a strong
enough signal to decode the signal (yes in theory you can work around
interference, but in practice interference is still a real thing)
> 
> David Lang
> 


This electronic communication and the information and any files transmitted
with it, or attached to it, are confidential and are intended solely for the
use of the individual or entity to whom it is addressed and may contain
information that is confidential, legally privileged, protected by privacy
laws, or otherwise restricted from disclosure to anyone else. If you are not
the intended recipient or the person responsible for delivering the e-mail
to the intended recipient, you are hereby notified that any use, copying,
distributing, dissemination, forwarding, printing, or copying of this e-mail
is strictly prohibited. If you received this e-mail in error, please return
the e-mail to the sender, delete it from your computer, and destroy any
printed copy of it.


[-- Attachment #2: Type: text/html, Size: 6904 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-08  5:04                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
@ 2021-08-08  5:07                             ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-08-08  5:07 UTC (permalink / raw)
  To: dickroy
  Cc: David Lang, starlink, Make-Wifi-fast, Cake List, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 7519 bytes --]

Thanks - your wording is more accurate. The path loss matrix is hollow
symmetric (zero on the diagonal, symmetric off it) while the RF channel is
reciprocal.

The challenge comes when adding phase shifters; then it's not just a path
loss matrix anymore.

Bob

On Sat, Aug 7, 2021 at 10:04 PM Dick Roy <dickroy@alum.mit.edu> wrote:

>
>
>
> ------------------------------
>
> *From:* Starlink [mailto:starlink-bounces@lists.bufferbloat.net] *On
> Behalf Of *Bob McMahon
> *Sent:* Monday, August 2, 2021 8:23 PM
> *To:* David Lang
> *Cc:* starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
> codel@lists.bufferbloat.net; cerowrt-devel; bloat
> *Subject:* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug
> 2: Internet Quality workshop CFP for the internet architecture board
>
>
>
> The distance matrix defines signal attenuations/loss between pairs.
>
> *[RR] Which makes it a path loss matrix rather than a distance matrix
> actually.*
>
> It's straightforward to create a distance matrix that has hidden nodes
> because all "signal  loss" between pairs is defined.  Let's say a 120dB
> attenuation path will cause a node to be hidden as an example.
>
>      A    B     C    D
>
> A   -   35   120   65
>
> B         -      65   65
>
> C               -       65
>
> D                         -
>
> So in the above, AC are hidden from each other but nobody else is. It does
> assume symmetry between pairs but that's typically true.
>
> *[RR] I’m guessing you really mean reciprocal rather than symmetric. An RF
> channel is reciprocal if the loss when A is transmitting to B is the same
> as that when B is transmitting to A. When the tx powers and rx
> sensitivities are such that when combined with the path loss(es) the “link
> budget” is  the same in both directions, the links are balanced and
> therefore have the same capacity. *
>
>
>
> The RF device takes these distance matrices as settings and calculates the
> five branch tree values (as demonstrated in the video).
>
> There are limitations to solutions though but I've found those not to be
> an issue to date. I've been able to produce hidden nodes quite readily. Add
> the phase shifters and spatial stream powers can also be affected, but this
> isn't shown in this simple example.
>
> Bob
>
>
>
> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
>
> I guess it depends on what you are intending to test. If you are not going
> to
> tinker with any of the over-the-air settings (including the number of
> packets
> transmitted in one aggregate), the details of what happens over the air
> don't
> matter much.
>
> But if you are going to be doing any tinkering with what is getting sent,
> and
> you ignore the hidden transmitter type problems, you will create a
> solution that
> seems to work really well in the lab and falls on its face out in the
> wild
> where spectrum overload and hidden transmitters are the norm (at least in
> urban
> areas), not rare corner cases.
>
> you don't need to include them in every test, but you need to have a way
> to
> configure your lab to include them before you consider any
> settings/algorithm
> ready to try in the wild.
>
> David Lang
>
> On Mon, 2 Aug 2021, Bob McMahon wrote:
>
> > We find four nodes, a primary BSS and an adjunct one quite good for lots
> of
> > testing.  The six nodes allows for a primary BSS and two adjacent ones.
> We
> > want to minimize complexity to necessary and sufficient.
> >
> > The challenge we find is having variability (e.g. montecarlos) that's
> > reproducible and has relevant information. Basically, the distance
> matrices
> > have h-matrices as their elements. Our chips can provide these
> h-matrices.
> >
> > The parts for solid state programmable attenuators and phase shifters
> > aren't very expensive. A device that supports a five branch tree and 2x2
> > MIMO seems a very good starting point.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> wrote:
> >
> >> On 8/2/21 4:16 PM, David Lang wrote:
> >>> If you are going to setup a test environment for wifi, you need to
> >> include the ability to make a few cases that only happen with RF, not
> with
> >> wired networks and
> >>> are commonly overlooked
> >>>
> >>> 1. station A can hear station B and C but they cannot hear each other
> >>> 2. station A can hear station B but station B cannot hear station A 3.
> >> station A can hear that station B is transmitting, but not with a strong
> >> enough signal to
> >>> decode the signal (yes in theory you can work around interference, but
> >> in practice interference is still a real thing)
> >>>
> >>> David Lang
> >>>
> >>
> >> To add to this, I think you need lots of different station devices,
> >> different capabilities (/n, /ac, /ax, etc)
> >> different numbers of spatial streams, and different distances from the
> >> AP.  From download queueing perspective, changing
> >> the capabilities may be sufficient while keeping all stations at same
> >> distance.  This assumes you are not
> >> actually testing the wifi rate-ctrl alg. itself, so different throughput
> >> levels for different stations would be enough.
> >>
> >> So, a good station emulator setup (and/or pile of real stations) and a
> few
> >> RF chambers and
> >> programmable attenuators and you can test that setup...
> >>
> >>  From upload perspective, I guess same setup would do the job.
> >> Queuing/fairness might depend a bit more on the
> >> station devices, emulated or otherwise, but I guess a clever AP could
> >> enforce fairness in upstream direction
> >> too by implementing per-sta queues.
> >>
> >> Thanks,
> >> Ben
> >>
> >> --
> >> Ben Greear <greearb@candelatech.com>
> >> Candela Technologies Inc  http://www.candelatech.com
> >>
> >
> >
>
>
> This electronic communication and the information and any files
> transmitted with it, or attached to it, are confidential and are intended
> solely for the use of the individual or entity to whom it is addressed and
> may contain information that is confidential, legally privileged, protected
> by privacy laws, or otherwise restricted from disclosure to anyone else. If
> you are not the intended recipient or the person responsible for delivering
> the e-mail to the intended recipient, you are hereby notified that any use,
> copying, distributing, dissemination, forwarding, printing, or copying of
> this e-mail is strictly prohibited. If you received this e-mail in error,
> please return the e-mail to the sender, delete it from your computer, and
> destroy any printed copy of it.
>

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 12679 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-08  5:07                       ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
@ 2021-08-08  5:15                         ` Bob McMahon
  2021-08-08 18:36                           ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] [Cake] " Aaron Wood
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-08-08  5:15 UTC (permalink / raw)
  To: dickroy
  Cc: Leonard Kleinrock, starlink, Make-Wifi-fast, Cake List, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 4478 bytes --]

We have hundreds of test rigs in multiple labs spread across the globe. Each
rig is shielded from the others using things like RF enclosures. We want
reproducibility in the RF paths/channels as well as variability. Most have
built fixed rigs using conducted equipment, which is far from anything real.
A Butler matrix produces great condition numbers, but that makes it too easy
for MIMO rate selection algorithms.

Our real-world test uses an actual house that has been rented. Not cheap
nor scalable.

There is quite a gap between the two. An RF path device that supports both
variable range and variable mixing is a step towards closing the gap.
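
To make the condition-number point concrete: an idealized Butler-matrix-style
coupling is (scaled) unitary, so its 2x2 condition number is about 1 and every
MIMO rate-selection algorithm looks good through it, whereas realistic fading
channels are often poorly conditioned. A rough sketch with made-up matrices:

import numpy as np

# Idealized 2x2 Butler-matrix-style coupling: a scaled unitary matrix, so
# both singular values are equal and the condition number is exactly 1.
butler = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

rng = np.random.default_rng(7)
rayleigh = (rng.standard_normal((2, 2)) +
            1j * rng.standard_normal((2, 2))) / np.sqrt(2)

for name, h in [("butler-like", butler), ("random fading", rayleigh)]:
    print(name, "condition number:", round(np.linalg.cond(h), 2))
# A condition number near 1 means the spatial streams are trivially
# separable; large values are what actually stress MIMO rate selection.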

Bob

On Sat, Aug 7, 2021 at 10:07 PM Dick Roy <dickroy@alum.mit.edu> wrote:

>
>
>
> ------------------------------
>
> *From:* Starlink [mailto:starlink-bounces@lists.bufferbloat.net] *On
> Behalf Of *Bob McMahon
> *Sent:* Monday, August 2, 2021 6:24 PM
> *To:* Leonard Kleinrock
> *Cc:* starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
> codel@lists.bufferbloat.net; cerowrt-devel; bloat
> *Subject:* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug
> 2: Internet Quality workshop CFP for the internet architecture board
>
>
>
> I found the following talk relevant to distances between all the nodes.
> https://www.youtube.com/watch?v=PNoUcQTCxiM
>
> Distance is an abstract idea but applies to energy into a node as well as
> phylogenetic trees. It's the same problem, i.e. fitting a distance matrix
> using some sort of tree. I've found the five branch tree works well for
> four nodes.
>
> *[RR] These trees are means for approximating a higher dimensional
> real-world problem with a lower dimensional structure.  You may be doing
> this to save hardware when trying to cable up some complex test scenarios,
> however I’m wondering why?  Why not just put the STAs in the lab and turn
> them on rather than cabling them?*
>
>
>
> Bob
>
>
>
> On Mon, Aug 2, 2021 at 5:37 PM Leonard Kleinrock <lk@cs.ucla.edu> wrote:
>
> These cases are what my student, Fouad Tobagi and I called the Hidden
> Terminal Problem (with the Busy Tone solution) back in 1975.
>
> Len
>
>
> > On Aug 2, 2021, at 4:16 PM, David Lang <david@lang.hm> wrote:
> >
> > If you are going to setup a test environment for wifi, you need to
> include the ability to make a few cases that only happen with RF, not with
> wired networks and are commonly overlooked
> >
> > 1. station A can hear station B and C but they cannot hear each other
> > 2. station A can hear station B but station B cannot hear station A 3.
> station A can hear that station B is transmitting, but not with a strong
> enough signal to decode the signal (yes in theory you can work around
> interference, but in practice interference is still a real thing)
> >
> > David Lang
> >
>
>
> This electronic communication and the information and any files
> transmitted with it, or attached to it, are confidential and are intended
> solely for the use of the individual or entity to whom it is addressed and
> may contain information that is confidential, legally privileged, protected
> by privacy laws, or otherwise restricted from disclosure to anyone else. If
> you are not the intended recipient or the person responsible for delivering
> the e-mail to the intended recipient, you are hereby notified that any use,
> copying, distributing, dissemination, forwarding, printing, or copying of
> this e-mail is strictly prohibited. If you received this e-mail in error,
> please return the e-mail to the sender, delete it from your computer, and
> destroy any printed copy of it.
>

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 7758 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Make-wifi-fast] [Starlink] [Cake] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-08  5:15                         ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-08-08 18:36                           ` Aaron Wood
  2021-08-08 18:48                             ` [Cerowrt-devel] [Bloat] " Jonathan Morton
  0 siblings, 1 reply; 108+ messages in thread
From: Aaron Wood @ 2021-08-08 18:36 UTC (permalink / raw)
  To: Bob McMahon
  Cc: dickroy, Cake List, Make-Wifi-fast, Leonard Kleinrock, starlink,
	codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 6221 bytes --]

My own experience with this, in the past (5+ years ago), was that you
absolutely had to use cabled setups for repeatability, but then you didn't
have enough randomness in the variability to really test anything that was
problematic.  We could create hidden nodes, or arbitrary meshes of devices,
but they were always static.

We used N-way RF splitters and either direct coax in lieu of antennas, or
isolation boxes with an antenna attached to a bulkhead fitting, with coax
on the outside.  One other problem we ran into was that unshielded radio
front-ends could "hear" each other without isolation boxes.

I really wanted both variable attenuators and points where I could inject
RF noise, so that instead of broad-band attenuation, maybe we could just
swamp the communications with other noise (which is also a common thing we
were running into with both our 900 MHz (Z-Wave) and 2.4 GHz (wifi) radios).

Less common, but something I still see, is that a moving station has
continual issues staying in proper MIMO phase(s) with the AP.  Or I think
that's what's happening.  Slow, continual movement of the two, relative to
each other, and the packet rate drops through the floor until they stop
having relative motion.  And I assume that also applies to time-varying
path-loss and path-distance (multipath reflections).
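
A back-of-the-envelope Doppler / coherence-time estimate suggests why even
slow relative motion hurts: the channel decorrelates faster than many
sounding/feedback loops refresh. The carrier frequencies, the 0.5 m/s speed,
and the 0.423/fd rule of thumb below are assumptions for illustration:

# Rough coherence-time estimate for a slowly moving station.
C = 3.0e8  # speed of light, m/s

def coherence_time_ms(carrier_hz, speed_m_s):
    doppler_hz = speed_m_s * carrier_hz / C      # maximum Doppler shift
    return 1000.0 * 0.423 / doppler_hz           # rule of thumb: Tc ~ 0.423/fd

for f_hz in (2.4e9, 5.0e9):
    fd = 0.5 * f_hz / C
    print(f"{f_hz/1e9:.1f} GHz at 0.5 m/s: Doppler ~{fd:.1f} Hz, "
          f"coherence time ~{coherence_time_ms(f_hz, 0.5):.0f} ms")
# At 5 GHz, slow motion already decorrelates the channel every ~50 ms, so
# any MIMO phase/beamforming state learned from sounding goes stale quickly.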

On Sat, Aug 7, 2021 at 10:15 PM Bob McMahon via Make-wifi-fast <
make-wifi-fast@lists.bufferbloat.net> wrote:

> We have hundreds of test rigs in multiple labs all over geography. Each
> rig is shielded from the others using things like RF enclosures. We want
> reproducibility in the RF paths/channels as well as variability. Most have
> built fixed rigs using conducted equipment. This is far from anything real.
> A butler matrix produces great condition numbers but that makes it too easy
> for MIMO rate selection algorithms.
>
> Our real world test is using a real house that has been rented. Not cheap
> nor scalable.
>
> There is quite a gap between the two. A RF path device that supports both
> variable range and variable mixing is a step towards closing the gap.
>
> Bob
>
> On Sat, Aug 7, 2021 at 10:07 PM Dick Roy <dickroy@alum.mit.edu> wrote:
>
>>
>>
>>
>> ------------------------------
>>
>> *From:* Starlink [mailto:starlink-bounces@lists.bufferbloat.net] *On
>> Behalf Of *Bob McMahon
>> *Sent:* Monday, August 2, 2021 6:24 PM
>> *To:* Leonard Kleinrock
>> *Cc:* starlink@lists.bufferbloat.net; Make-Wifi-fast; Cake List;
>> codel@lists.bufferbloat.net; cerowrt-devel; bloat
>> *Subject:* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due
>> Aug 2: Internet Quality workshop CFP for the internet architecture board
>>
>>
>>
>> I found the following talk relevant to distances between all the nodes.
>> https://www.youtube.com/watch?v=PNoUcQTCxiM
>>
>> Distance is an abstract idea but applies to energy into a node as well as
>> phylogenetic trees. It's the same problem, i.e. fitting a distance matrix
>> using some sort of tree. I've found the five branch tree works well for
>> four nodes.
>>
>> *[RR] These trees are means for approximating a higher dimensional
>> real-world problem with a lower dimensional structure.  You may be doing
>> this to save hardware when trying to cable up some complex test scenarios,
>> however I’m wondering why?  Why not just put the STAs in the lab and turn
>> them on rather than cabling them?*
>>
>>
>>
>> Bob
>>
>>
>>
>> On Mon, Aug 2, 2021 at 5:37 PM Leonard Kleinrock <lk@cs.ucla.edu> wrote:
>>
>> These cases are what my student, Fouad Tobagi and I called the Hidden
>> Terminal Problem (with the Busy Tone solution) back in 1975.
>>
>> Len
>>
>>
>> > On Aug 2, 2021, at 4:16 PM, David Lang <david@lang.hm> wrote:
>> >
>> > If you are going to setup a test environment for wifi, you need to
>> include the ability to make a few cases that only happen with RF, not with
>> wired networks and are commonly overlooked
>> >
>> > 1. station A can hear station B and C but they cannot hear each other
>> > 2. station A can hear station B but station B cannot hear station A 3.
>> station A can hear that station B is transmitting, but not with a strong
>> enough signal to decode the signal (yes in theory you can work around
>> interference, but in practice interference is still a real thing)
>> >
>> > David Lang
>> >
>>
>>
>> This electronic communication and the information and any files
>> transmitted with it, or attached to it, are confidential and are intended
>> solely for the use of the individual or entity to whom it is addressed and
>> may contain information that is confidential, legally privileged, protected
>> by privacy laws, or otherwise restricted from disclosure to anyone else. If
>> you are not the intended recipient or the person responsible for delivering
>> the e-mail to the intended recipient, you are hereby notified that any use,
>> copying, distributing, dissemination, forwarding, printing, or copying of
>> this e-mail is strictly prohibited. If you received this e-mail in error,
>> please return the e-mail to the sender, delete it from your computer, and
>> destroy any printed copy of it.
>>
>
> This electronic communication and the information and any files
> transmitted with it, or attached to it, are confidential and are intended
> solely for the use of the individual or entity to whom it is addressed and
> may contain information that is confidential, legally privileged, protected
> by privacy laws, or otherwise restricted from disclosure to anyone else. If
> you are not the intended recipient or the person responsible for delivering
> the e-mail to the intended recipient, you are hereby notified that any use,
> copying, distributing, dissemination, forwarding, printing, or copying of
> this e-mail is strictly prohibited. If you received this e-mail in error,
> please return the e-mail to the sender, delete it from your computer, and
> destroy any printed copy of it.
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast

[-- Attachment #2: Type: text/html, Size: 10068 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] [Make-wifi-fast] [Starlink] [Cake] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-08 18:36                           ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] [Cake] " Aaron Wood
@ 2021-08-08 18:48                             ` Jonathan Morton
  2021-08-08 19:58                               ` [Bloat] [Make-wifi-fast] [Starlink] [Cake] [Cerowrt-devel] " Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Jonathan Morton @ 2021-08-08 18:48 UTC (permalink / raw)
  To: Aaron Wood
  Cc: Bob McMahon, starlink, Make-Wifi-fast, Leonard Kleinrock,
	Cake List, codel, cerowrt-devel, bloat, dickroy

> On 8 Aug, 2021, at 9:36 pm, Aaron Wood <woody77@gmail.com> wrote:
> 
> Less common, but something I still see, is that a moving station has continual issues staying in proper MIMO phase(s) with the AP.  Or I think that's what's happening.  Slow, continual movement of the two, relative to each other, and the packet rate drops through the floor until they stop having relative motion.  And I assume that also applies to time-varying path-loss and path-distance (multipath reflections).

So is it time to mount test stations on model railway wagons?

 - Jonathan Morton

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Make-wifi-fast] [Starlink] [Cake] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-08 18:48                             ` [Cerowrt-devel] [Bloat] " Jonathan Morton
@ 2021-08-08 19:58                               ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-08-08 19:58 UTC (permalink / raw)
  To: Jonathan Morton
  Cc: Aaron Wood, starlink, Make-Wifi-fast, Leonard Kleinrock,
	Cake List, codel, cerowrt-devel, bloat, dickroy


[-- Attachment #1.1: Type: text/plain, Size: 1555 bytes --]

Some people put them on Roombas. That doesn't work well inside these:
http://ramseytest.com/index.php

On Sun, Aug 8, 2021 at 11:48 AM Jonathan Morton <chromatix99@gmail.com>
wrote:

> > On 8 Aug, 2021, at 9:36 pm, Aaron Wood <woody77@gmail.com> wrote:
> >
> > Less common, but something I still see, is that a moving station has
> continual issues staying in proper MIMO phase(s) with the AP.  Or I think
> that's what's happening.  Slow, continual movement of the two, relative to
> each other, and the packet rate drops through the floor until they stop
> having relative motion.  And I assume that also applies to time-varying
> path-loss and path-distance (multipath reflections).
>
> So is it time to mount test stations on model railway wagons?
>
>  - Jonathan Morton

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.

[-- Attachment #1.2: Type: text/html, Size: 2046 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-03  3:23                         ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
  2021-08-03  4:30                           ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
  2021-08-08  5:04                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
@ 2021-08-10 14:10                           ` Rodney W. Grimes
  2021-08-10 16:13                             ` Dick Roy
  2 siblings, 1 reply; 108+ messages in thread
From: Rodney W. Grimes @ 2021-08-10 14:10 UTC (permalink / raw)
  To: Bob McMahon
  Cc: David Lang, starlink, Make-Wifi-fast, Cake List, codel,
	cerowrt-devel, bloat

> The distance matrix defines signal attenuations/loss between pairs.  It's
> straightforward to create a distance matrix that has hidden nodes because
> all "signal  loss" between pairs is defined.  Let's say a 120dB attenuation
> path will cause a node to be hidden as an example.
> 
>      A    B     C    D
> A   -   35   120   65
> B         -      65   65
> C               -       65
> D                         -
> 
> So in the above, AC are hidden from each other but nobody else is. It does
> assume symmetry between pairs but that's typically true.

That is not correct: symmetry in the RF world, especially wifi, is rare
due to topology issues.  A high transmitter, A, and a low receiver, B,
may have a good path A -> B but a very weak path B -> A.  Multipathing
is another major issue that causes asymmetry.

> 
> The RF device takes these distance matrices as settings and calculates the
> five branch tree values (as demonstrated in the video). There are
> limitations to solutions though but I've found those not to be an issue to
> date. I've been able to produce hidden nodes quite readily. Add the phase
> shifters and spatial stream powers can also be affected, but this isn't
> shown in this simple example.
> 
> Bob
> 
> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> 
> > I guess it depends on what you are intending to test. If you are not going
> > to
> > tinker with any of the over-the-air settings (including the number of
> > packets
> > transmitted in one aggregate), the details of what happen over the air
> > don't
> > matter much.
> >
> > But if you are going to be doing any tinkering with what is getting sent,
> > and
> > you ignore the hidden transmitter type problems, you will create a
> > solution that
> > seems to work really well in the lab and falls on it's face out in the
> > wild
> > where spectrum overload and hidden transmitters are the norm (at least in
> > urban
> > areas), not rare corner cases.
> >
> > you don't need to include them in every test, but you need to have a way
> > to
> > configure your lab to include them before you consider any
> > settings/algorithm
> > ready to try in the wild.
> >
> > David Lang
> >
> > On Mon, 2 Aug 2021, Bob McMahon wrote:
> >
> > > We find four nodes, a primary BSS and an adjunct one quite good for lots
> > of
> > > testing.  The six nodes allows for a primary BSS and two adjacent ones.
> > We
> > > want to minimize complexity to necessary and sufficient.
> > >
> > > The challenge we find is having variability (e.g. montecarlos) that's
> > > reproducible and has relevant information. Basically, the distance
> > matrices
> > > have h-matrices as their elements. Our chips can provide these
> > h-matrices.
> > >
> > > The parts for solid state programmable attenuators and phase shifters
> > > aren't very expensive. A device that supports a five branch tree and 2x2
> > > MIMO seems a very good starting point.
> > >
> > > Bob
> > >
> > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> > wrote:
> > >
> > >> On 8/2/21 4:16 PM, David Lang wrote:
> > >>> If you are going to setup a test environment for wifi, you need to
> > >> include the ability to make a few cases that only happen with RF, not
> > with
> > >> wired networks and
> > >>> are commonly overlooked
> > >>>
> > >>> 1. station A can hear station B and C but they cannot hear each other
> > >>> 2. station A can hear station B but station B cannot hear station A 3.
> > >> station A can hear that station B is transmitting, but not with a strong
> > >> enough signal to
> > >>> decode the signal (yes in theory you can work around interference, but
> > >> in practice interference is still a real thing)
> > >>>
> > >>> David Lang
> > >>>
> > >>
> > >> To add to this, I think you need lots of different station devices,
> > >> different capabilities (/n, /ac, /ax, etc)
> > >> different numbers of spatial streams, and different distances from the
> > >> AP.  From download queueing perspective, changing
> > >> the capabilities may be sufficient while keeping all stations at same
> > >> distance.  This assumes you are not
> > >> actually testing the wifi rate-ctrl alg. itself, so different throughput
> > >> levels for different stations would be enough.
> > >>
> > >> So, a good station emulator setup (and/or pile of real stations) and a
> > few
> > >> RF chambers and
> > >> programmable attenuators and you can test that setup...
> > >>
> > >>  From upload perspective, I guess same setup would do the job.
> > >> Queuing/fairness might depend a bit more on the
> > >> station devices, emulated or otherwise, but I guess a clever AP could
> > >> enforce fairness in upstream direction
> > >> too by implementing per-sta queues.
> > >>
> > >> Thanks,
> > >> Ben
> > >>
> > >> --
> > >> Ben Greear <greearb@candelatech.com>
> > >> Candela Technologies Inc  http://www.candelatech.com
> > >>
> > >
> > >
> >
> 
> -- 
> This electronic communication and the information and any files transmitted 
> with it, or attached to it, are confidential and are intended solely for 
> the use of the individual or entity to whom it is addressed and may contain 
> information that is confidential, legally privileged, protected by privacy 
> laws, or otherwise restricted from disclosure to anyone else. If you are 
> not the intended recipient or the person responsible for delivering the 
> e-mail to the intended recipient, you are hereby notified that any use, 
> copying, distributing, dissemination, forwarding, printing, or copying of 
> this e-mail is strictly prohibited. If you received this e-mail in error, 
> please return the e-mail to the sender, delete it from your computer, and 
> destroy any printed copy of it.

> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-10 14:10                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Rodney W. Grimes
@ 2021-08-10 16:13                             ` Dick Roy
  2021-08-10 17:06                               ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Dick Roy @ 2021-08-10 16:13 UTC (permalink / raw)
  To: 'Rodney W. Grimes', 'Bob McMahon'
  Cc: 'Cake List', 'Make-Wifi-fast',
	starlink, codel, 'cerowrt-devel', 'bloat'

Well, I hesitate to drag this out, however Maxwell's equations and the
invariance of the laws of physics ensure that all path loss matrices are
reciprocal.  What that means is that for any given set of fixed boundary
conditions (nothing moving/changing!), the propagation loss between any two
points in the domain is the same in both directions. The "multipathing" in
one direction is the same as in the other, because the two-parameter
(angle1, angle2) scattering cross sections of all objects (remember, they
are fixed here) are independent of the ordering of the angles.

Very importantly, path loss is NOT the same as the link loss (aka link
budget), which involves tx power and rx noise figure (and in the case of
smart antennas there is a link per spatial stream, and how those links are
managed/controlled really matters, but let's keep it simple for this
discussion), and these generally differ at the two ends of a link for a
variety of reasons. The other very important issue is that of the
"measurement plane": where tx power and rx noise figure are being
measured/referenced, and how well the interface at that plane is "matched".
We generally assume that the matching is perfect, however it never is. All
of these effects contribute to the link loss, which determines the strength
of the signal coming out of the receiver (not the receive antenna, the
receiver) for a given signal strength coming out of the transmitter (not
the transmit antenna, the tx output port).
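
The distinction can be made concrete with a toy link budget computed in both
directions: the path loss term is identical by reciprocity, yet the resulting
SNRs differ because tx power and rx noise figure differ at the two ends. All
numbers below are made up for illustration, not taken from any datasheet:

# Toy two-way link budget over a reciprocal 100 dB path.
PATH_LOSS_DB = 100.0
THERMAL_DBM_20MHZ = -101.0   # ~kTB for a 20 MHz channel at room temperature

nodes = {
    "AP":  {"tx_dbm": 23.0, "noise_figure_db": 5.0},
    "STA": {"tx_dbm": 15.0, "noise_figure_db": 9.0},   # battery-powered, cheaper RF
}

def snr_db(tx, rx):
    rx_power = nodes[tx]["tx_dbm"] - PATH_LOSS_DB                 # same loss both ways
    noise_floor = THERMAL_DBM_20MHZ + nodes[rx]["noise_figure_db"]
    return rx_power - noise_floor

print("AP -> STA SNR:", snr_db("AP", "STA"), "dB")   # (23 - 100) - (-92) = 15 dB
print("STA -> AP SNR:", snr_db("STA", "AP"), "dB")   # (15 - 100) - (-96) = 11 dB
# Reciprocal path loss, unbalanced link: the two directions support different MCS.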

In the real world, things change.  Sources and sinks move, as do many of
the objects around them.  This creates a time-varying RF environment, and
now the path loss matrix is a function of time and a few other things, so
it matters WHEN something is transmitted and WHEN it is received, and the
two WHENs are generally separated by "the speed of light", which works out
to roughly a foot per nanosecond. Just as important is the fact that it's
no longer really a path loss matrix containing a single scalar per entry,
because among other things the time-varying environment induces changes in
the transmitted waveform on its way to the receiver, most commonly referred
to as the Doppler effect, which means there is a frequency translation/shift
for each (multi-)path, of which there are in general an uncountably infinite
number because this is a continuous world in which we live (the space
quantization experiment being conducted in the central US aside :^)). As a
consequence of these physical laws, the entries in the path loss matrix
become complex functions of a number of variables, including time. These
functions are quite often characterized in terms of Doppler and delay
spread, terms used to describe in just a few parameters the amount of
"distortion" a complex function causes.

Hope this helps ... probably a bit more than you really wanted to know as
queuing theorists, but ...

-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Rodney W. Grimes
Sent: Tuesday, August 10, 2021 7:10 AM
To: Bob McMahon
Cc: Cake List; Make-Wifi-fast; starlink@lists.bufferbloat.net;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

> The distance matrix defines signal attenuations/loss between pairs.  It's
> straightforward to create a distance matrix that has hidden nodes because
> all "signal  loss" between pairs is defined.  Let's say a 120dB
attenuation
> path will cause a node to be hidden as an example.
> 
>      A    B     C    D
> A   -   35   120   65
> B         -      65   65
> C               -       65
> D                         -
> 
> So in the above, AC are hidden from each other but nobody else is. It does
> assume symmetry between pairs but that's typically true.

That is not correct, symmetry in the RF world, especially wifi, is rare
due to topology issues.  A high transmitter, A,  and a low receiver, B,
has a good path A - > B, but a very weak path B -> A.   Multipathing
is another major issue that causes assymtry.

> 
> The RF device takes these distance matrices as settings and calculates the
> five branch tree values (as demonstrated in the video). There are
> limitations to solutions though but I've found those not to be an issue to
> date. I've been able to produce hidden nodes quite readily. Add the phase
> shifters and spatial stream powers can also be affected, but this isn't
> shown in this simple example.
> 
> Bob
> 
> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> 
> > I guess it depends on what you are intending to test. If you are not
going
> > to
> > tinker with any of the over-the-air settings (including the number of
> > packets
> > transmitted in one aggregate), the details of what happen over the air
> > don't
> > matter much.
> >
> > But if you are going to be doing any tinkering with what is getting
sent,
> > and
> > you ignore the hidden transmitter type problems, you will create a
> > solution that
> > seems to work really well in the lab and falls on it's face out in the
> > wild
> > where spectrum overload and hidden transmitters are the norm (at least
in
> > urban
> > areas), not rare corner cases.
> >
> > you don't need to include them in every test, but you need to have a way
> > to
> > configure your lab to include them before you consider any
> > settings/algorithm
> > ready to try in the wild.
> >
> > David Lang
> >
> > On Mon, 2 Aug 2021, Bob McMahon wrote:
> >
> > > We find four nodes, a primary BSS and an adjunct one quite good for
lots
> > of
> > > testing.  The six nodes allows for a primary BSS and two adjacent
ones.
> > We
> > > want to minimize complexity to necessary and sufficient.
> > >
> > > The challenge we find is having variability (e.g. montecarlos) that's
> > > reproducible and has relevant information. Basically, the distance
> > matrices
> > > have h-matrices as their elements. Our chips can provide these
> > h-matrices.
> > >
> > > The parts for solid state programmable attenuators and phase shifters
> > > aren't very expensive. A device that supports a five branch tree and
2x2
> > > MIMO seems a very good starting point.
> > >
> > > Bob
> > >
> > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> > wrote:
> > >
> > >> On 8/2/21 4:16 PM, David Lang wrote:
> > >>> If you are going to setup a test environment for wifi, you need to
> > >> include the ability to make a few cases that only happen with RF, not
> > with
> > >> wired networks and
> > >>> are commonly overlooked
> > >>>
> > >>> 1. station A can hear station B and C but they cannot hear each
other
> > >>> 2. station A can hear station B but station B cannot hear station A
3.
> > >> station A can hear that station B is transmitting, but not with a
strong
> > >> enough signal to
> > >>> decode the signal (yes in theory you can work around interference,
but
> > >> in practice interference is still a real thing)
> > >>>
> > >>> David Lang
> > >>>
> > >>
> > >> To add to this, I think you need lots of different station devices,
> > >> different capabilities (/n, /ac, /ax, etc)
> > >> different numbers of spatial streams, and different distances from
the
> > >> AP.  From download queueing perspective, changing
> > >> the capabilities may be sufficient while keeping all stations at same
> > >> distance.  This assumes you are not
> > >> actually testing the wifi rate-ctrl alg. itself, so different
throughput
> > >> levels for different stations would be enough.
> > >>
> > >> So, a good station emulator setup (and/or pile of real stations) and
a
> > few
> > >> RF chambers and
> > >> programmable attenuators and you can test that setup...
> > >>
> > >>  From upload perspective, I guess same setup would do the job.
> > >> Queuing/fairness might depend a bit more on the
> > >> station devices, emulated or otherwise, but I guess a clever AP could
> > >> enforce fairness in upstream direction
> > >> too by implementing per-sta queues.
> > >>
> > >> Thanks,
> > >> Ben
> > >>
> > >> --
> > >> Ben Greear <greearb@candelatech.com>
> > >> Candela Technologies Inc  http://www.candelatech.com
> > >>
> > >
> > >
> >
> 

> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-10 16:13                             ` Dick Roy
@ 2021-08-10 17:06                               ` Bob McMahon
  2021-08-10 17:56                                 ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
  2021-08-10 18:11                                 ` Dick Roy
  0 siblings, 2 replies; 108+ messages in thread
From: Bob McMahon @ 2021-08-10 17:06 UTC (permalink / raw)
  To: dickroy
  Cc: Rodney W. Grimes, Cake List, Make-Wifi-fast, starlink, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 11740 bytes --]

The slides show that for WiFi every transmission produces a complex
frequency response, aka the h-matrix. This is valid for that one
transmission only.  The slides show an amplitude plot for a 3-radio device,
hence the 9 elements of the h-matrix. It's assumed that the WiFi STA/AP is
stationary, so Doppler effects aren't a consideration. WiFi isn't a
car trying to connect to a cell tower.  The plot doesn't show the phase
effects, but they are included since the output of the channel estimate is a
complex frequency response. Each RX produces the h-matrix ahead of the MAC.
These may not be symmetric in the real world, but that's ok: transmission
and reception are each one-way only, i.e. treating the paths as reciprocal
and the matrix as hollow symmetric isn't going to be a "test blocker", as
the goal is to be able to use software and programmable devices to change
them in near real time. The current approach used by many, Butler matrices
to produce off-diagonal effects, is woefully inadequate. And we're paying
about $2.5K per Butler matrix.
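
As a rough sketch (dimensions and values assumed, not taken from the slides):
per OFDM subcarrier, the h-matrix for a 3-radio link is just a 3x3 array of
complex gains, and the amplitude plot is what remains after the phase is
discarded.

    # Hypothetical sketch: a per-subcarrier h-matrix for a 3x3 link.
    import numpy as np

    n_sc, n_rx, n_tx = 64, 3, 3        # 64 subcarriers, 3x3 radios -> 9 elements
    rng = np.random.default_rng(1)

    # Synthetic channel estimate standing in for what the chip would dump.
    h = (rng.standard_normal((n_sc, n_rx, n_tx)) +
         1j * rng.standard_normal((n_sc, n_rx, n_tx)))

    amplitude_db = 20 * np.log10(np.abs(h))   # what an amplitude-only plot shows
    phase_rad = np.angle(h)                   # what such a plot throws away

    # A reciprocity check between estimates taken in the two directions would be
    # np.allclose(h_forward, np.transpose(h_reverse, (0, 2, 1)), atol=...),
    # which real hardware rarely passes without per-chain calibration.
    print(amplitude_db.shape, phase_rad.shape)   # (64, 3, 3) (64, 3, 3)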

Bob


On Tue, Aug 10, 2021 at 9:13 AM Dick Roy <dickroy@alum.mit.edu> wrote:

> Well, I hesitate to drag this out, however Maxwell's equations and the
> invariance of the laws of physics ensure that all path loss matrices are
> reciprocal.  What that means is that at any for any given set of fixed
> boundary conditions (nothing moving/changing!), the propagation loss
> between
> any two points in the domain is the same in both directions. The
> "multipathing" in one direction is the same in the other because the
> two-parameter (angle1,angle2) scattering cross sections of all objects
> (remember they are fixed here) are independent of the ordering of the
> angles.
>
> Very importantly, path loss is NOT the same as the link loss (aka link
> budget) which involves tx power and rx noise figure (and in the case of
> smart antennas, there is a link per spatial stream and how those links are
> managed/controlled really matters, but let's just keep it simple for this
> discussion) and these generally are different on both ends of a link for a
> variety of reasons. The other very important issue is that of the
> ""measurement plane", or "where tx power and rx noise figure are being
> measured/referenced to and how well the interface at that plane is
> "matched".  We generally assume that the matching is perfect, however it
> never is. All of these effects contribute to the link loss which determines
> the strength of the signal coming out of the receiver (not the receive
> antenna, the receiver) for a given signal strength coming out of the
> transmitter (not the transmit antenna, the tx output port).
>
> In the real world, things change.  Sources and sinks move as do many of the
> objects around them.  This creates a time-varying RF environment, and now
> the path loss matrix is a function of time and a few others things, so it
> matters WHEN something is transmitted, and WHEN it is received, and the two
> WHEN's are generally separated by "the speed of light" which is a ft/ns
> roughly. As important is the fact that it's no longer really a path loss
> matrix containing a single scalar because among other things, the time
> varying environment induces change in the transmitted waveform on its way
> to
> the receiver most commonly referred to as the Doppler effect which means
> there is a frequency translation/shift for each (multi-)path of which there
> are in general an uncountably infinite number because this is a continuous
> world in which we live (the space quantization experiment being conducted
> in
> the central US aside:^)). As a consequence of these physical laws, the
> entries in the path loss matrix become complex functions of a number of
> variables including time. These functions are quite often characterized in
> terms of Doppler and delay-spread, terms used to describe in just a few
> parameters the amount of "distortion" a complex function causes.
>
> Hope this helps ... probably a bit more than you really wanted to know as
> queuing theorists, but ...
>
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf
> Of
> Rodney W. Grimes
> Sent: Tuesday, August 10, 2021 7:10 AM
> To: Bob McMahon
> Cc: Cake List; Make-Wifi-fast; starlink@lists.bufferbloat.net;
> codel@lists.bufferbloat.net; cerowrt-devel; bloat
> Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
> Internet Quality workshop CFP for the internet architecture board
>
> > The distance matrix defines signal attenuations/loss between pairs.  It's
> > straightforward to create a distance matrix that has hidden nodes because
> > all "signal  loss" between pairs is defined.  Let's say a 120dB
> attenuation
> > path will cause a node to be hidden as an example.
> >
> >      A    B     C    D
> > A   -   35   120   65
> > B         -      65   65
> > C               -       65
> > D                         -
> >
> > So in the above, AC are hidden from each other but nobody else is. It
> does
> > assume symmetry between pairs but that's typically true.
>
> That is not correct, symmetry in the RF world, especially wifi, is rare
> due to topology issues.  A high transmitter, A,  and a low receiver, B,
> has a good path A - > B, but a very weak path B -> A.   Multipathing
> is another major issue that causes assymtry.
>
> >
> > The RF device takes these distance matrices as settings and calculates
> the
> > five branch tree values (as demonstrated in the video). There are
> > limitations to solutions though but I've found those not to be an issue
> to
> > date. I've been able to produce hidden nodes quite readily. Add the phase
> > shifters and spatial stream powers can also be affected, but this isn't
> > shown in this simple example.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> >
> > > I guess it depends on what you are intending to test. If you are not
> going
> > > to
> > > tinker with any of the over-the-air settings (including the number of
> > > packets
> > > transmitted in one aggregate), the details of what happen over the air
> > > don't
> > > matter much.
> > >
> > > But if you are going to be doing any tinkering with what is getting
> sent,
> > > and
> > > you ignore the hidden transmitter type problems, you will create a
> > > solution that
> > > seems to work really well in the lab and falls on it's face out in the
> > > wild
> > > where spectrum overload and hidden transmitters are the norm (at least
> in
> > > urban
> > > areas), not rare corner cases.
> > >
> > > you don't need to include them in every test, but you need to have a
> way
> > > to
> > > configure your lab to include them before you consider any
> > > settings/algorithm
> > > ready to try in the wild.
> > >
> > > David Lang
> > >
> > > On Mon, 2 Aug 2021, Bob McMahon wrote:
> > >
> > > > We find four nodes, a primary BSS and an adjunct one quite good for
> lots
> > > of
> > > > testing.  The six nodes allows for a primary BSS and two adjacent
> ones.
> > > We
> > > > want to minimize complexity to necessary and sufficient.
> > > >
> > > > The challenge we find is having variability (e.g. montecarlos) that's
> > > > reproducible and has relevant information. Basically, the distance
> > > matrices
> > > > have h-matrices as their elements. Our chips can provide these
> > > h-matrices.
> > > >
> > > > The parts for solid state programmable attenuators and phase shifters
> > > > aren't very expensive. A device that supports a five branch tree and
> 2x2
> > > > MIMO seems a very good starting point.
> > > >
> > > > Bob
> > > >
> > > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> > > wrote:
> > > >
> > > >> On 8/2/21 4:16 PM, David Lang wrote:
> > > >>> If you are going to setup a test environment for wifi, you need to
> > > >> include the ability to make a fe cases that only happen with RF, not
> > > with
> > > >> wired networks and
> > > >>> are commonly overlooked
> > > >>>
> > > >>> 1. station A can hear station B and C but they cannot hear each
> other
> > > >>> 2. station A can hear station B but station B cannot hear station A
> 3.
> > > >> station A can hear that station B is transmitting, but not with a
> strong
> > > >> enough signal to
> > > >>> decode the signal (yes in theory you can work around interference,
> but
> > > >> in practice interference is still a real thing)
> > > >>>
> > > >>> David Lang
> > > >>>
> > > >>
> > > >> To add to this, I think you need lots of different station devices,
> > > >> different capabilities (/n, /ac, /ax, etc)
> > > >> different numbers of spatial streams, and different distances from
> the
> > > >> AP.  From download queueing perspective, changing
> > > >> the capabilities may be sufficient while keeping all stations at
> same
> > > >> distance.  This assumes you are not
> > > >> actually testing the wifi rate-ctrl alg. itself, so different
> throughput
> > > >> levels for different stations would be enough.
> > > >>
> > > >> So, a good station emulator setup (and/or pile of real stations) and
> a
> > > few
> > > >> RF chambers and
> > > >> programmable attenuators and you can test that setup...
> > > >>
> > > >>  From upload perspective, I guess same setup would do the job.
> > > >> Queuing/fairness might depend a bit more on the
> > > >> station devices, emulated or otherwise, but I guess a clever AP
> could
> > > >> enforce fairness in upstream direction
> > > >> too by implementing per-sta queues.
> > > >>
> > > >> Thanks,
> > > >> Ben
> > > >>
> > > >> --
> > > >> Ben Greear <greearb@candelatech.com>
> > > >> Candela Technologies Inc  http://www.candelatech.com
> > > >>
> > > >
> > > >
> > >
> >
>
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>


[-- Attachment #1.2: Type: text/html, Size: 15164 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-10 17:06                               ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-08-10 17:56                                 ` Dick Roy
  2021-08-10 18:11                                 ` Dick Roy
  1 sibling, 0 replies; 108+ messages in thread
From: Dick Roy @ 2021-08-10 17:56 UTC (permalink / raw)
  To: 'Bob McMahon'
  Cc: 'Rodney W. Grimes', 'Cake List',
	'Make-Wifi-fast', starlink, 'codel',
	'cerowrt-devel', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 14038 bytes --]

You can approximate the H-matrix as containing only complex numbers or
complex frequency responses as below, however the truth is that in the real
world, in general, the entries in the H-matrix are Green's functions, aka
impulse response functions derivable from Maxwell's equations and all the
surrounding boundary conditions (and yes they are time-varying) which give
the output (at the receiver) due to an input impulse (from the transmitter).
"You bang on the box and see what comes out!"  For "narrowband", nearly
"time-invariant" systems, these complex transfer functions can be
approximated by complex numbers.  For non-narrowband, yet still (slowly)
time-varying systems, the H-matrix can be approximated (as shown below) by a
time-invariant transfer (Green's) function whose Fourier transform (aka the
spectrum) can be calculated (and plotted as shown below, although as noted
the phase is missing!)  Each point in the spectral domain is actually a
complex number (amplitude and phase as a function of frequency if you will),
again as noted below.  FWIW, the understanding that being able to quickly
and accurately obtain estimates of the entries of the H-matrix (aka the
spectral response) under these "almost time-invariant" assumptions is
crucially important to achieving anything near channel capacity is what
makes the choice of an OFDM PHY "optimal" (aka really good, and there is
the issue of "water-pouring", but that's another story for another day).
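
A minimal numeric sketch of that narrowband, time-invariant approximation
(path gains, delays and subcarrier spacing are made-up/assumed values): a
static multipath channel collapses to one complex number per subcarrier,
which is exactly what the per-subcarrier H-matrix entries are.

    # Hypothetical sketch: a static multipath channel as a sum of delayed paths,
    # evaluated per OFDM subcarrier to get one complex gain per tone.
    import numpy as np

    path_gains = np.array([1.0, 0.5, 0.2])         # made-up linear amplitudes
    path_delays_ns = np.array([0.0, 50.0, 120.0])  # made-up delays (delay spread)

    subcarrier_spacing_hz = 312.5e3                # 20 MHz / 64 tones
    freqs_hz = np.arange(-32, 32) * subcarrier_spacing_hz

    # H(f) = sum_k a_k * exp(-j 2 pi f tau_k): time-invariant, so a single
    # complex number per frequency fully describes the channel.
    H = (path_gains[None, :] *
         np.exp(-2j * np.pi * freqs_hz[:, None] * path_delays_ns[None, :] * 1e-9)
         ).sum(axis=1)

    amplitude_db = 20 * np.log10(np.abs(H))   # the spectrum one usually plots
    phase_rad = np.unwrap(np.angle(H))        # the part an amplitude plot omits
    print(H.shape)                            # (64,)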

 

That said, it is really important to remember that a (relatively) stationary
STA and AP does NOT mean that the channel is time-invariant.  It's not.  The
magnitude of the variations depends on how fast the environment around them
is changing (remember Maxwell's equations and the boundary conditions)!
This really matters in the vehicular (aka transportation) environment.  The
ability of a pedestrian in a cross-walk to connect to an AP in the
Starbucks on the other side of the street depends on how many cars are in
the vicinity and how fast they are moving!

 

As for using expensive phase-shifters cabled together to make Butler
matrices at $2.5k per pop, I guess I'm in the wrong business:^)))))

 

RR

 

  _____  

From: Bob McMahon [mailto:bob.mcmahon@broadcom.com] 
Sent: Tuesday, August 10, 2021 10:07 AM
To: dickroy@alum.mit.edu
Cc: Rodney W. Grimes; Cake List; Make-Wifi-fast;
starlink@lists.bufferbloat.net; codel; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

 

The slides show that for WiFi every transmission produces a complex
frequency response, aka the h-matrix. This is valid for that one
transmission only.  The slides show an amplitude plot for a 3-radio device,
hence the 9 elements of the h-matrix. It's assumed that the WiFi STA/AP is
stationary, so Doppler effects aren't a consideration. WiFi isn't a
car trying to connect to a cell tower.  The plot doesn't show the phase
effects, but they are included since the output of the channel estimate is a
complex frequency response. Each RX produces the h-matrix ahead of the MAC.
These may not be symmetric in the real world, but that's ok: transmission
and reception are each one-way only, i.e. treating the paths as reciprocal
and the matrix as hollow symmetric isn't going to be a "test blocker", as
the goal is to be able to use software and programmable devices to change
them in near real time. The current approach used by many, Butler matrices
to produce off-diagonal effects, is woefully inadequate. And we're paying
about $2.5K per Butler matrix.
 
<https://lh3.googleusercontent.com/WqWMFHFPo3ltkxkpoyvgPxgdFxmnZpVvpw0NcCTFh
GiOTjolvKbP4NugcE-vw1Q3vk9Z7R04YA1k3kQMvyiR5RhcHOjbXbsRMfjLBY-RYML2tFxovzMpT
www5UZiu0Xgxzhi8fFru_g> 
Bob

 

On Tue, Aug 10, 2021 at 9:13 AM Dick Roy <dickroy@alum.mit.edu> wrote:

Well, I hesitate to drag this out, however Maxwell's equations and the
invariance of the laws of physics ensure that all path loss matrices are
reciprocal.  What that means is that for any given set of fixed
boundary conditions (nothing moving/changing!), the propagation loss between
any two points in the domain is the same in both directions. The
"multipathing" in one direction is the same in the other because the
two-parameter (angle1,angle2) scattering cross sections of all objects
(remember they are fixed here) are independent of the ordering of the
angles.  

Very importantly, path loss is NOT the same as the link loss (aka link
budget) which involves tx power and rx noise figure (and in the case of
smart antennas, there is a link per spatial stream and how those links are
managed/controlled really matters, but let's just keep it simple for this
discussion) and these generally are different on both ends of a link for a
variety of reasons. The other very important issue is that of the
""measurement plane", or "where tx power and rx noise figure are being
measured/referenced to and how well the interface at that plane is
"matched".  We generally assume that the matching is perfect, however it
never is. All of these effects contribute to the link loss which determines
the strength of the signal coming out of the receiver (not the receive
antenna, the receiver) for a given signal strength coming out of the
transmitter (not the transmit antenna, the tx output port).   

In the real world, things change.  Sources and sinks move as do many of the
objects around them.  This creates a time-varying RF environment, and now
the path loss matrix is a function of time and a few other things, so it
matters WHEN something is transmitted, and WHEN it is received, and the two
WHEN's are generally separated by "the speed of light", which is roughly a
foot per nanosecond. As important is the fact that it's no longer really a path loss
matrix containing a single scalar because among other things, the time
varying environment induces change in the transmitted waveform on its way to
the receiver most commonly referred to as the Doppler effect which means
there is a frequency translation/shift for each (multi-)path of which there
are in general an uncountably infinite number because this is a continuous
world in which we live (the space quantization experiment being conducted in
the central US aside:^)). As a consequence of these physical laws, the
entries in the path loss matrix become complex functions of a number of
variables including time. These functions are quite often characterized in
terms of Doppler and delay-spread, terms used to describe in just a few
parameters the amount of "distortion" a complex function causes. 

Hope this helps ... probably a bit more than you really wanted to know as
queuing theorists, but ...

-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Rodney W. Grimes
Sent: Tuesday, August 10, 2021 7:10 AM
To: Bob McMahon
Cc: Cake List; Make-Wifi-fast; starlink@lists.bufferbloat.net;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

> The distance matrix defines signal attenuations/loss between pairs.  It's
> straightforward to create a distance matrix that has hidden nodes because
> all "signal  loss" between pairs is defined.  Let's say a 120dB
attenuation
> path will cause a node to be hidden as an example.
> 
>      A    B     C    D
> A   -   35   120   65
> B         -      65   65
> C               -       65
> D                         -
> 
> So in the above, AC are hidden from each other but nobody else is. It does
> assume symmetry between pairs but that's typically true.

That is not correct: symmetry in the RF world, especially wifi, is rare
due to topology issues.  A high transmitter, A, and a low receiver, B,
has a good path A -> B but a very weak path B -> A.  Multipathing
is another major issue that causes asymmetry.

> 
> The RF device takes these distance matrices as settings and calculates the
> five branch tree values (as demonstrated in the video). There are
> limitations to solutions though but I've found those not to be an issue to
> date. I've been able to produce hidden nodes quite readily. Add the phase
> shifters and spatial stream powers can also be affected, but this isn't
> shown in this simple example.
> 
> Bob
> 
> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> 
> > I guess it depends on what you are intending to test. If you are not
going
> > to
> > tinker with any of the over-the-air settings (including the number of
> > packets
> > transmitted in one aggregate), the details of what happen over the air
> > don't
> > matter much.
> >
> > But if you are going to be doing any tinkering with what is getting
sent,
> > and
> > you ignore the hidden transmitter type problems, you will create a
> > solution that
> > seems to work really well in the lab and falls on it's face out in the
> > wild
> > where spectrum overload and hidden transmitters are the norm (at least
in
> > urban
> > areas), not rare corner cases.
> >
> > you don't need to include them in every test, but you need to have a way
> > to
> > configure your lab to include them before you consider any
> > settings/algorithm
> > ready to try in the wild.
> >
> > David Lang
> >
> > On Mon, 2 Aug 2021, Bob McMahon wrote:
> >
> > > We find four nodes, a primary BSS and an adjunct one quite good for
lots
> > of
> > > testing.  The six nodes allows for a primary BSS and two adjacent
ones.
> > We
> > > want to minimize complexity to necessary and sufficient.
> > >
> > > The challenge we find is having variability (e.g. montecarlos) that's
> > > reproducible and has relevant information. Basically, the distance
> > matrices
> > > have h-matrices as their elements. Our chips can provide these
> > h-matrices.
> > >
> > > The parts for solid state programmable attenuators and phase shifters
> > > aren't very expensive. A device that supports a five branch tree and
2x2
> > > MIMO seems a very good starting point.
> > >
> > > Bob
> > >
> > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> > wrote:
> > >
> > >> On 8/2/21 4:16 PM, David Lang wrote:
> > >>> If you are going to setup a test environment for wifi, you need to
> > >> include the ability to make a fe cases that only happen with RF, not
> > with
> > >> wired networks and
> > >>> are commonly overlooked
> > >>>
> > >>> 1. station A can hear station B and C but they cannot hear each
other
> > >>> 2. station A can hear station B but station B cannot hear station A
3.
> > >> station A can hear that station B is transmitting, but not with a
strong
> > >> enough signal to
> > >>> decode the signal (yes in theory you can work around interference,
but
> > >> in practice interference is still a real thing)
> > >>>
> > >>> David Lang
> > >>>
> > >>
> > >> To add to this, I think you need lots of different station devices,
> > >> different capabilities (/n, /ac, /ax, etc)
> > >> different numbers of spatial streams, and different distances from
the
> > >> AP.  From download queueing perspective, changing
> > >> the capabilities may be sufficient while keeping all stations at same
> > >> distance.  This assumes you are not
> > >> actually testing the wifi rate-ctrl alg. itself, so different
throughput
> > >> levels for different stations would be enough.
> > >>
> > >> So, a good station emulator setup (and/or pile of real stations) and
a
> > few
> > >> RF chambers and
> > >> programmable attenuators and you can test that setup...
> > >>
> > >>  From upload perspective, I guess same setup would do the job.
> > >> Queuing/fairness might depend a bit more on the
> > >> station devices, emulated or otherwise, but I guess a clever AP could
> > >> enforce fairness in upstream direction
> > >> too by implementing per-sta queues.
> > >>
> > >> Thanks,
> > >> Ben
> > >>
> > >> --
> > >> Ben Greear <greearb@candelatech.com>
> > >> Candela Technologies Inc  http://www.candelatech.com
> > >>
> > >
> > >
> >
> 

> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink




[-- Attachment #2: Type: text/html, Size: 23007 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-10 17:06                               ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  2021-08-10 17:56                                 ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
@ 2021-08-10 18:11                                 ` Dick Roy
  2021-08-10 19:21                                   ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
  2021-09-02 17:36                                   ` [Cerowrt-devel] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board David P. Reed
  1 sibling, 2 replies; 108+ messages in thread
From: Dick Roy @ 2021-08-10 18:11 UTC (permalink / raw)
  To: 'Bob McMahon'
  Cc: 'Rodney W. Grimes', 'Cake List',
	'Make-Wifi-fast', starlink, 'codel',
	'cerowrt-devel', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 13050 bytes --]

To add a bit more, as is easily seen below, the amplitudes of each of the
transfer functions between the three transmit and three receive antennas are
extremely similar.  This is to be expected, of course, since the "aperture"
of each array is very small compared to the distance between them.  What is
much more interesting and revealing is the relative phases.  Obviously this
requires coherent receivers and, ultimately, if you want to control the
spatial distribution of power (aka SDMA, or MIMO in some circles), coherent
transmitters. It turns out that just knowing the amplitude of the transfer
functions is not really all that useful for anything other than detecting a
broken solder joint:^)))
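
A small numeric sketch of that point (the matrices are invented for
illustration): two 2x2 channels with identical element amplitudes but
different relative phases behave very differently spatially, and only the
singular values (per-stream gains) reveal it.

    # Hypothetical sketch: same amplitudes, different phases, different channels.
    import numpy as np

    amps = np.array([[1.0, 1.0],
                     [1.0, 1.0]])

    # Case 1: all elements in phase -> rank-deficient, one usable stream.
    H_in_phase = amps * np.exp(1j * 0.0)

    # Case 2: one element rotated by 90 degrees -> two usable streams.
    phases = np.array([[0.0, 0.0],
                       [0.0, np.pi / 2]])
    H_rotated = amps * np.exp(1j * phases)

    for name, H in (("in-phase", H_in_phase), ("rotated ", H_rotated)):
        s = np.linalg.svd(H, compute_uv=False)   # singular values = stream gains
        print(name, np.round(s, 3))
    # in-phase [2.    0.   ]  -> rank 1; amplitude-only data can't tell you this
    # rotated  [1.848 0.765] -> two healthy streams, same element amplitudes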

 

Also, do not forget that depending on how these experiments were conducted,
the estimates are either of the RF channel itself (aka path loss), or of the
RF channel in combination with the transfer functions of the transmitters
and/or receivers.  What this means is that CALIBRATION is CRUCIAL!  Those
who do not calibrate are doomed to fail!!!!   I suspect that it is in
calibration where the major difference in performance between vendors'
products can be found :^))))
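
As a rough sketch of what that de-embedding can look like in the simplest
case (function and variable names are assumptions, and real setups need
per-chain matrix de-embedding rather than element-wise division): a
previously captured fixture response is divided out per subcarrier so that
what remains approximates the channel itself.

    # Hypothetical sketch: removing a known fixture/front-end response from a
    # measured channel estimate, per subcarrier and per tx/rx element.
    import numpy as np

    def calibrate(h_measured, h_fixture, floor=1e-12):
        """Divide out a previously captured fixture response, element-wise."""
        safe = np.where(np.abs(h_fixture) < floor, floor, h_fixture)
        return h_measured / safe

    n_sc, n_rx, n_tx = 64, 2, 2
    rng = np.random.default_rng(2)
    h_true = (rng.standard_normal((n_sc, n_rx, n_tx)) +
              1j * rng.standard_normal((n_sc, n_rx, n_tx)))

    # Toy fixture: 6 dB of loss and a linear phase slope across the band.
    slope = np.exp(-1j * 2 * np.pi * np.arange(n_sc) / n_sc)[:, None, None]
    h_fixture = 0.5 * slope * np.ones((n_sc, n_rx, n_tx))

    h_measured = h_fixture * h_true
    print(np.allclose(calibrate(h_measured, h_fixture), h_true))   # True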

 

It's complicated ...

 

  _____  

From: Bob McMahon [mailto:bob.mcmahon@broadcom.com] 
Sent: Tuesday, August 10, 2021 10:07 AM
To: dickroy@alum.mit.edu
Cc: Rodney W. Grimes; Cake List; Make-Wifi-fast;
starlink@lists.bufferbloat.net; codel; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

 

The slides show that for WiFi every transmission produces a complex
frequency response, aka the h-matrix. This is valid for that one
transmission only.  The slides show an amplitude plot for a 3-radio device,
hence the 9 elements of the h-matrix. It's assumed that the WiFi STA/AP is
stationary, so Doppler effects aren't a consideration. WiFi isn't a
car trying to connect to a cell tower.  The plot doesn't show the phase
effects, but they are included since the output of the channel estimate is a
complex frequency response. Each RX produces the h-matrix ahead of the MAC.
These may not be symmetric in the real world, but that's ok: transmission
and reception are each one-way only, i.e. treating the paths as reciprocal
and the matrix as hollow symmetric isn't going to be a "test blocker", as
the goal is to be able to use software and programmable devices to change
them in near real time. The current approach used by many, Butler matrices
to produce off-diagonal effects, is woefully inadequate. And we're paying
about $2.5K per Butler matrix.
 
<https://lh3.googleusercontent.com/WqWMFHFPo3ltkxkpoyvgPxgdFxmnZpVvpw0NcCTFh
GiOTjolvKbP4NugcE-vw1Q3vk9Z7R04YA1k3kQMvyiR5RhcHOjbXbsRMfjLBY-RYML2tFxovzMpT
www5UZiu0Xgxzhi8fFru_g> 
Bob

 

On Tue, Aug 10, 2021 at 9:13 AM Dick Roy <dickroy@alum.mit.edu> wrote:

Well, I hesitate to drag this out, however Maxwell's equations and the
invariance of the laws of physics ensure that all path loss matrices are
reciprocal.  What that means is that for any given set of fixed
boundary conditions (nothing moving/changing!), the propagation loss between
any two points in the domain is the same in both directions. The
"multipathing" in one direction is the same in the other because the
two-parameter (angle1,angle2) scattering cross sections of all objects
(remember they are fixed here) are independent of the ordering of the
angles.  

Very importantly, path loss is NOT the same as the link loss (aka link
budget) which involves tx power and rx noise figure (and in the case of
smart antennas, there is a link per spatial stream and how those links are
managed/controlled really matters, but let's just keep it simple for this
discussion) and these generally are different on both ends of a link for a
variety of reasons. The other very important issue is that of the
""measurement plane", or "where tx power and rx noise figure are being
measured/referenced to and how well the interface at that plane is
"matched".  We generally assume that the matching is perfect, however it
never is. All of these effects contribute to the link loss which determines
the strength of the signal coming out of the receiver (not the receive
antenna, the receiver) for a given signal strength coming out of the
transmitter (not the transmit antenna, the tx output port).   

In the real world, things change.  Sources and sinks move as do many of the
objects around them.  This creates a time-varying RF environment, and now
the path loss matrix is a function of time and a few other things, so it
matters WHEN something is transmitted, and WHEN it is received, and the two
WHEN's are generally separated by "the speed of light", which is roughly a
foot per nanosecond. As important is the fact that it's no longer really a path loss
matrix containing a single scalar because among other things, the time
varying environment induces change in the transmitted waveform on its way to
the receiver most commonly referred to as the Doppler effect which means
there is a frequency translation/shift for each (multi-)path of which there
are in general an uncountably infinite number because this is a continuous
world in which we live (the space quantization experiment being conducted in
the central US aside:^)). As a consequence of these physical laws, the
entries in the path loss matrix become complex functions of a number of
variables including time. These functions are quite often characterized in
terms of Doppler and delay-spread, terms used to describe in just a few
parameters the amount of "distortion" a complex function causes. 

Hope this helps ... probably a bit more than you really wanted to know as
queuing theorists, but ...

-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Rodney W. Grimes
Sent: Tuesday, August 10, 2021 7:10 AM
To: Bob McMahon
Cc: Cake List; Make-Wifi-fast; starlink@lists.bufferbloat.net;
codel@lists.bufferbloat.net; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
Internet Quality workshop CFP for the internet architecture board

> The distance matrix defines signal attenuations/loss between pairs.  It's
> straightforward to create a distance matrix that has hidden nodes because
> all "signal  loss" between pairs is defined.  Let's say a 120dB
attenuation
> path will cause a node to be hidden as an example.
> 
>      A    B     C    D
> A   -   35   120   65
> B         -      65   65
> C               -       65
> D                         -
> 
> So in the above, AC are hidden from each other but nobody else is. It does
> assume symmetry between pairs but that's typically true.

That is not correct: symmetry in the RF world, especially wifi, is rare
due to topology issues.  A high transmitter, A, and a low receiver, B,
has a good path A -> B but a very weak path B -> A.  Multipathing
is another major issue that causes asymmetry.

> 
> The RF device takes these distance matrices as settings and calculates the
> five branch tree values (as demonstrated in the video). There are
> limitations to solutions though but I've found those not to be an issue to
> date. I've been able to produce hidden nodes quite readily. Add the phase
> shifters and spatial stream powers can also be affected, but this isn't
> shown in this simple example.
> 
> Bob
> 
> On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> 
> > I guess it depends on what you are intending to test. If you are not
going
> > to
> > tinker with any of the over-the-air settings (including the number of
> > packets
> > transmitted in one aggregate), the details of what happen over the air
> > don't
> > matter much.
> >
> > But if you are going to be doing any tinkering with what is getting
sent,
> > and
> > you ignore the hidden transmitter type problems, you will create a
> > solution that
> > seems to work really well in the lab and falls on it's face out in the
> > wild
> > where spectrum overload and hidden transmitters are the norm (at least
in
> > urban
> > areas), not rare corner cases.
> >
> > you don't need to include them in every test, but you need to have a way
> > to
> > configure your lab to include them before you consider any
> > settings/algorithm
> > ready to try in the wild.
> >
> > David Lang
> >
> > On Mon, 2 Aug 2021, Bob McMahon wrote:
> >
> > > We find four nodes, a primary BSS and an adjunct one quite good for
lots
> > of
> > > testing.  The six nodes allows for a primary BSS and two adjacent
ones.
> > We
> > > want to minimize complexity to necessary and sufficient.
> > >
> > > The challenge we find is having variability (e.g. montecarlos) that's
> > > reproducible and has relevant information. Basically, the distance
> > matrices
> > > have h-matrices as their elements. Our chips can provide these
> > h-matrices.
> > >
> > > The parts for solid state programmable attenuators and phase shifters
> > > aren't very expensive. A device that supports a five branch tree and
2x2
> > > MIMO seems a very good starting point.
> > >
> > > Bob
> > >
> > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> > wrote:
> > >
> > >> On 8/2/21 4:16 PM, David Lang wrote:
> > >>> If you are going to setup a test environment for wifi, you need to
> > >> include the ability to make a fe cases that only happen with RF, not
> > with
> > >> wired networks and
> > >>> are commonly overlooked
> > >>>
> > >>> 1. station A can hear station B and C but they cannot hear each
other
> > >>> 2. station A can hear station B but station B cannot hear station A
3.
> > >> station A can hear that station B is transmitting, but not with a
strong
> > >> enough signal to
> > >>> decode the signal (yes in theory you can work around interference,
but
> > >> in practice interference is still a real thing)
> > >>>
> > >>> David Lang
> > >>>
> > >>
> > >> To add to this, I think you need lots of different station devices,
> > >> different capabilities (/n, /ac, /ax, etc)
> > >> different numbers of spatial streams, and different distances from
the
> > >> AP.  From download queueing perspective, changing
> > >> the capabilities may be sufficient while keeping all stations at same
> > >> distance.  This assumes you are not
> > >> actually testing the wifi rate-ctrl alg. itself, so different
throughput
> > >> levels for different stations would be enough.
> > >>
> > >> So, a good station emulator setup (and/or pile of real stations) and
a
> > few
> > >> RF chambers and
> > >> programmable attenuators and you can test that setup...
> > >>
> > >>  From upload perspective, I guess same setup would do the job.
> > >> Queuing/fairness might depend a bit more on the
> > >> station devices, emulated or otherwise, but I guess a clever AP could
> > >> enforce fairness in upstream direction
> > >> too by implementing per-sta queues.
> > >>
> > >> Thanks,
> > >> Ben
> > >>
> > >> --
> > >> Ben Greear <greearb@candelatech.com>
> > >> Candela Technologies Inc  http://www.candelatech.com
> > >>
> > >
> > >
> >
> 

> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink




[-- Attachment #2: Type: text/html, Size: 21528 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-10 18:11                                 ` Dick Roy
@ 2021-08-10 19:21                                   ` Bob McMahon
  2021-08-10 20:16                                     ` [Cerowrt-devel] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page! Dick Roy
  2021-09-02 17:36                                   ` [Cerowrt-devel] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board David P. Reed
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-08-10 19:21 UTC (permalink / raw)
  To: dickroy
  Cc: Rodney W. Grimes, Cake List, Make-Wifi-fast, starlink, codel,
	cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 16479 bytes --]

This amplitude-only channel estimate shown was taken from radios connected
using conducted equipment, i.e. cables. It illustrates how non-ideal
conducted-equipment-based testing is, i.e. our signal processing and MCS
rate selection engineers aren't being sufficiently challenged!

The cost of $2.5K for a Butler matrix is just one component. Each antenna
is connected to a programmable attenuator. Then the shielded cabling. Then
one of these per engineer, and tens to low hundreds per automated test
engineer. This doesn't include the cost of programmers to write the code.
The expenses grow quickly. Hence the idea to amortize a better design
across the industry (if viable).

Modeling the distance matrix (suggestions for a better name?) and realizing
D1 path loss using a five branch tree and programmable attenuators has
proven to work for testing things like hidden nodes and for TXOP
arbitration. The next missing piece is to realize the mixing, the h(n,n)
below, with programmability and at a reasonable price. That's where the
programmable phase shifters come in. Our chips will dump their channel
estimates relatively quickly, so we can run Monte Carlo runs and calibrate
the equipment, producing the spatial stream eigenvalues or condition numbers
as well. Early prototyping showed that phase shifters will affect spatial
stream powers per the algorithms, and this should work. Being able to affect
both the path loss and the mixing within 10 ms of a command seems a
reasonable ask if using solid-state parts. No need for Roombas.


[image: CodeCogsEqn (2).png]
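
A rough sketch only (the direct-plus-coupled-path model and all numbers are
assumptions, not the device's actual math): sweeping a phase shifter that
sits inside a combining network moves the spatial-stream gains (singular
values) and condition numbers that the dumped channel estimates would report.

    # Hypothetical sketch: effective channel = direct path + phase-shifted
    # coupled path; sweep the shifter and watch the stream gains move.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sc = 64
    h_direct = (rng.standard_normal((n_sc, 2, 2)) +
                1j * rng.standard_normal((n_sc, 2, 2)))
    h_coupled = 0.5 * (rng.standard_normal((n_sc, 2, 2)) +
                       1j * rng.standard_normal((n_sc, 2, 2)))

    def effective_channel(theta):
        """Direct path combined with a phase-shifted coupled path."""
        return h_direct + np.exp(1j * theta) * h_coupled

    for theta_deg in (0, 90, 180, 270):
        s = np.linalg.svd(effective_channel(np.deg2rad(theta_deg)),
                          compute_uv=False)          # per-subcarrier stream gains
        med = np.median(s, axis=0)
        med_cond = np.median(s[:, 0] / s[:, 1])      # per-subcarrier condition no.
        print(f"theta={theta_deg:3d} deg  stream gains={np.round(med, 2)}  "
              f"condition={med_cond:.2f}")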

Of course, all of these RF effects affect network availability and, hence,
queueing too. We've done a lot of work with iperf 2 around latencies to
help qualify that. That's released as open source.

Complex indeed,

Bob

On Tue, Aug 10, 2021 at 11:11 AM Dick Roy <dickroy@alum.mit.edu> wrote:

> To add a bit more, as is easily seen below, the amplitudes of each of the
> transfer functions between the three transmit and three receive antennas
> are extremely similar.  This is to be expected, of course, since the
> “aperture” of each array is very small compared to the distance between
> them.  What is much more interesting and revealing is the relative phases.
> Obviously this requires coherent receivers, and ultimately if you want to
> control the spatial distribution of power (aka SDMA (or MIMO in some
> circles) coherent transmitters. It turns out that just knowing the
> amplitude of the transfer functions is not really all that useful for
> anything other than detecting a broken solder joint:^)))
>
>
>
> Also, do not forget that depending how these experiments were conducted,
> the estimates are either of the RF channel itself (aka path loss),or of the
> RF channel in combination with the transfer functions of the transmitters
> and//or receivers.  What this means is the CALIBRATION is CRUCIAL!  Those
> who do not calibrate, are doomed to fail!!!!   I suspect that it is in
> calibration where the major difference in performance between vendors’’
> products can be found :^))))
>
>
>
> It’s complicated …
>
>
> ------------------------------
>
> *From:* Bob McMahon [mailto:bob.mcmahon@broadcom.com]
> *Sent:* Tuesday, August 10, 2021 10:07 AM
> *To:* dickroy@alum.mit.edu
> *Cc:* Rodney W. Grimes; Cake List; Make-Wifi-fast;
> starlink@lists.bufferbloat.net; codel; cerowrt-devel; bloat
> *Subject:* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug
> 2: Internet Quality workshop CFP for the internet architecture board
>
>
>
> The slides show that for WiFi every transmission produces a complex
> frequency response, aka the h-matrix. This is valid for that one
> transmission only.  The slides show an amplitude plot for a 3 radio device
> hence the 9 elements per the h-matrix. It's assumed that the WiFi STA/AP is
> stationary such that doppler effects aren't a consideration. WiFi isn't a
> car trying to connect to a cell tower.  The plot doesn't show the phase
> effects but they are included as the output of the channel estimate is a
> complex frequency response. Each RX produces the h-matrix ahead of the MAC.
> These may not be symmetric in the real world but that's ok as
> transmission and reception is one way only, i.e. the treating them as
> repcripocol and the matrix as hollows symmetric isn't going to be a "test
> blocker" as the goal is to be able to use software and programmable devices
> to change them in near real time. The current approach used by many using
> butler matrices to produce off-diagonal effects  is woefully inadequate.
> And we're paying about $2.5K per each butler.
>
> Bob
>
>
>
> On Tue, Aug 10, 2021 at 9:13 AM Dick Roy <dickroy@alum.mit.edu> wrote:
>
> Well, I hesitate to drag this out, however Maxwell's equations and the
> invariance of the laws of physics ensure that all path loss matrices are
> reciprocal.  What that means is that at any for any given set of fixed
> boundary conditions (nothing moving/changing!), the propagation loss
> between
> any two points in the domain is the same in both directions. The
> "multipathing" in one direction is the same in the other because the
> two-parameter (angle1,angle2) scattering cross sections of all objects
> (remember they are fixed here) are independent of the ordering of the
> angles.
>
> Very importantly, path loss is NOT the same as the link loss (aka link
> budget) which involves tx power and rx noise figure (and in the case of
> smart antennas, there is a link per spatial stream and how those links are
> managed/controlled really matters, but let's just keep it simple for this
> discussion) and these generally are different on both ends of a link for a
> variety of reasons. The other very important issue is that of the
> ""measurement plane", or "where tx power and rx noise figure are being
> measured/referenced to and how well the interface at that plane is
> "matched".  We generally assume that the matching is perfect, however it
> never is. All of these effects contribute to the link loss which determines
> the strength of the signal coming out of the receiver (not the receive
> antenna, the receiver) for a given signal strength coming out of the
> transmitter (not the transmit antenna, the tx output port).
>
> In the real world, things change.  Sources and sinks move as do many of the
> objects around them.  This creates a time-varying RF environment, and now
> the path loss matrix is a function of time and a few others things, so it
> matters WHEN something is transmitted, and WHEN it is received, and the two
> WHEN's are generally separated by "the speed of light" which is a ft/ns
> roughly. As important is the fact that it's no longer really a path loss
> matrix containing a single scalar because among other things, the time
> varying environment induces change in the transmitted waveform on its way
> to
> the receiver most commonly referred to as the Doppler effect which means
> there is a frequency translation/shift for each (multi-)path of which there
> are in general an uncountably infinite number because this is a continuous
> world in which we live (the space quantization experiment being conducted
> in
> the central US aside:^)). As a consequence of these physical laws, the
> entries in the path loss matrix become complex functions of a number of
> variables including time. These functions are quite often characterized in
> terms of Doppler and delay-spread, terms used to describe in just a few
> parameters the amount of "distortion" a complex function causes.
>
> Hope this helps ... probably a bit more than you really wanted to know as
> queuing theorists, but ...
>
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf
> Of
> Rodney W. Grimes
> Sent: Tuesday, August 10, 2021 7:10 AM
> To: Bob McMahon
> Cc: Cake List; Make-Wifi-fast; starlink@lists.bufferbloat.net;
> codel@lists.bufferbloat.net; cerowrt-devel; bloat
> Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
> Internet Quality workshop CFP for the internet architecture board
>
> > The distance matrix defines signal attenuations/loss between pairs.  It's
> > straightforward to create a distance matrix that has hidden nodes because
> > all "signal  loss" between pairs is defined.  Let's say a 120dB
> attenuation
> > path will cause a node to be hidden as an example.
> >
> >      A    B     C    D
> > A   -   35   120   65
> > B         -      65   65
> > C               -       65
> > D                         -
> >
> > So in the above, AC are hidden from each other but nobody else is. It
> does
> > assume symmetry between pairs but that's typically true.
>
> That is not correct, symmetry in the RF world, especially wifi, is rare
> due to topology issues.  A high transmitter, A,  and a low receiver, B,
> has a good path A - > B, but a very weak path B -> A.   Multipathing
> is another major issue that causes assymtry.
>
> >
> > The RF device takes these distance matrices as settings and calculates
> the
> > five branch tree values (as demonstrated in the video). There are
> > limitations to solutions though but I've found those not to be an issue
> to
> > date. I've been able to produce hidden nodes quite readily. Add the phase
> > shifters and spatial stream powers can also be affected, but this isn't
> > shown in this simple example.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> >
> > > I guess it depends on what you are intending to test. If you are not
> going
> > > to
> > > tinker with any of the over-the-air settings (including the number of
> > > packets
> > > transmitted in one aggregate), the details of what happen over the air
> > > don't
> > > matter much.
> > >
> > > But if you are going to be doing any tinkering with what is getting
> sent,
> > > and
> > > you ignore the hidden transmitter type problems, you will create a
> > > solution that
> > > seems to work really well in the lab and falls on it's face out in the
> > > wild
> > > where spectrum overload and hidden transmitters are the norm (at least
> in
> > > urban
> > > areas), not rare corner cases.
> > >
> > > you don't need to include them in every test, but you need to have a
> way
> > > to
> > > configure your lab to include them before you consider any
> > > settings/algorithm
> > > ready to try in the wild.
> > >
> > > David Lang
> > >
> > > On Mon, 2 Aug 2021, Bob McMahon wrote:
> > >
> > > > We find four nodes, a primary BSS and an adjunct one quite good for
> lots
> > > of
> > > > testing.  The six nodes allows for a primary BSS and two adjacent
> ones.
> > > We
> > > > want to minimize complexity to necessary and sufficient.
> > > >
> > > > The challenge we find is having variability (e.g. montecarlos) that's
> > > > reproducible and has relevant information. Basically, the distance
> > > matrices
> > > > have h-matrices as their elements. Our chips can provide these
> > > h-matrices.
> > > >
> > > > The parts for solid state programmable attenuators and phase shifters
> > > > aren't very expensive. A device that supports a five branch tree and
> 2x2
> > > > MIMO seems a very good starting point.
> > > >
> > > > Bob
> > > >
> > > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> > > wrote:
> > > >
> > > >> On 8/2/21 4:16 PM, David Lang wrote:
> > > >>> If you are going to setup a test environment for wifi, you need to
> > > >> include the ability to make a fe cases that only happen with RF, not
> > > with
> > > >> wired networks and
> > > >>> are commonly overlooked
> > > >>>
> > > >>> 1. station A can hear station B and C but they cannot hear each
> other
> > > >>> 2. station A can hear station B but station B cannot hear station A
> 3.
> > > >> station A can hear that station B is transmitting, but not with a
> strong
> > > >> enough signal to
> > > >>> decode the signal (yes in theory you can work around interference,
> but
> > > >> in practice interference is still a real thing)
> > > >>>
> > > >>> David Lang
> > > >>>
> > > >>
> > > >> To add to this, I think you need lots of different station devices,
> > > >> different capabilities (/n, /ac, /ax, etc)
> > > >> different numbers of spatial streams, and different distances from
> the
> > > >> AP.  From download queueing perspective, changing
> > > >> the capabilities may be sufficient while keeping all stations at
> same
> > > >> distance.  This assumes you are not
> > > >> actually testing the wifi rate-ctrl alg. itself, so different
> throughput
> > > >> levels for different stations would be enough.
> > > >>
> > > >> So, a good station emulator setup (and/or pile of real stations) and
> a
> > > few
> > > >> RF chambers and
> > > >> programmable attenuators and you can test that setup...
> > > >>
> > > >>  From upload perspective, I guess same setup would do the job.
> > > >> Queuing/fairness might depend a bit more on the
> > > >> station devices, emulated or otherwise, but I guess a clever AP
> could
> > > >> enforce fairness in upstream direction
> > > >> too by implementing per-sta queues.
> > > >>
> > > >> Thanks,
> > > >> Ben
> > > >>
> > > >> --
> > > >> Ben Greear <greearb@candelatech.com>
> > > >> Candela Technologies Inc  http://www.candelatech.com
> > > >>
> > > >
> > > >
> > >
> >
>
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
>


[-- Attachment #1.2: Type: text/html, Size: 23056 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* [Cerowrt-devel] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page!
  2021-08-10 19:21                                   ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-08-10 20:16                                     ` Dick Roy
  2021-08-10 20:33                                       ` [Cerowrt-devel] [Starlink] " Jeremy Austin
  0 siblings, 1 reply; 108+ messages in thread
From: Dick Roy @ 2021-08-10 20:16 UTC (permalink / raw)
  To: 'Bob McMahon'
  Cc: 'Rodney W. Grimes', 'Cake List',
	'Make-Wifi-fast', starlink, 'codel',
	'cerowrt-devel', 'bloat'

[-- Attachment #1: Type: text/plain, Size: 199 bytes --]

You may find this of some relevance!

 

https://arstechnica.com/tech-policy/2021/07/ajit-pai-apparently-mismanaged-9-billion-fund-new-fcc-boss-starts-cleanup/

 

Cheers (or whatever!),

 

RR

 


[-- Attachment #2: Type: text/html, Size: 2625 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page!
  2021-08-10 20:16                                     ` [Cerowrt-devel] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page! Dick Roy
@ 2021-08-10 20:33                                       ` Jeremy Austin
  2021-08-10 20:44                                         ` David Lang
  0 siblings, 1 reply; 108+ messages in thread
From: Jeremy Austin @ 2021-08-10 20:33 UTC (permalink / raw)
  To: dickroy
  Cc: Bob McMahon, starlink, Make-Wifi-fast, Cake List, codel,
	cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 1316 bytes --]

A 5.7% reduction in funded locations for StarLink is… not dramatic. If the
project falls on that basis, they've got bigger problems. Much of that
discrepancy falls squarely on the shoulders of the FCC and incumbent ISPs
filing form 477, as well as the RDOF auction being held before improving
mapping — as Rosenworcel pointed out. The state of broadband mapping is
still dire.

If I felt like the reallocation of funds would be 100% guaranteed to
benefit the end Internet user… I'd cheer too.

If.

JHA

On Tue, Aug 10, 2021 at 12:16 PM Dick Roy <dickroy@alum.mit.edu> wrote:

> You may find this of some relevance!
>
>
>
>
> https://arstechnica.com/tech-policy/2021/07/ajit-pai-apparently-mismanaged-9-billion-fund-new-fcc-boss-starts-cleanup/
>
>
>
> Cheers (or whatever!),
>
>
>
> RR
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>


-- 
--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks
preseem.com

Book a Call: https://app.hubspot.com/meetings/jeremy548
Phone: 1-833-733-7336 x718
Email: jeremy@preseem.com

Stay Connected with Newsletters & More:
*https://preseem.com/stay-connected/* <https://preseem.com/stay-connected/>

[-- Attachment #2: Type: text/html, Size: 4227 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page!
  2021-08-10 20:33                                       ` [Cerowrt-devel] [Starlink] " Jeremy Austin
@ 2021-08-10 20:44                                         ` David Lang
  2021-08-10 22:54                                           ` Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: David Lang @ 2021-08-10 20:44 UTC (permalink / raw)
  To: Jeremy Austin
  Cc: dickroy, Cake List, Make-Wifi-fast, Bob McMahon, starlink, codel,
	cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 1979 bytes --]

the biggest problem starlink faces is shipping enough devices (and launching the 
satellites to support them), not demand. There are enough people interested in 
paying full price that if the broadband subsidies did not exist, it wouldn't 
reduce the demand noticeably.

but if the feds are handing out money, SpaceX is foolish not to apply for it.

David Lang

On Tue, 10 Aug 2021, Jeremy Austin wrote:

> Date: Tue, 10 Aug 2021 12:33:11 -0800
> From: Jeremy Austin <jeremy@aterlo.com>
> To: dickroy@alum.mit.edu
> Cc: Cake List <cake@lists.bufferbloat.net>,
>     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
>     Bob McMahon <bob.mcmahon@broadcom.com>, starlink@lists.bufferbloat.net,
>     codel <codel@lists.bufferbloat.net>,
>     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
>     bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Starlink] Anhyone have a spare couple a hundred million ... Elon
>      may need to start a go-fund-me page!
> 
> A 5.7% reduction in funded locations for StarLink is… not dramatic. If the
> project falls on that basis, they've got bigger problems. Much of that
> discrepancy falls squarely on the shoulders of the FCC and incumbent ISPs
> filing form 477, as well as the RDOF auction being held before improving
> mapping — as Rosenworcel pointed out. The state of broadband mapping is
> still dire.
>
> If I felt like the reallocation of funds would be 100% guaranteed to
> benefit the end Internet user… I'd cheer too.
>
> If.
>
> JHA
>
> On Tue, Aug 10, 2021 at 12:16 PM Dick Roy <dickroy@alum.mit.edu> wrote:
>
>> You may find this of some relevance!
>>
>>
>>
>>
>> https://arstechnica.com/tech-policy/2021/07/ajit-pai-apparently-mismanaged-9-billion-fund-new-fcc-boss-starts-cleanup/
>>
>>
>>
>> Cheers (or whatever!),
>>
>>
>>
>> RR
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>
>
>

[-- Attachment #2: Type: text/plain, Size: 149 bytes --]

_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page!
  2021-08-10 20:44                                         ` David Lang
@ 2021-08-10 22:54                                           ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-08-10 22:54 UTC (permalink / raw)
  To: David Lang
  Cc: Jeremy Austin, dickroy, Cake List, Make-Wifi-fast, starlink,
	codel, cerowrt-devel, bloat


[-- Attachment #1.1: Type: text/plain, Size: 10838 bytes --]

<diatribe on> sorry about that

The below was written two decades ago and we're still fiddling around with
fraudband. Hey, today in 2021, Comcast will sell a select few 2 Gb/s
symmetric over a fiber strand using a Juniper switch, leased of course,
designed in 2011. Talk about not keeping up with modern manufacturing of
ASICs and the associated energy efficiencies. In the meantime we continue
destroying the planet, Musk wants our offspring to live on Mars, and Bezos
thinks he's creating a new industry in space tourism. On to the writeup about
why we need to rethink broadband. Eli Noam was also quite prescient, writing
in 1994 (http://www.columbia.edu/dlc/wp/citi/citinoam11.html).

Rather than realistic, I think you are instead being 'reasonable.' There is
a big difference. I am reminded of a quote:

"A reasonable man adapts himself to suit his environment. An unreasonable
man persists in attempting to adapt his environment to suit himself.
Therefore, all progress depends on the unreasonable man."
--George Bernard Shaw

Most CEOs, excluding those at start-ups, fall into the reasonable-man camp
(though they were unreasonable once). Make the most of your current surroundings
while expending the minimal effort. It's good business sense for short term
results, but ignores the long term. It dictates a forward strategy of
incrementalism. It ignores the cumulative costs of the incremental
improvements and the diminishing returns of each successive increment.
Therefore each new increment has to be spaced farther out in time, which is
not desirable from a long-term business point of view. That business case
deteriorates longer term, but is easier to see shorter term. Short-term
thinking, driven by Wall Street, seems to be the only mode corporate
America can operate in.

This incrementalism mentality with 18 month upgrade cycles is fine for
consumer gadgets where the consumer knows they will have an accounting loss
on the gadget from the day they buy it. The purchaser of the gadget never
expects to make money on it. That's why they are called "consumers." That's
one of the only environments where 18 month upgrade cycles can persist.

Network infrastructure deployment is an entirely different food chain.
Under the current models, the purchaser of equipment (e.g. a service
provider) is not a consumer. It is a business that has to make a net profit
off selling services enabled by the equipment. This defies 18 month upgrade
cycles of "consumer" goods. A couple thousand bucks per subscriber takes a
long time for a network operator to recover, when you rely on a couple
percent of that, in NET income not revenue, per month. It is not conducive
to Wall Street driven companies. Thus, the next step has to be a 10-year
step.
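
[Editor's note: a back-of-the-envelope version of the recovery-time arithmetic above, using assumed round numbers: $2,000 of equipment per subscriber and monthly net income of 2% of that figure. The numbers are illustrative, not from the original post.]

capex_per_sub = 2000.0           # assumed "couple thousand bucks" per subscriber
net_fraction_per_month = 0.02    # assumed "couple percent ... in NET income" per month
monthly_net = capex_per_sub * net_fraction_per_month   # $40/month
payback_months = capex_per_sub / monthly_net           # 50 months
print(f"{payback_months:.0f} months (~{payback_months / 12:.1f} years) to recover the capex")
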

Yet, consumers spend thousands every couple years on consumables they will
lose money on (essentially a 100% loss). Many even finance these purchases
at the ridiculous rates of credit cards, adding further to their accounting
loss. The value of these goods and services to the consumer is
justified/rationalized in non-accounting-based ways. In that light,
customer-owned networks are not such a stretch. In fact they would be an
asset that can be written off for tax purposes. The main difference is it
isn't in your physical possession, in your home, so you can't show people
your new gadget. Not directly anyway.

The "realistic" view of network infrastructure deployment (as opposed to
the reasonable view) is that today's access network infrastructure is the
wrong platform to grow from, and the wrong business model to operate under.
It can't grow like a consumer product (CD players, DVD players, PC's, etc)
because it is not a consumer product and the consumer does not have the
freedom of choice in content and applications providers (which was an
important part of the growth of those consumer markets).

Piling new money into the old infrastructure and its operating model
failure is not a realistic approach, because of diminishing returns. It was
never intended to provide real broadband connectivity, to each user, and
the operating costs are way too high. Besides, real broadband undermines
the legacy businesses of the monopoly owners.

A 100x increase in the base platform is needed in order to have a platform
that accommodates future growth in services and applications. That way it
doesn't require yet another infrastructure incremental upgrade each step of
the way. This connectivity platform also must be decoupled from the content
and services.

Access network growth cannot progress in small increments or on 18 month
upgrade cycles. It can't be small increments because these increments
enable nothing new and add little if any tangible value. They simply make
the present-day excuses for applications less annoying to use. This
approach will never make it through the next increment, and is arguably the
chasm where we sit today. It can't be 18 month cycles because the
equipment's accounting life is much longer than that. It will be neither
paid off nor written off after 18 months.

The equipment for 100Mbps FTTH is very nearly the same cost as the
equipment used for 256kbps DSL service. It is cheaper than that DSL
equipment was 2 years ago, but they are both moving targets. What costs too
much money is the deployment labor in the US (but not in many Asian
countries), the permits, the right-of-way, and the hassles thereof. Those
aren't getting cheaper with time. Within the next year or two, it will cost
more to wait than to deploy now. The equipment cost decreases will be less
than the construction cost increases per year. The business case gets
harder by waiting. Yes, it requires taking risk that boils down to
believing the "build it and they will come" mantra.

Contrary to popular belief, it is not a chicken-or-egg problem. The
connectivity has to be there first, before the application development
resources will be allocated. Imagine a start-up trying to get funding for
an application that requires 100Mbps peak network connections. They'd be
laughed out of the VC's office. Same holds for trying to get resources to
develop the same application within a company's R&D budget. No support for
its development because there's no platform or marketplace to sell it into.
There are plenty of startups working on FTTH equipment and infrastructure
products though.

Would any of today's PC applications have had a snowball's chance in hell
of getting funding if they were pitched back in the days when the PC
platform was based on CPU speeds in the tens of MHz, 2Meg of RAM, a 50Meg
hard drive, and a 16 bit ISA bus? No. Few saw any reason we'd ever need
more than that. After the next 10x improvement, people said the same thing:
Why do we need more? Only after about 100x improvement did people finally
stop saying that, though I've noticed it has returned of-late. The cost
declines were a vital part of this too, along with the performance
increases.

The lack of profitability in today's consumer data services, and the low
subscription percentages even though there is fairly wide availability,
do not mean that the "build it and they will come" mantra has failed for
broadband. There is no broadband, so the mantra has not even been tested
yet. Build it, and operate it under an open access model, with content and
connectivity as separate entities, and they will come. Without that second
condition, I have serious doubts about the possibility for successful
PC-like growth in broadband, even if it is built.

On Tue, Aug 10, 2021 at 1:44 PM David Lang <david@lang.hm> wrote:

> the biggest problem starlink faces is shipping enough devices (and
> launching the
> satellites to support them), not demand. There are enough people
> interested in
> paying full price that if the broadband subsidies did not exist, it
> wouldn't
> reduce the demand noticeably.
>
> but if the feds are handing out money, SpaceX is foolish not to apply for
> it.
>
> David Lang
>
> On Tue, 10 Aug 2021, Jeremy Austin wrote:
>
> > Date: Tue, 10 Aug 2021 12:33:11 -0800
> > From: Jeremy Austin <jeremy@aterlo.com>
> > To: dickroy@alum.mit.edu
> > Cc: Cake List <cake@lists.bufferbloat.net>,
> >     Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>,
> >     Bob McMahon <bob.mcmahon@broadcom.com>,
> starlink@lists.bufferbloat.net,
> >     codel <codel@lists.bufferbloat.net>,
> >     cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
> >     bloat <bloat@lists.bufferbloat.net>
> > Subject: Re: [Starlink] Anhyone have a spare couple a hundred million
> ... Elon
> >      may need to start a go-fund-me page!
> >
> > A 5.7% reduction in funded locations for StarLink is… not dramatic. If
> the
> > project falls on that basis, they've got bigger problems. Much of that
> > discrepancy falls squarely on the shoulders of the FCC and incumbent ISPs
> > filing form 477, as well as the RDOF auction being held before improving
> > mapping — as Rosenworcel pointed out. The state of broadband mapping is
> > still dire.
> >
> > If I felt like the reallocation of funds would be 100% guaranteed to
> > benefit the end Internet user… I'd cheer too.
> >
> > If.
> >
> > JHA
> >
> > On Tue, Aug 10, 2021 at 12:16 PM Dick Roy <dickroy@alum.mit.edu> wrote:
> >
> >> You may find this of some relevance!
> >>
> >>
> >>
> >>
> >>
> https://arstechnica.com/tech-policy/2021/07/ajit-pai-apparently-mismanaged-9-billion-fund-new-fcc-boss-starts-cleanup/
> >>
> >>
> >>
> >> Cheers (or whatever!),
> >>
> >>
> >>
> >> RR
> >>
> >>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >>
> >
> >
> >_______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>


[-- Attachment #1.2: Type: text/html, Size: 16565 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4206 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-08-10 18:11                                 ` Dick Roy
  2021-08-10 19:21                                   ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
@ 2021-09-02 17:36                                   ` David P. Reed
  2021-09-03 14:35                                     ` [Bloat] [Cake] [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Matt Mathis
  1 sibling, 1 reply; 108+ messages in thread
From: David P. Reed @ 2021-09-02 17:36 UTC (permalink / raw)
  To: dickroy
  Cc: 'Bob McMahon', starlink, 'Make-Wifi-fast',
	'Cake List', 'codel', 'cerowrt-devel',
	'bloat', 'Rodney W. Grimes'

[-- Attachment #1: Type: text/plain, Size: 17037 bytes --]


I just want to thank Dick Roy for backing up the arguments I've been making about physical RF communications for many years, and clarifying terminology here. I'm not the expert - Dick is an expert with real practical and theoretical experience - but what I've found over the years is that many who consider themselves "experts" say things that are actually nonsense about radio systems.
 
It seems to me that Starlink is based on a propagation model that is quite simplistic, and probably far enough from correct that what seems "obvious" will turn out not to be true. That doesn't stop Musk and cronies from asserting these things as absolute truths (backed by actual professors, especially professors of Economics like Coase, but also CS professors, network protocol experts, etc. who aren't physicists or practicing RF engineers).
 
The fact is that we don't really know how to build a scalable LEO system. Models can be useful, but a model can be a trap that causes even engineers to be cocky. Or as the saying goes, a Clear View doesn't mean a Short Distance.
 
If there are 40 satellites serving 10,000 ground terminals simultaneously, exactly what is the propagation environment like? I can tell you one thing: if the phased array is digitized at some sample rate and some equalization and some quantization, the propagation REALLY matters in serving those 10,000 ground terminals scattered randomly on terrain that is not optically flat and not fully absorbent.
 
So how will Starlink scale? I think we literally don't know. And the modeling matters.
 
Recently a real propagation expert (Ted Rapaport and his students) did a study of how well 70 GHz RF signals propagate in an urban environment - Brooklyn.  The standard model would say that coverage would be terrible! Why? Because supposedly 70 GHz is like visible light - line of sight is required or nothing works.
 
But in fact, Ted, whom I've known from being on the FCC Technological Advisory Committee (TAC) together when it was actually populated with engineers and scientists, not lobbyists, discovered that scattering and diffraction at 70 GHz in an urban environment significantly expands coverage of a single transmitter. Remarkably so. Enough that "cellular architecture" doesn't make sense in that propagation environment.
 
So all the professional experts are starting from the wrong place, and amateurs perhaps even more so.
 
I hope Starlink views itself as a "research project". I'm afraid it doesn't - partly driven by Musk, but equally driven by the FCC itself, which demands that, before a system is deployed, the entire plan be shown to work (which would require a "model" that is actually unknowable because something like this has never been tried). This is a problem with today's regulation of spectrum - experiments are barred, both by law, and by competitors who can claim your system will destroy theirs and not work.
 
But it is also a problem when "fans" start setting expectations way too high. Like claiming that Starlink will eliminate any need for fiber. We don't know that at all!
 
 
 
 
 
 
 
On Tuesday, August 10, 2021 2:11pm, "Dick Roy" <dickroy@alum.mit.edu> said:




To add a bit more, as is easily seen below, the amplitudes of each of the transfer functions between the three transmit and three receive antennas are extremely similar.  This is to be expected, of course, since the “aperture” of each array is very small compared to the distance between them.  What is much more interesting and revealing is the relative phases.  Obviously this requires coherent receivers and, ultimately, if you want to control the spatial distribution of power (aka SDMA, or MIMO in some circles), coherent transmitters. It turns out that just knowing the amplitude of the transfer functions is not really all that useful for anything other than detecting a broken solder joint:^)))
 
Also, do not forget that depending on how these experiments were conducted, the estimates are either of the RF channel itself (aka path loss), or of the RF channel in combination with the transfer functions of the transmitters and/or receivers.  What this means is that CALIBRATION is CRUCIAL!  Those who do not calibrate are doomed to fail!!!!   I suspect that it is in calibration where the major difference in performance between vendors’ products can be found :^))))
 
It’s complicated … 
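
[Editor's note: a small numpy sketch of the calibration point above. What a receiver estimates is the cascade of the transmit chain, the over-the-air channel, and the receive chain; unless the front-end responses are calibrated out, the recorded matrix is not the path loss alone, and even its amplitudes differ. All matrices below are made up for illustration.]

import numpy as np

rng = np.random.default_rng(0)
def cplx(shape):
    # Random complex entries, purely illustrative.
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

H_air = cplx((3, 3))          # true over-the-air channel at one frequency
G_tx = np.diag(cplx(3))       # per-chain transmit front-end responses
G_rx = np.diag(cplx(3))       # per-chain receive front-end responses

H_measured = G_rx @ H_air @ G_tx                      # what the receiver actually sees
H_calibrated = np.linalg.inv(G_rx) @ H_measured @ np.linalg.inv(G_tx)

print(np.allclose(H_calibrated, H_air))               # True: calibration recovers the channel
print(np.allclose(np.abs(H_measured), np.abs(H_air))) # False: uncalibrated amplitudes are wrong
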
 


From: Bob McMahon [mailto:bob.mcmahon@broadcom.com] 
Sent: Tuesday, August 10, 2021 10:07 AM
To: dickroy@alum.mit.edu
Cc: Rodney W. Grimes; Cake List; Make-Wifi-fast; starlink@lists.bufferbloat.net; codel; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
 

The slides show that for WiFi every transmission produces a complex frequency response, aka the h-matrix. This is valid for that one transmission only.  The slides show an amplitude plot for a 3-radio device, hence the 9 elements per h-matrix. It's assumed that the WiFi STA/AP is stationary such that Doppler effects aren't a consideration. WiFi isn't a car trying to connect to a cell tower.  The plot doesn't show the phase effects but they are included, as the output of the channel estimate is a complex frequency response. Each RX produces the h-matrix ahead of the MAC. These may not be symmetric in the real world but that's ok as transmission and reception is one way only, i.e. treating them as reciprocal and the matrix as hollow symmetric isn't going to be a "test blocker", as the goal is to be able to use software and programmable devices to change them in near real time. The current approach used by many, using Butler matrices to produce off-diagonal effects, is woefully inadequate. And we're paying about $2.5K per Butler.

Bob
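
[Editor's note: a small sketch of what "the h-matrix" refers to above: per received transmission the baseband reports, for each OFDM subcarrier, a 3x3 complex matrix (9 elements for a 3-radio device). An amplitude-only plot discards the phase half of that information. Shapes and values are illustrative.]

import numpy as np

n_rx, n_tx, n_subcarriers = 3, 3, 64     # illustrative 3x3 device, 64 subcarriers
rng = np.random.default_rng(1)
# One channel estimate per transmission: a complex frequency response per subcarrier.
H = (rng.standard_normal((n_subcarriers, n_rx, n_tx))
     + 1j * rng.standard_normal((n_subcarriers, n_rx, n_tx)))

amplitude_db = 20 * np.log10(np.abs(H))  # what an amplitude plot shows
phase_rad = np.angle(H)                  # the part an amplitude plot discards

print(H.shape, amplitude_db.shape, phase_rad.shape)   # (64, 3, 3) each
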
 


On Tue, Aug 10, 2021 at 9:13 AM Dick Roy <[ dickroy@alum.mit.edu ]( mailto:dickroy@alum.mit.edu )> wrote:
Well, I hesitate to drag this out, however Maxwell's equations and the
 invariance of the laws of physics ensure that all path loss matrices are
 reciprocal.  What that means is that for any given set of fixed
 boundary conditions (nothing moving/changing!), the propagation loss between
 any two points in the domain is the same in both directions. The
 "multipathing" in one direction is the same in the other because the
 two-parameter (angle1,angle2) scattering cross sections of all objects
 (remember they are fixed here) are independent of the ordering of the
 angles.  

 Very importantly, path loss is NOT the same as the link loss (aka link
 budget) which involves tx power and rx noise figure (and in the case of
 smart antennas, there is a link per spatial stream and how those links are
 managed/controlled really matters, but let's just keep it simple for this
 discussion) and these generally are different on both ends of a link for a
 variety of reasons. The other very important issue is that of the
 ""measurement plane", or "where tx power and rx noise figure are being
 measured/referenced to and how well the interface at that plane is
 "matched".  We generally assume that the matching is perfect, however it
 never is. All of these effects contribute to the link loss which determines
 the strength of the signal coming out of the receiver (not the receive
 antenna, the receiver) for a given signal strength coming out of the
 transmitter (not the transmit antenna, the tx output port).   

 In the real world, things change.  Sources and sinks move as do many of the
 objects around them.  This creates a time-varying RF environment, and now
 the path loss matrix is a function of time and a few other things, so it
 matters WHEN something is transmitted, and WHEN it is received, and the two
 WHEN's are generally separated by "the speed of light" which is a ft/ns
 roughly. As important is the fact that it's no longer really a path loss
 matrix containing a single scalar because among other things, the time
 varying environment induces change in the transmitted waveform on its way to
 the receiver most commonly referred to as the Doppler effect which means
 there is a frequency translation/shift for each (multi-)path of which there
 are in general an uncountably infinite number because this is a continuous
 world in which we live (the space quantization experiment being conducted in
 the central US aside:^)). As a consequence of these physical laws, the
 entries in the path loss matrix become complex functions of a number of
 variables including time. These functions are quite often characterized in
 terms of Doppler and delay-spread, terms used to describe in just a few
 parameters the amount of "distortion" a complex function causes. 

 Hope this helps ... probably a bit more than you really wanted to know as
 queuing theorists, but ...
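
[Editor's note: a tiny sketch of the distinction drawn above between path loss and link loss. The same path loss is used in both directions (reciprocity), but different transmit powers and receiver noise figures at each end produce different received SNRs. All numbers are assumed for illustration.]

import math

def rx_snr_db(tx_power_dbm, path_loss_db, noise_figure_db,
              bandwidth_hz=20e6, thermal_dbm_per_hz=-174.0):
    # Very simplified one-direction link budget.
    noise_floor_dbm = thermal_dbm_per_hz + 10 * math.log10(bandwidth_hz) + noise_figure_db
    return tx_power_dbm - path_loss_db - noise_floor_dbm

PATH_LOSS_DB = 70.0   # reciprocal: same value A -> B and B -> A for fixed boundary conditions

# Asymmetric ends: A transmits hotter, B has the quieter receiver.
snr_a_to_b = rx_snr_db(tx_power_dbm=23, path_loss_db=PATH_LOSS_DB, noise_figure_db=7)
snr_b_to_a = rx_snr_db(tx_power_dbm=15, path_loss_db=PATH_LOSS_DB, noise_figure_db=4)
print(f"A->B SNR {snr_a_to_b:.1f} dB, B->A SNR {snr_b_to_a:.1f} dB")   # 47.0 vs 42.0
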

 -----Original Message-----
 From: Starlink [mailto:[ starlink-bounces@lists.bufferbloat.net ]( mailto:starlink-bounces@lists.bufferbloat.net )] On Behalf Of
 Rodney W. Grimes
 Sent: Tuesday, August 10, 2021 7:10 AM
 To: Bob McMahon
 Cc: Cake List; Make-Wifi-fast; [ starlink@lists.bufferbloat.net ]( mailto:starlink@lists.bufferbloat.net );
[ codel@lists.bufferbloat.net ]( mailto:codel@lists.bufferbloat.net ); cerowrt-devel; bloat
 Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
 Internet Quality workshop CFP for the internet architecture board

 > The distance matrix defines signal attenuations/loss between pairs.  It's
 > straightforward to create a distance matrix that has hidden nodes because
 > all "signal  loss" between pairs is defined.  Let's say a 120dB
 attenuation
 > path will cause a node to be hidden as an example.
 > 
 >      A    B     C    D
 > A   -   35   120   65
 > B         -      65   65
 > C               -       65
 > D                         -
 > 
 > So in the above, AC are hidden from each other but nobody else is. It does
 > assume symmetry between pairs but that's typically true.

 That is not correct, symmetry in the RF world, especially wifi, is rare
 due to topology issues.  A high transmitter, A,  and a low receiver, B,
 has a good path A -> B, but a very weak path B -> A.   Multipathing
 is another major issue that causes asymmetry.

 > 
 > The RF device takes these distance matrices as settings and calculates the
 > five branch tree values (as demonstrated in the video). There are
 > limitations to solutions though but I've found those not to be an issue to
 > date. I've been able to produce hidden nodes quite readily. Add the phase
 > shifters and spatial stream powers can also be affected, but this isn't
 > shown in this simple example.
 > 
 > Bob
 > 
 > On Mon, Aug 2, 2021 at 8:12 PM David Lang <[ david@lang.hm ]( mailto:david@lang.hm )> wrote:
 > 
 > > I guess it depends on what you are intending to test. If you are not
 going
 > > to
 > > tinker with any of the over-the-air settings (including the number of
 > > packets
 > > transmitted in one aggregate), the details of what happens over the air
 > > don't
 > > matter much.
 > >
 > > But if you are going to be doing any tinkering with what is getting
 sent,
 > > and
 > > you ignore the hidden transmitter type problems, you will create a
 > > solution that
 > > seems to work really well in the lab and falls on its face out in the
 > > wild
 > > where spectrum overload and hidden transmitters are the norm (at least
 in
 > > urban
 > > areas), not rare corner cases.
 > >
 > > you don't need to include them in every test, but you need to have a way
 > > to
 > > configure your lab to include them before you consider any
 > > settings/algorithm
 > > ready to try in the wild.
 > >
 > > David Lang
 > >
 > > On Mon, 2 Aug 2021, Bob McMahon wrote:
 > >
 > > > We find four nodes, a primary BSS and an adjunct one quite good for
 lots
 > > of
 > > > testing.  The six nodes allows for a primary BSS and two adjacent
 ones.
 > > We
 > > > want to minimize complexity to necessary and sufficient.
 > > >
 > > > The challenge we find is having variability (e.g. montecarlos) that's
 > > > reproducible and has relevant information. Basically, the distance
 > > matrices
 > > > have h-matrices as their elements. Our chips can provide these
 > > h-matrices.
 > > >
 > > > The parts for solid state programmable attenuators and phase shifters
 > > > aren't very expensive. A device that supports a five branch tree and
 2x2
 > > > MIMO seems a very good starting point.
 > > >
 > > > Bob
 > > >
 > > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <[ greearb@candelatech.com ]( mailto:greearb@candelatech.com )>
 > > wrote:
 > > >
 > > >> On 8/2/21 4:16 PM, David Lang wrote:
 > > >>> If you are going to set up a test environment for wifi, you need to
 > > >> include the ability to make a few cases that only happen with RF, not
 > > with
 > > >> wired networks and
 > > >>> are commonly overlooked
 > > >>>
 > > >>> 1. station A can hear station B and C but they cannot hear each
 other
 > > >>> 2. station A can hear station B but station B cannot hear station A
 3.
 > > >> station A can hear that station B is transmitting, but not with a
 strong
 > > >> enough signal to
 > > >>> decode the signal (yes in theory you can work around interference,
 but
 > > >> in practice interference is still a real thing)
 > > >>>
 > > >>> David Lang
 > > >>>
 > > >>
 > > >> To add to this, I think you need lots of different station devices,
 > > >> different capabilities (/n, /ac, /ax, etc)
 > > >> different numbers of spatial streams, and different distances from
 the
 > > >> AP.  From download queueing perspective, changing
 > > >> the capabilities may be sufficient while keeping all stations at same
 > > >> distance.  This assumes you are not
 > > >> actually testing the wifi rate-ctrl alg. itself, so different
 throughput
 > > >> levels for different stations would be enough.
 > > >>
 > > >> So, a good station emulator setup (and/or pile of real stations) and
 a
 > > few
 > > >> RF chambers and
 > > >> programmable attenuators and you can test that setup...
 > > >>
 > > >>  From upload perspective, I guess same setup would do the job.
 > > >> Queuing/fairness might depend a bit more on the
 > > >> station devices, emulated or otherwise, but I guess a clever AP could
 > > >> enforce fairness in upstream direction
 > > >> too by implementing per-sta queues.
 > > >>
 > > >> Thanks,
 > > >> Ben
 > > >>
 > > >> --
 > > >> Ben Greear <[ greearb@candelatech.com ]( mailto:greearb@candelatech.com )>
 > > >> Candela Technologies Inc  [ http://www.candelatech.com ]( http://www.candelatech.com )
 > > >>
 > > >
 > > >
 > >
 > 

 > _______________________________________________
 > Starlink mailing list
 > [ Starlink@lists.bufferbloat.net ]( mailto:Starlink@lists.bufferbloat.net )
 > [ https://lists.bufferbloat.net/listinfo/starlink ]( https://lists.bufferbloat.net/listinfo/starlink )
 > 
 _______________________________________________
 Starlink mailing list
[ Starlink@lists.bufferbloat.net ]( mailto:Starlink@lists.bufferbloat.net )
[ https://lists.bufferbloat.net/listinfo/starlink ]( https://lists.bufferbloat.net/listinfo/starlink )


[-- Attachment #2: Type: text/html, Size: 27824 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Cake] [Starlink] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-09-02 17:36                                   ` [Cerowrt-devel] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board David P. Reed
@ 2021-09-03 14:35                                     ` Matt Mathis
  2021-09-03 18:33                                       ` [Cerowrt-devel] [Bloat] [Cake] [Starlink] [Make-wifi-fast] " David P. Reed
  0 siblings, 1 reply; 108+ messages in thread
From: Matt Mathis @ 2021-09-03 14:35 UTC (permalink / raw)
  To: David P. Reed
  Cc: dickroy, Cake List, Make-Wifi-fast, Bob McMahon, starlink, codel,
	cerowrt-devel, bloat, Rodney W. Grimes

[-- Attachment #1: Type: text/plain, Size: 18653 bytes --]

I am very wary of a generalization of this problem: software engineers who
believe that they can code around arbitrary idiosyncrasies of network
hardware.  They often succeed, but generally at a severe performance
penalty.

How much do we know about the actual hardware?   As far as I understand the
math, some of the prime calculations used in Machine Learning are
isomorphic to multidimensional correlators and convolutions, which are
the same computations as needed to do phased array beam steering.   One can
imagine scenarios where Tesla (plans to) substantially overbuild the
computational HW by recycling some ML technology, and then beefing up the
SW over time as they better understand reality.
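
[Editor's note: a tiny sketch of the observation above that beam steering and ML workloads share kernels. Forming a beam is a complex multiply-accumulate of element samples against a steering-vector weight, i.e. a correlation, which is why hardware sized for large correlations/convolutions could plausibly be reused. Geometry and numbers are illustrative.]

import numpy as np

n_elements = 8
d_over_lambda = 0.5                       # half-wavelength element spacing
k = np.arange(n_elements)

def steering(theta_rad):
    # Unit-norm array response of a uniform linear array toward angle theta.
    return np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta_rad)) / np.sqrt(n_elements)

w = steering(np.deg2rad(20))              # weights that steer the beam toward +20 degrees
angles = np.deg2rad(np.arange(-90, 91))
# Beam pattern = correlation of the weight vector with the response at each angle.
pattern = np.array([abs(np.vdot(w, steering(a)))**2 for a in angles])
print(f"peak response at {np.rad2deg(angles[pattern.argmax()]):.0f} degrees")  # ~20
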

Also note that the problem really only needs to be solved in areas where
they will eventually have high density.   Most of the early deployment will
never have this problem.

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
       however our response must be carefully measured:
            too strong would be hypocritical and risks spiraling out of
control;
            too weak risks being mistaken for tacit approval.


On Thu, Sep 2, 2021 at 10:36 AM David P. Reed <dpreed@deepplum.com> wrote:

> I just want to thank Dick Roy for backing up the arguments I've been
> making about physical RF communications for many years, and clarifying
> terminology here. I'm not the expert - Dick is an expert with real
> practical and theoretical experience - but what I've found over the years
> is that many who consider themselves "experts" say things that are actually
> nonsense about radio systems.
>
>
>
> It seems to me that Starlink is based on a propagation model that is quite
> simplistic, and probably far enough from correct that what seems "obvious"
> will turn out not to be true. That doesn't stop Musk and cronies from
> asserting these things as absolute truths (backed by actual professors,
> especially professors of Economics like Coase, but also CS professors,
> network protocol experts, etc. who aren't physicists or practicing RF
> engineers).
>
>
>
> The fact is that we don't really know how to build a scalable LEO system.
> Models can be useful, but a model can be a trap that causes even engineers
> to be cocky. Or as the saying goes, a Clear View doesn't mean a Short
> Distance.
>
>
>
> If there are 40 satellites serving 10,000 ground terminals simultaneously,
> exactly what is the propagation environment like? I can tell you one thing:
> if the phased array is digitized at some sample rate and some equalization
> and some quantization, the propagation REALLY matters in serving those
> 10,000 ground terminals scattered randomly on terrain that is not optically
> flat and not fully absorbent.
>
>
>
> So how will Starlink scale? I think we literally don't know. And the
> modeling matters.
>
>
>
> Recently a real propagation expert (Ted Rapaport and his students) did a
> study of how well 70 GHz RF signals propagate in an urban environment -
> Brooklyn.  The standard model would say that coverage would be terrible!
> Why? Because supposedly 70 GHz is like visible light - line of sight is
> required or nothing works.
>
>
>
> But in fact, Ted, whom I've known from being on the FCC Technological
> Advisory Committee (TAC) together when it was actually populated with
> engineers and scientists, not lobbyists, discovered that scattering and
> diffraction at 70 GHz in an urban environment significantly expands
> coverage of a single transmitter. Remarkably so. Enough that "cellular
> architecture" doesn't make sense in that propagation environment.
>
>
>
> So all the professional experts are starting from the wrong place, and
> amateurs perhaps even more so.
>
>
>
> I hope Starlink views itself as a "research project". I'm afraid it
> doesn't - partly driven by Musk, but equally driven by the FCC itself,
> which demands that, before a system is deployed, the entire plan be
> shown to work (which would require a "model" that is actually unknowable
> because something like this has never been tried). This is a problem with
> today's regulation of spectrum - experiments are barred, both by law, and
> by competitors who can claim your system will destroy theirs and not work.
>
>
>
> But it is also a problem when "fans" start setting expectations way too
> high. Like claiming that Starlink will eliminate any need for fiber. We
> don't know that at all!
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> On Tuesday, August 10, 2021 2:11pm, "Dick Roy" <dickroy@alum.mit.edu>
> said:
>
> To add a bit more, as is easily seen below, the amplitudes of each of the
> transfer functions between the three transmit and three receive antennas
> are extremely similar.  This is to be expected, of course, since the
> “aperture” of each array is very small compared to the distance between
> them.  What is much more interesting and revealing is the relative phases.
> Obviously this requires coherent receivers and, ultimately, if you want to
> control the spatial distribution of power (aka SDMA, or MIMO in some
> circles), coherent transmitters. It turns out that just knowing the
> amplitude of the transfer functions is not really all that useful for
> anything other than detecting a broken solder joint:^)))
>
>
>
> Also, do not forget that depending on how these experiments were conducted,
> the estimates are either of the RF channel itself (aka path loss), or of the
> RF channel in combination with the transfer functions of the transmitters
> and/or receivers.  What this means is that CALIBRATION is CRUCIAL!  Those
> who do not calibrate are doomed to fail!!!!   I suspect that it is in
> calibration where the major difference in performance between vendors’
> products can be found :^))))
>
>
>
> It’s complicated …
>
>
> ------------------------------
>
> *From:* Bob McMahon [mailto:bob.mcmahon@broadcom.com]
> *Sent:* Tuesday, August 10, 2021 10:07 AM
> *To:* dickroy@alum.mit.edu
> *Cc:* Rodney W. Grimes; Cake List; Make-Wifi-fast;
> starlink@lists.bufferbloat.net; codel; cerowrt-devel; bloat
> *Subject:* Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug
> 2: Internet Quality workshop CFP for the internet architecture board
>
>
>
> The slides show that for WiFi every transmission produces a complex
> frequency response, aka the h-matrix. This is valid for that one
> transmission only.  The slides show an amplitude plot for a 3 radio device
> hence the 9 elements per the h-matrix. It's assumed that the WiFi STA/AP is
> stationary such that doppler effects aren't a consideration. WiFi isn't a
> car trying to connect to a cell tower.  The plot doesn't show the phase
> effects but they are included as the output of the channel estimate is a
> complex frequency response. Each RX produces the h-matrix ahead of the MAC.
> These may not be symmetric in the real world but that's ok as
> transmission and reception is one way only, i.e. treating them as
> reciprocal and the matrix as hollow symmetric isn't going to be a "test
> blocker" as the goal is to be able to use software and programmable devices
> to change them in near real time. The current approach used by many using
> butler matrices to produce off-diagonal effects  is woefully inadequate.
> And we're paying about $2.5K per each butler.
>
> Bob
>
>
>
> On Tue, Aug 10, 2021 at 9:13 AM Dick Roy <dickroy@alum.mit.edu> wrote:
>
> Well, I hesitate to drag this out, however Maxwell's equations and the
> invariance of the laws of physics ensure that all path loss matrices are
> reciprocal.  What that means is that for any given set of fixed
> boundary conditions (nothing moving/changing!), the propagation loss
> between
> any two points in the domain is the same in both directions. The
> "multipathing" in one direction is the same in the other because the
> two-parameter (angle1,angle2) scattering cross sections of all objects
> (remember they are fixed here) are independent of the ordering of the
> angles.
>
> Very importantly, path loss is NOT the same as the link loss (aka link
> budget) which involves tx power and rx noise figure (and in the case of
> smart antennas, there is a link per spatial stream and how those links are
> managed/controlled really matters, but let's just keep it simple for this
> discussion) and these generally are different on both ends of a link for a
> variety of reasons. The other very important issue is that of the
> ""measurement plane", or "where tx power and rx noise figure are being
> measured/referenced to and how well the interface at that plane is
> "matched".  We generally assume that the matching is perfect, however it
> never is. All of these effects contribute to the link loss which determines
> the strength of the signal coming out of the receiver (not the receive
> antenna, the receiver) for a given signal strength coming out of the
> transmitter (not the transmit antenna, the tx output port).
>
> In the real world, things change.  Sources and sinks move as do many of the
> objects around them.  This creates a time-varying RF environment, and now
> the path loss matrix is a function of time and a few other things, so it
> matters WHEN something is transmitted, and WHEN it is received, and the two
> WHEN's are generally separated by "the speed of light" which is a ft/ns
> roughly. As important is the fact that it's no longer really a path loss
> matrix containing a single scalar because among other things, the time
> varying environment induces change in the transmitted waveform on its way
> to
> the receiver most commonly referred to as the Doppler effect which means
> there is a frequency translation/shift for each (multi-)path of which there
> are in general an uncountably infinite number because this is a continuous
> world in which we live (the space quantization experiment being conducted
> in
> the central US aside:^)). As a consequence of these physical laws, the
> entries in the path loss matrix become complex functions of a number of
> variables including time. These functions are quite often characterized in
> terms of Doppler and delay-spread, terms used to describe in just a few
> parameters the amount of "distortion" a complex function causes.
>
> Hope this helps ... probably a bit more than you really wanted to know as
> queuing theorists, but ...
>
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf
> Of
> Rodney W. Grimes
> Sent: Tuesday, August 10, 2021 7:10 AM
> To: Bob McMahon
> Cc: Cake List; Make-Wifi-fast; starlink@lists.bufferbloat.net;
> codel@lists.bufferbloat.net; cerowrt-devel; bloat
> Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
> Internet Quality workshop CFP for the internet architecture board
>
> > The distance matrix defines signal attenuations/loss between pairs.  It's
> > straightforward to create a distance matrix that has hidden nodes because
> > all "signal  loss" between pairs is defined.  Let's say a 120dB
> attenuation
> > path will cause a node to be hidden as an example.
> >
> >      A    B     C    D
> > A   -   35   120   65
> > B         -      65   65
> > C               -       65
> > D                         -
> >
> > So in the above, AC are hidden from each other but nobody else is. It
> does
> > assume symmetry between pairs but that's typically true.
>
> That is not correct, symmetry in the RF world, especially wifi, is rare
> due to topology issues.  A high transmitter, A,  and a low receiver, B,
> has a good path A -> B, but a very weak path B -> A.   Multipathing
> is another major issue that causes asymmetry.
>
> >
> > The RF device takes these distance matrices as settings and calculates
> the
> > five branch tree values (as demonstrated in the video). There are
> > limitations to solutions though but I've found those not to be an issue
> to
> > date. I've been able to produce hidden nodes quite readily. Add the phase
> > shifters and spatial stream powers can also be affected, but this isn't
> > shown in this simple example.
> >
> > Bob
> >
> > On Mon, Aug 2, 2021 at 8:12 PM David Lang <david@lang.hm> wrote:
> >
> > > I guess it depends on what you are intending to test. If you are not
> going
> > > to
> > > tinker with any of the over-the-air settings (including the number of
> > > packets
> > > transmitted in one aggregate), the details of what happens over the air
> > > don't
> > > matter much.
> > >
> > > But if you are going to be doing any tinkering with what is getting
> sent,
> > > and
> > > you ignore the hidden transmitter type problems, you will create a
> > > solution that
> > > seems to work really well in the lab and falls on its face out in the
> > > wild
> > > where spectrum overload and hidden transmitters are the norm (at least
> in
> > > urban
> > > areas), not rare corner cases.
> > >
> > > you don't need to include them in every test, but you need to have a
> way
> > > to
> > > configure your lab to include them before you consider any
> > > settings/algorithm
> > > ready to try in the wild.
> > >
> > > David Lang
> > >
> > > On Mon, 2 Aug 2021, Bob McMahon wrote:
> > >
> > > > We find four nodes, a primary BSS and an adjunct one quite good for
> lots
> > > of
> > > > testing.  The six nodes allows for a primary BSS and two adjacent
> ones.
> > > We
> > > > want to minimize complexity to necessary and sufficient.
> > > >
> > > > The challenge we find is having variability (e.g. montecarlos) that's
> > > > reproducible and has relevant information. Basically, the distance
> > > matrices
> > > > have h-matrices as their elements. Our chips can provide these
> > > h-matrices.
> > > >
> > > > The parts for solid state programmable attenuators and phase shifters
> > > > aren't very expensive. A device that supports a five branch tree and
> 2x2
> > > > MIMO seems a very good starting point.
> > > >
> > > > Bob
> > > >
> > > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <greearb@candelatech.com>
> > > wrote:
> > > >
> > > >> On 8/2/21 4:16 PM, David Lang wrote:
> > > >>> If you are going to set up a test environment for wifi, you need to
> > > >> include the ability to make a few cases that only happen with RF, not
> > > with
> > > >> wired networks and
> > > >>> are commonly overlooked
> > > >>>
> > > >>> 1. station A can hear station B and C but they cannot hear each
> other
> > > >>> 2. station A can hear station B but station B cannot hear station A
> 3.
> > > >> station A can hear that station B is transmitting, but not with a
> strong
> > > >> enough signal to
> > > >>> decode the signal (yes in theory you can work around interference,
> but
> > > >> in practice interference is still a real thing)
> > > >>>
> > > >>> David Lang
> > > >>>
> > > >>
> > > >> To add to this, I think you need lots of different station devices,
> > > >> different capabilities (/n, /ac, /ax, etc)
> > > >> different numbers of spatial streams, and different distances from
> the
> > > >> AP.  From download queueing perspective, changing
> > > >> the capabilities may be sufficient while keeping all stations at
> same
> > > >> distance.  This assumes you are not
> > > >> actually testing the wifi rate-ctrl alg. itself, so different
> throughput
> > > >> levels for different stations would be enough.
> > > >>
> > > >> So, a good station emulator setup (and/or pile of real stations) and
> a
> > > few
> > > >> RF chambers and
> > > >> programmable attenuators and you can test that setup...
> > > >>
> > > >>  From upload perspective, I guess same setup would do the job.
> > > >> Queuing/fairness might depend a bit more on the
> > > >> station devices, emulated or otherwise, but I guess a clever AP
> could
> > > >> enforce fairness in upstream direction
> > > >> too by implementing per-sta queues.
> > > >>
> > > >> Thanks,
> > > >> Ben
> > > >>
> > > >> --
> > > >> Ben Greear <greearb@candelatech.com>
> > > >> Candela Technologies Inc  http://www.candelatech.com
> > > >>
> > > >
> > > >
> > >
> >
>
> [ Charset UTF-8 unsupported, converting... ]
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 27603 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
  2021-09-03 14:35                                     ` [Bloat] [Cake] [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Matt Mathis
@ 2021-09-03 18:33                                       ` David P. Reed
  0 siblings, 0 replies; 108+ messages in thread
From: David P. Reed @ 2021-09-03 18:33 UTC (permalink / raw)
  To: Matt Mathis
  Cc: dickroy, Cake List, Make-Wifi-fast, Bob McMahon, starlink, codel,
	cerowrt-devel, bloat, Rodney W. Grimes

[-- Attachment #1: Type: text/plain, Size: 20521 bytes --]


Regarding "only needs to be solved ... high density" - Musk has gone on record as saying that Starlink probably will never support dense subscriber areas. Which of course contradicts many other statements by Starlink and Starfans that they can scale up to full coverage of the world. My point in this regard is that "armchair theorizing" is not going to discover how scalable Starlink technology (or LEO technology) can be, because there are many, many physical factors besides constellation size that will likely limit scaling.
 
It really does bug me that Musk and crew have promised very low latency as a definite feature of Starlink, but then couldn't seem to even bother to get congestion control in their early trial deployments.
That one should be solvable.
 
But they are declaring victory and claiming they have solved every problem, so they should get FCC permission to roll out more of their unproven technology, right now. Reminds me of AT&T deploying the iPhone. As soon as it stopped working very well after the early raving reviews from early adopters, AT&T's top technology guy (a John Donovan) went on a full-on rampage against Apple for having a "defective product" when in fact it was AT&T's HSPA network that was getting severely congested due to its extreme bufferbloat design. (It wasn't AT&T, it was actually Alcatel-Lucent that did the terrible design, but AT&T continued to blame Apple.)
 
Since some on this list want to believe that Starlink is the savior, but others are technically wise, I'm not sure where the discussion will go. I hope that there will be some feedback to Starlink rather than just a fan club or user-support group.
 
 
On Friday, September 3, 2021 10:35am, "Matt Mathis" <mattmathis@google.com> said:



I am very wary of a generalization of this problem: software engineers who believe that they can code around arbitrary idiosyncrasies of network hardware.  They often succeed, but generally at a severe performance penalty.
How much do we know about the actual hardware?   As far as I understand the math, some of the prime calculations used in Machine Learning are isomorphic to multidimensional correlators and convolutions, which are the same computations as needed to do phased array beam steering.   One can imagine scenarios where Tesla (plans to) substantially overbuild the computational HW by recycling some ML technology, and then beefing up the SW over time as they better understand reality.
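
To make the analogy concrete, here is a minimal NumPy sketch (entirely illustrative; the array size, spacing and angles are made up and are not anything Starlink-specific) of narrowband phased-array beam steering expressed as the same complex correlate-and-sum that ML correlators perform:

  import numpy as np

  # Hypothetical 8-element uniform linear array, half-wavelength spacing.
  n_elements = 8
  d_over_lambda = 0.5
  angles = np.deg2rad(np.linspace(-90, 90, 361))   # candidate steering angles

  def steering_vector(theta):
      """Per-element phase ramp for a plane wave arriving from angle theta."""
      n = np.arange(n_elements)
      return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta))

  # Simulated snapshot: a single plane wave from +20 degrees plus noise.
  rng = np.random.default_rng(0)
  x = steering_vector(np.deg2rad(20.0)) + 0.1 * (rng.standard_normal(n_elements)
                                                 + 1j * rng.standard_normal(n_elements))

  # Beamforming is a correlation: the inner product of the snapshot with each
  # candidate steering vector, i.e. the same multiply-accumulate kernel as a
  # correlator/convolution in ML hardware.
  spectrum = np.array([np.abs(np.vdot(steering_vector(a), x)) for a in angles])
  print("estimated arrival angle:", np.rad2deg(angles[np.argmax(spectrum)]))
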
Also note that the problem really only needs to be solved in areas where they will eventually have high density.   Most of the early deployment will never have this problem.









Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
       however our response must be carefully measured: 
            too strong would be hypocritical and risks spiraling out of control;
            too weak risks being mistaken for tacit approval.


On Thu, Sep 2, 2021 at 10:36 AM David P. Reed <[ dpreed@deepplum.com ]( mailto:dpreed@deepplum.com )> wrote:
I just want to thank Dick Roy for backing up the arguments I've been making about physical RF communications for many years, and clarifying terminology here. I'm not the expert - Dick is an expert with real practical and theoretical experience - but what I've found over the years is that many who consider themselves "experts" say things that are actually nonsense about radio systems.
 
It seems to me that Starlink is based on a propagation model that is quite simplistic, and probably far enough from correct that what seems "obvious" will turn out not to be true. That doesn't stop Musk and cronies from asserting these things as absolute truths (backed by actual professors, especially professors of Economics like Coase, but also CS professors, network protocol experts, etc. who aren't physicists or practicing RF engineers).
 
The fact is that we don't really know how to build a scalable LEO system. Models can be useful, but a model can be a trap that causes even engineers to be cocky. Or as the saying goes, a Clear View doesn't mean a Short Distance.
 
If there are 40 satellites serving 10,000 ground terminals simultaneously, exactly what is the propagation environment like? I can tell you one thing: if the phased array is digitized at some sample rate and some equalization and some quantization, the propagation REALLY matters in serving those 10,000 ground terminals scattered randomly on terrain that is not optically flat and not fully absorbent.
 
So how will Starlink scale? I think we literally don't know. And the modeling matters.
 
Recently a real propagation expert (Ted Rappaport and his students) did a study of how well 70 GHz RF signals propagate in an urban environment - Brooklyn.  The standard model would say that coverage would be terrible! Why? Because supposedly 70 GHz is like visible light - line of sight is required or nothing works.
 
But in fact, Ted, whom I've known from being on the FCC Technological Advisory Committee (TAC) together when it was actually populated with engineers and scientists, not lobbyists, discovered that scattering and diffraction at 70 GHz in an urban environment significantly expands coverage of a single transmitter. Remarkably so. Enough that "cellular architecture" doesn't make sense in that propagation environment.
 
So all the professional experts are starting from the wrong place, and amateurs perhaps even more so.
 
I hope Starlink views itself as a "research project". I'm afraid it doesn't - partly driven by Musk, but equally driven by the FCC itself, which demands that, before a system is deployed, the entire plan be shown to work (which would require a "model" that is actually unknowable, because something like this has never been tried). This is a problem with today's regulation of spectrum - experiments are barred, both by law, and by competitors who can claim your system will destroy theirs and not work.
 
But it is also a problem when "fans" start setting expectations way too high. Like claiming that Starlink will eliminate any need for fiber. We don't know that at all!
 
 
 
 
 
 
 
On Tuesday, August 10, 2021 2:11pm, "Dick Roy" <[ dickroy@alum.mit.edu ]( mailto:dickroy@alum.mit.edu )> said:




To add a bit more, as is easily seen below, the amplitudes of each of the transfer functions between the three transmit and three receive antennas are extremely similar.  This is to be expected, of course, since the “aperture” of each array is very small compared to the distance between them.  What is much more interesting and revealing is the relative phases.  Obviously this requires coherent receivers, and ultimately, if you want to control the spatial distribution of power (aka SDMA, or MIMO in some circles), coherent transmitters as well. It turns out that just knowing the amplitude of the transfer functions is not really all that useful for anything other than detecting a broken solder joint:^)))
 
Also, do not forget that depending on how these experiments were conducted, the estimates are either of the RF channel itself (aka path loss), or of the RF channel in combination with the transfer functions of the transmitters and/or receivers.  What this means is that CALIBRATION is CRUCIAL!  Those who do not calibrate are doomed to fail!!!!   I suspect that it is in calibration where the major difference in performance between vendors’ products can be found :^))))
 
It’s complicated … 
 


From: Bob McMahon [mailto:[ bob.mcmahon@broadcom.com ]( mailto:bob.mcmahon@broadcom.com )] 
Sent: Tuesday, August 10, 2021 10:07 AM
To: [ dickroy@alum.mit.edu ]( mailto:dickroy@alum.mit.edu )
Cc: Rodney W. Grimes; Cake List; Make-Wifi-fast; [ starlink@lists.bufferbloat.net ]( mailto:starlink@lists.bufferbloat.net ); codel; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board
 

The slides show that for WiFi every transmission produces a complex frequency response, aka the h-matrix. This is valid for that one transmission only.  The slides show an amplitude plot for a 3-radio device, hence the 9 elements in the h-matrix. It's assumed that the WiFi STA/AP is stationary such that Doppler effects aren't a consideration; WiFi isn't a car trying to connect to a cell tower.  The plot doesn't show the phase effects but they are included, as the output of the channel estimate is a complex frequency response. Each RX produces the h-matrix ahead of the MAC. These may not be symmetric in the real world but that's ok, as transmission and reception are each one way only, i.e. treating them as reciprocal and the matrix as hollow and symmetric isn't going to be a "test blocker", as the goal is to be able to use software and programmable devices to change them in near real time. The current approach used by many, Butler matrices to produce off-diagonal effects, is woefully inadequate. And we're paying about $2.5K per Butler matrix.
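
As an illustrative sketch only (not the vendor tooling described above): a 3x3 MIMO channel estimate per OFDM subcarrier can be represented as a complex matrix, and an amplitude-only plot is just its element-wise magnitude; all numbers below are synthetic.

  import numpy as np

  n_tx, n_rx, n_subcarriers = 3, 3, 64      # 3-radio device -> 9 h-matrix elements
  rng = np.random.default_rng(1)

  # Synthetic Rayleigh-like channel: one complex 3x3 h-matrix per subcarrier.
  H = (rng.standard_normal((n_subcarriers, n_rx, n_tx))
       + 1j * rng.standard_normal((n_subcarriers, n_rx, n_tx))) / np.sqrt(2)

  amplitude_db = 20 * np.log10(np.abs(H))   # what an amplitude-only plot shows
  phase_rad = np.angle(H)                   # the part the amplitude plot hides

  # A programmable emulator would overwrite H (attenuation plus phase shift) to
  # impose a desired topology, rather than relying on fixed Butler matrices.
  print(amplitude_db[0])                    # 3x3 magnitudes for subcarrier 0
  print(phase_rad[0])
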

Bob
 


On Tue, Aug 10, 2021 at 9:13 AM Dick Roy <[ dickroy@alum.mit.edu ]( mailto:dickroy@alum.mit.edu )> wrote:
Well, I hesitate to drag this out, however Maxwell's equations and the
 invariance of the laws of physics ensure that all path loss matrices are
 reciprocal.  What that means is that for any given set of fixed
 boundary conditions (nothing moving/changing!), the propagation loss between
 any two points in the domain is the same in both directions. The
 "multipathing" in one direction is the same in the other because the
 two-parameter (angle1,angle2) scattering cross sections of all objects
 (remember they are fixed here) are independent of the ordering of the
 angles.  

 Very importantly, path loss is NOT the same as the link loss (aka link
 budget) which involves tx power and rx noise figure (and in the case of
 smart antennas, there is a link per spatial stream and how those links are
 managed/controlled really matters, but let's just keep it simple for this
 discussion) and these generally are different on both ends of a link for a
 variety of reasons. The other very important issue is that of the
 ""measurement plane", or "where tx power and rx noise figure are being
 measured/referenced to and how well the interface at that plane is
 "matched".  We generally assume that the matching is perfect, however it
 never is. All of these effects contribute to the link loss which determines
 the strength of the signal coming out of the receiver (not the receive
 antenna, the receiver) for a given signal strength coming out of the
 transmitter (not the transmit antenna, the tx output port).   
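
A toy link-budget calculation, with made-up numbers, just to illustrate the distinction being drawn here between path loss and link loss/budget:

  import math

  # All values are illustrative assumptions, not measurements.
  tx_power_dbm = 20.0      # at the tx output port (the "measurement plane")
  tx_antenna_gain_dbi = 3.0
  rx_antenna_gain_dbi = 3.0
  path_loss_db = 80.0      # the reciprocal quantity Maxwell guarantees
  rx_noise_figure_db = 6.0 # receiver-dependent, NOT reciprocal
  bandwidth_hz = 20e6

  rx_power_dbm = (tx_power_dbm + tx_antenna_gain_dbi
                  - path_loss_db + rx_antenna_gain_dbi)
  noise_floor_dbm = -174 + 10 * math.log10(bandwidth_hz) + rx_noise_figure_db
  snr_db = rx_power_dbm - noise_floor_dbm

  # Swapping the two ends keeps path_loss_db the same, but a different tx power
  # or noise figure on each end gives a different link budget in each direction.
  print(f"rx power {rx_power_dbm:.1f} dBm, noise floor {noise_floor_dbm:.1f} dBm, "
        f"SNR {snr_db:.1f} dB")
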

 In the real world, things change.  Sources and sinks move as do many of the
 objects around them.  This creates a time-varying RF environment, and now
 the path loss matrix is a function of time and a few other things, so it
 matters WHEN something is transmitted, and WHEN it is received, and the two
 WHEN's are generally separated by "the speed of light" which is a ft/ns
 roughly. As important is the fact that it's no longer really a path loss
 matrix containing a single scalar because among other things, the time
 varying environment induces change in the transmitted waveform on its way to
 the receiver most commonly referred to as the Doppler effect which means
 there is a frequency translation/shift for each (multi-)path of which there
 are in general an uncountably infinite number because this is a continuous
 world in which we live (the space quantization experiment being conducted in
 the central US aside:^)). As a consequence of these physical laws, the
 entries in the path loss matrix become complex functions of a number of
 variables including time. These functions are quite often characterized in
 terms of Doppler and delay-spread, terms used to describe in just a few
 parameters the amount of "distortion" a complex function causes. 

 Hope this helps ... probably a bit more than you really wanted to know as
 queuing theorists, but ...

 -----Original Message-----
 From: Starlink [mailto:[ starlink-bounces@lists.bufferbloat.net ]( mailto:starlink-bounces@lists.bufferbloat.net )] On Behalf Of
 Rodney W. Grimes
 Sent: Tuesday, August 10, 2021 7:10 AM
 To: Bob McMahon
 Cc: Cake List; Make-Wifi-fast; [ starlink@lists.bufferbloat.net ]( mailto:starlink@lists.bufferbloat.net );
[ codel@lists.bufferbloat.net ]( mailto:codel@lists.bufferbloat.net ); cerowrt-devel; bloat
 Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2:
 Internet Quality workshop CFP for the internet architecture board

 > The distance matrix defines signal attenuations/loss between pairs.  It's
 > straightforward to create a distance matrix that has hidden nodes because
 > all "signal  loss" between pairs is defined.  Let's say a 120dB
 attenuation
 > path will cause a node to be hidden as an example.
 > 
 >      A    B     C    D
 > A   -   35   120   65
 > B         -      65   65
 > C               -       65
 > D                         -
 > 
 > So in the above, AC are hidden from each other but nobody else is. It does
 > assume symmetry between pairs but that's typically true.

 That is not correct, symmetry in the RF world, especially wifi, is rare
 due to topology issues.  A high transmitter, A,  and a low receiver, B,
 has a good path A -> B, but a very weak path B -> A.  Multipathing
 is another major issue that causes asymmetry.
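
Setting the symmetry question aside, the matrix bookkeeping itself is simple. A minimal sketch, using the 120 dB hidden-node threshold from the quoted example and treating missing entries as symmetric (illustrative only):

  # Pairwise path attenuation in dB, taken from the example above (upper triangle).
  ATTEN_DB = {
      ("A", "B"): 35, ("A", "C"): 120, ("A", "D"): 65,
      ("B", "C"): 65, ("B", "D"): 65,
      ("C", "D"): 65,
  }
  HIDDEN_THRESHOLD_DB = 120   # assumed: at/above this, a node cannot be heard

  def attenuation(a, b):
      """Look up attenuation between two stations, assuming symmetry."""
      return ATTEN_DB.get((a, b), ATTEN_DB.get((b, a)))

  stations = ["A", "B", "C", "D"]
  hidden_pairs = [(a, b) for i, a in enumerate(stations)
                  for b in stations[i + 1:]
                  if attenuation(a, b) >= HIDDEN_THRESHOLD_DB]
  print(hidden_pairs)   # -> [('A', 'C')]
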

 > 
 > The RF device takes these distance matrices as settings and calculates the
 > five branch tree values (as demonstrated in the video). There are
 > limitations to solutions though but I've found those not to be an issue to
 > date. I've been able to produce hidden nodes quite readily. Add the phase
 > shifters and spatial stream powers can also be affected, but this isn't
 > shown in this simple example.
 > 
 > Bob
 > 
 > On Mon, Aug 2, 2021 at 8:12 PM David Lang <[ david@lang.hm ]( mailto:david@lang.hm )> wrote:
 > 
 > > I guess it depends on what you are intending to test. If you are not
 going
 > > to
 > > tinker with any of the over-the-air settings (including the number of
 > > packets
 > > transmitted in one aggregate), the details of what happen over the air
 > > don't
 > > matter much.
 > >
 > > But if you are going to be doing any tinkering with what is getting
 sent,
 > > and
 > > you ignore the hidden transmitter type problems, you will create a
 > > solution that
 > > seems to work really well in the lab and falls on its face out in the
 > > wild
 > > where spectrum overload and hidden transmitters are the norm (at least
 in
 > > urban
 > > areas), not rare corner cases.
 > >
 > > you don't need to include them in every test, but you need to have a way
 > > to
 > > configure your lab to include them before you consider any
 > > settings/algorithm
 > > ready to try in the wild.
 > >
 > > David Lang
 > >
 > > On Mon, 2 Aug 2021, Bob McMahon wrote:
 > >
 > > > We find four nodes, a primary BSS and an adjunct one quite good for
 lots
 > > of
 > > > testing.  The six nodes allows for a primary BSS and two adjacent
 ones.
 > > We
 > > > want to minimize complexity to necessary and sufficient.
 > > >
 > > > The challenge we find is having variability (e.g. montecarlos) that's
 > > > reproducible and has relevant information. Basically, the distance
 > > matrices
 > > > have h-matrices as their elements. Our chips can provide these
 > > h-matrices.
 > > >
 > > > The parts for solid state programmable attenuators and phase shifters
 > > > aren't very expensive. A device that supports a five branch tree and
 2x2
 > > > MIMO seems a very good starting point.
 > > >
 > > > Bob
 > > >
 > > > On Mon, Aug 2, 2021 at 4:55 PM Ben Greear <[ greearb@candelatech.com ]( mailto:greearb@candelatech.com )>
 > > wrote:
 > > >
 > > >> On 8/2/21 4:16 PM, David Lang wrote:
 > > >>> If you are going to set up a test environment for wifi, you need to
 > > >> include the ability to make a few cases that only happen with RF, not
 > > with
 > > >> wired networks and
 > > >>> are commonly overlooked
 > > >>>
 > > >>> 1. station A can hear station B and C but they cannot hear each
 other
 > > >>> 2. station A can hear station B but station B cannot hear station A
 3.
 > > >> station A can hear that station B is transmitting, but not with a
 strong
 > > >> enough signal to
 > > >>> decode the signal (yes in theory you can work around interference,
 but
 > > >> in practice interference is still a real thing)
 > > >>>
 > > >>> David Lang
 > > >>>
 > > >>
 > > >> To add to this, I think you need lots of different station devices,
 > > >> different capabilities (/n, /ac, /ax, etc)
 > > >> different numbers of spatial streams, and different distances from
 the
 > > >> AP.  From download queueing perspective, changing
 > > >> the capabilities may be sufficient while keeping all stations at same
 > > >> distance.  This assumes you are not
 > > >> actually testing the wifi rate-ctrl alg. itself, so different
 throughput
 > > >> levels for different stations would be enough.
 > > >>
 > > >> So, a good station emulator setup (and/or pile of real stations) and
 a
 > > few
 > > >> RF chambers and
 > > >> programmable attenuators and you can test that setup...
 > > >>
 > > >>  From upload perspective, I guess same setup would do the job.
 > > >> Queuing/fairness might depend a bit more on the
 > > >> station devices, emulated or otherwise, but I guess a clever AP could
 > > >> enforce fairness in upstream direction
 > > >> too by implementing per-sta queues.
 > > >>
 > > >> Thanks,
 > > >> Ben
 > > >>
 > > >> --
 > > >> Ben Greear <[ greearb@candelatech.com ]( mailto:greearb@candelatech.com )>
 > > >> Candela Technologies Inc  [ http://www.candelatech.com ]( http://www.candelatech.com )
 > > >>
 > > >
 > > >
 > >
 > 
 > _______________________________________________
 > Starlink mailing list
 > [ Starlink@lists.bufferbloat.net ]( mailto:Starlink@lists.bufferbloat.net )
 > [ https://lists.bufferbloat.net/listinfo/starlink ]( https://lists.bufferbloat.net/listinfo/starlink )
 > 
 _______________________________________________
 Starlink mailing list
[ Starlink@lists.bufferbloat.net ]( mailto:Starlink@lists.bufferbloat.net )
[ https://lists.bufferbloat.net/listinfo/starlink ]( https://lists.bufferbloat.net/listinfo/starlink )

_______________________________________________
 Bloat mailing list
[ Bloat@lists.bufferbloat.net ]( mailto:Bloat@lists.bufferbloat.net )
[ https://lists.bufferbloat.net/listinfo/bloat ]( https://lists.bufferbloat.net/listinfo/bloat )

[-- Attachment #2: Type: text/html, Size: 32418 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
                                   ` (3 preceding siblings ...)
  2021-07-12 13:46                 ` [Bloat] " Livingood, Jason
@ 2021-09-20  1:21                 ` Dave Taht
  2021-09-20  4:00                   ` Valdis Klētnieks
  4 siblings, 1 reply; 108+ messages in thread
From: Dave Taht @ 2021-09-20  1:21 UTC (permalink / raw)
  To: David P. Reed
  Cc: Luca Muscariello, Cake List, Make-Wifi-fast, Leonard Kleinrock,
	Bob McMahon, starlink, codel, cerowrt-devel, bloat, Ben Greear

I just wanted to comment on how awesome this thread was, and how few
people outside this group deeply grok what was discussed here. I would
so like
to somehow construct an educational TV series explaining "How the
Internet really works" to a wider, and new audience, consisting of
animations, anecdotes,
and interviews with the key figures of its evolution.

While I deeply understood Len Kleinrock's work in the period
2011-2015, and tried to pass on analogies and intuition without using
the math since, inspired by van jacobson's
analogies and Radia Perlman's poetry, it's hard for me now to follow
the argument. Queueing theory, in particular, is not well known or taught
anymore, despite its obvious applications to things like the Covid
crisis.

But that would be just one thing! The end to end argument, the side
effects of spitting postscript into a lego robot, what actually
happens during a web page load, how a cpu actually works, are all
things that are increasingly lost in multiple mental models, and in my
mind many could be taught in kindergarten, if we worked at explaining
it hard enough.

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-09-20  1:21                 ` [Cerowrt-devel] " Dave Taht
@ 2021-09-20  4:00                   ` Valdis Klētnieks
  2021-09-20  4:09                     ` David Lang
  2021-09-20 12:57                     ` [Cerowrt-devel] [Starlink] " Steve Crocker
  0 siblings, 2 replies; 108+ messages in thread
From: Valdis Klētnieks @ 2021-09-20  4:00 UTC (permalink / raw)
  To: Dave Taht
  Cc: David P. Reed, starlink, Make-Wifi-fast, Leonard Kleinrock,
	Bob McMahon, Cake List, Luca Muscariello, codel, cerowrt-devel,
	bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 889 bytes --]

On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
> what actually happens during a web page load,

I'm pretty sure that nobody actually understands that anymore, in any
more than handwaving levels.

I have a nice Chrome extension called IPvFoo that actually tracks the IP
addresses contacted during the load of the displayed page. I'll let you make
a guess as to how many unique IP addresses were contacted during a load
of https://www.cnn.com

...


...


...


145, at least half of which appeared to be analytics.  And that's only the
hosts that were contacted by my laptop for HTTP, and doesn't count DNS, or
load-balancing front ends, or all the back-end boxes.  As I commented over on
NANOG, we've gotten to a point similar to that of AT&T long distance, where 60%
of the effort of connecting a long distance phone call was the cost of
accounting and billing for the call.









[-- Attachment #2: Type: application/pgp-signature, Size: 494 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat]   Little's Law mea culpa, but not invalidating my main point
  2021-09-20  4:00                   ` Valdis Klētnieks
@ 2021-09-20  4:09                     ` David Lang
  2021-09-20 21:30                       ` David P. Reed
  2021-09-20 12:57                     ` [Cerowrt-devel] [Starlink] " Steve Crocker
  1 sibling, 1 reply; 108+ messages in thread
From: David Lang @ 2021-09-20  4:09 UTC (permalink / raw)
  To: Valdis Klētnieks
  Cc: Dave Taht, Cake List, Make-Wifi-fast, Leonard Kleinrock,
	Bob McMahon, David P. Reed, starlink, codel, cerowrt-devel,
	bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 421 bytes --]

On Mon, 20 Sep 2021, Valdis Klētnieks wrote:

> On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
>> what actually happens during a web page load,
>
> I'm pretty sure that nobody actually understands that anymore, in any
> more than handwaving levels.

This is my favorite interview question; it's amazing and saddening what answers
I get, even from supposedly senior security and networking people.

David Lang

[-- Attachment #2: Type: text/plain, Size: 140 bytes --]

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink]  [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-09-20  4:00                   ` Valdis Klētnieks
  2021-09-20  4:09                     ` David Lang
@ 2021-09-20 12:57                     ` Steve Crocker
  2021-09-20 16:36                       ` [Cerowrt-devel] [Cake] " John Sager
  2021-09-21  2:40                       ` [Starlink] [Cerowrt-devel] " Vint Cerf
  1 sibling, 2 replies; 108+ messages in thread
From: Steve Crocker @ 2021-09-20 12:57 UTC (permalink / raw)
  To: Valdis Klētnieks
  Cc: Dave Taht, Cake List, Make-Wifi-fast, Bob McMahon, David P. Reed,
	starlink, codel, cerowrt-devel, bloat, Steve Crocker


[-- Attachment #1.1: Type: text/plain, Size: 1425 bytes --]

Related but slightly different: Attached is a slide some of my colleagues
put together a decade ago showing the number of DNS lookups involved in
displaying CNN's front page.

Steve


On Mon, Sep 20, 2021 at 8:18 AM Valdis Klētnieks <valdis.kletnieks@vt.edu>
wrote:

> On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
> > what actually happens during a web page load,
>
> I'm pretty sure that nobody actually understands that anymore, in any
> more than handwaving levels.
>
> I have a nice Chrome extension called IPvFoo that actually tracks the IP
> addresses contacted during the load of the displayed page. I'll let you
> make
> a guess as to how many unique IP addresses were contacted during a load
> of https://www.cnn.com
>
> ...
>
>
> ...
>
>
> ...
>
>
> 145, at least half of which appeared to be analytics.  And that's only the
> hosts that were contacted by my laptop for HTTP, and doesn't count DNS, or
> load-balancing front ends, or all the back-end boxes.  As I commented over
> on
> NANOG, we've gotten to a point similar to that of AT&T long distance,
> where 60%
> of the effort of connecting a long distance phone call was the cost of
> accounting and billing for the call.
>
>
>
>
>
>
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>

[-- Attachment #1.2: Type: text/html, Size: 2447 bytes --]

[-- Attachment #2: DNS lookups CNN from 2011.pdf --]
[-- Type: application/pdf, Size: 1201690 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Starlink] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-09-20 12:57                     ` [Cerowrt-devel] [Starlink] " Steve Crocker
@ 2021-09-20 16:36                       ` John Sager
  2021-09-21  2:40                       ` [Starlink] [Cerowrt-devel] " Vint Cerf
  1 sibling, 0 replies; 108+ messages in thread
From: John Sager @ 2021-09-20 16:36 UTC (permalink / raw)
  To: cake; +Cc: starlink, Make-Wifi-fast, codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 1635 bytes --]

You guys made the Internet too easy to use :-)


On 20 September 2021 13:57:33 BST, Steve Crocker <steve@shinkuro.com> wrote:
>Related but slightly different: Attached is a slide some of my colleagues
>put together a decade ago showing the number of DNS lookups involved in
>displaying CNN's front page.
>
>Steve
>
>
>On Mon, Sep 20, 2021 at 8:18 AM Valdis Klētnieks <valdis.kletnieks@vt.edu>
>wrote:
>
>> On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
>> > what actually happens during a web page load,
>>
>> I'm pretty sure that nobody actually understands that anymore, in any
>> more than handwaving levels.
>>
>> I have a nice Chrome extension called IPvFoo that actually tracks the IP
>> addresses contacted during the load of the displayed page. I'll let you
>> make
>> a guess as to how many unique IP addresses were contacted during a load
>> of https://www.cnn.com
>>
>> ...
>>
>>
>> ...
>>
>>
>> ...
>>
>>
>> 145, at least half of which appeared to be analytics.  And that's only the
>> hosts that were contacted by my laptop for HTTP, and doesn't count DNS, or
>> load-balancing front ends, or all the back-end boxes.  As I commented over
>> on
>> NANOG, we've gotten to a point similar to that of AT&T long distance,
>> where 60%
>> of the effort of connecting a long distance phone call was the cost of
>> accounting and billing for the call.
>>
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>

-- 
Sent from the Aether.

[-- Attachment #2: Type: text/html, Size: 2890 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat]  Little's Law mea culpa, but not invalidating my main point
  2021-09-20  4:09                     ` David Lang
@ 2021-09-20 21:30                       ` David P. Reed
  2021-09-20 21:44                         ` [Cerowrt-devel] [Cake] " David P. Reed
  0 siblings, 1 reply; 108+ messages in thread
From: David P. Reed @ 2021-09-20 21:30 UTC (permalink / raw)
  To: David Lang
  Cc: Valdis Klētnieks, Dave Taht, Cake List, Make-Wifi-fast,
	Leonard Kleinrock, Bob McMahon, starlink, codel, cerowrt-devel,
	bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 5407 bytes --]


I use the example all the time, but not for interviewing. What's sad is that the answers seem to be quoting from some set of textbooks or popular explanations of the Internet that really have got it all wrong, but which many professionals seem to believe is true.
 
The same phenomenon appears in the various subfields of the design of radio communications at the physical and front end electronics level. The examples of mental models that are truly broken that are repeated by "experts" are truly incredible, and cover all fields. Two or three:
 
1. why do the AM commercial broadcast band (540-1600 kHz) signals you receive in your home travel farther than VHF band TV signals and UHF band TV signals?  How does this explanation relate to the fact that we can see stars a million light-years away using receivers that respond to 500 Terahertz radio (visible light antennas)?
 
2. What is the "aperture" of an antenna system? Does it depend on frequency of the radiation? How does this relate to the idea of the size of an RF photon, and the mass of an RF photon? How big must a cellphone be to contain the antenna needed to receive and transmit signals in the 3G phone frequencies?
 
3. We can digitize the entire FM broadcast frequency band into a sequence of 14-bit digital samples at the Nyquist sampling rate of about 40 Mega-samples per second, which covers the 20 MHz bandwidth of the FM band. Does this allow a receiver to use a digital receiver to tune into any FM station that can be received with an "analog FM radio" using the same antenna? Why or why not?
 
I'm sure Dick Roy understands all three of these questions, and what is going on. But I'm equally sure that the designers of WiFi radios or broadcast radios or even the base stations of cellular data systems include few who understand.
 
And literally no one at the FCC or CTIA understand how to answer these questions.  But the problem is that they are *confident* that they know the answers, and that they are right.
 
The same is true about the packet layers and routing layers of the Internet. Very few engineers, much less lay people realize that what they have been told by "experts" is like how Einstein explained how radio works to a teenaged kid:
 
  "Imagine a cat whose tail is in New York and his head is in Los Angeles. If you pinch his tail in NY, he howls in Los Angeles. Except there is no cat."
 
Though others have missed it, Einstein was not making a joke. The non-cat is the laws of quantum electrodynamics (or classically, the laws of Maxwell's Equations). The "cat" would be all the stories people talk about how radio works - beams of energy (or puffs of energy), modulated by some analog waveform, bouncing off of hard materials, going through less dense materials, "hugging the ground", "far field" and "near field" effects, etc.
 
Einstein's point was that there is no cat - that is, all the metaphors and models aren't accurate or equivalent to how radio actually works. But the underlying physical phenomenon supporting radio is real, and scientists do understand it pretty deeply.
 
Same with how packet networks work. There are no "streams" that behave like water in pipes, the connection you have to a shared network has no "speed" in megabits per second built into it, a "website" isn't coming from one place in the world, and bits don't have inherent meaning.
 
There is NO CAT (not even a metaphorical one that behaves like the Internet actually works).
 
But in the case of the Internet, unlike radio communications, there is no deep mystery that requires new discoveries to understand it, because it's been built by humans. We don't need metaphors like "streams of water" or "sites in a place". We do it a disservice by making up these metaphors, which are only apt in a narrow context.
 
For example, congestion in a shared network is just unnecessary queuing delay caused by multiplexing the capacity of a particular link among different users. It can be cured by slowing down all the different packet sources in some more or less fair way. The simplest approach is just to discard from the queue excess packets that make that queue longer than can fit through the link. Then there can't be any congestion. However, telling the sources to slow down somehow would be an improvement, hopefully before any discards are needed.
 
There is no "back pressure", because there is no "pressure" at all in a packet network. There are just queues and links that empty queues of packets at a certain rate. Thinking about back pressure comes from thinking about sessions and pipes. But 90% of the Internet has no sessions and no pipes. Just as there is "no cat" in real radio systems.
 
On Monday, September 20, 2021 12:09am, "David Lang" <david@lang.hm> said:



> On Mon, 20 Sep 2021, Valdis Klētnieks wrote:
> 
> > On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
> >> what actually happens during a web page load,
> >
> > I'm pretty sure that nobody actually understands that anymore, in any
> > more than handwaving levels.
> 
> This is my favorite interview question; it's amazing and saddening what answers
> I get, even from supposedly senior security and networking people.
> 
> David Lang_______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 

[-- Attachment #2: Type: text/html, Size: 9046 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Cake] [Bloat]  Little's Law mea culpa, but not invalidating my main point
  2021-09-20 21:30                       ` David P. Reed
@ 2021-09-20 21:44                         ` David P. Reed
  0 siblings, 0 replies; 108+ messages in thread
From: David P. Reed @ 2021-09-20 21:44 UTC (permalink / raw)
  To: David P. Reed
  Cc: David Lang, starlink, Valdis Klētnieks, Make-Wifi-fast,
	Leonard Kleinrock, Bob McMahon, Cake List, codel, cerowrt-devel,
	bloat, Ben Greear

[-- Attachment #1: Type: text/plain, Size: 5943 bytes --]


The top posting may be confusing, but "the example" here is the example of the > 100 TCP destinations and dozens of DNS queries that are needed (unless cached) to display the front page of CNN today.
That's "one website" home page. If you look at the JavaScript resource loading code, and now the "service worker" javascript code, the idea that it is like fetching a file using FTP is just wrong. Do NANOG members understand this? I doubt it.
 
On Monday, September 20, 2021 5:30pm, "David P. Reed" <dpreed@deepplum.com> said:



I use the example all the time, but not for interviewing. What's sad is that the answers seem to be quoting from some set of textbooks or popular explanations of the Internet that really have got it all wrong, but which many professionals seem to believe is true.
 
The same phenomenon appears in the various subfields of the design of radio communications at the physical and front end electronics level. The examples of mental models that are truly broken that are repeated by "experts" are truly incredible, and cover all fields. Two or three:
 
1. why do the AM commercial broadcast band (540-1600 kHz) signals you receive in your home travel farther than VHF band TV signals and UHF band TV signals?  How does this explanation relate to the fact that we can see stars a million light-years away using receivers that respond to 500 Terahertz radio (visible light antennas)?
 
2. What is the "aperture" of an antenna system? Does it depend on frequency of the radiation? How does this relate to the idea of the size of an RF photon, and the mass of an RF photon? How big must a cellphone be to contain the antenna needed to receive and transmit signals in the 3G phone frequencies?
 
3. We can digitize the entire FM broadcast frequency band into a sequence of 14-bit digital samples at the Nyquist sampling rate of about 40 Mega-samples per second, which covers the 20 MHz bandwidth of the FM band. Does this allow a receiver to use a digital receiver to tune into any FM station that can be received with an "analog FM radio" using the same antenna? Why or why not?
 
I'm sure Dick Roy understands all three of these questions, and what is going on. But I'm equally sure that the designers of WiFi radios or broadcast radios or even the base stations of cellular data systems include few who understand.
 
And literally no one at the FCC or CTIA understand how to answer these questions.  But the problem is that they are *confident* that they know the answers, and that they are right.
 
The same is true about the packet layers and routing layers of the Internet. Very few engineers, much less lay people realize that what they have been told by "experts" is like how Einstein explained how radio works to a teenaged kid:
 
  "Imagine a cat whose tail is in New York and his head is in Los Angeles. If you pinch his tail in NY, he howls in Los Angeles. Except there is no cat."
 
Though others have missed it, Einstein was not making a joke. The non-cat is the laws of quantum electrodynamics (or classically, the laws of Maxwell's Equations). The "cat" would be all the stories people talk about how radio works - beams of energy (or puffs of energy), modulated by some analog waveform, bouncing off of hard materials, going through less dense materials, "hugging the ground", "far field" and "near field" effects, etc.
 
Einstein's point was that there is no cat - that is, all the metaphors and models aren't accurate or equivalent to how radio actually works. But the underlying physical phenomenon supporting radio is real, and scientists do understand it pretty deeply.
 
Same with how packet networks work. There are no "streams" that behave like water in pipes, the connection you have to a shared network has no "speed" in megabits per second built into it, a "website" isn't coming from one place in the world, and bits don't have inherent meaning.
 
There is NO CAT (not even a metaphorical one that behaves like the Internet actually works).
 
But in the case of the Internet, unlike radio communications, there is no deep mystery that requires new discoveries to understand it, because it's been built by humans. We don't need metaphors like "streams of water" or "sites in a place". We do it a disservice by making up these metaphors, which are only apt in a narrow context.
 
For example, congestion in a shared network is just unnecessary queuing delay caused by multiplexing the capacity of a particular link among different users. It can be cured by slowing down all the different packet sources in some more or less fair way. The simplest approach is just to discard from the queue excess packets that make that queue longer than can fit through the link. Then there can't be any congestion. However, telling the sources to slow down somehow would be an improvement, hopefully before any discards are needed.
 
There is no "back pressure", because there is no "pressure" at all in a packet network. There are just queues and links that empty queues of packets at a certain rate. Thinking about back pressure comes from thinking about sessions and pipes. But 90% of the Internet has no sessions and no pipes. Just as there is "no cat" in real radio systems.
 
On Monday, September 20, 2021 12:09am, "David Lang" <david@lang.hm> said:



> On Mon, 20 Sep 2021, Valdis Klētnieks wrote:
> 
> > On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
> >> what actually happens during a web page load,
> >
> > I'm pretty sure that nobody actually understands that anymore, in any
> > more than handwaving levels.
> 
> This is my favorite interview question; it's amazing and saddening what answers
> I get, even from supposedly senior security and networking people.
> 
> David Lang_______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 10817 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-09-20 12:57                     ` [Cerowrt-devel] [Starlink] " Steve Crocker
  2021-09-20 16:36                       ` [Cerowrt-devel] [Cake] " John Sager
@ 2021-09-21  2:40                       ` Vint Cerf
  2021-09-23 17:46                         ` Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Vint Cerf @ 2021-09-21  2:40 UTC (permalink / raw)
  To: Steve Crocker
  Cc: Valdis Klētnieks, starlink, Make-Wifi-fast, Bob McMahon,
	David P. Reed, Cake List, codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 1907 bytes --]

see https://mediatrust.com/
v


On Mon, Sep 20, 2021 at 10:28 AM Steve Crocker <steve@shinkuro.com> wrote:

> Related but slightly different: Attached is a slide some of my colleagues
> put together a decade ago showing the number of DNS lookups involved in
> displaying CNN's front page.
>
> Steve
>
>
> On Mon, Sep 20, 2021 at 8:18 AM Valdis Klētnieks <valdis.kletnieks@vt.edu>
> wrote:
>
>> On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
>> > what actually happens during a web page load,
>>
>> I'm pretty sure that nobody actually understands that anymore, in any
>> more than handwaving levels.
>>
>> I have a nice Chrome extension called IPvFoo that actually tracks the IP
>> addresses contacted during the load of the displayed page. I'll let you
>> make
>> a guess as to how many unique IP addresses were contacted during a load
>> of https://www.cnn.com
>>
>> ...
>>
>>
>> ...
>>
>>
>> ...
>>
>>
>> 145, at least half of which appeared to be analytics.  And that's only the
>> hosts that were contacted by my laptop for HTTP, and doesn't count DNS, or
>> load-balancing front ends, or all the back-end boxes.  As I commented
>> over on
>> NANOG, we've gotten to a point similar to that of AT&T long distance,
>> where 60%
>> of the effort of connecting a long distance phone call was the cost of
>> accounting and billing for the call.
>>
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>


-- 
Please send any postal/overnight deliveries to:
Vint Cerf
1435 Woodhurst Blvd
McLean, VA 22102
703-448-0965

until further notice

[-- Attachment #2: Type: text/html, Size: 3610 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-09-21  2:40                       ` [Starlink] [Cerowrt-devel] " Vint Cerf
@ 2021-09-23 17:46                         ` Bob McMahon
  2021-09-26 18:24                           ` [Cerowrt-devel] [Starlink] " David P. Reed
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-09-23 17:46 UTC (permalink / raw)
  To: Vint Cerf
  Cc: Steve Crocker, Valdis Klētnieks, starlink, Make-Wifi-fast,
	David P. Reed, Cake List, codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 5056 bytes --]

Hi All,

I do appreciate this thread as well. As a test & measurement guy here are
my conclusions around network performance. Thanks in advance for any
comments.

Congestion can be mitigated the following ways
o) Size queues properly to minimize/negate bloat (easier said than done
with tech like WiFi)
o) Use faster links on the service side such that a queue's service rate
exceeds the arrival rate; no congestion even in bursts, if possible
o) Drop entries during oversubscribed states (queue processing can't "speed
up" like water flow through a constricted pipe, must drop)
o) Identify aggressor flows per congestion if possible
o) Forwarding planes can signal back to the sources "earlier" to minimize
queue build ups per a "control loop request" asking sources to pace their
writes
o) transport layers use techniques a la BBR
o) Use "home gateways" that support tech like FQ_CODEL

Latency can be mitigated the following ways
o) Mitigate or eliminate congestion, particularly around queueing delays
o) End host apps can use TCP_NOTSENT_LOWAT along with write()/select() to
reduce host sends of "better never than late" messages (see the sketch after
this list)
o) Move servers closer to the clients per fundamental limit of the speed of
light (i.e. propagation delay of energy over the wave guides), a la CDNs
(Except if you're a HFT, separate servers across geography and make sure to
have exclusive user rights over the lowest latency links)
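
A minimal sketch of the TCP_NOTSENT_LOWAT item above, assuming Linux; the option-number fallback, the byte limit and the message loop are illustrative, not a complete application:

  import select
  import socket

  TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)  # 25 = Linux value

  def send_freshest(sock, next_message, lowat_bytes=16 * 1024):
      """Keep at most lowat_bytes of *unsent* data buffered in the kernel, so
      the application can always send its freshest message rather than queueing
      stale ones behind a bloated socket buffer."""
      sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, lowat_bytes)
      while True:
          msg = next_message()              # latest data the app wants to send
          if msg is None:
              break
          # select() reports the socket writable only when unsent data is below
          # the low-water mark, so "better never than late" messages are never
          # queued deeply in the host.
          select.select([], [sock], [])
          sock.sendall(msg)
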

Transport control loop(s)
o) Transport layer control loops are non linear systems so network tooling
will struggle to emulate "end user experience"
o) 1/2 RTT does not equal OWD used to compute the bandwidth delay product,
imbalance and effects need to be measured
o) forwarding planes signaling congestion to sources wasn't designed in TCP
originally but the industry trend seems to be moving towards this per
things like L4S

Photons, radio & antenna design
o) Find experts who have experience & knowledge, e.g. many do here
o) Photons don't really have mass nor size, at least per my limited
understanding of particle physics and QED though, I must admit, came from
reading things on the internet

Bob

On Mon, Sep 20, 2021 at 7:40 PM Vint Cerf <vint@google.com> wrote:

> see https://mediatrust.com/
> v
>
>
> On Mon, Sep 20, 2021 at 10:28 AM Steve Crocker <steve@shinkuro.com> wrote:
>
>> Related but slightly different: Attached is a slide some of my colleagues
>> put together a decade ago showing the number of DNS lookups involved in
>> displaying CNN's front page.
>>
>> Steve
>>
>>
>> On Mon, Sep 20, 2021 at 8:18 AM Valdis Klētnieks <valdis.kletnieks@vt.edu>
>> wrote:
>>
>>> On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
>>> > what actually happens during a web page load,
>>>
>>> I'm pretty sure that nobody actually understands that anymore, in any
>>> more than handwaving levels.
>>>
>>> I have a nice Chrome extension called IPvFoo that actually tracks the IP
>>> addresses contacted during the load of the displayed page. I'll let you
>>> make
>>> a guess as to how many unique IP addresses were contacted during a load
>>> of https://www.cnn.com
>>>
>>> ...
>>>
>>>
>>> ...
>>>
>>>
>>> ...
>>>
>>>
>>> 145, at least half of which appeared to be analytics.  And that's only
>>> the
>>> hosts that were contacted by my laptop for HTTP, and doesn't count DNS,
>>> or
>>> load-balancing front ends, or all the back-end boxes.  As I commented
>>> over on
>>> NANOG, we've gotten to a point similar to that of AT&T long distance,
>>> where 60%
>>> of the effort of connecting a long distance phone call was the cost of
>>> accounting and billing for the call.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>
>
> --
> Please send any postal/overnight deliveries to:
> Vint Cerf
> 1435 Woodhurst Blvd
> McLean, VA 22102
> 703-448-0965
>
> until further notice
>
>
>
>


[-- Attachment #2: Type: text/html, Size: 7295 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink]  [Bloat] Little's Law mea culpa, but not invalidating my main point
  2021-09-23 17:46                         ` Bob McMahon
@ 2021-09-26 18:24                           ` David P. Reed
  2021-10-22  0:51                             ` TCP_NOTSENT_LOWAT applied to e2e TCP msg latency Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: David P. Reed @ 2021-09-26 18:24 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Vint Cerf, Steve Crocker, Valdis Klētnieks, starlink,
	Make-Wifi-fast, Cake List, codel, cerowrt-devel, bloat

[-- Attachment #1: Type: text/plain, Size: 7668 bytes --]


Pretty good list, thanks for putting this together.
 
The only thing I'd add, and I'm not able to formulate it very elegantly, is this personal insight, one that I would research, because it can be a LOT more useful in the end-to-end control loop than stuff like ECN, L4S, RED, ...
 
Fact: Detecting congestion by allowing a queue to build up is a very lagging indicator of incipient congestion in the forwarding system. The delay added to all paths by that queue buildup slows down the control loop's ability to respond by slowing the sources. It's the control loop delay that creates both instability and continued congestion growth.
Observation: current forwarders forget what they have forwarded as soon as it is transmitted. This loses all the information about incipient congestion and "fairness" among multiple sources. Yet, there is no need to forget recent history at all after the packets have been transmitted.
 
An idea I keep proposing is the idea of remembering the last K seconds of packets, their flow ids (source and destination), the arrival time and departure time, and their channel occupancy on the outbound shared link. Then using this information to reflect incipient congestion information to the flows that need controlling, to be used in their control loops.
 
So far, no one has taken me up on doing the research to try this in the field. Note: the signalling can be simple (sending ECN flags on all flows that transit the queue, even though there is no backlog, yet, when the queue is empty but transient overload seems likely), but the key thing is that we already assume that  recent history of packets is predictive of future overflow.
This can be implemented locally on any routing path that tends to be a bottleneck link. Such as the uplink of a home network. It should work with TCP as is if the signalling causes window reduction (at first, just signal by dropping packets prematurely, but if TCP will handle ECN aggressively - a single ECN mark causing window reduction, then it will help that, too).
 
The insight is that from an "information and control theory" perspective, the packets that have already been forwarded are incredibly valuable for congestion prediction.
 
Please, if possible, if anyone actually works on this and publishes, give me credit for suggesting this.
Just because I've been suggesting it for about 15 years now, and being ignored. It would be a mitzvah.
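
A minimal sketch of the bookkeeping being proposed, purely illustrative and not an implementation of any deployed AQM: remember the last K seconds of forwarded packets per flow and, when the recent occupancy of the outbound link gets close to capacity, return the flows that should receive an early congestion signal even though the queue may currently be empty. The window, threshold and names are assumptions.

  import time
  from collections import deque, defaultdict

  class ForwardingHistory:
      """Remember recently forwarded packets and flag incipient congestion."""
      def __init__(self, link_bps, window_s=2.0, occupancy_threshold=0.8):
          self.link_bps = link_bps
          self.window_s = window_s
          self.threshold = occupancy_threshold
          self.history = deque()   # (departure_time, flow_id, size_bytes)

      def record(self, flow_id, size_bytes, departure=None):
          self.history.append((departure or time.monotonic(), flow_id, size_bytes))

      def flows_to_signal(self, now=None):
          """Flows to mark (e.g. ECN) based on recent history, not backlog."""
          now = now or time.monotonic()
          while self.history and self.history[0][0] < now - self.window_s:
              self.history.popleft()
          per_flow = defaultdict(int)
          for _, flow, size in self.history:
              per_flow[flow] += size
          occupancy = sum(per_flow.values()) * 8 / (self.link_bps * self.window_s)
          if occupancy < self.threshold:
              return []
          # Signal the heaviest recent users first ("fairness" from history).
          return sorted(per_flow, key=per_flow.get, reverse=True)
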
 
 
On Thursday, September 23, 2021 1:46pm, "Bob McMahon" <bob.mcmahon@broadcom.com> said:



Hi All,
I do appreciate this thread as well. As a test & measurement guy here are my conclusions around network performance. Thanks in advance for any comments.

Congestion can be mitigated the following ways
o) Size queues properly to minimize/negate bloat (easier said than done with tech like WiFi)
o) Use faster links on the service side such that a queue's service rate exceeds the arrival rate; no congestion even in bursts, if possible
o) Drop entries during oversubscribed states (queue processing can't "speed up" like water flow through a constricted pipe, must drop)
o) Identify aggressor flows per congestion if possible
o) Forwarding planes can signal back to the sources "earlier" to minimize queue build ups per a "control loop request" asking sources to pace their writes
o) transport layers use techniques a la BBR
o) Use "home gateways" that support tech like FQ_CODEL
Latency can be mitigated the following ways
o) Mitigate or eliminate congestion, particularly around queueing delays
o) End host apps can use TCP_NOTSENT_LOWAT along with write()/select() to reduce host sends of "better never than late" messages (see the sketch after this list)
o) Move servers closer to the clients per fundamental limit of the speed of light (i.e. propagation delay of energy over the wave guides), a la CDNs
(Except if you're a HFT, separate servers across geography and make sure to have exclusive user rights over the lowest latency links)
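As a rough sketch of the TCP_NOTSENT_LOWAT + write()/select() idea above (Linux-flavored, hypothetical helper names, a placeholder watermark, and no error handling; not iperf 2 code):

/*
 * Sketch only: use TCP_NOTSENT_LOWAT plus select() so a sender can skip a
 * "better never than late" message when the socket isn't ready in time.
 * The deadline budget and is_stale() predicate are hypothetical app logic.
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25   /* Linux value, in case the header lacks it */
#endif

static int send_if_fresh(int fd, const void *msg, size_t len,
                         struct timeval *budget, int (*is_stale)(void))
{
    int lowat = 4;              /* wake only when almost nothing is unsent */
    setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);

    /* Wait at most 'budget' for the not-sent queue to drain below lowat. */
    int ready = select(fd + 1, NULL, &wfds, NULL, budget);
    if (ready <= 0 || is_stale())
        return 0;               /* better never than late: drop the message */

    return (int)write(fd, msg, len);
}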

Transport control loop(s)
o) Transport layer control loops are non-linear systems, so network tooling will struggle to emulate "end user experience"
o) 1/2 RTT does not equal the OWD used to compute the bandwidth delay product; the imbalance and its effects need to be measured
o) Forwarding planes signaling congestion to sources wasn't designed into TCP originally, but the industry trend seems to be moving towards this per things like L4S
Photons, radio & antenna design
o) Find experts who have experience & knowledge, e.g. many do here
o) Photons don't really have mass or size, at least per my limited understanding of particle physics and QED which, I must admit, came from reading things on the internet

Bob


On Mon, Sep 20, 2021 at 7:40 PM Vint Cerf <[ vint@google.com ]( mailto:vint@google.com )> wrote:
see [ https://mediatrust.com/ ]( https://mediatrust.com/ )
v


On Mon, Sep 20, 2021 at 10:28 AM Steve Crocker <[ steve@shinkuro.com ]( mailto:steve@shinkuro.com )> wrote:

Related but slightly different: Attached is a slide some of my colleagues put together a decade ago showing the number of DNS lookups involved in displaying CNN's front page.
Steve


On Mon, Sep 20, 2021 at 8:18 AM Valdis Klētnieks <[ valdis.kletnieks@vt.edu ]( mailto:valdis.kletnieks@vt.edu )> wrote:On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
 > what actually happens during a web page load,

 I'm pretty sure that nobody actually understands that anymore, at anything
 more than a handwaving level.

 I have a nice Chrome extension called IPvFoo that actually tracks the IP
 addresses contacted during the load of the displayed page. I'll let you make
 a guess as to how many unique IP addresses were contacted during a load
 of [ https://www.cnn.com ]( https://www.cnn.com )

 ...


 ...


 ...


 145, at least half of which appeared to be analytics.  And that's only the
 hosts that were contacted by my laptop for HTTP, and doesn't count DNS, or
 load-balancing front ends, or all the back-end boxes.  As I commented over on
 NANOG, we've gotten to a point similar to that of AT&T long distance, where 60%
 of the effort of connecting a long distance phone call was the cost of
 accounting and billing for the call.








 _______________________________________________
 Starlink mailing list
[ Starlink@lists.bufferbloat.net ]( mailto:Starlink@lists.bufferbloat.net )
[ https://lists.bufferbloat.net/listinfo/starlink ]( https://lists.bufferbloat.net/listinfo/starlink )
-- 



Please send any postal/overnight deliveries to:
Vint Cerf
1435 Woodhurst Blvd 
McLean, VA 22102
703-448-0965
until further notice
This electronic communication and the information and any files transmitted with it, or attached to it, are confidential and are intended solely for the use of the individual or entity to whom it is addressed and may contain information that is confidential, legally privileged, protected by privacy laws, or otherwise restricted from disclosure to anyone else. If you are not the intended recipient or the person responsible for delivering the e-mail to the intended recipient, you are hereby notified that any use, copying, distributing, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited. If you received this e-mail in error, please return the e-mail to the sender, delete it from your computer, and destroy any printed copy of it.

[-- Attachment #2: Type: text/html, Size: 11546 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-09-26 18:24                           ` [Cerowrt-devel] [Starlink] " David P. Reed
@ 2021-10-22  0:51                             ` Bob McMahon
  2021-10-26  3:11                               ` [Make-wifi-fast] " Stuart Cheshire
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-10-22  0:51 UTC (permalink / raw)
  To: David P. Reed
  Cc: Vint Cerf, Steve Crocker, Valdis Klētnieks, starlink,
	Make-Wifi-fast, Cake List, codel, cerowrt-devel, bloat,
	Neal Cardwell, Matt Mathis

[-- Attachment #1: Type: text/plain, Size: 10154 bytes --]

Hi All,

Sorry for the spam. I'm trying to support a meaningful TCP message latency
w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I
thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It
seems that this event fires when the bytes are in flight rather than when they
have reached the destination network stack. If that's the case, then the iperf 2
<https://sourceforge.net/projects/iperf2/> client (sender) may be able to
produce the message latency by adding the drain time (write start to
TCP_NOTSENT_LOWAT) and the sampled RTT.

Does this seem reasonable?

Below are some sample outputs of a 10G wired host sending to a 1G wired host.
These systems do have e2e clock sync, so the server-side message latency is
correct. The RTT + Drain does approximately equal the server-side e2e msg
latency.
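As a rough sanity check using the numbers below: in the BBR run the client's
drain average is about 8.9 ms and the sampled RTTs are roughly 1.7-2.1 ms,
giving a sender-side estimate of about 10.6-11.0 ms, while the clock-synced
server reports per-second burst latency averages of about 10.4-10.6 ms. The
CUBIC run shows the same relationship at larger RTTs: drain + RTT sums of
roughly 21-25 ms against server-side averages of roughly 20-25 ms.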

First with BBR

[root@ryzen3950 iperf2-code]# iperf -c 192.168.1.156 -i 1 -e --tcp-drain
--realtime -Z bbr --trip-times -l 1M
------------------------------------------------------------
Client connecting to 192.168.1.156, TCP port 5001 with pid 206299 (1 flows)
Write buffer size: 1048576 Byte (drain-enabled)
TCP congestion control set to bbr
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.133%enp4s0 port 60684 connected with 192.168.1.156
port 5001 (MSS=1448) (trip-times) (sock=3) (ct=0.26 ms) on 2021-10-21
17:44:10 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
Cwnd/RTT        NetPwr  Drain avg/min/max/stdev (cnt)
[  1] 0.00-1.00 sec   112 MBytes   940 Mbits/sec  113/0          0
 263K/1906 us  61616  8.947/8.322/13.465/0.478 ms (112)
[  1] 1.00-2.00 sec   112 MBytes   940 Mbits/sec  112/0          0
 260K/1987 us  59104  8.911/8.229/9.569/0.229 ms (112)
[  1] 2.00-3.00 sec   113 MBytes   948 Mbits/sec  113/0          0
 254K/2087 us  56775  8.910/8.311/9.564/0.221 ms (113)
[  1] 3.00-4.00 sec   112 MBytes   940 Mbits/sec  112/0          0
 263K/1710 us  68679  8.911/8.297/9.618/0.217 ms (112)
[  1] 4.00-5.00 sec   112 MBytes   940 Mbits/sec  112/0          0
 254K/2024 us  58024  8.907/8.470/9.641/0.197 ms (112)
[  1] 5.00-6.00 sec   112 MBytes   940 Mbits/sec  112/0          0
 263K/2124 us  55292  8.911/8.291/9.325/0.198 ms (112)
[  1] 6.00-7.00 sec   113 MBytes   948 Mbits/sec  113/0          0
 265K/2012 us  58891  8.913/8.226/9.569/0.229 ms (113)
[  1] 7.00-8.00 sec   112 MBytes   940 Mbits/sec  112/0          0
 265K/1989 us  59045  8.908/8.313/9.366/0.194 ms (112)
[  1] 8.00-9.00 sec   112 MBytes   940 Mbits/sec  112/0          0
 263K/1999 us  58750  8.908/8.212/9.402/0.211 ms (112)
[  1] 9.00-10.00 sec   112 MBytes   940 Mbits/sec  112/0          0
 5K/242 us  485291  8.947/8.319/12.754/0.414 ms (112)
[  1] 0.00-10.06 sec  1.10 GBytes   937 Mbits/sec  1125/0          0
 5K/242 us  483764  8.950/8.212/45.293/1.120 ms (1123)

[root@localhost rjmcmahon]# iperf -s -e -B 192.168.1.156%enp4s0f0 -i 1
--realtime
------------------------------------------------------------
Server listening on TCP port 5001 with pid 53099
Binding to local address 192.168.1.156 and iface enp4s0f0
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.156%enp4s0f0 port 5001 connected with 192.168.1.133
port 60684 (MSS=1448) (trip-times) (sock=4) (peer 2.1.4-master) on
2021-10-21 20:44:10 (EDT)
[ ID] Interval        Transfer    Bandwidth    Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   112 MBytes   936 Mbits/sec  10.629/9.890/14.998/1.507
ms (111/1053964) 1.20 MByte 11007  4347=412:3927:7:0:1:0:0:0
[  1] 1.00-2.00 sec   112 MBytes   942 Mbits/sec  10.449/9.736/10.740/0.237
ms (112/1050799) 1.18 MByte 11263  4403=465:3938:0:0:0:0:0:0
[  1] 2.00-3.00 sec   112 MBytes   942 Mbits/sec  10.426/9.873/10.698/0.246
ms (113/1041489) 1.16 MByte 11288  4382=420:3962:0:0:0:0:0:0
[  1] 3.00-4.00 sec   112 MBytes   941 Mbits/sec  10.485/9.724/10.716/0.208
ms (112/1050541) 1.18 MByte 11221  4393=446:3946:1:0:0:0:0:0
[  1] 4.00-5.00 sec   112 MBytes   942 Mbits/sec  10.487/9.902/10.736/0.216
ms (112/1050786) 1.18 MByte 11222  4392=448:3944:0:0:0:0:0:0
[  1] 5.00-6.00 sec   112 MBytes   942 Mbits/sec  10.484/9.758/10.748/0.236
ms (112/1050799) 1.18 MByte 11226  4397=456:3940:0:1:0:0:0:0
[  1] 6.00-7.00 sec   112 MBytes   941 Mbits/sec  10.475/9.756/10.753/0.248
ms (112/1050515) 1.18 MByte 11232  4403=473:3930:0:0:0:0:0:0
[  1] 7.00-8.00 sec   112 MBytes   942 Mbits/sec  10.435/9.759/10.757/0.288
ms (113/1041502) 1.16 MByte 11278  4414=480:3934:0:0:0:0:0:0
[  1] 8.00-9.00 sec   112 MBytes   942 Mbits/sec  10.485/9.762/10.759/0.277
ms (112/1050799) 1.18 MByte 11225  4409=470:3939:0:0:0:0:0:0
[  1] 9.00-10.00 sec   112 MBytes   942 Mbits/sec
 10.550/10.000/10.759/0.191 ms (112/1050786) 1.19 MByte 11155
 4399=455:3944:0:0:0:0:0:0
[  1] 0.00-10.05 sec  1.10 GBytes   937 Mbits/sec
 10.524/9.724/45.519/1.173 ms (1123/1048576) 1.18 MByte 11132
 44149=4725:39414:8:1:1:0:0:0

Now with CUBIC

[root@ryzen3950 iperf2-code]# iperf -c 192.168.1.156 -i 1 -e --tcp-drain
--realtime -Z cubic --trip-times -l 1M
------------------------------------------------------------
Client connecting to 192.168.1.156, TCP port 5001 with pid 206487 (1 flows)
Write buffer size: 1048576 Byte (drain-enabled)
TCP congestion control set to cubic
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.133%enp4s0 port 60686 connected with 192.168.1.156
port 5001 (MSS=1448) (trip-times) (sock=3) (ct=0.49 ms) on 2021-10-21
17:47:02 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
Cwnd/RTT        NetPwr  Drain avg/min/max/stdev (cnt)
[  1] 0.00-1.00 sec   113 MBytes   948 Mbits/sec  114/0         66
1527K/13168 us  8998  8.855/4.757/15.949/0.995 ms (113)
[  1] 1.00-2.00 sec   113 MBytes   948 Mbits/sec  113/0          0
1668K/14380 us  8240  8.899/8.450/9.425/0.270 ms (113)
[  1] 2.00-3.00 sec   112 MBytes   940 Mbits/sec  112/0          0
1781K/15335 us  7658  8.904/8.446/9.314/0.258 ms (112)
[  1] 3.00-4.00 sec   112 MBytes   940 Mbits/sec  112/0          0
1867K/16127 us  7282  8.900/8.570/9.313/0.252 ms (112)
[  1] 4.00-5.00 sec   113 MBytes   948 Mbits/sec  113/0          0
1931K/16537 us  7165  8.908/8.330/9.431/0.290 ms (113)
[  1] 5.00-6.00 sec   111 MBytes   931 Mbits/sec  111/0          1
1439K/12342 us  9431  8.945/4.303/18.970/1.091 ms (111)
[  1] 6.00-7.00 sec   113 MBytes   948 Mbits/sec  113/0          0
1515K/12845 us  9225  8.904/8.451/9.432/0.298 ms (113)
[  1] 7.00-8.00 sec   112 MBytes   940 Mbits/sec  112/0          0
1569K/13353 us  8795  8.907/8.569/9.314/0.283 ms (112)
[  1] 8.00-9.00 sec   112 MBytes   940 Mbits/sec  112/0          0
1606K/13718 us  8561  8.909/8.571/9.312/0.275 ms (112)
[  1] 9.00-10.00 sec   113 MBytes   948 Mbits/sec  113/0          0
1630K/13930 us  8506  8.906/8.569/9.316/0.298 ms (113)
[  1] 0.00-10.04 sec  1.10 GBytes   940 Mbits/sec  1127/0         67
1630K/13930 us  8431  8.904/4.303/18.970/0.526 ms (1125)

[root@localhost rjmcmahon]# iperf -s -e -B 192.168.1.156%enp4s0f0 -i 1
--realtime
------------------------------------------------------------
Server listening on TCP port 5001 with pid 53121
Binding to local address 192.168.1.156 and iface enp4s0f0
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.156%enp4s0f0 port 5001 connected with 192.168.1.133
port 60686 (MSS=1448) (trip-times) (sock=4) (peer 2.1.4-master) on
2021-10-21 20:47:02 (EDT)
[ ID] Interval        Transfer    Bandwidth    Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[  1] 0.00-1.00 sec   111 MBytes   935 Mbits/sec
 20.327/10.445/39.920/4.341 ms (111/1053090) 2.33 MByte 5751
 4344=521:3791:7:2:1:9:0:11
[  1] 1.00-2.00 sec   112 MBytes   942 Mbits/sec
 22.492/21.768/23.254/0.397 ms (112/1050799) 2.53 MByte 5233
 4487=594:3893:0:0:0:0:0:0
[  1] 2.00-3.00 sec   112 MBytes   941 Mbits/sec
 23.624/22.987/24.248/0.327 ms (112/1050502) 2.66 MByte 4980
 4462=548:3912:1:1:0:0:0:0
[  1] 3.00-4.00 sec   112 MBytes   941 Mbits/sec
 24.475/23.741/24.971/0.287 ms (113/1041476) 2.73 MByte 4808
 4483=575:3908:0:0:0:0:0:0
[  1] 4.00-5.00 sec   112 MBytes   942 Mbits/sec
 25.146/24.597/25.459/0.254 ms (112/1050799) 2.83 MByte 4680
 4523=642:3880:0:1:0:0:0:0
[  1] 5.00-6.00 sec   112 MBytes   942 Mbits/sec
 21.592/15.549/36.567/2.358 ms (112/1050786) 2.42 MByte 5450
 4373=489:3868:0:1:0:0:1:12
[  1] 6.00-7.00 sec   112 MBytes   941 Mbits/sec
 21.447/20.800/22.024/0.275 ms (112/1050528) 2.41 MByte 5486
 4464=559:3904:0:1:0:0:0:0
[  1] 7.00-8.00 sec   112 MBytes   942 Mbits/sec
 22.021/21.536/22.519/0.216 ms (113/1041502) 2.46 MByte 5344
 4475=557:3918:0:0:0:0:0:0
[  1] 8.00-9.00 sec   112 MBytes   942 Mbits/sec
 22.445/22.023/22.774/0.209 ms (112/1050799) 2.53 MByte 5243
 4407=474:3932:0:1:0:0:0:0
[  1] 9.00-10.00 sec   112 MBytes   941 Mbits/sec
 22.680/22.269/23.024/0.184 ms (112/1050541) 2.55 MByte 5188
 4511=635:3875:1:0:0:0:0:0
[  1] 0.00-10.03 sec  1.10 GBytes   941 Mbits/sec
 22.629/10.445/39.920/2.083 ms (1125/1048576) 2.54 MByte 5197
 44659=5598:39007:9:7:1:9:1:23

Thanks,
Bob


[-- Attachment #2: Type: text/html, Size: 11218 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-22  0:51                             ` TCP_NOTSENT_LOWAT applied to e2e TCP msg latency Bob McMahon
@ 2021-10-26  3:11                               ` Stuart Cheshire
  2021-10-26  4:24                                 ` [Cerowrt-devel] [Bloat] " Eric Dumazet
  2021-10-26  5:32                                 ` Bob McMahon
  0 siblings, 2 replies; 108+ messages in thread
From: Stuart Cheshire @ 2021-10-26  3:11 UTC (permalink / raw)
  To: Bob McMahon
  Cc: David P. Reed, Cake List, Valdis Klētnieks, Make-Wifi-fast,
	starlink, codel, Matt Mathis, cerowrt-devel, bloat,
	Steve Crocker, Vint Cerf, Neal Cardwell

On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:

> Hi All,
> 
> Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
> 
> Does this seem reasonable?

I’m not 100% sure what you’re asking, but I will try to help.

When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached. In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)

If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response to acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
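For concreteness, a minimal Linux-flavored sketch of that measurement loop (illustrative only; the helper names and the tiny 4-byte threshold are arbitrary choices, and error handling is omitted):

/*
 * Sketch only: time from writing n bytes until the next TCP_NOTSENT_LOWAT
 * writability wakeup, i.e. roughly how long the chunk took to leave the
 * host. As noted above, this conflates ack clocking with cwnd growth.
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <time.h>
#include <unistd.h>

#ifndef TCP_NOTSENT_LOWAT
#define TCP_NOTSENT_LOWAT 25
#endif

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

static void time_chunk_departure(int fd, const char *buf, size_t n)
{
    int lowat = 4;   /* tiny threshold: wake up only when almost all is sent */
    setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLOUT, .data.fd = fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

    double t0 = now_ms();
    write(fd, buf, n);                       /* queue the chunk            */
    epoll_wait(ep, &ev, 1, -1);              /* wake when unsent < lowat   */
    printf("chunk of %zu bytes drained in %.3f ms\n", n, now_ms() - t0);

    close(ep);
}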

How to use TCP_NOTSENT_LOWAT is explained in this video:

<https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>

Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.

<https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>

Stuart Cheshire

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26  3:11                               ` [Make-wifi-fast] " Stuart Cheshire
@ 2021-10-26  4:24                                 ` Eric Dumazet
  2021-10-26 18:45                                   ` Christoph Paasch
  2021-10-26  5:32                                 ` Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Eric Dumazet @ 2021-10-26  4:24 UTC (permalink / raw)
  To: Stuart Cheshire, Bob McMahon
  Cc: starlink, Valdis Klētnieks, Make-Wifi-fast, David P. Reed,
	Cake List, codel, cerowrt-devel, bloat, Steve Crocker, Vint Cerf



On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
> 
>> Hi All,
>>
>> Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
>>
>> Does this seem reasonable?
> 
> I’m not 100% sure what you’re asking, but I will try to help.
> 
> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached.


When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made sure that sendmsg() would actually
stop feeding more bytes in TCP transmit queue if the current amount of unsent bytes
was above the threshold.

So it looks like Apple's implementation is different, based on your description?

[1] https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36

netperf does not use epoll(), but rather a loop over sendmsg().

One of the points of TCP_NOTSENT_LOWAT for Google was to be able to considerably increase
the max number of bytes in transmit queues (3rd column of /proc/sys/net/ipv4/tcp_wmem)
by 10x, allowing autotuning to increase the BDP for big-RTT flows, this without
increasing memory needs for flows with small RTT.

> In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)
> 
> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response for acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
> 
> How to use TCP_NOTSENT_LOWAT is explained in this video:
> 
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>
> 
> Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.
> 
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>
> 
> Stuart Cheshire
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26  3:11                               ` [Make-wifi-fast] " Stuart Cheshire
  2021-10-26  4:24                                 ` [Cerowrt-devel] [Bloat] " Eric Dumazet
@ 2021-10-26  5:32                                 ` Bob McMahon
  2021-10-26 10:04                                   ` [Cerowrt-devel] [Starlink] " Bjørn Ivar Teigen
  1 sibling, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-10-26  5:32 UTC (permalink / raw)
  To: Stuart Cheshire
  Cc: David P. Reed, Cake List, Valdis Klētnieks, Make-Wifi-fast,
	starlink, codel, Matt Mathis, cerowrt-devel, bloat,
	Steve Crocker, Vint Cerf, Neal Cardwell

[-- Attachment #1: Type: text/plain, Size: 5055 bytes --]

Thanks Stuart, this is helpful. I'm measuring the time just before the first
write() (of potentially a burst of writes to achieve a burst size) per a
socket fd's select event, with TCP_NOTSENT_LOWAT set to a
small value, then sampling the RTT and CWND and providing histograms for
all three, all on that event. I'm not sure about the correctness of RTT and CWND
at this sample point. This is a controlled test over 802.11ax and OFDMA
where the TCP acks per the WiFi clients are being scheduled by the AP using
802.11ax trigger frames so the AP is affecting the end/end BDP per
scheduling the transmits and the acks. The AP can grow the BDP or shrink it
based on these scheduling decisions.  From there we're trying to maximize
network power (throughput/delay) for elephant flows and just latency for
mouse flows. (We also plan some RF frequency stuff per OFDMA.) Anyway,
the AP based scheduling along with aggregation and OFDMA makes WiFi
scheduling optimums non-obvious - at least to me - and I'm trying to
provide insights into how an AP is affecting end/end performance.
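For reference, a hedged sketch (not iperf 2 code) of the kind of per-event
sampling described above, using Linux's TCP_INFO to read the stack's RTT and
CWND estimates at the moment the select/epoll event fires:

/*
 * Sketch only: sample the kernel's RTT and cwnd estimates when a
 * writability event fires. Linux-specific; tcpi_rtt is in microseconds,
 * tcpi_snd_cwnd is in segments (MSS units).
 */
#include <netinet/in.h>
#include <netinet/tcp.h>    /* TCP_INFO, struct tcp_info (glibc) */
#include <stdio.h>
#include <sys/socket.h>

static void sample_rtt_cwnd(int fd)
{
    struct tcp_info ti;
    socklen_t len = sizeof(ti);

    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0) {
        printf("rtt=%u us rttvar=%u us cwnd=%u segs\n",
               ti.tcpi_rtt, ti.tcpi_rttvar, ti.tcpi_snd_cwnd);
    }
}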

The more direct approach for e2e TCP latency and network power has been to
measure first write() to final read() and compute the e2e delay. This
requires clock sync on the ends. (We're using ptp4l with GPS OCXO
atomic references for that but this is typically only available in some
labs.)

Bob


On Mon, Oct 25, 2021 at 8:11 PM Stuart Cheshire <cheshire@apple.com> wrote:

> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
>
> > Hi All,
> >
> > Sorry for the spam. I'm trying to support a meaningful TCP message
> latency w/iperf 2 from the sender side w/o requiring e2e clock
> synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to
> help with this. It seems that this event goes off when the bytes are in
> flight vs have reached the destination network stack. If that's the case,
> then iperf 2 client (sender) may be able to produce the message latency by
> adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled
> RTT.
> >
> > Does this seem reasonable?
>
> I’m not 100% sure what you’re asking, but I will try to help.
>
> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your
> endpoint as writable (e.g., via kqueue or epoll) until less than that
> threshold of data remains unsent. It won’t stop you writing more bytes if
> you want to, up to the socket send buffer size, but it won’t *ask* you for
> more data until the TCP_NOTSENT_LOWAT threshold is reached. In other words,
> the TCP implementation attempts to keep BDP bytes in flight +
> TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in
> flight is necessary to fill the network pipe and get good throughput. The
> TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the
> source software some advance notice that the TCP implementation will soon
> be looking for more bytes to send, so that the buffer doesn’t run dry,
> thereby lowering throughput. (The old SO_SNDBUF option conflates both
> “bytes in flight” and “bytes buffered and ready to go” into the same
> number.)
>
> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n
> bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification,
> that will tell you roughly how long it took n bytes to depart the machine.
> You won’t know why, though. The bytes could depart the machine in response
> for acks indicating that the same number of bytes have been accepted at the
> receiver. But the bytes can also depart the machine because CWND is
> growing. Of course, both of those things are usually happening at the same
> time.
>
> How to use TCP_NOTSENT_LOWAT is explained in this video:
>
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>
>
> Later in the same video is a two-minute demo (time offset 42:00 to time
> offset 44:00) showing a “before and after” demo illustrating the dramatic
> difference this makes for screen sharing responsiveness.
>
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>
>
> Stuart Cheshire


[-- Attachment #2: Type: text/html, Size: 5722 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26  5:32                                 ` Bob McMahon
@ 2021-10-26 10:04                                   ` Bjørn Ivar Teigen
  2021-10-26 17:23                                     ` Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Bjørn Ivar Teigen @ 2021-10-26 10:04 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Stuart Cheshire, starlink, Valdis Klētnieks, Make-Wifi-fast,
	David P. Reed, Cake List, codel, Matt Mathis, cerowrt-devel,
	bloat, Neal Cardwell

[-- Attachment #1: Type: text/plain, Size: 6163 bytes --]

Hi Bob,

My name is Bjørn Ivar Teigen and I'm working on modeling and measuring WiFi
MAC-layer protocol performance for my PhD.

Is it necessary to measure the latency using the TCP stream itself? I had a
similar problem in the past, and solved it by doing the latency
measurements using TWAMP running alongside the TCP traffic. The requirement
for this to work is that the TWAMP packets are placed in the same queue(s)
as the TCP traffic, and that the impact of measurement traffic is small
enough so as not to interfere too much with your TCP results.
Just my two cents, hope it's helpful.

Bjørn

On Tue, 26 Oct 2021 at 06:32, Bob McMahon <bob.mcmahon@broadcom.com> wrote:

> Thanks Stuart this is helpful. I'm measuring the time just before the
> first write() (of potentially a burst of writes to achieve a burst size)
> per a socket fd's select event occurring when TCP_NOT_SENT_LOWAT being set
> to a small value, then sampling the RTT and CWND and providing histograms
> for all three, all on that event. I'm not sure the correctness of RTT and
> CWND at this sample point. This is a controlled test over 802.11ax and
> OFDMA where the TCP acks per the WiFi clients are being scheduled by the AP
> using 802.11ax trigger frames so the AP is affecting the end/end BDP per
> scheduling the transmits and the acks. The AP can grow the BDP or shrink it
> based on these scheduling decisions.  From there we're trying to maximize
> network power (throughput/delay) for elephant flows and just latency for
> mouse flows. (We also plan some RF frequency stuff to per OFDMA) Anyway,
> the AP based scheduling along with aggregation and OFDMA makes WiFi
> scheduling optimums non-obvious - at least to me - and I'm trying to
> provide insights into how an AP is affecting end/end performance.
>
> The more direct approach for e2e TCP latency and network power has been to
> measure first write() to final read() and compute the e2e delay. This
> requires clock sync on the ends. (We're using ptp4l with GPS OCXO
> atomic references for that but this is typically only available in some
> labs.)
>
> Bob
>
>
> On Mon, Oct 25, 2021 at 8:11 PM Stuart Cheshire <cheshire@apple.com>
> wrote:
>
>> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <
>> make-wifi-fast@lists.bufferbloat.net> wrote:
>>
>> > Hi All,
>> >
>> > Sorry for the spam. I'm trying to support a meaningful TCP message
>> latency w/iperf 2 from the sender side w/o requiring e2e clock
>> synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to
>> help with this. It seems that this event goes off when the bytes are in
>> flight vs have reached the destination network stack. If that's the case,
>> then iperf 2 client (sender) may be able to produce the message latency by
>> adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled
>> RTT.
>> >
>> > Does this seem reasonable?
>>
>> I’m not 100% sure what you’re asking, but I will try to help.
>>
>> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your
>> endpoint as writable (e.g., via kqueue or epoll) until less than that
>> threshold of data remains unsent. It won’t stop you writing more bytes if
>> you want to, up to the socket send buffer size, but it won’t *ask* you for
>> more data until the TCP_NOTSENT_LOWAT threshold is reached. In other words,
>> the TCP implementation attempts to keep BDP bytes in flight +
>> TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in
>> flight is necessary to fill the network pipe and get good throughput. The
>> TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the
>> source software some advance notice that the TCP implementation will soon
>> be looking for more bytes to send, so that the buffer doesn’t run dry,
>> thereby lowering throughput. (The old SO_SNDBUF option conflates both
>> “bytes in flight” and “bytes buffered and ready to go” into the same
>> number.)
>>
>> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n
>> bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification,
>> that will tell you roughly how long it took n bytes to depart the machine.
>> You won’t know why, though. The bytes could depart the machine in response
>> for acks indicating that the same number of bytes have been accepted at the
>> receiver. But the bytes can also depart the machine because CWND is
>> growing. Of course, both of those things are usually happening at the same
>> time.
>>
>> How to use TCP_NOTSENT_LOWAT is explained in this video:
>>
>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>
>>
>> Later in the same video is a two-minute demo (time offset 42:00 to time
>> offset 44:00) showing a “before and after” demo illustrating the dramatic
>> difference this makes for screen sharing responsiveness.
>>
>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>
>>
>> Stuart Cheshire
>
>
> This electronic communication and the information and any files
> transmitted with it, or attached to it, are confidential and are intended
> solely for the use of the individual or entity to whom it is addressed and
> may contain information that is confidential, legally privileged, protected
> by privacy laws, or otherwise restricted from disclosure to anyone else. If
> you are not the intended recipient or the person responsible for delivering
> the e-mail to the intended recipient, you are hereby notified that any use,
> copying, distributing, dissemination, forwarding, printing, or copying of
> this e-mail is strictly prohibited. If you received this e-mail in error,
> please return the e-mail to the sender, delete it from your computer, and
> destroy any printed copy of it.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>


-- 
Bjørn Ivar Teigen
Head of Research
+47 47335952 | bjorn@domos.no <name@domos.no> | www.domos.no
WiFi Slicing by Domos

[-- Attachment #2: Type: text/html, Size: 9198 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Starlink] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26 10:04                                   ` [Cerowrt-devel] [Starlink] " Bjørn Ivar Teigen
@ 2021-10-26 17:23                                     ` Bob McMahon
  2021-10-27 14:29                                       ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] " Sebastian Moeller
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-10-26 17:23 UTC (permalink / raw)
  To: Bjørn Ivar Teigen
  Cc: Stuart Cheshire, starlink, Valdis Klētnieks, Make-Wifi-fast,
	David P. Reed, Cake List, codel, Matt Mathis, cerowrt-devel,
	bloat, Neal Cardwell

[-- Attachment #1: Type: text/plain, Size: 8537 bytes --]

Hi Bjørn,

I find, when possible, it's preferred to take telemetry data of actual
traffic (or reads and writes) vs a proxy. We had a case where TCP BE was
outperforming TCP w/VI because BE had the most engineering resources
assigned to it and engineers did a better job with BE. Using a proxy
protocol wouldn't have exercised the same logic paths (in this case it was
in the L2 driver) as TCP did. Hence, measuring actual TCP traffic (or
socket reads and socket writes) was needed to flush out the problem. Note:
I also find that network engineers tend to focus on the stack but it's the
e2e at the application level that impacts user experience. Send side bloat
can drive the OWD while the TCP stack's RTT may look fine. For WiFi test &
measurement, we've decided most testing should use TCP_NOTSENT_LOWAT
because it helps mitigate send-side bloat, which WiFi engineering doesn't
focus on per lack of ability to impact it.

Also, I think OWD is under-tested, and two-way-based testing can give
incomplete and inaccurate information, particularly with respect to things
like an e2e transport's control loop.  The most obvious example is assuming
1/2 RTT is the same as the OWD in each direction. For WiFi this assumption is
almost always false. It is also false for many residential internet
connections where OWD asymmetry is designed in.
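To make that concrete with made-up, purely illustrative numbers: if the data
direction is queued behind 40 ms of buffering while the ack direction adds
only 2 ms, the two OWDs are roughly 40+ ms and 2+ ms, yet RTT/2 reports about
21 ms for both directions, misrepresenting each of them.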

Bob


On Tue, Oct 26, 2021 at 3:04 AM Bjørn Ivar Teigen <bjorn@domos.no> wrote:

> Hi Bob,
>
> My name is Bjørn Ivar Teigen and I'm working on modeling and measuring
> WiFi MAC-layer protocol performance for my PhD.
>
> Is it necessary to measure the latency using the TCP stream itself? I had
> a similar problem in the past, and solved it by doing the latency
> measurements using TWAMP running alongside the TCP traffic. The requirement
> for this to work is that the TWAMP packets are placed in the same queue(s)
> as the TCP traffic, and that the impact of measurement traffic is small
> enough so as not to interfere too much with your TCP results.
> Just my two cents, hope it's helpful.
>
> Bjørn
>
> On Tue, 26 Oct 2021 at 06:32, Bob McMahon <bob.mcmahon@broadcom.com>
> wrote:
>
>> Thanks Stuart this is helpful. I'm measuring the time just before the
>> first write() (of potentially a burst of writes to achieve a burst size)
>> per a socket fd's select event occurring when TCP_NOT_SENT_LOWAT being set
>> to a small value, then sampling the RTT and CWND and providing histograms
>> for all three, all on that event. I'm not sure the correctness of RTT and
>> CWND at this sample point. This is a controlled test over 802.11ax and
>> OFDMA where the TCP acks per the WiFi clients are being scheduled by the AP
>> using 802.11ax trigger frames so the AP is affecting the end/end BDP per
>> scheduling the transmits and the acks. The AP can grow the BDP or shrink it
>> based on these scheduling decisions.  From there we're trying to maximize
>> network power (throughput/delay) for elephant flows and just latency for
>> mouse flows. (We also plan some RF frequency stuff to per OFDMA) Anyway,
>> the AP based scheduling along with aggregation and OFDMA makes WiFi
>> scheduling optimums non-obvious - at least to me - and I'm trying to
>> provide insights into how an AP is affecting end/end performance.
>>
>> The more direct approach for e2e TCP latency and network power has been
>> to measure first write() to final read() and compute the e2e delay. This
>> requires clock sync on the ends. (We're using ptp4l with GPS OCXO
>> atomic references for that but this is typically only available in some
>> labs.)
>>
>> Bob
>>
>>
>> On Mon, Oct 25, 2021 at 8:11 PM Stuart Cheshire <cheshire@apple.com>
>> wrote:
>>
>>> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <
>>> make-wifi-fast@lists.bufferbloat.net> wrote:
>>>
>>> > Hi All,
>>> >
>>> > Sorry for the spam. I'm trying to support a meaningful TCP message
>>> latency w/iperf 2 from the sender side w/o requiring e2e clock
>>> synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to
>>> help with this. It seems that this event goes off when the bytes are in
>>> flight vs have reached the destination network stack. If that's the case,
>>> then iperf 2 client (sender) may be able to produce the message latency by
>>> adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled
>>> RTT.
>>> >
>>> > Does this seem reasonable?
>>>
>>> I’m not 100% sure what you’re asking, but I will try to help.
>>>
>>> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your
>>> endpoint as writable (e.g., via kqueue or epoll) until less than that
>>> threshold of data remains unsent. It won’t stop you writing more bytes if
>>> you want to, up to the socket send buffer size, but it won’t *ask* you for
>>> more data until the TCP_NOTSENT_LOWAT threshold is reached. In other words,
>>> the TCP implementation attempts to keep BDP bytes in flight +
>>> TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in
>>> flight is necessary to fill the network pipe and get good throughput. The
>>> TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the
>>> source software some advance notice that the TCP implementation will soon
>>> be looking for more bytes to send, so that the buffer doesn’t run dry,
>>> thereby lowering throughput. (The old SO_SNDBUF option conflates both
>>> “bytes in flight” and “bytes buffered and ready to go” into the same
>>> number.)
>>>
>>> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n
>>> bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification,
>>> that will tell you roughly how long it took n bytes to depart the machine.
>>> You won’t know why, though. The bytes could depart the machine in response
>>> for acks indicating that the same number of bytes have been accepted at the
>>> receiver. But the bytes can also depart the machine because CWND is
>>> growing. Of course, both of those things are usually happening at the same
>>> time.
>>>
>>> How to use TCP_NOTSENT_LOWAT is explained in this video:
>>>
>>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>
>>>
>>> Later in the same video is a two-minute demo (time offset 42:00 to time
>>> offset 44:00) showing a “before and after” demo illustrating the dramatic
>>> difference this makes for screen sharing responsiveness.
>>>
>>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>
>>>
>>> Stuart Cheshire
>>
>>
>> This electronic communication and the information and any files
>> transmitted with it, or attached to it, are confidential and are intended
>> solely for the use of the individual or entity to whom it is addressed and
>> may contain information that is confidential, legally privileged, protected
>> by privacy laws, or otherwise restricted from disclosure to anyone else. If
>> you are not the intended recipient or the person responsible for delivering
>> the e-mail to the intended recipient, you are hereby notified that any use,
>> copying, distributing, dissemination, forwarding, printing, or copying of
>> this e-mail is strictly prohibited. If you received this e-mail in error,
>> please return the e-mail to the sender, delete it from your computer, and
>> destroy any printed copy of it.
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>
>
> --
> Bjørn Ivar Teigen
> Head of Research
> +47 47335952 | bjorn@domos.no <name@domos.no> | www.domos.no
> WiFi Slicing by Domos
>


[-- Attachment #2: Type: text/html, Size: 11833 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26  4:24                                 ` [Cerowrt-devel] [Bloat] " Eric Dumazet
@ 2021-10-26 18:45                                   ` Christoph Paasch
  2021-10-26 23:23                                     ` Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Christoph Paasch @ 2021-10-26 18:45 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Stuart Cheshire, Bob McMahon, Cake List, Valdis Klētnieks,
	Make-Wifi-fast, David P. Reed, starlink, codel, cerowrt-devel,
	bloat, Steve Crocker, Vint Cerf

Hello,

> On Oct 25, 2021, at 9:24 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
> 
> 
> On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
>> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
>> 
>>> Hi All,
>>> 
>>> Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
>>> 
>>> Does this seem reasonable?
>> 
>> I’m not 100% sure what you’re asking, but I will try to help.
>> 
>> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached.
> 
> 
> When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made sure that sendmsg() would actually
> stop feeding more bytes in TCP transmit queue if the current amount of unsent bytes
> was above the threshold.
> 
> So it looks like Apple implementation is different, based on your description ?

Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...

An app can still fill the send-buffer if it does a sendmsg() with a large buffer or does repeated calls to sendmsg().

For Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to quickly change the data it "scheduled" to send, and thus allow the app to write the smallest "logical unit" it has. If that unit is 512KB large, the app is allowed to send that.
For example, in the case of video streaming one may want to skip ahead in the video. In that case the app still needs to transmit the remaining parts of the previous frame anyway, before it can send the new video frame.
That's the reason why the Apple implementation allows one to write more than just the lowat threshold.


That being said, I do think that Linux's way allows for an easier API, because the app does not need to be careful about how much data it sends after an epoll/kqueue wakeup. So, the latency benefits will be easier to get.
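A small sketch of what that difference implies for an application wanting the latency benefit under both behaviors: cap how much you queue per wakeup yourself instead of relying on the stack to stop you (illustrative names, non-blocking socket assumed, error handling minimal):

/*
 * Sketch only: after each writability wakeup, write at most one small
 * "logical unit" so the amount of unsent data stays near the
 * TCP_NOTSENT_LOWAT threshold even on stacks (like Apple's, per this
 * thread) where send() won't push back. Partial writes are handled.
 */
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

static ssize_t write_one_unit(int fd, const char *unit, size_t unit_len)
{
    size_t off = 0;

    while (off < unit_len) {
        ssize_t n = send(fd, unit + off, unit_len - off, 0);
        if (n > 0) {
            off += (size_t)n;
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            break;              /* socket pushed back; wait for next wakeup */
        } else {
            return -1;          /* real error */
        }
    }
    return (ssize_t)off;        /* bytes of this unit actually queued */
}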


Christoph



> [1] https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36
> 
> netperf does not use epoll(), but rather a loop over sendmsg().
> 
> One of the point of TCP_NOTSENT_LOWAT for Google was to be able to considerably increase
> max number of bytes in transmit queues (3rd column of /proc/sys/net/ipv4/tcp_wmem)
> by 10x, allowing for autotune to increase BDP for big RTT flows, this without
> increasing memory needs for flows with small RTT.
> 
> In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)
>> 
>> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response for acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
>> 
>> How to use TCP_NOTSENT_LOWAT is explained in this video:
>> 
>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>
>> 
>> Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.
>> 
>> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>
>> 
>> Stuart Cheshire
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26 18:45                                   ` Christoph Paasch
@ 2021-10-26 23:23                                     ` Bob McMahon
  2021-10-26 23:38                                       ` Christoph Paasch
  0 siblings, 1 reply; 108+ messages in thread
From: Bob McMahon @ 2021-10-26 23:23 UTC (permalink / raw)
  To: Christoph Paasch
  Cc: Eric Dumazet, Stuart Cheshire, Cake List, Valdis Klētnieks,
	Make-Wifi-fast, David P. Reed, starlink, codel, cerowrt-devel,
	bloat, Steve Crocker, Vint Cerf

[-- Attachment #1: Type: text/plain, Size: 13983 bytes --]

I'm confused. I don't see any blocking or partial writes from the write()
at the app level with TCP_NOTSENT_LOWAT set at 4 bytes. The burst is 40K,
the write size is 4K and the watermark is 4 bytes. There are ten writes per
burst.

The S8 histograms are the times waiting on the select().  The first value
is the bin number (multiplied by the 100 usec bin width) and the second is the bin
count. The worst-case time is at the end and is timestamped per the unix epoch.

The second run is over a controlled WiFi link where a 99.7% point of 4-8ms
for a WiFi TX op arbitration win is in the ballpark. The first is 1G wired
and is in the 600 usec range. (No media arbitration there.)

 [root@localhost iperf2-code]# src/iperf -c 10.19.87.9 --trip-times -i 1 -e
--tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
WARN: option of --burst-size without --burst-period defaults --burst-period
to 1 second
------------------------------------------------------------
Client connecting to 10.19.87.9, TCP port 5001 with pid 2124 (1 flows)
Write buffer size: 4096 Byte
Bursting: 40.0 KByte every 1.00 seconds
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 4 bytes)
Enabled select histograms bin-width=0.100 ms, bins=10000
------------------------------------------------------------
[  1] local 10.19.87.10%eth0 port 33166 connected with 10.19.87.9 port 5001
(MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=0.54 ms) on 2021-10-26
16:07:33 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0
14K/5368 us  8
[  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,2:5,3:2,4:1,11:1
(5.00/95.00/99.7%=1/11/11,Outliers=0,obl/obu=0/0) (1.089
ms/1635289653.928360)
[  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/569 us  72
[  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:1,3:4,4:1,7:1,8:1
(5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.736 ms/1635289654.928088)
[  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/312 us  131
[  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:2,5:2,6:1
(5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.548 ms/1635289655.927776)
[  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/302 us  136
[  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:2,3:5,6:1
(5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.584 ms/1635289656.927814)
[  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/316 us  130
[  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:2,4:2,5:2,6:1
(5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.572 ms/1635289657.927810)
[  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/253 us  162
[  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:4,5:1
(5.00/95.00/99.7%=1/5/5,Outliers=0,obl/obu=0/0) (0.417 ms/1635289658.927630)
[  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/290 us  141
[  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:3,4:3,6:1
(5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.573 ms/1635289659.927771)
[  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/359 us  114
[  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,3:4,4:3,6:1
(5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.570 ms/1635289660.927753)
[  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/349 us  117
[  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:5,4:1,7:1
(5.00/95.00/99.7%=1/7/7,Outliers=0,obl/obu=0/0) (0.608 ms/1635289661.927843)
[  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
14K/347 us  118
[  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:1,3:5,8:1
(5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.725 ms/1635289662.927861)
[  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0
14K/1519 us  27
[  1] 0.00-10.01 sec S8(f)-PDF:
bin(w=100us):cnt(100)=1:25,2:13,3:36,4:11,5:5,6:5,7:2,8:2,11:1
(5.00/95.00/99.7%=1/7/11,Outliers=0,obl/obu=0/0) (1.089
ms/1635289653.928360)

[root@localhost iperf2-code]# src/iperf -c 192.168.1.1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
------------------------------------------------------------
Client connecting to 192.168.1.1, TCP port 5001 with pid 2131 (1 flows)
Write buffer size: 4096 Byte
Bursting: 40.0 KByte every 1.00 seconds
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 4 bytes)
Enabled select histograms bin-width=0.100 ms, bins=10000
------------------------------------------------------------
[  1] local 192.168.1.4%eth1 port 45518 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.48 ms) on 2021-10-26 16:07:56 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10339 us  4
[  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,40:1,47:1,49:2,50:3,51:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.990 ms/1635289676.802143)
[  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4853 us  8
[  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,39:1,44:1,45:1,49:1,51:1,52:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.937 ms/1635289677.802274)
[  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4991 us  8
[  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:2,50:2,51:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.307 ms/1635289678.794326)
[  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4610 us  9
[  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:3,56:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.362 ms/1635289679.794335)
[  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5028 us  8
[  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.367 ms/1635289680.794399)
[  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5113 us  8
[  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:2,58:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.442 ms/1635289681.794392)
[  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5054 us  8
[  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,51:1,60:2,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.374 ms/1635289682.794335)
[  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5138 us  8
[  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:2,40:1,49:2,50:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.396 ms/1635289683.794338)
[  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
[  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1 (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0) (6.292 ms/1635289684.794262)
[  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
[  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:3,84:1 (5.00/95.00/99.7%=1/84/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
[  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/6331 us  6
[  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,38:2,39:5,40:2,44:1,45:3,47:1,48:1,49:26,50:17,51:4,52:1,56:1,58:1,59:1,60:7,63:1,64:5,65:1,84:1 (5.00/95.00/99.7%=1/64/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)

Bob

On Tue, Oct 26, 2021 at 11:45 AM Christoph Paasch <cpaasch@apple.com> wrote:

> Hello,
>
> > On Oct 25, 2021, at 9:24 PM, Eric Dumazet <eric.dumazet@gmail.com>
> wrote:
> >
> >
> >
> > On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> >> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net> wrote:
> >>
> >>> Hi All,
> >>>
> >>> Sorry for the spam. I'm trying to support a meaningful TCP message
> latency w/iperf 2 from the sender side w/o requiring e2e clock
> synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to
> help with this. It seems that this event goes off when the bytes are in
> flight vs have reached the destination network stack. If that's the case,
> then iperf 2 client (sender) may be able to produce the message latency by
> adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled
> RTT.
> >>>
> >>> Does this seem reasonable?
> >>
> >> I’m not 100% sure what you’re asking, but I will try to help.
> >>
> >> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report
> your endpoint as writable (e.g., via kqueue or epoll) until less than that
> threshold of data remains unsent. It won’t stop you writing more bytes if
> you want to, up to the socket send buffer size, but it won’t *ask* you for
> more data until the TCP_NOTSENT_LOWAT threshold is reached.
> >
> >
> > When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made sure that
> sendmsg() would actually
> > stop feeding more bytes in TCP transmit queue if the current amount of
> unsent bytes
> > was above the threshold.
> >
> > So it looks like Apple implementation is different, based on your
> description ?
>
> Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...
>
> An app can still fill the send-buffer if it does a sendmsg() with a large
> buffer or does repeated calls to sendmsg().
>
> Fur Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to quickly
> change the data it "scheduled" to send. And thus allow the app to write the
> smallest "logical unit" it has. If that unit is 512KB large, the app is
> allowed to send that.
> For example, in case of video-streaming one may want to skip ahead in the
> video. In that case the app still needs to transmit the remaining parts of
> the previous frame anyways, before it can send the new video frame.
> That's the reason why the Apple implementation allows one to write more
> than just the lowat threshold.
>
>
> That being said, I do think that Linux's way allows for an easier API
> because the app does not need to be careful at how much data it sends after
> an epoll/kqueue wakeup. So, the latency-benefits will be easier to get.
>
>
> Christoph
>
>
>
> > [1]
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36
> >
> > netperf does not use epoll(), but rather a loop over sendmsg().
> >
> > One of the point of TCP_NOTSENT_LOWAT for Google was to be able to
> considerably increase
> > max number of bytes in transmit queues (3rd column of
> /proc/sys/net/ipv4/tcp_wmem)
> > by 10x, allowing for autotune to increase BDP for big RTT flows, this
> without
> > increasing memory needs for flows with small RTT.
> >
> > In other words, the TCP implementation attempts to keep BDP bytes in
> flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes
> in flight is necessary to fill the network pipe and get good throughput.
> The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give
> the source software some advance notice that the TCP implementation will
> soon be looking for more bytes to send, so that the buffer doesn’t run dry,
> thereby lowering throughput. (The old SO_SNDBUF option conflates both
> “bytes in flight” and “bytes buffered and ready to go” into the same
> number.)
> >>
> >> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n
> bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification,
> that will tell you roughly how long it took n bytes to depart the machine.
> You won’t know why, though. The bytes could depart the machine in response
> for acks indicating that the same number of bytes have been accepted at the
> receiver. But the bytes can also depart the machine because CWND is
> growing. Of course, both of those things are usually happening at the same
> time.
> >>
> >> How to use TCP_NOTSENT_LOWAT is explained in this video:
> >>
> >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>
> >>
> >> Later in the same video is a two-minute demo (time offset 42:00 to time
> offset 44:00) showing a “before and after” demo illustrating the dramatic
> difference this makes for screen sharing responsiveness.
> >>
> >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>
> >>
> >> Stuart Cheshire
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >>
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>


[-- Attachment #2: Type: text/html, Size: 16092 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26 23:23                                     ` Bob McMahon
@ 2021-10-26 23:38                                       ` Christoph Paasch
  2021-10-27  1:12                                         ` [Cerowrt-devel] " Eric Dumazet
  0 siblings, 1 reply; 108+ messages in thread
From: Christoph Paasch @ 2021-10-26 23:38 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Eric Dumazet, Stuart Cheshire, Cake List, Valdis Klētnieks,
	Make-Wifi-fast, David P. Reed, starlink, codel, cerowrt-devel,
	bloat, Steve Crocker, Vint Cerf

[-- Attachment #1: Type: text/plain, Size: 14938 bytes --]

Hi Bob,

> On Oct 26, 2021, at 4:23 PM, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
> I'm confused. I don't see any blocking nor partial writes per the write() at the app level with TCP_NOTSENT_LOWAT set at 4 bytes. The burst is 40K, the write size is 4K and the watermark is 4 bytes. There are ten writes per burst.

You are on Linux here, right?

AFAICS, Linux will still accept whatever fits in an skb. And that is likely more than 4K (with GSO on by default).

However, do you go back to select() after each write() or do you loop over the write() calls?
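
That is, roughly the difference between these two loops (a minimal sketch,
not the iperf 2 code; fd is assumed to be a connected TCP socket that
already has TCP_NOTSENT_LOWAT set, names illustrative, error handling
omitted):

#include <sys/select.h>
#include <unistd.h>

/* Pattern A: go back to select() before every write(); the writability
 * wakeup only fires once the unsent queue drops below the watermark. */
static void burst_select_per_write(int fd, const char *buf, size_t len, int nwrites)
{
    fd_set wfds;
    for (int i = 0; i < nwrites; i++) {
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        select(fd + 1, NULL, &wfds, NULL, NULL);
        write(fd, buf, len);
    }
}

/* Pattern B: loop over write() for the whole burst, then select() once. */
static void burst_write_then_select(int fd, const char *buf, size_t len, int nwrites)
{
    fd_set wfds;
    for (int i = 0; i < nwrites; i++)
        write(fd, buf, len);
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    select(fd + 1, NULL, &wfds, NULL, NULL);
}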


Christoph

> The S8 histograms are the times waiting on the select().  The first value is the bin number (multiplied by 100usec bin width) and second the bin count. The worst case time is at the end and is timestamped per unix epoch.
> 
> The second run is over a controlled WiFi link where a 99.7% point of 4-8ms for a WiFi TX op arbitration win is in the ballpark. The first is 1G wired and is in the 600 usec range. (No media arbitration there.)
> 
>  [root@localhost iperf2-code]# src/iperf -c 10.19.87.9 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
> ------------------------------------------------------------
> Client connecting to 10.19.87.9, TCP port 5001 with pid 2124 (1 flows)
> Write buffer size: 4096 Byte
> Bursting: 40.0 KByte every 1.00 seconds
> TCP window size: 85.0 KByte (default)
> Event based writes (pending queue watermark at 4 bytes)
> Enabled select histograms bin-width=0.100 ms, bins=10000
> ------------------------------------------------------------
> [  1] local 10.19.87.10%eth0 port 33166 connected with 10.19.87.9 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=0.54 ms) on 2021-10-26 16:07:33 (PDT)
> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/5368 us  8
> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,2:5,3:2,4:1,11:1 (5.00/95.00/99.7%=1/11/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/569 us  72
> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:1,3:4,4:1,7:1,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.736 ms/1635289654.928088)
> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/312 us  131
> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.548 ms/1635289655.927776)
> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/302 us  136
> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:2,3:5,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.584 ms/1635289656.927814)
> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/316 us  130
> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:2,4:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.572 ms/1635289657.927810)
> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/253 us  162
> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:4,5:1 (5.00/95.00/99.7%=1/5/5,Outliers=0,obl/obu=0/0) (0.417 ms/1635289658.927630)
> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/290 us  141
> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:3,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.573 ms/1635289659.927771)
> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/359 us  114
> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,3:4,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.570 ms/1635289660.927753)
> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/349 us  117
> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:5,4:1,7:1 (5.00/95.00/99.7%=1/7/7,Outliers=0,obl/obu=0/0) (0.608 ms/1635289661.927843)
> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/347 us  118
> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:1,3:5,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.725 ms/1635289662.927861)
> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/1519 us  27
> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:25,2:13,3:36,4:11,5:5,6:5,7:2,8:2,11:1 (5.00/95.00/99.7%=1/7/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
> 
> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
> ------------------------------------------------------------
> Client connecting to 192.168.1.1, TCP port 5001 with pid 2131 (1 flows)
> Write buffer size: 4096 Byte
> Bursting: 40.0 KByte every 1.00 seconds
> TCP window size: 85.0 KByte (default)
> Event based writes (pending queue watermark at 4 bytes)
> Enabled select histograms bin-width=0.100 ms, bins=10000
> ------------------------------------------------------------
> [  1] local 192.168.1.4%eth1 port 45518 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.48 ms) on 2021-10-26 16:07:56 (PDT)
> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10339 us  4
> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,40:1,47:1,49:2,50:3,51:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.990 ms/1635289676.802143)
> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4853 us  8
> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,39:1,44:1,45:1,49:1,51:1,52:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.937 ms/1635289677.802274)
> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4991 us  8
> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:2,50:2,51:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.307 ms/1635289678.794326)
> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4610 us  9
> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:3,56:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.362 ms/1635289679.794335)
> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5028 us  8
> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.367 ms/1635289680.794399)
> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5113 us  8
> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:2,58:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.442 ms/1635289681.794392)
> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5054 us  8
> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,51:1,60:2,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.374 ms/1635289682.794335)
> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5138 us  8
> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:2,40:1,49:2,50:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.396 ms/1635289683.794338)
> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1 (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0) (6.292 ms/1635289684.794262)
> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:3,84:1 (5.00/95.00/99.7%=1/84/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/6331 us  6
> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,38:2,39:5,40:2,44:1,45:3,47:1,48:1,49:26,50:17,51:4,52:1,56:1,58:1,59:1,60:7,63:1,64:5,65:1,84:1 (5.00/95.00/99.7%=1/64/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
> 
> Bob
> 
> On Tue, Oct 26, 2021 at 11:45 AM Christoph Paasch <cpaasch@apple.com <mailto:cpaasch@apple.com>> wrote:
> Hello,
> 
> > On Oct 25, 2021, at 9:24 PM, Eric Dumazet <eric.dumazet@gmail.com <mailto:eric.dumazet@gmail.com>> wrote:
> > 
> > 
> > 
> > On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> >> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net <mailto:make-wifi-fast@lists.bufferbloat.net>> wrote:
> >> 
> >>> Hi All,
> >>> 
> >>> Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
> >>> 
> >>> Does this seem reasonable?
> >> 
> >> I’m not 100% sure what you’re asking, but I will try to help.
> >> 
> >> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached.
> > 
> > 
> > When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made sure that sendmsg() would actually
> > stop feeding more bytes in TCP transmit queue if the current amount of unsent bytes
> > was above the threshold.
> > 
> > So it looks like Apple implementation is different, based on your description ?
> 
> Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...
> 
> An app can still fill the send-buffer if it does a sendmsg() with a large buffer or does repeated calls to sendmsg().
> 
> Fur Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to quickly change the data it "scheduled" to send. And thus allow the app to write the smallest "logical unit" it has. If that unit is 512KB large, the app is allowed to send that.
> For example, in case of video-streaming one may want to skip ahead in the video. In that case the app still needs to transmit the remaining parts of the previous frame anyways, before it can send the new video frame.
> That's the reason why the Apple implementation allows one to write more than just the lowat threshold.
> 
> 
> That being said, I do think that Linux's way allows for an easier API because the app does not need to be careful at how much data it sends after an epoll/kqueue wakeup. So, the latency-benefits will be easier to get.
> 
> 
> Christoph
> 
> 
> 
> > [1] https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36 <https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36>
> > 
> > netperf does not use epoll(), but rather a loop over sendmsg().
> > 
> > One of the point of TCP_NOTSENT_LOWAT for Google was to be able to considerably increase
> > max number of bytes in transmit queues (3rd column of /proc/sys/net/ipv4/tcp_wmem)
> > by 10x, allowing for autotune to increase BDP for big RTT flows, this without
> > increasing memory needs for flows with small RTT.
> > 
> > In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)
> >> 
> >> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response for acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
> >> 
> >> How to use TCP_NOTSENT_LOWAT is explained in this video:
> >> 
> >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>>
> >> 
> >> Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.
> >> 
> >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>>
> >> 
> >> Stuart Cheshire
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
> >> https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>
> >> 
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
> > https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>
> 
> 

[-- Attachment #2: Type: text/html, Size: 20229 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26 23:38                                       ` Christoph Paasch
@ 2021-10-27  1:12                                         ` Eric Dumazet
  2021-10-27  3:45                                           ` Bob McMahon
  0 siblings, 1 reply; 108+ messages in thread
From: Eric Dumazet @ 2021-10-27  1:12 UTC (permalink / raw)
  To: Christoph Paasch, Bob McMahon
  Cc: Stuart Cheshire, Cake List, Valdis Klētnieks,
	Make-Wifi-fast, David P. Reed, starlink, codel, cerowrt-devel,
	bloat, Steve Crocker, Vint Cerf



On 10/26/21 4:38 PM, Christoph Paasch wrote:
> Hi Bob,
> 
>> On Oct 26, 2021, at 4:23 PM, Bob McMahon <bob.mcmahon@broadcom.com <mailto:bob.mcmahon@broadcom.com>> wrote:
>> I'm confused. I don't see any blocking nor partial writes per the write() at the app level with TCP_NOTSENT_LOWAT set at 4 bytes. The burst is 40K, the write size is 4K and the watermark is 4 bytes. There are ten writes per burst.
> 
> You are on Linux here, right?
> 
> AFAICS, Linux will still accept whatever fits in an skb. And that is likely more than 4K (with GSO on by default).

This (max payload per skb) can be tuned at the driver level, at least for experimental purposes or dedicated devices.

ip link set dev eth0 gso_max_size 8000

To fetch current values:

ip -d link sh dev eth0
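
For completeness, the 4-byte watermark in these runs is just the per-socket
TCP_NOTSENT_LOWAT option (what iperf 2's --tcp-write-prefetch maps to, per
Bob's description); a minimal sketch, assuming a connected Linux TCP socket
fd, with the helper name illustrative:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* After this, writability wakeups (select/epoll) fire only once the
 * unsent queue has drained below lowat bytes. */
static int set_notsent_lowat(int fd, int lowat)
{
    return setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));
}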


> 
> However, do you go back to select() after each write() or do you loop over the write() calls?
> 
> 
> Christoph
> 
>> The S8 histograms are the times waiting on the select().  The first value is the bin number (multiplied by 100usec bin width) and second the bin count. The worst case time is at the end and is timestamped per unix epoch.
>>
>> The second run is over a controlled WiFi link where a 99.7% point of 4-8ms for a WiFi TX op arbitration win is in the ballpark. The first is 1G wired and is in the 600 usec range. (No media arbitration there.)
>>
>>  [root@localhost iperf2-code]# src/iperf -c 10.19.87.9 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
>> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
>> ------------------------------------------------------------
>> Client connecting to 10.19.87.9, TCP port 5001 with pid 2124 (1 flows)
>> Write buffer size: 4096 Byte
>> Bursting: 40.0 KByte every 1.00 seconds
>> TCP window size: 85.0 KByte (default)
>> Event based writes (pending queue watermark at 4 bytes)
>> Enabled select histograms bin-width=0.100 ms, bins=10000
>> ------------------------------------------------------------
>> [  1] local 10.19.87.10%eth0 port 33166 connected with 10.19.87.9 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=0.54 ms) on 2021-10-26 16:07:33 (PDT)
>> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
>> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/5368 us  8
>> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,2:5,3:2,4:1,11:1 (5.00/95.00/99.7%=1/11/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
>> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/569 us  72
>> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:1,3:4,4:1,7:1,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.736 ms/1635289654.928088)
>> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/312 us  131
>> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.548 ms/1635289655.927776)
>> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/302 us  136
>> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:2,3:5,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.584 ms/1635289656.927814)
>> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/316 us  130
>> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:2,4:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.572 ms/1635289657.927810)
>> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/253 us  162
>> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:4,5:1 (5.00/95.00/99.7%=1/5/5,Outliers=0,obl/obu=0/0) (0.417 ms/1635289658.927630)
>> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/290 us  141
>> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:3,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.573 ms/1635289659.927771)
>> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/359 us  114
>> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,3:4,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.570 ms/1635289660.927753)
>> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/349 us  117
>> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:5,4:1,7:1 (5.00/95.00/99.7%=1/7/7,Outliers=0,obl/obu=0/0) (0.608 ms/1635289661.927843)
>> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/347 us  118
>> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:1,3:5,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.725 ms/1635289662.927861)
>> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/1519 us  27
>> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:25,2:13,3:36,4:11,5:5,6:5,7:2,8:2,11:1 (5.00/95.00/99.7%=1/7/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
>>
>> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
>> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
>> ------------------------------------------------------------
>> Client connecting to 192.168.1.1, TCP port 5001 with pid 2131 (1 flows)
>> Write buffer size: 4096 Byte
>> Bursting: 40.0 KByte every 1.00 seconds
>> TCP window size: 85.0 KByte (default)
>> Event based writes (pending queue watermark at 4 bytes)
>> Enabled select histograms bin-width=0.100 ms, bins=10000
>> ------------------------------------------------------------
>> [  1] local 192.168.1.4%eth1 port 45518 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.48 ms) on 2021-10-26 16:07:56 (PDT)
>> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
>> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10339 us  4
>> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,40:1,47:1,49:2,50:3,51:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.990 ms/1635289676.802143)
>> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4853 us  8
>> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,39:1,44:1,45:1,49:1,51:1,52:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.937 ms/1635289677.802274)
>> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4991 us  8
>> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:2,50:2,51:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.307 ms/1635289678.794326)
>> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4610 us  9
>> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:3,56:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.362 ms/1635289679.794335)
>> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5028 us  8
>> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.367 ms/1635289680.794399)
>> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5113 us  8
>> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:2,58:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.442 ms/1635289681.794392)
>> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5054 us  8
>> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,51:1,60:2,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.374 ms/1635289682.794335)
>> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5138 us  8
>> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:2,40:1,49:2,50:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.396 ms/1635289683.794338)
>> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
>> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1 (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0) (6.292 ms/1635289684.794262)
>> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
>> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:3,84:1 (5.00/95.00/99.7%=1/84/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
>> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/6331 us  6
>> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,38:2,39:5,40:2,44:1,45:3,47:1,48:1,49:26,50:17,51:4,52:1,56:1,58:1,59:1,60:7,63:1,64:5,65:1,84:1 (5.00/95.00/99.7%=1/64/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
>>
>> Bob
>>
>> On Tue, Oct 26, 2021 at 11:45 AM Christoph Paasch <cpaasch@apple.com <mailto:cpaasch@apple.com>> wrote:
>>
>>     Hello,
>>
>>     > On Oct 25, 2021, at 9:24 PM, Eric Dumazet <eric.dumazet@gmail.com <mailto:eric.dumazet@gmail.com>> wrote:
>>     >
>>     >
>>     >
>>     > On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
>>     >> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net <mailto:make-wifi-fast@lists.bufferbloat.net>> wrote:
>>     >>
>>     >>> Hi All,
>>     >>>
>>     >>> Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
>>     >>>
>>     >>> Does this seem reasonable?
>>     >>
>>     >> I’m not 100% sure what you’re asking, but I will try to help.
>>     >>
>>     >> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached.
>>     >
>>     >
>>     > When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made sure that sendmsg() would actually
>>     > stop feeding more bytes in TCP transmit queue if the current amount of unsent bytes
>>     > was above the threshold.
>>     >
>>     > So it looks like Apple implementation is different, based on your description ?
>>
>>     Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...
>>
>>     An app can still fill the send-buffer if it does a sendmsg() with a large buffer or does repeated calls to sendmsg().
>>
>>     Fur Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to quickly change the data it "scheduled" to send. And thus allow the app to write the smallest "logical unit" it has. If that unit is 512KB large, the app is allowed to send that.
>>     For example, in case of video-streaming one may want to skip ahead in the video. In that case the app still needs to transmit the remaining parts of the previous frame anyways, before it can send the new video frame.
>>     That's the reason why the Apple implementation allows one to write more than just the lowat threshold.
>>
>>
>>     That being said, I do think that Linux's way allows for an easier API because the app does not need to be careful at how much data it sends after an epoll/kqueue wakeup. So, the latency-benefits will be easier to get.
>>
>>
>>     Christoph
>>
>>
>>
>>     > [1] https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36 <https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36>
>>     >
>>     > netperf does not use epoll(), but rather a loop over sendmsg().
>>     >
>>     > One of the point of TCP_NOTSENT_LOWAT for Google was to be able to considerably increase
>>     > max number of bytes in transmit queues (3rd column of /proc/sys/net/ipv4/tcp_wmem)
>>     > by 10x, allowing for autotune to increase BDP for big RTT flows, this without
>>     > increasing memory needs for flows with small RTT.
>>     >
>>     > In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)
>>     >>
>>     >> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response for acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
>>     >>
>>     >> How to use TCP_NOTSENT_LOWAT is explained in this video:
>>     >>
>>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>>
>>     >>
>>     >> Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.
>>     >>
>>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>>
>>     >>
>>     >> Stuart Cheshire
>>     >> _______________________________________________
>>     >> Bloat mailing list
>>     >> Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
>>     >> https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>
>>     >>
>>     > _______________________________________________
>>     > Bloat mailing list
>>     > Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
>>     > https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>
>>
>>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-27  1:12                                         ` [Cerowrt-devel] " Eric Dumazet
@ 2021-10-27  3:45                                           ` Bob McMahon
  2021-10-27  5:40                                             ` [Cerowrt-devel] " Eric Dumazet
  2021-10-28 16:04                                             ` Christoph Paasch
  0 siblings, 2 replies; 108+ messages in thread
From: Bob McMahon @ 2021-10-27  3:45 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Christoph Paasch, Stuart Cheshire, Cake List,
	Valdis Klētnieks, Make-Wifi-fast, David P. Reed, starlink,
	codel, cerowrt-devel, bloat, Steve Crocker, Vint Cerf

[-- Attachment #1: Type: text/plain, Size: 24042 bytes --]

This is Linux. The code flow is: write the burst until the burst size is
reached, take a timestamp, call select(), take a second timestamp and insert
the time delta into a histogram, then wait on clock_nanosleep() to schedule
the next burst. (Actually, the deltas, the histogram inserts and the user
i/o are done in another thread, i.e. iperf 2's reporter thread.)

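Per burst that's roughly the following (a simplified sketch, not the actual
iperf 2 source; names illustrative, error handling omitted):

#include <sys/select.h>
#include <time.h>
#include <unistd.h>

/* One burst: write burst_size bytes in write_size chunks (ten 4K writes
 * for a 40K burst), then measure how long select() waits for the
 * TCP_NOTSENT_LOWAT writability wakeup. */
static double one_burst(int fd, const char *buf, size_t write_size, size_t burst_size)
{
    struct timespec t0, t1;
    fd_set wfds;

    for (size_t sent = 0; sent < burst_size; sent += write_size)
        write(fd, buf, write_size);
    clock_gettime(CLOCK_MONOTONIC, &t0);        /* first timestamp */
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    select(fd + 1, NULL, &wfds, NULL, NULL);    /* unsent queue < watermark */
    clock_gettime(CLOCK_MONOTONIC, &t1);        /* second timestamp */
    /* the delta feeds the S8 select histogram (100 us bins); the caller
     * then clock_nanosleep()s to the next one-second burst boundary */
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}
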
I still must be missing something. Does anything else need to be set to
reduce the skb size? Everything seems to indicate 4K writes even when
gso_max_size is 2000 (I assume the units are bytes?). There are ten writes,
ten reads and ten RTTs per burst. I don't see partial writes at the app
level.

[root@localhost iperf2-code]# ip link set dev eth1 gso_max_size 2000
[root@localhost iperf2-code]# ip -d link sh dev eth1
9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 00:90:4c:40:04:59 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 1500 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 2000 gso_max_segs 65535
[root@localhost iperf2-code]# uname -r
5.0.9-301.fc30.x86_64


It looks like the RTT is being driven by WiFi TXOPs, as doubling the write
size roughly doubles the aggregation but has no significant effect on the
RTTs.

4K writes: tot_mpdus 328 tot_ampdus 209 mpduperampdu 2


8K writes: tot_mpdus 317 tot_ampdus 107 mpduperampdu 3


[root@localhost iperf2-code]# src/iperf -c 192.168.1.1%eth1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
------------------------------------------------------------
Client connecting to 192.168.1.1, TCP port 5001 with pid 5145 via eth1 (1 flows)
Write buffer size: 4096 Byte
Bursting: 40.0 KByte every 1.00 seconds
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 4 bytes)
Enabled select histograms bin-width=0.100 ms, bins=10000
------------------------------------------------------------
[  1] local 192.168.1.4%eth1 port 45680 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.30 ms) on 2021-10-26 20:25:29 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10091 us  4
[  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,36:1,40:1,44:1,46:1,48:1,49:1,50:2,52:1 (5.00/95.00/99.7%=1/52/52,Outliers=0,obl/obu=0/0) (5.121 ms/1635305129.152339)
[  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4990 us  8
[  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,45:1,49:5,50:1 (5.00/95.00/99.7%=1/50/50,Outliers=0,obl/obu=0/0) (4.991 ms/1635305130.153330)
[  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4904 us  8
[  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,29:1,49:4,50:1,59:1,75:1 (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.455 ms/1635305131.147353)
[  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4964 us  8
[  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:4,50:2,59:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.460 ms/1635305132.146338)
[  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4970 us  8
[  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.404 ms/1635305133.146335)
[  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4986 us  8
[  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:1,50:4,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.395 ms/1635305134.146343)
[  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5059 us  8
[  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:2,60:1,85:1 (5.00/95.00/99.7%=1/85/85,Outliers=0,obl/obu=0/0) (8.417 ms/1635305135.148343)
[  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5407 us  8
[  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,40:1,49:4,50:1,59:1,75:1 (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.428 ms/1635305136.147343)
[  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5188 us  8
[  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,40:1,49:3,50:3,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.388 ms/1635305137.146284)
[  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5306 us  8
[  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:2,50:2,51:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.422 ms/1635305138.146316)
[  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/5939 us  7
[  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,29:1,36:1,39:3,40:3,44:1,45:1,46:1,48:2,49:33,50:18,51:1,52:1,59:5,60:2,64:2,65:3,75:2,85:1 (5.00/95.00/99.7%=1/65/85,Outliers=0,obl/obu=0/0) (8.417 ms/1635305135.148343)

[root@localhost iperf2-code]# src/iperf -s -i 1 -e -B 192.168.1.1%eth1
------------------------------------------------------------
Server listening on TCP port 5001 with pid 6287
Binding to local address 192.168.1.1 and iface eth1
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.1%eth1 port 5001 connected with 192.168.1.4 port 45680 (MSS=1448) (burst-period=1.0000s) (trip-times) (sock=4) (peer 2.1.4-master) on 2021-10-26 20:25:29 (PDT)
[ ID] Burst (start-end)  Transfer     Bandwidth       XferTime  (DC%)     Reads=Dist          NetPwr
[  1] 0.0001-0.0500 sec  40.1 KBytes  6.59 Mbits/sec  49.848 ms (5%)     12=12:0:0:0:0:0:0:0  0
[  1] 1.0002-1.0461 sec  40.0 KBytes  7.14 Mbits/sec  45.913 ms (4.6%)   10=10:0:0:0:0:0:0:0  0
[  1] 2.0002-2.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.876 ms (4.9%)   11=11:0:0:0:0:0:0:0  0
[  1] 3.0002-3.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.886 ms (5%)     10=10:0:0:0:0:0:0:0  0
[  1] 4.0002-4.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.887 ms (5%)     10=10:0:0:0:0:0:0:0  0
[  1] 5.0002-5.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.881 ms (5%)     10=10:0:0:0:0:0:0:0  0
[  1] 6.0002-6.0511 sec  40.0 KBytes  6.44 Mbits/sec  50.895 ms (5.1%)   10=10:0:0:0:0:0:0:0  0
[  1] 7.0002-7.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.889 ms (5%)     10=10:0:0:0:0:0:0:0  0
[  1] 8.0002-8.0481 sec  40.0 KBytes  6.84 Mbits/sec  47.901 ms (4.8%)   11=11:0:0:0:0:0:0:0  0
[  1] 9.0002-9.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.872 ms (4.9%)   10=10:0:0:0:0:0:0:0  0
[  1] 0.0000-10.0031 sec   400 KBytes   328 Kbits/sec     104=104:0:0:0:0:0:0:0

Bob

On Tue, Oct 26, 2021 at 6:12 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:

>
>
> On 10/26/21 4:38 PM, Christoph Paasch wrote:
> > Hi Bob,
> >
> >> On Oct 26, 2021, at 4:23 PM, Bob McMahon <bob.mcmahon@broadcom.com
> <mailto:bob.mcmahon@broadcom.com>> wrote:
> >> I'm confused. I don't see any blocking nor partial writes per the
> write() at the app level with TCP_NOTSENT_LOWAT set at 4 bytes. The burst
> is 40K, the write size is 4K and the watermark is 4 bytes. There are ten
> writes per burst.
> >
> > You are on Linux here, right?
> >
> > AFAICS, Linux will still accept whatever fits in an skb. And that is
> likely more than 4K (with GSO on by default).
>
> This (max payload per skb) can be tuned at the driver level, at least for
> experimental purposes or dedicated devices.
>
> ip link set dev eth0 gso_max_size 8000
>
> To fetch current values :
>
> ip -d link sh dev eth0
>
>
> >
> > However, do you go back to select() after each write() or do you loop
> over the write() calls?
> >
> >
> > Christoph
> >
> >> The S8 histograms are the times waiting on the select().  The first
> value is the bin number (multiplied by 100usec bin width) and second the
> bin count. The worst case time is at the end and is timestamped per unix
> epoch.
> >>
> >> The second run is over a controlled WiFi link where a 99.7% point of
> 4-8ms for a WiFi TX op arbitration win is in the ballpark. The first is 1G
> wired and is in the 600 usec range. (No media arbitration there.)
> >>
> >>  [root@localhost iperf2-code]# src/iperf -c 10.19.87.9 --trip-times -i
> 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> >> WARN: option of --burst-size without --burst-period defaults
> --burst-period to 1 second
> >> ------------------------------------------------------------
> >> Client connecting to 10.19.87.9, TCP port 5001 with pid 2124 (1 flows)
> >> Write buffer size: 4096 Byte
> >> Bursting: 40.0 KByte every 1.00 seconds
> >> TCP window size: 85.0 KByte (default)
> >> Event based writes (pending queue watermark at 4 bytes)
> >> Enabled select histograms bin-width=0.100 ms, bins=10000
> >> ------------------------------------------------------------
> >> [  1] local 10.19.87.10%eth0 port 33166 connected with 10.19.87.9 port
> 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=0.54 ms) on
> 2021-10-26 16:07:33 (PDT)
> >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
> Cwnd/RTT        NetPwr
> >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0
> 14K/5368 us  8
> >> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,2:5,3:2,4:1,11:1
> (5.00/95.00/99.7%=1/11/11,Outliers=0,obl/obu=0/0) (1.089
> ms/1635289653.928360)
> >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/569 us  72
> >> [  1] 1.00-2.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,2:1,3:4,4:1,7:1,8:1
> (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.736 ms/1635289654.928088)
> >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/312 us  131
> >> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:2,5:2,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.548 ms/1635289655.927776)
> >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/302 us  136
> >> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:2,3:5,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.584 ms/1635289656.927814)
> >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/316 us  130
> >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:2,4:2,5:2,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.572 ms/1635289657.927810)
> >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/253 us  162
> >> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:4,5:1
> (5.00/95.00/99.7%=1/5/5,Outliers=0,obl/obu=0/0) (0.417 ms/1635289658.927630)
> >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/290 us  141
> >> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:3,4:3,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.573 ms/1635289659.927771)
> >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/359 us  114
> >> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,3:4,4:3,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.570 ms/1635289660.927753)
> >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/349 us  117
> >> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:5,4:1,7:1
> (5.00/95.00/99.7%=1/7/7,Outliers=0,obl/obu=0/0) (0.608 ms/1635289661.927843)
> >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>   14K/347 us  118
> >> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:1,3:5,8:1
> (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.725 ms/1635289662.927861)
> >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0
>   14K/1519 us  27
> >> [  1] 0.00-10.01 sec S8(f)-PDF:
> bin(w=100us):cnt(100)=1:25,2:13,3:36,4:11,5:5,6:5,7:2,8:2,11:1
> (5.00/95.00/99.7%=1/7/11,Outliers=0,obl/obu=0/0) (1.089
> ms/1635289653.928360)
> >>
> >> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1 --trip-times -i
> 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> >> WARN: option of --burst-size without --burst-period defaults
> --burst-period to 1 second
> >> ------------------------------------------------------------
> >> Client connecting to 192.168.1.1, TCP port 5001 with pid 2131 (1 flows)
> >> Write buffer size: 4096 Byte
> >> Bursting: 40.0 KByte every 1.00 seconds
> >> TCP window size: 85.0 KByte (default)
> >> Event based writes (pending queue watermark at 4 bytes)
> >> Enabled select histograms bin-width=0.100 ms, bins=10000
> >> ------------------------------------------------------------
> >> [  1] local 192.168.1.4%eth1 port 45518 connected with 192.168.1.1 port
> 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.48 ms) on
> 2021-10-26 16:07:56 (PDT)
> >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
> Cwnd/RTT        NetPwr
> >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0
> 14K/10339 us  4
> >> [  1] 0.00-1.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:1,40:1,47:1,49:2,50:3,51:1,60:1
> (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.990
> ms/1635289676.802143)
> >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/4853 us  8
> >> [  1] 1.00-2.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,38:1,39:1,44:1,45:1,49:1,51:1,52:1,60:1
> (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.937
> ms/1635289677.802274)
> >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/4991 us  8
> >> [  1] 2.00-3.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,48:1,49:2,50:2,51:1,60:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.307
> ms/1635289678.794326)
> >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/4610 us  9
> >> [  1] 3.00-4.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,49:3,50:3,56:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.362
> ms/1635289679.794335)
> >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/5028 us  8
> >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.367
> ms/1635289680.794399)
> >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/5113 us  8
> >> [  1] 5.00-6.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,49:3,50:2,58:1,60:1,65:1
> (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.442
> ms/1635289681.794392)
> >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/5054 us  8
> >> [  1] 6.00-7.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:1,49:3,51:1,60:2,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.374
> ms/1635289682.794335)
> >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/5138 us  8
> >> [  1] 7.00-8.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:2,40:1,49:2,50:1,60:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.396
> ms/1635289683.794338)
> >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
> 14K/5329 us  8
> >> [  1] 8.00-9.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1
> (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0) (6.292
> ms/1635289684.794262)
> >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>   14K/5329 us  8
> >> [  1] 9.00-10.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:1,49:3,50:3,84:1
> (5.00/95.00/99.7%=1/84/84,Outliers=0,obl/obu=0/0) (8.306
> ms/1635289685.796315)
> >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0
>   14K/6331 us  6
> >> [  1] 0.00-10.01 sec S8(f)-PDF:
> bin(w=100us):cnt(100)=1:19,38:2,39:5,40:2,44:1,45:3,47:1,48:1,49:26,50:17,51:4,52:1,56:1,58:1,59:1,60:7,63:1,64:5,65:1,84:1
> (5.00/95.00/99.7%=1/64/84,Outliers=0,obl/obu=0/0) (8.306
> ms/1635289685.796315)
> >>
> >> Bob
> >>
> >> On Tue, Oct 26, 2021 at 11:45 AM Christoph Paasch <cpaasch@apple.com
> <mailto:cpaasch@apple.com>> wrote:
> >>
> >>     Hello,
> >>
> >>     > On Oct 25, 2021, at 9:24 PM, Eric Dumazet <eric.dumazet@gmail.com
> <mailto:eric.dumazet@gmail.com>> wrote:
> >>     >
> >>     >
> >>     >
> >>     > On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> >>     >> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net <mailto:
> make-wifi-fast@lists.bufferbloat.net>> wrote:
> >>     >>
> >>     >>> Hi All,
> >>     >>>
> >>     >>> Sorry for the spam. I'm trying to support a meaningful TCP
> message latency w/iperf 2 from the sender side w/o requiring e2e clock
> synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to
> help with this. It seems that this event goes off when the bytes are in
> flight vs have reached the destination network stack. If that's the case,
> then iperf 2 client (sender) may be able to produce the message latency by
> adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled
> RTT.
> >>     >>>
> >>     >>> Does this seem reasonable?
> >>     >>
> >>     >> I’m not 100% sure what you’re asking, but I will try to help.
> >>     >>
> >>     >> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t
> report your endpoint as writable (e.g., via kqueue or epoll) until less
> than that threshold of data remains unsent. It won’t stop you writing more
> bytes if you want to, up to the socket send buffer size, but it won’t *ask*
> you for more data until the TCP_NOTSENT_LOWAT threshold is reached.
> >>     >
> >>     >
> >>     > When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made
> sure that sendmsg() would actually
> >>     > stop feeding more bytes in TCP transmit queue if the current
> amount of unsent bytes
> >>     > was above the threshold.
> >>     >
> >>     > So it looks like the Apple implementation is different, based on your
> description?
> >>
> >>     Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...
> >>
> >>     An app can still fill the send-buffer if it does a sendmsg() with a
> large buffer or does repeated calls to sendmsg().
> >>
> >>     For Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to
> quickly change the data it "scheduled" to send. And thus allow the app to
> write the smallest "logical unit" it has. If that unit is 512KB large, the
> app is allowed to send that.
> >>     For example, in case of video-streaming one may want to skip ahead
> in the video. In that case the app still needs to transmit the remaining
> parts of the previous frame anyways, before it can send the new video frame.
> >>     That's the reason why the Apple implementation allows one to write
> more than just the lowat threshold.
> >>
> >>
> >>     That being said, I do think that Linux's way allows for an easier
> API because the app does not need to be careful at how much data it sends
> after an epoll/kqueue wakeup. So, the latency-benefits will be easier to
> get.
> >>
> >>
> >>     Christoph
> >>
> >>
> >>
> >>     > [1]
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36
> <
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36
> >
> >>     >
> >>     > netperf does not use epoll(), but rather a loop over sendmsg().
> >>     >
> >>     > One of the points of TCP_NOTSENT_LOWAT for Google was to be able
> to considerably increase
> >>     > max number of bytes in transmit queues (3rd column of
> /proc/sys/net/ipv4/tcp_wmem)
> >>     > by 10x, allowing for autotune to increase BDP for big RTT flows,
> this without
> >>     > increasing memory needs for flows with small RTT.
> >>     >
> >>     > In other words, the TCP implementation attempts to keep BDP bytes
> in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of
> bytes in flight is necessary to fill the network pipe and get good
> throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is
> provided to give the source software some advance notice that the TCP
> implementation will soon be looking for more bytes to send, so that the
> buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF
> option conflates both “bytes in flight” and “bytes buffered and ready to
> go” into the same number.)
> >>     >>
> >>     >> If you wait for the TCP_NOTSENT_LOWAT notification, write a
> chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT
> notification, that will tell you roughly how long it took n bytes to depart
> the machine. You won’t know why, though. The bytes could depart the machine
> in response for acks indicating that the same number of bytes have been
> accepted at the receiver. But the bytes can also depart the machine because
> CWND is growing. Of course, both of those things are usually happening at
> the same time.
> >>     >>
> >>     >> How to use TCP_NOTSENT_LOWAT is explained in this video:
> >>     >>
> >>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>>
> >>     >>
> >>     >> Later in the same video is a two-minute demo (time offset 42:00
> to time offset 44:00) showing a “before and after” demo illustrating the
> dramatic difference this makes for screen sharing responsiveness.
> >>     >>
> >>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>>
> >>     >>
> >>     >> Stuart Cheshire
> >>     >> _______________________________________________
> >>     >> Bloat mailing list
> >>     >> Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
> >>     >> https://lists.bufferbloat.net/listinfo/bloat <
> https://lists.bufferbloat.net/listinfo/bloat>
> >>     >>
> >>     > _______________________________________________
> >>     > Bloat mailing list
> >>     > Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
> >>     > https://lists.bufferbloat.net/listinfo/bloat <
> https://lists.bufferbloat.net/listinfo/bloat>
> >>
> >>
>


[-- Attachment #2: Type: text/html, Size: 30429 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-27  3:45                                           ` Bob McMahon
@ 2021-10-27  5:40                                             ` Eric Dumazet
  2021-10-28 16:04                                             ` Christoph Paasch
  1 sibling, 0 replies; 108+ messages in thread
From: Eric Dumazet @ 2021-10-27  5:40 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Christoph Paasch, Stuart Cheshire, Cake List,
	Valdis Klētnieks, Make-Wifi-fast, David P. Reed, starlink,
	codel, cerowrt-devel, bloat, Steve Crocker, Vint Cerf



On 10/26/21 8:45 PM, Bob McMahon wrote:
> This is linux. The code flow is burst writes until the burst size, take a timestamp, call select(), take second timestamp and insert time delta into histogram, await clock_nanosleep() to schedule the next burst. (actually, the deltas, inserts into the histogram and user i/o are done in another thread, i.e. iperf 2's reporter thread.)
> 
> I still must be missing something.  Does anything else need to be set to reduce the skb size? Everything seems to be indicating 4K writes even when gso_max_size is 2000 (I assume these are units of bytes?) There are ten writes, ten reads and ten  RTTs for the bursts.  I don't see partial writes at the app level. 
> 
>     [root@localhost iperf2-code]# ip link set dev eth1 gso_max_size 2000

You could check with tcpdump on eth1 that outgoing packets are no longer 'TSO/GSO', but single-MSS ones.

(Note: this device gso_max_size is only taken into account for flows established after the change)
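
For example, something like

    tcpdump -ni eth1 'tcp port 5001'

(port 5001 being the iperf port used in the runs above) shows the per-packet TCP payload lengths; with gso_max_size capped at 2000 as above, the captured outgoing segments should carry at most a single 1448-byte MSS of payload rather than the multi-kilobyte super-packets seen when GSO/TSO is unrestricted.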

> 
>     [root@localhost iperf2-code]# ip -d link sh dev eth1
>     9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
>         link/ether 00:90:4c:40:04:59 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 1500 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 2000 gso_max_segs 65535
>     [root@localhost iperf2-code]# uname -r
>     5.0.9-301.fc30.x86_64
> 
> 
> It looks like RTT is being driven by WiFi TXOPs as doubling the write size increases the aggregation by two but has no significant effect on the RTTs.
> 
>     4K writes: tot_mpdus 328 tot_ampdus 209 mpduperampdu 2
> 
> 
>     8k writes:  tot_mpdus 317 tot_ampdus 107 mpduperampdu 3
> 
> 
> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1%eth1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
> ------------------------------------------------------------
> Client connecting to 192.168.1.1, TCP port 5001 with pid 5145 via eth1 (1 flows)
> Write buffer size: 4096 Byte
> Bursting: 40.0 KByte every 1.00 seconds
> TCP window size: 85.0 KByte (default)
> Event based writes (pending queue watermark at 4 bytes)
> Enabled select histograms bin-width=0.100 ms, bins=10000
> ------------------------------------------------------------
> [  1] local 192.168.1.4%eth1 port 45680 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.30 ms) on 2021-10-26 20:25:29 (PDT)
> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10091 us  4
> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,36:1,40:1,44:1,46:1,48:1,49:1,50:2,52:1 (5.00/95.00/99.7%=1/52/52,Outliers=0,obl/obu=0/0) (5.121 ms/1635305129.152339)
> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4990 us  8
> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,45:1,49:5,50:1 (5.00/95.00/99.7%=1/50/50,Outliers=0,obl/obu=0/0) (4.991 ms/1635305130.153330)
> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4904 us  8
> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,29:1,49:4,50:1,59:1,75:1 (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.455 ms/1635305131.147353)
> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4964 us  8
> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:4,50:2,59:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.460 ms/1635305132.146338)
> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4970 us  8
> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.404 ms/1635305133.146335)
> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4986 us  8
> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:1,50:4,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.395 ms/1635305134.146343)
> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5059 us  8
> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:2,60:1,85:1 (5.00/95.00/99.7%=1/85/85,Outliers=0,obl/obu=0/0) (8.417 ms/1635305135.148343)
> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5407 us  8
> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,40:1,49:4,50:1,59:1,75:1 (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.428 ms/1635305136.147343)
> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5188 us  8
> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,40:1,49:3,50:3,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.388 ms/1635305137.146284)
> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5306 us  8
> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:2,50:2,51:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.422 ms/1635305138.146316)
> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/5939 us  7
> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,29:1,36:1,39:3,40:3,44:1,45:1,46:1,48:2,49:33,50:18,51:1,52:1,59:5,60:2,64:2,65:3,75:2,85:1 (5.00/95.00/99.7%=1/65/85,Outliers=0,obl/obu=0/0) (8.417 ms/1635305135.148343)
> 
> [root@localhost iperf2-code]# src/iperf -s -i 1 -e -B 192.168.1.1%eth1
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 6287
> Binding to local address 192.168.1.1 and iface eth1
> Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
> TCP window size:  128 KByte (default)
> ------------------------------------------------------------
> [  1] local 192.168.1.1%eth1 port 5001 connected with 192.168.1.4 port 45680 (MSS=1448) (burst-period=1.0000s) (trip-times) (sock=4) (peer 2.1.4-master) on 2021-10-26 20:25:29 (PDT)
> [ ID] Burst (start-end)  Transfer     Bandwidth       XferTime  (DC%)     Reads=Dist          NetPwr
> [  1] 0.0001-0.0500 sec  40.1 KBytes  6.59 Mbits/sec  49.848 ms (5%)    12=12:0:0:0:0:0:0:0  0
> [  1] 1.0002-1.0461 sec  40.0 KBytes  7.14 Mbits/sec  45.913 ms (4.6%)    10=10:0:0:0:0:0:0:0  0
> [  1] 2.0002-2.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.876 ms (4.9%)    11=11:0:0:0:0:0:0:0  0
> [  1] 3.0002-3.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.886 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 4.0002-4.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.887 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 5.0002-5.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.881 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 6.0002-6.0511 sec  40.0 KBytes  6.44 Mbits/sec  50.895 ms (5.1%)    10=10:0:0:0:0:0:0:0  0
> [  1] 7.0002-7.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.889 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 8.0002-8.0481 sec  40.0 KBytes  6.84 Mbits/sec  47.901 ms (4.8%)    11=11:0:0:0:0:0:0:0  0
> [  1] 9.0002-9.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.872 ms (4.9%)    10=10:0:0:0:0:0:0:0  0
> [  1] 0.0000-10.0031 sec   400 KBytes   328 Kbits/sec               104=104:0:0:0:0:0:0:0
> 
> Bob
> 
> On Tue, Oct 26, 2021 at 6:12 PM Eric Dumazet <eric.dumazet@gmail.com <mailto:eric.dumazet@gmail.com>> wrote:
> 
> 
> 
>     On 10/26/21 4:38 PM, Christoph Paasch wrote:
>     > Hi Bob,
>     >
>     >> On Oct 26, 2021, at 4:23 PM, Bob McMahon <bob.mcmahon@broadcom.com <mailto:bob.mcmahon@broadcom.com> <mailto:bob.mcmahon@broadcom.com <mailto:bob.mcmahon@broadcom.com>>> wrote:
>     >> I'm confused. I don't see any blocking nor partial writes per the write() at the app level with TCP_NOTSENT_LOWAT set at 4 bytes. The burst is 40K, the write size is 4K and the watermark is 4 bytes. There are ten writes per burst.
>     >
>     > You are on Linux here, right?
>     >
>     > AFAICS, Linux will still accept whatever fits in an skb. And that is likely more than 4K (with GSO on by default).
> 
>     This (max payload per skb) can be tuned at the driver level, at least for experimental purposes or dedicated devices.
> 
>     ip link set dev eth0 gso_max_size 8000
> 
>     To fetch current values :
> 
>     ip -d link sh dev eth0
> 
> 
>     >
>     > However, do you go back to select() after each write() or do you loop over the write() calls?
>     >
>     >
>     > Christoph
>     >
>     >> The S8 histograms are the times waiting on the select().  The first value is the bin number (multiplied by 100usec bin width) and second the bin count. The worst case time is at the end and is timestamped per unix epoch.
>     >>
>     >> The second run is over a controlled WiFi link where a 99.7% point of 4-8ms for a WiFi TX op arbitration win is in the ballpark. The first is 1G wired and is in the 600 usec range. (No media arbitration there.)
>     >>
>     >>  [root@localhost iperf2-code]# src/iperf -c 10.19.87.9 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
>     >> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
>     >> ------------------------------------------------------------
>     >> Client connecting to 10.19.87.9, TCP port 5001 with pid 2124 (1 flows)
>     >> Write buffer size: 4096 Byte
>     >> Bursting: 40.0 KByte every 1.00 seconds
>     >> TCP window size: 85.0 KByte (default)
>     >> Event based writes (pending queue watermark at 4 bytes)
>     >> Enabled select histograms bin-width=0.100 ms, bins=10000
>     >> ------------------------------------------------------------
>     >> [  1] local 10.19.87.10%eth0 port 33166 connected with 10.19.87.9 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=0.54 ms) on 2021-10-26 16:07:33 (PDT)
>     >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
>     >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/5368 us  8
>     >> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,2:5,3:2,4:1,11:1 (5.00/95.00/99.7%=1/11/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
>     >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/569 us  72
>     >> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:1,3:4,4:1,7:1,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.736 ms/1635289654.928088)
>     >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/312 us  131
>     >> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.548 ms/1635289655.927776)
>     >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/302 us  136
>     >> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:2,3:5,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.584 ms/1635289656.927814)
>     >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/316 us  130
>     >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:2,4:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.572 ms/1635289657.927810)
>     >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/253 us  162
>     >> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:4,5:1 (5.00/95.00/99.7%=1/5/5,Outliers=0,obl/obu=0/0) (0.417 ms/1635289658.927630)
>     >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/290 us  141
>     >> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:3,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.573 ms/1635289659.927771)
>     >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/359 us  114
>     >> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,3:4,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.570 ms/1635289660.927753)
>     >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/349 us  117
>     >> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:5,4:1,7:1 (5.00/95.00/99.7%=1/7/7,Outliers=0,obl/obu=0/0) (0.608 ms/1635289661.927843)
>     >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/347 us  118
>     >> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:1,3:5,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.725 ms/1635289662.927861)
>     >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/1519 us  27
>     >> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:25,2:13,3:36,4:11,5:5,6:5,7:2,8:2,11:1 (5.00/95.00/99.7%=1/7/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
>     >>
>     >> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
>     >> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
>     >> ------------------------------------------------------------
>     >> Client connecting to 192.168.1.1, TCP port 5001 with pid 2131 (1 flows)
>     >> Write buffer size: 4096 Byte
>     >> Bursting: 40.0 KByte every 1.00 seconds
>     >> TCP window size: 85.0 KByte (default)
>     >> Event based writes (pending queue watermark at 4 bytes)
>     >> Enabled select histograms bin-width=0.100 ms, bins=10000
>     >> ------------------------------------------------------------
>     >> [  1] local 192.168.1.4%eth1 port 45518 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.48 ms) on 2021-10-26 16:07:56 (PDT)
>     >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
>     >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10339 us  4
>     >> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,40:1,47:1,49:2,50:3,51:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.990 ms/1635289676.802143)
>     >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4853 us  8
>     >> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,39:1,44:1,45:1,49:1,51:1,52:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.937 ms/1635289677.802274)
>     >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4991 us  8
>     >> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:2,50:2,51:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.307 ms/1635289678.794326)
>     >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4610 us  9
>     >> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:3,56:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.362 ms/1635289679.794335)
>     >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5028 us  8
>     >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.367 ms/1635289680.794399)
>     >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5113 us  8
>     >> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:2,58:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.442 ms/1635289681.794392)
>     >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5054 us  8
>     >> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,51:1,60:2,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.374 ms/1635289682.794335)
>     >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5138 us  8
>     >> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:2,40:1,49:2,50:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.396 ms/1635289683.794338)
>     >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
>     >> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1 (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0) (6.292 ms/1635289684.794262)
>     >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
>     >> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:3,84:1 (5.00/95.00/99.7%=1/84/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
>     >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/6331 us  6
>     >> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,38:2,39:5,40:2,44:1,45:3,47:1,48:1,49:26,50:17,51:4,52:1,56:1,58:1,59:1,60:7,63:1,64:5,65:1,84:1 (5.00/95.00/99.7%=1/64/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
>     >>
>     >> Bob
>     >>
>     >> On Tue, Oct 26, 2021 at 11:45 AM Christoph Paasch <cpaasch@apple.com <mailto:cpaasch@apple.com> <mailto:cpaasch@apple.com <mailto:cpaasch@apple.com>>> wrote:
>     >>
>     >>     Hello,
>     >>
>     >>     > On Oct 25, 2021, at 9:24 PM, Eric Dumazet <eric.dumazet@gmail.com <mailto:eric.dumazet@gmail.com> <mailto:eric.dumazet@gmail.com <mailto:eric.dumazet@gmail.com>>> wrote:
>     >>     >
>     >>     >
>     >>     >
>     >>     > On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
>     >>     >> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net <mailto:make-wifi-fast@lists.bufferbloat.net> <mailto:make-wifi-fast@lists.bufferbloat.net <mailto:make-wifi-fast@lists.bufferbloat.net>>> wrote:
>     >>     >>
>     >>     >>> Hi All,
>     >>     >>>
>     >>     >>> Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
>     >>     >>>
>     >>     >>> Does this seem reasonable?
>     >>     >>
>     >>     >> I’m not 100% sure what you’re asking, but I will try to help.
>     >>     >>
>     >>     >> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached.
>     >>     >
>     >>     >
>     >>     > When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made sure that sendmsg() would actually
>     >>     > stop feeding more bytes in TCP transmit queue if the current amount of unsent bytes
>     >>     > was above the threshold.
>     >>     >
>     >>     > So it looks like the Apple implementation is different, based on your description?
>     >>
>     >>     Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...
>     >>
>     >>     An app can still fill the send-buffer if it does a sendmsg() with a large buffer or does repeated calls to sendmsg().
>     >>
>     >>     For Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to quickly change the data it "scheduled" to send. And thus allow the app to write the smallest "logical unit" it has. If that unit is 512KB large, the app is allowed to send that.
>     >>     For example, in case of video-streaming one may want to skip ahead in the video. In that case the app still needs to transmit the remaining parts of the previous frame anyways, before it can send the new video frame.
>     >>     That's the reason why the Apple implementation allows one to write more than just the lowat threshold.
>     >>
>     >>
>     >>     That being said, I do think that Linux's way allows for an easier API because the app does not need to be careful at how much data it sends after an epoll/kqueue wakeup. So, the latency-benefits will be easier to get.
>     >>
>     >>
>     >>     Christoph
>     >>
>     >>
>     >>
>     >>     > [1] https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36 <https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36> <https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36 <https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36>>
>     >>     >
>     >>     > netperf does not use epoll(), but rather a loop over sendmsg().
>     >>     >
>     >>     > One of the points of TCP_NOTSENT_LOWAT for Google was to be able to considerably increase
>     >>     > max number of bytes in transmit queues (3rd column of /proc/sys/net/ipv4/tcp_wmem)
>     >>     > by 10x, allowing for autotune to increase BDP for big RTT flows, this without
>     >>     > increasing memory needs for flows with small RTT.
>     >>     >
>     >>     > In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)
>     >>     >>
>     >>     >> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response to acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
>     >>     >>
>     >>     >> How to use TCP_NOTSENT_LOWAT is explained in this video:
>     >>     >>
>     >>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>>>
>     >>     >>
>     >>     >> Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.
>     >>     >>
>     >>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>>>
>     >>     >>
>     >>     >> Stuart Cheshire
>     >>     >> _______________________________________________
>     >>     >> Bloat mailing list
>     >>     >> Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net> <mailto:Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>>
>     >>     >> https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat> <https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>>
>     >>     >>
>     >>     > _______________________________________________
>     >>     > Bloat mailing list
>     >>     > Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net> <mailto:Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>>
>     >>     > https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat> <https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>>
>     >>
>     >>
> 
> 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Cerowrt-devel] [Make-wifi-fast] [Starlink] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-26 17:23                                     ` Bob McMahon
@ 2021-10-27 14:29                                       ` Sebastian Moeller
  0 siblings, 0 replies; 108+ messages in thread
From: Sebastian Moeller @ 2021-10-27 14:29 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Bjørn Ivar Teigen, Cake List, Make-Wifi-fast, starlink,
	codel, cerowrt-devel, bloat

Hi Bob,

OWD != RTT/2 seems to be the rule on the internet rather than the exception, even with perfectly symmetric access links. Routing between ASes is often asymmetric in itself (hot-potato routing, where each AS hands over packets destined for others as early as possible, means that the forward and backward paths are often noticeably different; or rather, they are different but that is hard to notice unless one can get path measurements like traceroutes from both directions). That last point is what makes me believe that internet speedtests should always also include traceroutes from server to client and from client to server, so one at least has a rough idea where the packets are going, but I digress...
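
As a rough illustration of that idea (hypothetical host names; it assumes one can run traceroute at both endpoints):

    traceroute -n server.example.net     # run on the client
    traceroute -n client.example.net     # run on the server

Comparing the two hop lists is usually enough to see whether the forward and return paths diverge.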

Regards
	Sebastian



> On Oct 26, 2021, at 19:23, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
> 
> Hi Bjørn,
> 
> I find, when possible, it's preferable to take telemetry data from actual traffic (or reads and writes) rather than from a proxy. We had a case where TCP BE was outperforming TCP w/VI because BE had the most engineering resources assigned to it and the engineers did a better job with BE. Using a proxy protocol wouldn't have exercised the same logic paths (in this case it was in the L2 driver) as TCP did. Hence, measuring actual TCP traffic (or socket reads and socket writes) was needed to flush out the problem. Note: I also find that network engineers tend to focus on the stack, but it's the e2e behavior at the application level that impacts user experience. Send-side bloat can drive the OWD while the TCP stack's RTT may look fine. For WiFi test & measurement, we've decided most testing should use TCP_NOTSENT_LOWAT because it helps mitigate send-side bloat, which WiFi engineering doesn't focus on since it has little ability to affect it.
> 
> Also, I think OWD is under-tested, and two-way-based testing can give incomplete and inaccurate information, particularly with respect to things like an e2e transport's control loop. The most obvious example is assuming that 1/2 RTT is the same as the OWD in each direction. For WiFi this assumption is almost always false. It is also false for many residential internet connections where OWD asymmetry is designed in.
> 
> Bob
> 
> 
> On Tue, Oct 26, 2021 at 3:04 AM Bjørn Ivar Teigen <bjorn@domos.no> wrote:
> Hi Bob,
> 
> My name is Bjørn Ivar Teigen and I'm working on modeling and measuring WiFi MAC-layer protocol performance for my PhD.
> 
> Is it necessary to measure the latency using the TCP stream itself? I had a similar problem in the past, and solved it by doing the latency measurements using TWAMP running alongside the TCP traffic. The requirement for this to work is that the TWAMP packets are placed in the same queue(s) as the TCP traffic, and that the impact of measurement traffic is small enough so as not to interfere too much with your TCP results.
> Just my two cents, hope it's helpful.
> 
> Bjørn
> 
> On Tue, 26 Oct 2021 at 06:32, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
> Thanks Stuart, this is helpful. I'm measuring the time just before the first write() (of potentially a burst of writes to achieve a burst size) on a socket fd's select event, which occurs when TCP_NOTSENT_LOWAT is set to a small value, then sampling the RTT and CWND and providing histograms for all three, all on that event. I'm not sure about the correctness of RTT and CWND at this sample point. This is a controlled test over 802.11ax and OFDMA where the TCP acks from the WiFi clients are being scheduled by the AP using 802.11ax trigger frames, so the AP is affecting the end/end BDP by scheduling the transmits and the acks. The AP can grow the BDP or shrink it based on these scheduling decisions. From there we're trying to maximize network power (throughput/delay) for elephant flows and just latency for mouse flows. (We also plan some RF frequency stuff per OFDMA.) Anyway, AP-based scheduling along with aggregation and OFDMA makes WiFi scheduling optima non-obvious - at least to me - and I'm trying to provide insights into how an AP is affecting end/end performance.
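
As an aside, a minimal sketch of one way to take that RTT/CWND sample on Linux (this is not iperf 2's actual code; it only illustrates the TCP_INFO call and the struct tcp_info fields involved):

    #include <netinet/in.h>
    #include <netinet/tcp.h>     /* struct tcp_info, TCP_INFO */
    #include <sys/socket.h>
    #include <stdio.h>

    /* Sample the kernel's smoothed RTT (microseconds) and congestion
     * window (segments) right after the select() writability event. */
    static void sample_tcp_state(int sock)
    {
        struct tcp_info ti;
        socklen_t len = sizeof(ti);

        if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
            printf("rtt=%u us  cwnd=%u segments\n", ti.tcpi_rtt, ti.tcpi_snd_cwnd);
    }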
> 
> The more direct approach for e2e TCP latency and network power has been to measure first write() to final read() and compute the e2e delay. This requires clock sync on the ends. (We're using ptp4l with GPS OCXO atomic references for that but this is typically only available in some labs.) 
> 
> Bob
>  
> 
> On Mon, Oct 25, 2021 at 8:11 PM Stuart Cheshire <cheshire@apple.com> wrote:
> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net> wrote:
> 
> > Hi All,
> > 
> > Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
> > 
> > Does this seem reasonable?
> 
> I’m not 100% sure what you’re asking, but I will try to help.
> 
> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached. In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)
> 
> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response to acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
> 
> How to use TCP_NOTSENT_LOWAT is explained in this video:
> 
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>
> 
> Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.
> 
> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>
> 
> Stuart Cheshire
> 
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 
> 
> -- 
> Bjørn Ivar Teigen
> Head of Research
> +47 47335952 | bjorn@domos.no | www.domos.no
> WiFi Slicing by Domos
> 
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-27  3:45                                           ` Bob McMahon
  2021-10-27  5:40                                             ` [Cerowrt-devel] " Eric Dumazet
@ 2021-10-28 16:04                                             ` Christoph Paasch
  2021-10-29 21:16                                               ` Bob McMahon
  1 sibling, 1 reply; 108+ messages in thread
From: Christoph Paasch @ 2021-10-28 16:04 UTC (permalink / raw)
  To: Bob McMahon
  Cc: Eric Dumazet, Stuart Cheshire, Cake List, Valdis Klētnieks,
	Make-Wifi-fast, David P. Reed, starlink, codel, cerowrt-devel,
	bloat, Steve Crocker, Vint Cerf



> On Oct 26, 2021, at 8:45 PM, Bob McMahon <bob.mcmahon@broadcom.com> wrote:
> 
> This is linux. The code flow is burst writes until the burst size, take a timestamp, call select(), take second timestamp and insert time delta into histogram, await clock_nanosleep() to schedule the next burst. (actually, the deltas, inserts into the histogram and user i/o are done in another thread, i.e. iperf 2's reporter thread.)
> I still must be missing something.  Does anything else need to be set to reduce the skb size? Everything seems to be indicating 4K writes even when gso_max_size is 2000 (I assume these are units of bytes?) There are ten writes, ten reads and ten  RTTs for the bursts.  I don't see partial writes at the app level. 

One thing to keep in mind is that once the congestion-window has increased beyond 40KB (your burst-size), none of the writes will block at all. TCP_NOTSENT_LOWAT is really just about the "notsent" part. Once the congestion-window is big enough to send 40KB in a burst, it will all just be sent out immediately.
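
For concreteness, a rough sketch of the burst/select pattern being discussed (this is not iperf 2's actual code; it assumes Linux, an already-connected TCP socket, the 40 KByte burst of 4 KByte writes from the runs above, and omits error handling):

    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>     /* TCP_NOTSENT_LOWAT */
    #include <unistd.h>

    #define WRITESIZE 4096
    #define BURSTSIZE (40 * 1024)

    /* One burst: write BURSTSIZE bytes in WRITESIZE chunks, then time how
     * long select() takes to report the socket writable again, i.e. until
     * the unsent queue has drained below the TCP_NOTSENT_LOWAT watermark. */
    static void one_burst(int sock)
    {
        static char buf[WRITESIZE];
        struct timeval t0, t1;
        int lowat = 4;                        /* watermark in bytes */
        fd_set wfds;

        setsockopt(sock, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));

        for (int sent = 0; sent < BURSTSIZE; sent += WRITESIZE)
            write(sock, buf, WRITESIZE);      /* ten 4K writes per burst */

        gettimeofday(&t0, NULL);
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        select(sock + 1, NULL, &wfds, NULL, NULL);
        gettimeofday(&t1, NULL);

        printf("select() wait: %ld us\n",
               (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec));
    }

In the runs above the congestion window stays at 14K, below the 40K burst, which is why the select() wait tracks the ~5 ms RTT instead of returning almost immediately.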

> [root@localhost iperf2-code]# ip link set dev eth1 gso_max_size 2000
> [root@localhost iperf2-code]# ip -d link sh dev eth1
> 9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether 00:90:4c:40:04:59 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 1500 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 2000 gso_max_segs 65535
> [root@localhost iperf2-code]# uname -r
> 5.0.9-301.fc30.x86_64
> 
> It looks like RTT is being driven by WiFi TXOPs as doubling the write size increases the aggregation by two but has no significant effect on the RTTs.
> 
> 4K writes: tot_mpdus 328 tot_ampdus 209 mpduperampdu 2
> 
> 8k writes:  tot_mpdus 317 tot_ampdus 107 mpduperampdu 3
> 
> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1%eth1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
> ------------------------------------------------------------
> Client connecting to 192.168.1.1, TCP port 5001 with pid 5145 via eth1 (1 flows)
> Write buffer size: 4096 Byte
> Bursting: 40.0 KByte every 1.00 seconds
> TCP window size: 85.0 KByte (default)
> Event based writes (pending queue watermark at 4 bytes)
> Enabled select histograms bin-width=0.100 ms, bins=10000
> ------------------------------------------------------------
> [  1] local 192.168.1.4%eth1 port 45680 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.30 ms) on 2021-10-26 20:25:29 (PDT)
> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10091 us  4
> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,36:1,40:1,44:1,46:1,48:1,49:1,50:2,52:1 (5.00/95.00/99.7%=1/52/52,Outliers=0,obl/obu=0/0) (5.121 ms/1635305129.152339)

Am I reading this correctly, that your writes take worst-case 5 milliseconds?

This looks correct then, because you seem to have an RTT of around 5ms.
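
(For reference, reading the S8-PDF line above: the bins are 100 us wide, so the cluster around bins 48-52 corresponds to roughly 4.8-5.2 ms spent waiting in select(), and the 5.121 ms worst case printed at the end falls in bin 52. That is consistent with the ~5 ms RTT shown in the Cwnd/RTT column.)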


It's surprising though that your congestion-window is not increasing.


Christoph


> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4990 us  8
> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,45:1,49:5,50:1 (5.00/95.00/99.7%=1/50/50,Outliers=0,obl/obu=0/0) (4.991 ms/1635305130.153330)
> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4904 us  8
> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,29:1,49:4,50:1,59:1,75:1 (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.455 ms/1635305131.147353)
> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4964 us  8
> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:4,50:2,59:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.460 ms/1635305132.146338)
> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4970 us  8
> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.404 ms/1635305133.146335)
> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4986 us  8
> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:1,50:4,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.395 ms/1635305134.146343)
> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5059 us  8
> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:2,60:1,85:1 (5.00/95.00/99.7%=1/85/85,Outliers=0,obl/obu=0/0) (8.417 ms/1635305135.148343)
> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5407 us  8
> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,40:1,49:4,50:1,59:1,75:1 (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.428 ms/1635305136.147343)
> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5188 us  8
> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,40:1,49:3,50:3,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.388 ms/1635305137.146284)
> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5306 us  8
> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:2,50:2,51:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.422 ms/1635305138.146316)
> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/5939 us  7
> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,29:1,36:1,39:3,40:3,44:1,45:1,46:1,48:2,49:33,50:18,51:1,52:1,59:5,60:2,64:2,65:3,75:2,85:1 (5.00/95.00/99.7%=1/65/85,Outliers=0,obl/obu=0/0) (8.417 ms/1635305135.148343)
> 
> [root@localhost iperf2-code]# src/iperf -s -i 1 -e -B 192.168.1.1%eth1
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 6287
> Binding to local address 192.168.1.1 and iface eth1
> Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
> TCP window size:  128 KByte (default)
> ------------------------------------------------------------
> [  1] local 192.168.1.1%eth1 port 5001 connected with 192.168.1.4 port 45680 (MSS=1448) (burst-period=1.0000s) (trip-times) (sock=4) (peer 2.1.4-master) on 2021-10-26 20:25:29 (PDT)
> [ ID] Burst (start-end)  Transfer     Bandwidth       XferTime  (DC%)     Reads=Dist          NetPwr
> [  1] 0.0001-0.0500 sec  40.1 KBytes  6.59 Mbits/sec  49.848 ms (5%)    12=12:0:0:0:0:0:0:0  0
> [  1] 1.0002-1.0461 sec  40.0 KBytes  7.14 Mbits/sec  45.913 ms (4.6%)    10=10:0:0:0:0:0:0:0  0
> [  1] 2.0002-2.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.876 ms (4.9%)    11=11:0:0:0:0:0:0:0  0
> [  1] 3.0002-3.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.886 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 4.0002-4.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.887 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 5.0002-5.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.881 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 6.0002-6.0511 sec  40.0 KBytes  6.44 Mbits/sec  50.895 ms (5.1%)    10=10:0:0:0:0:0:0:0  0
> [  1] 7.0002-7.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.889 ms (5%)    10=10:0:0:0:0:0:0:0  0
> [  1] 8.0002-8.0481 sec  40.0 KBytes  6.84 Mbits/sec  47.901 ms (4.8%)    11=11:0:0:0:0:0:0:0  0
> [  1] 9.0002-9.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.872 ms (4.9%)    10=10:0:0:0:0:0:0:0  0
> [  1] 0.0000-10.0031 sec   400 KBytes   328 Kbits/sec               104=104:0:0:0:0:0:0:0
> 
> Bob
> 
> On Tue, Oct 26, 2021 at 6:12 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
> 
> On 10/26/21 4:38 PM, Christoph Paasch wrote:
> > Hi Bob,
> > 
> >> On Oct 26, 2021, at 4:23 PM, Bob McMahon <bob.mcmahon@broadcom.com <mailto:bob.mcmahon@broadcom.com>> wrote:
> >> I'm confused. I don't see any blocking nor partial writes per the write() at the app level with TCP_NOTSENT_LOWAT set at 4 bytes. The burst is 40K, the write size is 4K and the watermark is 4 bytes. There are ten writes per burst.
> > 
> > You are on Linux here, right?
> > 
> > AFAICS, Linux will still accept whatever fits in an skb. And that is likely more than 4K (with GSO on by default).
> 
> This (max payload per skb) can be tuned at the driver level, at least for experimental purposes or dedicated devices.
> 
> ip link set dev eth0 gso_max_size 8000
> 
> To fetch current values :
> 
> ip -d link sh dev eth0
> 
> 
> > 
> > However, do you go back to select() after each write() or do you loop over the write() calls?
> > 
> > 
> > Christoph
> > 
> >> The S8 histograms are the times waiting on the select().  The first value is the bin number (multiplied by 100usec bin width) and second the bin count. The worst case time is at the end and is timestamped per unix epoch.
> >>
> >> The second run is over a controlled WiFi link where a 99.7% point of 4-8ms for a WiFi TX op arbitration win is in the ballpark. The first is 1G wired and is in the 600 usec range. (No media arbitration there.)
> >>
> >>  [root@localhost iperf2-code]# src/iperf -c 10.19.87.9 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> >> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
> >> ------------------------------------------------------------
> >> Client connecting to 10.19.87.9, TCP port 5001 with pid 2124 (1 flows)
> >> Write buffer size: 4096 Byte
> >> Bursting: 40.0 KByte every 1.00 seconds
> >> TCP window size: 85.0 KByte (default)
> >> Event based writes (pending queue watermark at 4 bytes)
> >> Enabled select histograms bin-width=0.100 ms, bins=10000
> >> ------------------------------------------------------------
> >> [  1] local 10.19.87.10%eth0 port 33166 connected with 10.19.87.9 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=0.54 ms) on 2021-10-26 16:07:33 (PDT)
> >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
> >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/5368 us  8
> >> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,2:5,3:2,4:1,11:1 (5.00/95.00/99.7%=1/11/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
> >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/569 us  72
> >> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:1,3:4,4:1,7:1,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.736 ms/1635289654.928088)
> >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/312 us  131
> >> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.548 ms/1635289655.927776)
> >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/302 us  136
> >> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:2,3:5,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.584 ms/1635289656.927814)
> >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/316 us  130
> >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:2,4:2,5:2,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.572 ms/1635289657.927810)
> >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/253 us  162
> >> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:4,5:1 (5.00/95.00/99.7%=1/5/5,Outliers=0,obl/obu=0/0) (0.417 ms/1635289658.927630)
> >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/290 us  141
> >> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:3,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.573 ms/1635289659.927771)
> >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/359 us  114
> >> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,3:4,4:3,6:1 (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.570 ms/1635289660.927753)
> >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/349 us  117
> >> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:5,4:1,7:1 (5.00/95.00/99.7%=1/7/7,Outliers=0,obl/obu=0/0) (0.608 ms/1635289661.927843)
> >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/347 us  118
> >> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:1,3:5,8:1 (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.725 ms/1635289662.927861)
> >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/1519 us  27
> >> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:25,2:13,3:36,4:11,5:5,6:5,7:2,8:2,11:1 (5.00/95.00/99.7%=1/7/11,Outliers=0,obl/obu=0/0) (1.089 ms/1635289653.928360)
> >>
> >> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> >> WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
> >> ------------------------------------------------------------
> >> Client connecting to 192.168.1.1, TCP port 5001 with pid 2131 (1 flows)
> >> Write buffer size: 4096 Byte
> >> Bursting: 40.0 KByte every 1.00 seconds
> >> TCP window size: 85.0 KByte (default)
> >> Event based writes (pending queue watermark at 4 bytes)
> >> Enabled select histograms bin-width=0.100 ms, bins=10000
> >> ------------------------------------------------------------
> >> [  1] local 192.168.1.4%eth1 port 45518 connected with 192.168.1.1 port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.48 ms) on 2021-10-26 16:07:56 (PDT)
> >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
> >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0       14K/10339 us  4
> >> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,40:1,47:1,49:2,50:3,51:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.990 ms/1635289676.802143)
> >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4853 us  8
> >> [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,39:1,44:1,45:1,49:1,51:1,52:1,60:1 (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.937 ms/1635289677.802274)
> >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4991 us  8
> >> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,48:1,49:2,50:2,51:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.307 ms/1635289678.794326)
> >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/4610 us  9
> >> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:3,56:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.362 ms/1635289679.794335)
> >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5028 us  8
> >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.367 ms/1635289680.794399)
> >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5113 us  8
> >> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:3,50:2,58:1,60:1,65:1 (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.442 ms/1635289681.794392)
> >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5054 us  8
> >> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,51:1,60:2,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.374 ms/1635289682.794335)
> >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5138 us  8
> >> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:2,40:1,49:2,50:1,60:1,64:1 (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.396 ms/1635289683.794338)
> >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
> >> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1 (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0) (6.292 ms/1635289684.794262)
> >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0       14K/5329 us  8
> >> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,49:3,50:3,84:1 (5.00/95.00/99.7%=1/84/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
> >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0       14K/6331 us  6
> >> [  1] 0.00-10.01 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:19,38:2,39:5,40:2,44:1,45:3,47:1,48:1,49:26,50:17,51:4,52:1,56:1,58:1,59:1,60:7,63:1,64:5,65:1,84:1 (5.00/95.00/99.7%=1/64/84,Outliers=0,obl/obu=0/0) (8.306 ms/1635289685.796315)
> >>
> >> Bob
> >>
> >> On Tue, Oct 26, 2021 at 11:45 AM Christoph Paasch <cpaasch@apple.com <mailto:cpaasch@apple.com>> wrote:
> >>
> >>     Hello,
> >>
> >>     > On Oct 25, 2021, at 9:24 PM, Eric Dumazet <eric.dumazet@gmail.com <mailto:eric.dumazet@gmail.com>> wrote:
> >>     >
> >>     >
> >>     >
> >>     > On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> >>     >> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <make-wifi-fast@lists.bufferbloat.net <mailto:make-wifi-fast@lists.bufferbloat.net>> wrote:
> >>     >>
> >>     >>> Hi All,
> >>     >>>
> >>     >>> Sorry for the spam. I'm trying to support a meaningful TCP message latency w/iperf 2 from the sender side w/o requiring e2e clock synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to help with this. It seems that this event goes off when the bytes are in flight vs have reached the destination network stack. If that's the case, then iperf 2 client (sender) may be able to produce the message latency by adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled RTT.
> >>     >>>
> >>     >>> Does this seem reasonable?
> >>     >>
> >>     >> I’m not 100% sure what you’re asking, but I will try to help.
> >>     >>
> >>     >> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t report your endpoint as writable (e.g., via kqueue or epoll) until less than that threshold of data remains unsent. It won’t stop you writing more bytes if you want to, up to the socket send buffer size, but it won’t *ask* you for more data until the TCP_NOTSENT_LOWAT threshold is reached.
> >>     >
> >>     >
> >>     > When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made sure that sendmsg() would actually
> >>     > stop feeding more bytes in TCP transmit queue if the current amount of unsent bytes
> >>     > was above the threshold.
> >>     >
> >>     > So it looks like Apple implementation is different, based on your description ?
> >>
> >>     Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...
> >>
> >>     An app can still fill the send-buffer if it does a sendmsg() with a large buffer or does repeated calls to sendmsg().
> >>
> >>     Fur Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to quickly change the data it "scheduled" to send. And thus allow the app to write the smallest "logical unit" it has. If that unit is 512KB large, the app is allowed to send that.
> >>     For example, in case of video-streaming one may want to skip ahead in the video. In that case the app still needs to transmit the remaining parts of the previous frame anyways, before it can send the new video frame.
> >>     That's the reason why the Apple implementation allows one to write more than just the lowat threshold.
> >>
> >>
> >>     That being said, I do think that Linux's way allows for an easier API because the app does not need to be careful at how much data it sends after an epoll/kqueue wakeup. So, the latency-benefits will be easier to get.
> >>
> >>
> >>     Christoph
> >>
> >>
> >>
> >>     > [1] https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36 <https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36>
> >>     >
> >>     > netperf does not use epoll(), but rather a loop over sendmsg().
> >>     >
> >>     > One of the point of TCP_NOTSENT_LOWAT for Google was to be able to considerably increase
> >>     > max number of bytes in transmit queues (3rd column of /proc/sys/net/ipv4/tcp_wmem)
> >>     > by 10x, allowing for autotune to increase BDP for big RTT flows, this without
> >>     > increasing memory needs for flows with small RTT.
> >>     >
> >>     > In other words, the TCP implementation attempts to keep BDP bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP of bytes in flight is necessary to fill the network pipe and get good throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is provided to give the source software some advance notice that the TCP implementation will soon be looking for more bytes to send, so that the buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF option conflates both “bytes in flight” and “bytes buffered and ready to go” into the same number.)
> >>     >>
> >>     >> If you wait for the TCP_NOTSENT_LOWAT notification, write a chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT notification, that will tell you roughly how long it took n bytes to depart the machine. You won’t know why, though. The bytes could depart the machine in response for acks indicating that the same number of bytes have been accepted at the receiver. But the bytes can also depart the machine because CWND is growing. Of course, both of those things are usually happening at the same time.
> >>     >>
> >>     >> How to use TCP_NOTSENT_LOWAT is explained in this video:
> >>     >>
> >>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>>
> >>     >>
> >>     >> Later in the same video is a two-minute demo (time offset 42:00 to time offset 44:00) showing a “before and after” demo illustrating the dramatic difference this makes for screen sharing responsiveness.
> >>     >>
> >>     >> <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520 <https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>>
> >>     >>
> >>     >> Stuart Cheshire
> >>     >> _______________________________________________
> >>     >> Bloat mailing list
> >>     >> Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
> >>     >> https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>
> >>     >>
> >>     > _______________________________________________
> >>     > Bloat mailing list
> >>     > Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
> >>     > https://lists.bufferbloat.net/listinfo/bloat <https://lists.bufferbloat.net/listinfo/bloat>
> >>
> >>


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [Bloat] [Make-wifi-fast] TCP_NOTSENT_LOWAT applied to e2e TCP msg latency
  2021-10-28 16:04                                             ` Christoph Paasch
@ 2021-10-29 21:16                                               ` Bob McMahon
  0 siblings, 0 replies; 108+ messages in thread
From: Bob McMahon @ 2021-10-29 21:16 UTC (permalink / raw)
  To: Christoph Paasch
  Cc: Eric Dumazet, Stuart Cheshire, Cake List, Valdis Klētnieks,
	Make-Wifi-fast, David P. Reed, starlink, codel, cerowrt-devel,
	bloat, Steve Crocker, Vint Cerf

[-- Attachment #1: Type: text/plain, Size: 32785 bytes --]

Thanks for pointing out the congestion window. Not sure why it doesn't
increase; I think that takes a stack expert ;) The run below with the rx
window clamp does seem to align with Linux blocking the writes.

Yes, in the previous run the worst cases were 5.121 ms, which does align
with the RTT.
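
For anyone following along, the select-wait histograms come from roughly
the following sender-side pattern. This is a minimal sketch under Linux
semantics, not the actual iperf 2 code; the 4-byte watermark, 4 KB writes
and 40 KB burst just mirror the command lines in the runs below.

/* Minimal sketch (not the iperf 2 source): time the select() wait that
 * TCP_NOTSENT_LOWAT gates, after writing one 40 KB burst in 4 KB chunks. */
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

double notsent_lowat_wait_ms(int fd)
{
    int lowat = 4;        /* pending-queue watermark; normally set once */
    setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));

    char buf[4096];
    memset(buf, 0, sizeof(buf));
    for (int i = 0; i < 10; i++)          /* 10 x 4 KB = one 40 KB burst */
        write(fd, buf, sizeof(buf));      /* error handling omitted */

    fd_set wset;
    FD_ZERO(&wset);
    FD_SET(fd, &wset);
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    select(fd + 1, NULL, &wset, NULL, NULL);  /* writable again once the
                                                 unsent bytes drop below
                                                 the watermark */
    gettimeofday(&t1, NULL);
    return (t1.tv_sec - t0.tv_sec) * 1e3 +
           (t1.tv_usec - t0.tv_usec) / 1e3;   /* one histogram sample, ms */
}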

As a side note: I wonder if WiFi AP folks could somehow better "schedule
aggregates" based on GSO "predictions." One of the challenges for WiFi is
aligning aggregates with what TCP is feeding it. I'm not sure whether an
intermediary last-hop AP could size its queue based on the e2e source's
"big TCP," so to speak. This is all outside my area of expertise, but it
might be nice if the two non-linear control loops (the AP's 802.11ax
first/last-hop scheduling and e2e TCP's feedback loop) could somehow be
plugged together in a way that helps with both e2e low latency and
throughput.

Here's a run with receive-side window clamping set to 1024 bytes, which I
think should force CWND not to grow. In this case it does look like Linux
is blocking the writes, as the TCP_NOTSENT_LOWAT select waits are sub-100
microseconds, so the writes themselves must have blocked. (A sketch of the
setsockopt() call I believe is behind that clamp follows the server output
below.)

[root@localhost iperf2-code]# src/iperf -c 192.168.1.1%eth1 --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
WARN: option of --burst-size without --burst-period defaults --burst-period to 1 second
------------------------------------------------------------
Client connecting to 192.168.1.1, TCP port 5001 with pid 24601 via eth1 (1 flows)
Write buffer size: 4096 Byte
Bursting: 40.0 KByte every 1.00 seconds
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 4 bytes)
Enabled select histograms bin-width=0.100 ms, bins=10000
------------------------------------------------------------
[  1] local 192.168.1.4%eth1 port 46042 connected with 192.168.1.1 port 5001 (MSS=576) (prefetch=4) (trip-times) (sock=3) (ct=5.01 ms) on 2021-10-29 13:57:22 (PDT)
[ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry     Cwnd/RTT        NetPwr
[  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  10/0          0        5K/10109 us  4
[  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,40:1,50:7,51:1 (5.00/95.00/99.7%=1/51/51,Outliers=0,obl/obu=0/0) (5.015 ms/1635541042.537251)
[  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/4941 us  8
[  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.015 ms/1635541043.465805)
[  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/5036 us  8
[  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.013 ms/1635541044.602288)
[  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/4956 us  8
[  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.015 ms/1635541045.465820)
[  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/5121 us  8
[  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.014 ms/1635541046.664221)
[  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/5029 us  8
[  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.091 ms/1635541047.466021)
[  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/4930 us  8
[  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:9,2:1 (5.00/95.00/99.7%=1/2/2,Outliers=0,obl/obu=0/0) (0.121 ms/1635541048.466058)
[  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/5096 us  8
[  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.015 ms/1635541049.465821)
[  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/5086 us  8
[  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.015 ms/1635541050.466051)
[  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0        5K/5112 us  8
[  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:10 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0) (0.015 ms/1635541051.465915)
[  1] 0.00-10.02 sec   400 KBytes   327 Kbits/sec  100/0          0        5K/6518 us  6
[  1] 0.00-10.02 sec S8(f)-PDF: bin(w=100us):cnt(100)=1:90,2:1,40:1,50:7,51:1 (5.00/95.00/99.7%=1/50/51,Outliers=9,obl/obu=0/0) (5.015 ms/1635541042.537251)


[root@localhost iperf2-code]# src/iperf -s -i 1 -e -B 192.168.1.1%ap0 --tcp-rx-window-clamp 1024
------------------------------------------------------------
Server listening on TCP port 5001 with pid 22772
Binding to local address 192.168.1.1 and iface ap0
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.1%ap0 port 5001 connected with 192.168.1.4 port 46042 (MSS=1448) (clamp=1024) (burst-period=1.00s) (trip-times) (sock=4) (peer 2.1.4-master) on 2021-10-29 13:57:22 (PDT)
[ ID] Burst (start-end)  Transfer     Bandwidth       XferTime  (DC%)   Reads=Dist          NetPwr
[  1] 0.00-0.20 sec  40.1 KBytes  1.65 Mbits/sec  199.727 ms (20%)   42=42:0:0:0:0:0:0:0  0
[  1] 1.00-1.20 sec  40.0 KBytes  1.65 Mbits/sec  198.674 ms (20%)   40=40:0:0:0:0:0:0:0  0
[  1] 2.00-2.20 sec  40.0 KBytes  1.64 Mbits/sec  199.729 ms (20%)   40=40:0:0:0:0:0:0:0  0
[  1] 3.00-3.19 sec  40.0 KBytes  1.69 Mbits/sec  193.638 ms (19%)   40=40:0:0:0:0:0:0:0  0
[  1] 4.00-4.20 sec  40.0 KBytes  1.62 Mbits/sec  201.660 ms (20%)   40=40:0:0:0:0:0:0:0  0
[  1] 5.00-5.20 sec  40.0 KBytes  1.65 Mbits/sec  198.460 ms (20%)   40=40:0:0:0:0:0:0:0  0
[  1] 6.00-6.19 sec  40.0 KBytes  1.69 Mbits/sec  194.418 ms (19%)   40=40:0:0:0:0:0:0:0  0
[  1] 7.00-7.20 sec  40.0 KBytes  1.66 Mbits/sec  197.658 ms (20%)   40=40:0:0:0:0:0:0:0  0
[  1] 8.00-8.20 sec  40.0 KBytes  1.67 Mbits/sec  196.431 ms (20%)   40=40:0:0:0:0:0:0:0  0
[  1] 9.00-9.20 sec  40.0 KBytes  1.63 Mbits/sec  200.665 ms (20%)   40=40:0:0:0:0:0:0:0  0
[  1] 0.00-10.00 sec   400 KBytes   328 Kbits/sec              402=402:0:0:0:0:0:0:0
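
As an aside, the --tcp-rx-window-clamp on the server command above is, as
far as I know, just the Linux TCP_WINDOW_CLAMP socket option applied to
the accepted socket. A minimal sketch, assuming that mapping is right:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Cap the advertised receive window so the peer's CWND can't grow much
 * past clamp_bytes. Assumes Linux TCP_WINDOW_CLAMP is what iperf 2's
 * --tcp-rx-window-clamp uses; e.g. clamp_rx_window(connfd, 1024) right
 * after accept() to match the run above. */
static int clamp_rx_window(int accepted_fd, int clamp_bytes)
{
    return setsockopt(accepted_fd, IPPROTO_TCP, TCP_WINDOW_CLAMP,
                      &clamp_bytes, sizeof(clamp_bytes));
}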


Bob


On Thu, Oct 28, 2021 at 9:04 AM Christoph Paasch <cpaasch@apple.com> wrote:

>
>
> > On Oct 26, 2021, at 8:45 PM, Bob McMahon <bob.mcmahon@broadcom.com>
> wrote:
> >
> > This is linux. The code flow is burst writes until the burst size, take
> a timestamp, call select(), take second timestamp and insert time delta
> into histogram, await clock_nanosleep() to schedule the next burst.
> (actually, the deltas, inserts into the histogram and user i/o are done in
> another thread, i.e. iperf 2's reporter thread.)
> > I still must be missing something.  Does anything else need to be set to
> reduce the skb size? Everything seems to be indicating 4K writes even when
> gso_max_size is 2000 (I assume these are units of bytes?) There are ten
> writes, ten reads and ten  RTTs for the bursts.  I don't see partial writes
> at the app level.
>
> One thing to keep in mind is that once the congestion-window increased to
> > 40KB (your burst-size), all of the writes will not be blocking at all.
> TCP_NOTSENT_LOWAT is really just about the "notsent" part. Once the
> congestion-window is big enough to send 40KB in a burst, it will just all
> be immediately sent out.
>
> > [root@localhost iperf2-code]# ip link set dev eth1 gso_max_size 2000
> > [root@localhost iperf2-code]# ip -d link sh dev eth1
> > 9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
> UNKNOWN mode DEFAULT group default qlen 1000
> >     link/ether 00:90:4c:40:04:59 brd ff:ff:ff:ff:ff:ff promiscuity 0
> minmtu 68 maxmtu 1500 addrgenmode eui64 numtxqueues 1 numrxqueues 1
> gso_max_size 2000 gso_max_segs 65535
> > [root@localhost iperf2-code]# uname -r
> > 5.0.9-301.fc30.x86_64
> >
> > It looks like RTT is being driven by WiFi TXOPs as doubling the write
> size increases the aggregation by two but has no significant effect on the
> RTTs.
> >
> > 4K writes: tot_mpdus 328 tot_ampdus 209 mpduperampdu 2
> >
> > 8k writes:  tot_mpdus 317 tot_ampdus 107 mpduperampdu 3
> >
> > [root@localhost iperf2-code]# src/iperf -c 192.168.1.1%eth1
> --trip-times -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K
> --histograms
> > WARN: option of --burst-size without --burst-period defaults
> --burst-period to 1 second
> > ------------------------------------------------------------
> > Client connecting to 192.168.1.1, TCP port 5001 with pid 5145 via eth1
> (1 flows)
> > Write buffer size: 4096 Byte
> > Bursting: 40.0 KByte every 1.00 seconds
> > TCP window size: 85.0 KByte (default)
> > Event based writes (pending queue watermark at 4 bytes)
> > Enabled select histograms bin-width=0.100 ms, bins=10000
> > ------------------------------------------------------------
> > [  1] local 192.168.1.4%eth1 port 45680 connected with 192.168.1.1 port
> 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.30 ms) on
> 2021-10-26 20:25:29 (PDT)
> > [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
>  Cwnd/RTT        NetPwr
> > [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0
>  14K/10091 us  4
> > [  1] 0.00-1.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:1,36:1,40:1,44:1,46:1,48:1,49:1,50:2,52:1
> (5.00/95.00/99.7%=1/52/52,Outliers=0,obl/obu=0/0) (5.121
> ms/1635305129.152339)
>
> Am I reading this correctly, that your writes take worst-case 5
> milli-seconds ?
>
> This looks correct then, because you seem to have an RTT of around 5ms.
>
>
> It's surprising though that your congestion-window is not increasing.
>
>
> Christoph
>
>
> > [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/4990 us  8
> > [  1] 1.00-2.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,39:1,45:1,49:5,50:1
> (5.00/95.00/99.7%=1/50/50,Outliers=0,obl/obu=0/0) (4.991
> ms/1635305130.153330)
> > [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/4904 us  8
> > [  1] 2.00-3.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,29:1,49:4,50:1,59:1,75:1
> (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.455
> ms/1635305131.147353)
> > [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/4964 us  8
> > [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:4,50:2,59:1,65:1
> (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.460
> ms/1635305132.146338)
> > [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/4970 us  8
> > [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,65:1
> (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.404
> ms/1635305133.146335)
> > [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/4986 us  8
> > [  1] 5.00-6.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,48:1,49:1,50:4,59:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.395
> ms/1635305134.146343)
> > [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/5059 us  8
> > [  1] 6.00-7.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:1,49:3,50:2,60:1,85:1
> (5.00/95.00/99.7%=1/85/85,Outliers=0,obl/obu=0/0) (8.417
> ms/1635305135.148343)
> > [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/5407 us  8
> > [  1] 7.00-8.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,40:1,49:4,50:1,59:1,75:1
> (5.00/95.00/99.7%=1/75/75,Outliers=0,obl/obu=0/0) (7.428
> ms/1635305136.147343)
> > [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/5188 us  8
> > [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,40:1,49:3,50:3,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.388
> ms/1635305137.146284)
> > [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>  14K/5306 us  8
> > [  1] 9.00-10.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:1,49:2,50:2,51:1,60:1,65:1
> (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.422
> ms/1635305138.146316)
> > [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0
>  14K/5939 us  7
> > [  1] 0.00-10.01 sec S8(f)-PDF:
> bin(w=100us):cnt(100)=1:19,29:1,36:1,39:3,40:3,44:1,45:1,46:1,48:2,49:33,50:18,51:1,52:1,59:5,60:2,64:2,65:3,75:2,85:1
> (5.00/95.00/99.7%=1/65/85,Outliers=0,obl/obu=0/0) (8.417
> ms/1635305135.148343)
> >
> > [root@localhost iperf2-code]# src/iperf -s -i 1 -e -B 192.168.1.1%eth1
> > ------------------------------------------------------------
> > Server listening on TCP port 5001 with pid 6287
> > Binding to local address 192.168.1.1 and iface eth1
> > Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
> > TCP window size:  128 KByte (default)
> > ------------------------------------------------------------
> > [  1] local 192.168.1.1%eth1 port 5001 connected with 192.168.1.4 port
> 45680 (MSS=1448) (burst-period=1.0000s) (trip-times) (sock=4) (peer
> 2.1.4-master) on 2021-10-26 20:25:29 (PDT)
> > [ ID] Burst (start-end)  Transfer     Bandwidth       XferTime  (DC%)
>  Reads=Dist          NetPwr
> > [  1] 0.0001-0.0500 sec  40.1 KBytes  6.59 Mbits/sec  49.848 ms (5%)
> 12=12:0:0:0:0:0:0:0  0
> > [  1] 1.0002-1.0461 sec  40.0 KBytes  7.14 Mbits/sec  45.913 ms (4.6%)
>   10=10:0:0:0:0:0:0:0  0
> > [  1] 2.0002-2.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.876 ms (4.9%)
>   11=11:0:0:0:0:0:0:0  0
> > [  1] 3.0002-3.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.886 ms (5%)
> 10=10:0:0:0:0:0:0:0  0
> > [  1] 4.0002-4.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.887 ms (5%)
> 10=10:0:0:0:0:0:0:0  0
> > [  1] 5.0002-5.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.881 ms (5%)
> 10=10:0:0:0:0:0:0:0  0
> > [  1] 6.0002-6.0511 sec  40.0 KBytes  6.44 Mbits/sec  50.895 ms (5.1%)
>   10=10:0:0:0:0:0:0:0  0
> > [  1] 7.0002-7.0501 sec  40.0 KBytes  6.57 Mbits/sec  49.889 ms (5%)
> 10=10:0:0:0:0:0:0:0  0
> > [  1] 8.0002-8.0481 sec  40.0 KBytes  6.84 Mbits/sec  47.901 ms (4.8%)
>   11=11:0:0:0:0:0:0:0  0
> > [  1] 9.0002-9.0491 sec  40.0 KBytes  6.70 Mbits/sec  48.872 ms (4.9%)
>   10=10:0:0:0:0:0:0:0  0
> > [  1] 0.0000-10.0031 sec   400 KBytes   328 Kbits/sec
>  104=104:0:0:0:0:0:0:0
> >
> > Bob
> >
> > On Tue, Oct 26, 2021 at 6:12 PM Eric Dumazet <eric.dumazet@gmail.com>
> wrote:
> >
> >
> > On 10/26/21 4:38 PM, Christoph Paasch wrote:
> > > Hi Bob,
> > >
> > >> On Oct 26, 2021, at 4:23 PM, Bob McMahon <bob.mcmahon@broadcom.com
> <mailto:bob.mcmahon@broadcom.com>> wrote:
> > >> I'm confused. I don't see any blocking nor partial writes per the
> write() at the app level with TCP_NOTSENT_LOWAT set at 4 bytes. The burst
> is 40K, the write size is 4K and the watermark is 4 bytes. There are ten
> writes per burst.
> > >
> > > You are on Linux here, right?
> > >
> > > AFAICS, Linux will still accept whatever fits in an skb. And that is
> likely more than 4K (with GSO on by default).
> >
> > This (max payload per skb) can be tuned at the driver level, at least
> for experimental purposes or dedicated devices.
> >
> > ip link set dev eth0 gso_max_size 8000
> >
> > To fetch current values :
> >
> > ip -d link sh dev eth0
> >
> >
> > >
> > > However, do you go back to select() after each write() or do you loop
> over the write() calls?
> > >
> > >
> > > Christoph
> > >
> > >> The S8 histograms are the times waiting on the select().  The first
> value is the bin number (multiplied by 100usec bin width) and second the
> bin count. The worst case time is at the end and is timestamped per unix
> epoch.
> > >>
> > >> The second run is over a controlled WiFi link where a 99.7% point of
> 4-8ms for a WiFi TX op arbitration win is in the ballpark. The first is 1G
> wired and is in the 600 usec range. (No media arbitration there.)
> > >>
> > >>  [root@localhost iperf2-code]# src/iperf -c 10.19.87.9 --trip-times
> -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> > >> WARN: option of --burst-size without --burst-period defaults
> --burst-period to 1 second
> > >> ------------------------------------------------------------
> > >> Client connecting to 10.19.87.9, TCP port 5001 with pid 2124 (1 flows)
> > >> Write buffer size: 4096 Byte
> > >> Bursting: 40.0 KByte every 1.00 seconds
> > >> TCP window size: 85.0 KByte (default)
> > >> Event based writes (pending queue watermark at 4 bytes)
> > >> Enabled select histograms bin-width=0.100 ms, bins=10000
> > >> ------------------------------------------------------------
> > >> [  1] local 10.19.87.10%eth0 port 33166 connected with 10.19.87.9
> port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=0.54 ms) on
> 2021-10-26 16:07:33 (PDT)
> > >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
>  Cwnd/RTT        NetPwr
> > >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0
>    14K/5368 us  8
> > >> [  1] 0.00-1.00 sec S8-PDF: bin(w=100us):cnt(10)=1:1,2:5,3:2,4:1,11:1
> (5.00/95.00/99.7%=1/11/11,Outliers=0,obl/obu=0/0) (1.089
> ms/1635289653.928360)
> > >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/569 us  72
> > >> [  1] 1.00-2.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,2:1,3:4,4:1,7:1,8:1
> (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.736 ms/1635289654.928088)
> > >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/312 us  131
> > >> [  1] 2.00-3.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:2,5:2,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.548 ms/1635289655.927776)
> > >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/302 us  136
> > >> [  1] 3.00-4.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,2:2,3:5,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.584 ms/1635289656.927814)
> > >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/316 us  130
> > >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:2,4:2,5:2,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.572 ms/1635289657.927810)
> > >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/253 us  162
> > >> [  1] 5.00-6.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:2,3:4,5:1
> (5.00/95.00/99.7%=1/5/5,Outliers=0,obl/obu=0/0) (0.417 ms/1635289658.927630)
> > >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/290 us  141
> > >> [  1] 6.00-7.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:3,4:3,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.573 ms/1635289659.927771)
> > >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/359 us  114
> > >> [  1] 7.00-8.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,3:4,4:3,6:1
> (5.00/95.00/99.7%=1/6/6,Outliers=0,obl/obu=0/0) (0.570 ms/1635289660.927753)
> > >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/349 us  117
> > >> [  1] 8.00-9.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,3:5,4:1,7:1
> (5.00/95.00/99.7%=1/7/7,Outliers=0,obl/obu=0/0) (0.608 ms/1635289661.927843)
> > >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/347 us  118
> > >> [  1] 9.00-10.00 sec S8-PDF: bin(w=100us):cnt(10)=1:3,2:1,3:5,8:1
> (5.00/95.00/99.7%=1/8/8,Outliers=0,obl/obu=0/0) (0.725 ms/1635289662.927861)
> > >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0
>      14K/1519 us  27
> > >> [  1] 0.00-10.01 sec S8(f)-PDF:
> bin(w=100us):cnt(100)=1:25,2:13,3:36,4:11,5:5,6:5,7:2,8:2,11:1
> (5.00/95.00/99.7%=1/7/11,Outliers=0,obl/obu=0/0) (1.089
> ms/1635289653.928360)
> > >>
> > >> [root@localhost iperf2-code]# src/iperf -c 192.168.1.1 --trip-times
> -i 1 -e --tcp-write-prefetch 4 -l 4K --burst-size=40K --histograms
> > >> WARN: option of --burst-size without --burst-period defaults
> --burst-period to 1 second
> > >> ------------------------------------------------------------
> > >> Client connecting to 192.168.1.1, TCP port 5001 with pid 2131 (1
> flows)
> > >> Write buffer size: 4096 Byte
> > >> Bursting: 40.0 KByte every 1.00 seconds
> > >> TCP window size: 85.0 KByte (default)
> > >> Event based writes (pending queue watermark at 4 bytes)
> > >> Enabled select histograms bin-width=0.100 ms, bins=10000
> > >> ------------------------------------------------------------
> > >> [  1] local 192.168.1.4%eth1 port 45518 connected with 192.168.1.1
> port 5001 (MSS=1448) (prefetch=4) (trip-times) (sock=3) (ct=5.48 ms) on
> 2021-10-26 16:07:56 (PDT)
> > >> [ ID] Interval        Transfer    Bandwidth       Write/Err  Rtry
>  Cwnd/RTT        NetPwr
> > >> [  1] 0.00-1.00 sec  40.1 KBytes   329 Kbits/sec  11/0          0
>    14K/10339 us  4
> > >> [  1] 0.00-1.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:1,40:1,47:1,49:2,50:3,51:1,60:1
> (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.990
> ms/1635289676.802143)
> > >> [  1] 1.00-2.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/4853 us  8
> > >> [  1] 1.00-2.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,38:1,39:1,44:1,45:1,49:1,51:1,52:1,60:1
> (5.00/95.00/99.7%=1/60/60,Outliers=0,obl/obu=0/0) (5.937
> ms/1635289677.802274)
> > >> [  1] 2.00-3.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/4991 us  8
> > >> [  1] 2.00-3.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,48:1,49:2,50:2,51:1,60:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.307
> ms/1635289678.794326)
> > >> [  1] 3.00-4.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/4610 us  9
> > >> [  1] 3.00-4.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,49:3,50:3,56:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.362
> ms/1635289679.794335)
> > >> [  1] 4.00-5.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/5028 us  8
> > >> [  1] 4.00-5.00 sec S8-PDF: bin(w=100us):cnt(10)=1:2,49:6,59:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.367
> ms/1635289680.794399)
> > >> [  1] 5.00-6.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/5113 us  8
> > >> [  1] 5.00-6.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,49:3,50:2,58:1,60:1,65:1
> (5.00/95.00/99.7%=1/65/65,Outliers=0,obl/obu=0/0) (6.442
> ms/1635289681.794392)
> > >> [  1] 6.00-7.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/5054 us  8
> > >> [  1] 6.00-7.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:1,49:3,51:1,60:2,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.374
> ms/1635289682.794335)
> > >> [  1] 7.00-8.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/5138 us  8
> > >> [  1] 7.00-8.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:2,40:1,49:2,50:1,60:1,64:1
> (5.00/95.00/99.7%=1/64/64,Outliers=0,obl/obu=0/0) (6.396
> ms/1635289683.794338)
> > >> [  1] 8.00-9.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/5329 us  8
> > >> [  1] 8.00-9.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,38:1,45:2,49:1,50:3,63:1
> (5.00/95.00/99.7%=1/63/63,Outliers=0,obl/obu=0/0) (6.292
> ms/1635289684.794262)
> > >> [  1] 9.00-10.00 sec  40.0 KBytes   328 Kbits/sec  10/0          0
>    14K/5329 us  8
> > >> [  1] 9.00-10.00 sec S8-PDF:
> bin(w=100us):cnt(10)=1:2,39:1,49:3,50:3,84:1
> (5.00/95.00/99.7%=1/84/84,Outliers=0,obl/obu=0/0) (8.306
> ms/1635289685.796315)
> > >> [  1] 0.00-10.01 sec   400 KBytes   327 Kbits/sec  102/0          0
>      14K/6331 us  6
> > >> [  1] 0.00-10.01 sec S8(f)-PDF:
> bin(w=100us):cnt(100)=1:19,38:2,39:5,40:2,44:1,45:3,47:1,48:1,49:26,50:17,51:4,52:1,56:1,58:1,59:1,60:7,63:1,64:5,65:1,84:1
> (5.00/95.00/99.7%=1/64/84,Outliers=0,obl/obu=0/0) (8.306
> ms/1635289685.796315)
> > >>
> > >> Bob
> > >>
> > >> On Tue, Oct 26, 2021 at 11:45 AM Christoph Paasch <cpaasch@apple.com
> <mailto:cpaasch@apple.com>> wrote:
> > >>
> > >>     Hello,
> > >>
> > >>     > On Oct 25, 2021, at 9:24 PM, Eric Dumazet <
> eric.dumazet@gmail.com <mailto:eric.dumazet@gmail.com>> wrote:
> > >>     >
> > >>     >
> > >>     >
> > >>     > On 10/25/21 8:11 PM, Stuart Cheshire via Bloat wrote:
> > >>     >> On 21 Oct 2021, at 17:51, Bob McMahon via Make-wifi-fast <
> make-wifi-fast@lists.bufferbloat.net <mailto:
> make-wifi-fast@lists.bufferbloat.net>> wrote:
> > >>     >>
> > >>     >>> Hi All,
> > >>     >>>
> > >>     >>> Sorry for the spam. I'm trying to support a meaningful TCP
> message latency w/iperf 2 from the sender side w/o requiring e2e clock
> synchronization. I thought I'd try to use the TCP_NOTSENT_LOWAT event to
> help with this. It seems that this event goes off when the bytes are in
> flight vs have reached the destination network stack. If that's the case,
> then iperf 2 client (sender) may be able to produce the message latency by
> adding the drain time (write start to TCP_NOTSENT_LOWAT) and the sampled
> RTT.
> > >>     >>>
> > >>     >>> Does this seem reasonable?
> > >>     >>
> > >>     >> I’m not 100% sure what you’re asking, but I will try to help.
> > >>     >>
> > >>     >> When you set TCP_NOTSENT_LOWAT, the TCP implementation won’t
> report your endpoint as writable (e.g., via kqueue or epoll) until less
> than that threshold of data remains unsent. It won’t stop you writing more
> bytes if you want to, up to the socket send buffer size, but it won’t *ask*
> you for more data until the TCP_NOTSENT_LOWAT threshold is reached.
> > >>     >
> > >>     >
> > >>     > When I implemented TCP_NOTSENT_LOWAT back in 2013 [1], I made
> sure that sendmsg() would actually
> > >>     > stop feeding more bytes in TCP transmit queue if the current
> amount of unsent bytes
> > >>     > was above the threshold.
> > >>     >
> > >>     > So it looks like Apple implementation is different, based on
> your description ?
> > >>
> > >>     Yes, TCP_NOTSENT_LOWAT only impacts the wakeup on iOS/macOS/...
> > >>
> > >>     An app can still fill the send-buffer if it does a sendmsg() with
> a large buffer or does repeated calls to sendmsg().
> > >>
> > >>     Fur Apple, the goal of TCP_NOTSENT_LOWAT was to allow an app to
> quickly change the data it "scheduled" to send. And thus allow the app to
> write the smallest "logical unit" it has. If that unit is 512KB large, the
> app is allowed to send that.
> > >>     For example, in case of video-streaming one may want to skip
> ahead in the video. In that case the app still needs to transmit the
> remaining parts of the previous frame anyways, before it can send the new
> video frame.
> > >>     That's the reason why the Apple implementation allows one to
> write more than just the lowat threshold.
> > >>
> > >>
> > >>     That being said, I do think that Linux's way allows for an easier
> API because the app does not need to be careful at how much data it sends
> after an epoll/kqueue wakeup. So, the latency-benefits will be easier to
> get.
> > >>
> > >>
> > >>     Christoph
> > >>
> > >>
> > >>
> > >>     > [1]
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36
> <
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=c9bee3b7fdecb0c1d070c7b54113b3bdfb9a3d36
> >
> > >>     >
> > >>     > netperf does not use epoll(), but rather a loop over sendmsg().
> > >>     >
> > >>     > One of the point of TCP_NOTSENT_LOWAT for Google was to be able
> to considerably increase
> > >>     > max number of bytes in transmit queues (3rd column of
> /proc/sys/net/ipv4/tcp_wmem)
> > >>     > by 10x, allowing for autotune to increase BDP for big RTT
> flows, this without
> > >>     > increasing memory needs for flows with small RTT.
> > >>     >
> > >>     > In other words, the TCP implementation attempts to keep BDP
> bytes in flight + TCP_NOTSENT_LOWAT bytes buffered and ready to go. The BDP
> of bytes in flight is necessary to fill the network pipe and get good
> throughput. The TCP_NOTSENT_LOWAT of bytes buffered and ready to go is
> provided to give the source software some advance notice that the TCP
> implementation will soon be looking for more bytes to send, so that the
> buffer doesn’t run dry, thereby lowering throughput. (The old SO_SNDBUF
> option conflates both “bytes in flight” and “bytes buffered and ready to
> go” into the same number.)
> > >>     >>
> > >>     >> If you wait for the TCP_NOTSENT_LOWAT notification, write a
> chunk of n bytes of data, and then wait for the next TCP_NOTSENT_LOWAT
> notification, that will tell you roughly how long it took n bytes to depart
> the machine. You won’t know why, though. The bytes could depart the machine
> in response for acks indicating that the same number of bytes have been
> accepted at the receiver. But the bytes can also depart the machine because
> CWND is growing. Of course, both of those things are usually happening at
> the same time.
> > >>     >>
> > >>     >> How to use TCP_NOTSENT_LOWAT is explained in this video:
> > >>     >>
> > >>     >> <
> https://developer.apple.com/videos/play/wwdc2015/719/?time=2199 <
> https://developer.apple.com/videos/play/wwdc2015/719/?time=2199>>
> > >>     >>
> > >>     >> Later in the same video is a two-minute demo (time offset
> 42:00 to time offset 44:00) showing a “before and after” demo illustrating
> the dramatic difference this makes for screen sharing responsiveness.
> > >>     >>
> > >>     >> <
> https://developer.apple.com/videos/play/wwdc2015/719/?time=2520 <
> https://developer.apple.com/videos/play/wwdc2015/719/?time=2520>>
> > >>     >>
> > >>     >> Stuart Cheshire
> > >>     >> _______________________________________________
> > >>     >> Bloat mailing list
> > >>     >> Bloat@lists.bufferbloat.net <mailto:
> Bloat@lists.bufferbloat.net>
> > >>     >> https://lists.bufferbloat.net/listinfo/bloat <
> https://lists.bufferbloat.net/listinfo/bloat>
> > >>     >>
> > >>     > _______________________________________________
> > >>     > Bloat mailing list
> > >>     > Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net
> >
> > >>     > https://lists.bufferbloat.net/listinfo/bloat <
> https://lists.bufferbloat.net/listinfo/bloat>
> > >>
> > >>
>
>


[-- Attachment #2: Type: text/html, Size: 38753 bytes --]

^ permalink raw reply	[flat|nested] 108+ messages in thread

end of thread, other threads:[~2021-10-29 21:17 UTC | newest]

Thread overview: 108+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-01  0:12 [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board Dave Taht
2021-07-02  1:16 ` David P. Reed
2021-07-02  4:04   ` [Make-wifi-fast] " Bob McMahon
2021-07-02 16:11     ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Dick Roy
2021-07-02 17:07   ` [Cerowrt-devel] " Dave Taht
2021-07-02 23:28     ` [Make-wifi-fast] " Bob McMahon
2021-07-06 13:46       ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
2021-07-06 20:43         ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-07-06 21:24           ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
2021-07-06 22:05             ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-07-07 13:34               ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Ben Greear
2021-07-07 19:19                 ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-07-08 19:38         ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " David P. Reed
2021-07-08 22:51           ` [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-07-09  3:08           ` [Cerowrt-devel] [Starlink] [Make-wifi-fast] " Leonard Kleinrock
2021-07-09 10:05             ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] " Luca Muscariello
2021-07-09 19:31               ` [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point David P. Reed
2021-07-09 20:24                 ` Bob McMahon
2021-07-09 22:57                 ` [Bloat] " Holland, Jake
2021-07-09 23:37                   ` Toke Høiland-Jørgensen
2021-07-09 23:01                 ` [Cerowrt-devel] " Leonard Kleinrock
2021-07-09 23:56                   ` [Cerowrt-devel] [Bloat] " Jonathan Morton
2021-07-17 23:56                     ` [Cerowrt-devel] [Make-wifi-fast] " Aaron Wood
2021-07-10 19:51                   ` Bob McMahon
2021-07-10 23:24                     ` Bob McMahon
2021-07-12 13:46                 ` [Bloat] " Livingood, Jason
2021-07-12 17:40                   ` [Cerowrt-devel] " David P. Reed
2021-07-12 18:21                     ` Bob McMahon
2021-07-12 18:38                       ` Bob McMahon
2021-07-12 19:07                       ` [Cerowrt-devel] " Ben Greear
2021-07-12 20:04                         ` Bob McMahon
2021-07-12 20:32                           ` [Cerowrt-devel] " Ben Greear
2021-07-12 20:36                             ` [Cerowrt-devel] [Cake] " David Lang
2021-07-12 20:50                               ` Bob McMahon
2021-07-12 20:42                             ` Bob McMahon
2021-07-13  7:14                             ` [Cerowrt-devel] " Amr Rizk
2021-07-13 17:07                               ` Bob McMahon
2021-07-13 17:49                                 ` [Cerowrt-devel] " David P. Reed
2021-07-14 18:37                                   ` Bob McMahon
2021-07-15  1:27                                     ` Holland, Jake
2021-07-16  0:34                                       ` Bob McMahon
     [not found]                                   ` <A5E35F34-A4D5-45B1-8E2D-E2F6DE988A1E@cs.ucla.edu>
2021-07-22 16:30                                     ` Bob McMahon
2021-07-13 17:22                               ` Bob McMahon
2021-07-17 23:29                             ` [Cerowrt-devel] " Aaron Wood
2021-07-18 19:06                               ` Bob McMahon
2021-07-12 21:54                           ` [Cerowrt-devel] [Make-wifi-fast] " Jonathan Morton
2021-09-20  1:21                 ` [Cerowrt-devel] " Dave Taht
2021-09-20  4:00                   ` Valdis Klētnieks
2021-09-20  4:09                     ` David Lang
2021-09-20 21:30                       ` David P. Reed
2021-09-20 21:44                         ` [Cerowrt-devel] [Cake] " David P. Reed
2021-09-20 12:57                     ` [Cerowrt-devel] [Starlink] " Steve Crocker
2021-09-20 16:36                       ` [Cerowrt-devel] [Cake] " John Sager
2021-09-21  2:40                       ` [Starlink] [Cerowrt-devel] " Vint Cerf
2021-09-23 17:46                         ` Bob McMahon
2021-09-26 18:24                           ` [Cerowrt-devel] [Starlink] " David P. Reed
2021-10-22  0:51                             ` TCP_NOTSENT_LOWAT applied to e2e TCP msg latency Bob McMahon
2021-10-26  3:11                               ` [Make-wifi-fast] " Stuart Cheshire
2021-10-26  4:24                                 ` [Cerowrt-devel] [Bloat] " Eric Dumazet
2021-10-26 18:45                                   ` Christoph Paasch
2021-10-26 23:23                                     ` Bob McMahon
2021-10-26 23:38                                       ` Christoph Paasch
2021-10-27  1:12                                         ` [Cerowrt-devel] " Eric Dumazet
2021-10-27  3:45                                           ` Bob McMahon
2021-10-27  5:40                                             ` [Cerowrt-devel] " Eric Dumazet
2021-10-28 16:04                                             ` Christoph Paasch
2021-10-29 21:16                                               ` Bob McMahon
2021-10-26  5:32                                 ` Bob McMahon
2021-10-26 10:04                                   ` [Cerowrt-devel] [Starlink] " Bjørn Ivar Teigen
2021-10-26 17:23                                     ` Bob McMahon
2021-10-27 14:29                                       ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] " Sebastian Moeller
2021-08-02 22:59               ` [Make-wifi-fast] [Starlink] [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board Bob McMahon
2021-08-02 23:16                 ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
2021-08-02 23:50                   ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
2021-08-03  3:06                     ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
2021-08-02 23:55                   ` Ben Greear
2021-08-03  0:01                     ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
2021-08-03  3:12                       ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
2021-08-03  3:23                         ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
2021-08-03  4:30                           ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
2021-08-03  4:38                             ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
2021-08-03  4:44                               ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " David Lang
2021-08-03 16:01                                 ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
2021-08-08  4:35                             ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
2021-08-08  5:04                               ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-08-08  5:04                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
2021-08-08  5:07                             ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-08-10 14:10                           ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Rodney W. Grimes
2021-08-10 16:13                             ` Dick Roy
2021-08-10 17:06                               ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-08-10 17:56                                 ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
2021-08-10 18:11                                 ` Dick Roy
2021-08-10 19:21                                   ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-08-10 20:16                                     ` [Cerowrt-devel] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page! Dick Roy
2021-08-10 20:33                                       ` [Cerowrt-devel] [Starlink] " Jeremy Austin
2021-08-10 20:44                                         ` David Lang
2021-08-10 22:54                                           ` Bob McMahon
2021-09-02 17:36                                   ` [Cerowrt-devel] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board David P. Reed
2021-09-03 14:35                                     ` [Bloat] [Cake] [Starlink] [Make-wifi-fast] [Cerowrt-devel] " Matt Mathis
2021-09-03 18:33                                       ` [Cerowrt-devel] [Bloat] [Cake] [Starlink] [Make-wifi-fast] " David P. Reed
2021-08-03  0:37                   ` [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] " Leonard Kleinrock
2021-08-03  1:24                     ` [Cake] [Make-wifi-fast] [Starlink] [Cerowrt-devel] " Bob McMahon
2021-08-08  5:07                       ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
2021-08-08  5:15                         ` [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] " Bob McMahon
2021-08-08 18:36                           ` [Cerowrt-devel] [Make-wifi-fast] [Starlink] [Cake] " Aaron Wood
2021-08-08 18:48                             ` [Cerowrt-devel] [Bloat] " Jonathan Morton
2021-08-08 19:58                               ` [Bloat] [Make-wifi-fast] [Starlink] [Cake] [Cerowrt-devel] " Bob McMahon
2021-08-08  4:20                     ` [Cerowrt-devel] [Starlink] [Cake] [Make-wifi-fast] " Dick Roy
