[Rpm] lightweight active sensing of bandwidth and buffering

rjmcmahon rjmcmahon at rjmcmahon.com
Wed Nov 2 15:21:28 EDT 2022


Bloat is really a unit of memory, so the inP metric in iperf 2 may be 
preferable; it's calculated using Little's law. The bloated UDP run over 
a WiFi link shows ~2470 packets in flight at ~31 ms e2e latency. The 
non-bloated steady state is ~330 packets at ~4 ms latency. The latter 
likely indicates the WiFi transmits are being delayed by ~4 ms of NAV, 
though I'd have to dig more to verify. (Note: I'll probably add inP to 
burst isochronous traffic too. It's not there yet since --isochronous 
hasn't seen wide adoption, though some use it extensively.)
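
inP here is just Little's law applied to the reports below: packets in 
flight = packet rate (pps) times the average time each packet spends in 
the network. A minimal sketch (not iperf code, just arithmetic on the 
reported first-interval numbers from the two runs below):

# Little's law: L = lambda * W
# lambda = packet rate (packets/s), W = average time in the network (s)
def in_flight(pps, avg_latency_ms):
    return pps * (avg_latency_ms / 1000.0)

print(in_flight(83979, 29.484))  # bloated run:     ~2476 packets in flight
print(in_flight(75391, 4.377))   # non-bloated run: ~330 packets in flight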

[root@fedora ~]# iperf -s -u -i 1 -e -B 192.168.1.231%eth1
------------------------------------------------------------
Server listening on UDP port 5001 with pid 611171
Binding to local address 192.168.1.231 and iface eth1
Read buffer size: 1.44 KByte (Dist bin width= 183 Byte)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.231%eth1 port 5001 connected with 192.168.1.15 
port 50870 (trip-times) (sock=3) (peer 2.1.9-master) on 2022-11-02 
12:00:36 (PDT)
[ ID] Interval        Transfer     Bandwidth        Jitter   Lost/Total  
Latency avg/min/max/stdev PPS  Rx/inP  NetPwr
[  1] 0.00-1.00 sec   118 MBytes   986 Mbits/sec   0.016 ms 4545/88410 
(5.1%) 29.484/5.175/42.872/6.106 ms 83979 pps 83865/2476(513) pkts 4181
[  1] 0.00-1.00 sec  47 datagrams received out-of-order
[  1] 1.00-2.00 sec   123 MBytes  1.03 Gbits/sec   0.017 ms 3937/91326 
(4.3%) 31.576/29.353/39.704/1.169 ms 87396 pps 87389/2760(102) pkts 4068
[  1] 2.00-3.00 sec   123 MBytes  1.03 Gbits/sec   0.008 ms 3311/91243 
(3.6%) 31.602/29.218/39.096/1.259 ms 87899 pps 87932/2778(111) pkts 4090
[  1] 3.00-4.00 sec   123 MBytes  1.03 Gbits/sec   0.010 ms 3474/91385 
(3.8%) 31.570/29.032/42.499/1.856 ms 87943 pps 87911/2776(163) pkts 4093
[  1] 4.00-5.00 sec   122 MBytes  1.02 Gbits/sec   0.015 ms 4482/91331 
(4.9%) 31.349/29.163/40.234/1.291 ms 86813 pps 86849/2721(112) pkts 4073
[  1] 5.00-6.00 sec   122 MBytes  1.02 Gbits/sec   0.023 ms 4249/91102 
(4.7%) 31.376/29.336/41.963/1.318 ms 86890 pps 86853/2726(115) pkts 4069
[  1] 6.00-7.00 sec   122 MBytes  1.03 Gbits/sec   0.007 ms 4052/91410 
(4.4%) 31.424/29.178/38.683/1.037 ms 87344 pps 87358/2745(91) pkts 4087
[  1] 7.00-8.00 sec   123 MBytes  1.03 Gbits/sec   0.013 ms 3464/91297 
(3.8%) 31.414/29.189/38.161/1.061 ms 87839 pps 87833/2759(93) pkts 4110
[  1] 8.00-9.00 sec   123 MBytes  1.03 Gbits/sec   0.010 ms 3253/90930 
(3.6%) 31.686/29.335/39.989/1.478 ms 87662 pps 87677/2778(130) pkts 4068
[  1] 9.00-10.00 sec   122 MBytes  1.03 Gbits/sec   0.010 ms 3971/91331 
(4.3%) 31.486/29.197/39.750/1.407 ms 87378 pps 87360/2751(123) pkts 4079
[  1] 0.00-10.03 sec  1.20 GBytes  1.02 Gbits/sec   0.021 ms 
39124/913050 (4.3%) 31.314/5.175/42.872/2.365 ms 87095 pps 
873926/2727(206) pkts 4089
[  1] 0.00-10.03 sec  47 datagrams received out-of-order
[  3] WARNING: ack of last datagram failed.
[  2] local 192.168.1.231%eth1 port 5001 connected with 192.168.1.15 
port 46481 (trip-times) (sock=5) (peer 2.1.9-master) on 2022-11-02 
12:01:24 (PDT)
[  2] 0.00-1.00 sec   106 MBytes   886 Mbits/sec   0.012 ms 799/76100 
(1%) 4.377/2.780/9.694/0.547 ms 75391 pps 75301/330(41) pkts 25287
[  2] 0.00-1.00 sec  46 datagrams received out-of-order
[  2] 1.00-2.00 sec   106 MBytes   892 Mbits/sec   0.018 ms 746/76624 
(0.97%) 4.453/3.148/9.444/0.693 ms 75884 pps 75878/338(53) pkts 25048
[  2] 2.00-3.00 sec   106 MBytes   892 Mbits/sec   0.010 ms 640/76528 
(0.84%) 4.388/3.022/9.848/0.529 ms 75888 pps 75888/333(40) pkts 25425
[  2] 3.00-4.00 sec   106 MBytes   891 Mbits/sec   0.014 ms 707/76436 
(0.92%) 4.412/3.065/7.490/0.514 ms 75727 pps 75729/334(39) pkts 25231
[  2] 4.00-5.00 sec   106 MBytes   892 Mbits/sec   0.012 ms 764/76623 
(1%) 4.408/3.081/8.836/0.571 ms 75859 pps 75859/334(43) pkts 25300
[  2] 5.00-6.00 sec   106 MBytes   888 Mbits/sec   0.016 ms 941/76483 
(1.2%) 4.330/3.097/7.841/0.490 ms 75548 pps 75542/327(37) pkts 25648
[  2] 6.00-7.00 sec   106 MBytes   893 Mbits/sec   0.015 ms 549/76500 
(0.72%) 4.405/3.021/9.493/0.554 ms 75945 pps 75951/335(42) pkts 25346
[  2] 7.00-8.00 sec   106 MBytes   892 Mbits/sec   0.021 ms 683/76572 
(0.89%) 4.362/3.042/8.770/0.488 ms 75892 pps 75889/331(37) pkts 25576
[  2] 8.00-9.00 sec   106 MBytes   892 Mbits/sec   0.019 ms 688/76520 
(0.9%) 4.355/2.964/10.486/0.553 ms 75906 pps 75832/331(42) pkts 25599
[  2] 9.00-10.00 sec   106 MBytes   892 Mbits/sec   0.015 ms 644/76496 
(0.84%) 4.387/3.043/10.469/0.563 ms 75775 pps 75852/332(43) pkts 25419
[  2] 0.00-10.01 sec  1.04 GBytes   891 Mbits/sec   0.024 ms 7161/765314 
(0.94%) 4.388/2.780/10.486/0.554 ms 75775 pps 758153/333(42) pkts 25385
[  2] 0.00-10.01 sec  46 datagrams received out-of-order


[root@ctrl1fc35 ~]# iperf -c 192.168.1.231%enp2s0 -u --trip-times -b 1G
------------------------------------------------------------
Client connecting to 192.168.1.231, UDP port 5001 with pid 724001 via 
enp2s0 (1 flows)
Sending 1470 byte datagrams, IPG target: 10.95 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.15 port 50870 connected with 192.168.1.231 port 
5001 (trip-times)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.00-10.00 sec  1.25 GBytes  1.07 Gbits/sec
[  1] Sent 913051 datagrams
[  1] Server Report:
[ ID] Interval        Transfer     Bandwidth        Jitter   Lost/Total  
Latency avg/min/max/stdev PPS  Rx/inP  NetPwr
[  1] 0.00-10.03 sec  1.20 GBytes  1.02 Gbits/sec   0.020 ms 
39124/913050 (4.3%) 31.314/5.175/42.872/0.021 ms 90994 pps 90994/2849(2) 
pkts 4089
[  1] 0.00-10.03 sec  47 datagrams received out-of-order
[root@ctrl1fc35 ~]# iperf -c 192.168.1.231%enp2s0 -u --trip-times -b 900m
------------------------------------------------------------
Client connecting to 192.168.1.231, UDP port 5001 with pid 724015 via 
enp2s0 (1 flows)
Sending 1470 byte datagrams, IPG target: 13.07 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.15 port 46481 connected with 192.168.1.231 port 
5001 (trip-times)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.00-10.00 sec  1.05 GBytes   900 Mbits/sec
[  1] Sent 765315 datagrams
[  1] Server Report:
[ ID] Interval        Transfer     Bandwidth        Jitter   Lost/Total  
Latency avg/min/max/stdev PPS  Rx/inP  NetPwr
[  1] 0.00-10.01 sec  1.04 GBytes   891 Mbits/sec   0.024 ms 7161/765314 
(0.94%) 4.388/2.780/10.486/0.032 ms 76490 pps 76490/336(2) pkts 25385
[  1] 0.00-10.01 sec  46 datagrams received out-of-order

Bob
> Hi Bob,
> 
> thanks!
> 
> 
>> On Nov 2, 2022, at 00:39, rjmcmahon via Rpm 
>> <rpm at lists.bufferbloat.net> wrote:
>> 
>> Bufferbloat shifts the minimum of the latency or OWD CDF.
> 
> 	[SM] Thank you for spelling this out explicitly, I had only
> worked from a vague implicit assumption along those lines. However,
> what I want to avoid is using the delay magnitude itself as the
> classifier between high- and low-load conditions, as that seems
> statistically uncouth when the point is then to show that the delay
> differs between the two classes ;).
> 	Yet, your comment convinced me that my current load threshold (at
> least for the high-load condition) is probably too low, exactly
> because the "base" of the high-load CDFs coincides with the base of
> the low-load CDFs, implying that the high-load class contains too
> many samples with decent delay (which, after all, is one of the goals
> of the whole autorate endeavor).
> 
> 
>> A suggestion is to disable x-axis auto-scaling and start from zero.
> 
> 	[SM] Will reconsider. I started with the x-axis at zero and then
> switched to an x-range that starts at the delay corresponding to 0.01%
> for the reflector/condition with the lowest such value and stops at
> 97.5% for the reflector/condition with the highest delay value. My
> rationale is that the base/path delay of each reflector is not all
> that informative* (and it can still be read off the x-axis); the long
> tail > 50% is where I expect most of the differences, so I want to
> emphasize it; and I wanted to avoid the actual "curvy" part getting
> compressed so much that all the lines more or less coincide. As I
> said, I will reconsider this.
> 
> 
> *) We also maintain individual baselines per reflector, so I could
> just plot the differences from baseline, but that would essentially
> equalize all reflectors, and I think having a plot that easily shows
> reflectors with outlying base delay can be informative when selecting
> reflector candidates. However, once we actually switch to OWDs,
> baseline correction might be required anyway, since due to clock
> differences ICMP type 13/14 data can have massive offsets that are
> mostly indicative of unsynchronized clocks**.
> 
> **) This is why I would prefer to use NTP servers as reflectors with
> NTP requests; my expectation is that all of these should be reasonably
> synced by default, so offsets should be in a sane range...
> 
> 
>> 
>> Bob
>>> For about 2 years now the cake w-adaptive bandwidth project has
>>> been exploring techniques for lightweight sensing of bandwidth and
>>> buffering problems. One of my favorites was their discovery that
>>> ICMP type 13 got them working OWD from millions of IPv4 devices!
>>> They've also explored leveraging NTP and multiple other methods,
>>> and have scripts available that do a good job of compensating for
>>> 5G's and Starlink's misbehaviors.
>>> They've also pioneered a whole bunch of new graphing techniques,
>>> which I do wish were used more than single-number summaries,
>>> especially in analyzing the behaviors of new metrics like RPM,
>>> SamKnows, Ookla, and RFC 9097, to see what is being missed.
>>> There are thousands of posts about this research topic; a new post
>>> on OWD just went by here:
>>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379/793
>>> And of course, I love flent's enormous graphing toolset for
>>> simulating and analyzing complex network behaviors.

