[Rpm] [ippm] Preliminary measurement comparison of "Working Latency" metrics

Dave Taht dave.taht at gmail.com
Sun Dec 11 21:38:59 EST 2022


Adding in the author of irtt.

You had a significant timer error rate in your 1ms testing. Modern
hardware, particularly VMs, has great difficulty keeping timer intervals
accurate below 3ms.
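
If you want to check your own host's timer behavior, here is a minimal
sketch in Go (the same language as irtt) that measures how far a requested
1ms sleep overshoots; the numbers it prints illustrate OS/VM scheduling
limits, not irtt's internals:

package main

import (
	"fmt"
	"time"
)

func main() {
	const interval = time.Millisecond // the 1ms send interval under test
	const n = 1000
	var worst, total time.Duration
	for i := 0; i < n; i++ {
		start := time.Now()
		time.Sleep(interval)
		// overshoot relative to the requested interval
		err := time.Since(start) - interval
		total += err
		if err > worst {
			worst = err
		}
	}
	fmt.Printf("mean timer error: %v  worst: %v\n", total/n, worst)
}

A large mean or worst value here is consistent with the missed/error
percentages irtt reports in the timer stats below.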

Similarly, the cloud can be highly noisy. I am presently testing a highly
SDN- and virtualization-based networking fabric in a major DC... and the
results are rather disturbing. I'd like to repeat your test suite inside
that DC.



On Sun, Dec 11, 2022, 11:21 AM MORTON JR., AL <acmorton at att.com> wrote:

> Hi IPPM,
>
> Prior to IETF-115, I shared a series of measurements with the IPPM list.
> We're looking at responsiveness and working latency with various metrics
> and multiple testing utilities. This message continues the discussion with
> new input.
>
> When I first published some measurements, Dave Taht added his assessment
> and included other relevant email lists in the discussion. I'm continuing
> to cross-post to all the lists in this thread.
>
> Dave originally suggested that I try a tool called irtt; I've done that
> now and these are the results.
>
> Bob McMahon: I queued your request to try your iperf2 tool behind the irtt
> measurements. I hope to make some more measurements this week...
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-
>
> We're testing a DOCSIS 3.1 based service with 1Gbps down, nominally 22Mbps
> up. I used a wired Ethernet connection to the DOCSIS modem's switch.
>
> Dave Taht made his server available for the irtt measurements. I installed
> irtt on a VM in my MacBook, the same VM that runs UDPST. I ran quite a few
> tests to become familiar with irtt, so I'll just summarize the relevant
> ones here.
>
> I ran irtt with its traffic at the maximum allowed on Dave's server. The
> test duration was 10sec, with packets spaced at 1ms intervals from my VM
> client. This is the complete output:
>
> ./irtt client -i 1ms -l 1250 -d 10s fremont.starlink.taht.net
>
>                          Min     Mean   Median      Max  Stddev
>                          ---     ----   ------      ---  ------
>                 RTT  46.63ms  51.58ms   51.4ms     58ms  1.55ms
>          send delay  20.74ms  25.57ms   25.4ms  32.04ms  1.54ms
>       receive delay   25.8ms  26.01ms  25.96ms  30.48ms   219µs
>
>       IPDV (jitter)   1.03µs   1.15ms   1.02ms   6.87ms   793µs
>           send IPDV    176ns   1.13ms    994µs   6.41ms   776µs
>        receive IPDV      7ns   79.8µs   41.7µs   4.54ms   140µs
>
>      send call time   10.1µs   55.8µs            1.34ms  30.3µs
>         timer error     68ns    431µs            6.49ms   490µs
>   server proc. time    680ns   2.43µs            47.7µs  1.73µs
>
>                 duration: 10.2s (wait 174ms)
>    packets sent/received: 7137/7131 (0.08% loss)
>  server packets received: 7131/7137 (0.08%/0.00% loss up/down)
>      bytes sent/received: 8921250/8913750
>        send/receive rate: 7.14 Mbps / 7.13 Mbps
>            packet length: 1250 bytes
>              timer stats: 2863/10000 (28.63%) missed, 43.10% error
> acm at acm-ubuntu1804-1:~/goirtt/irtt$
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> irtt supplies lots of info/stats about the RTT distribution. In the
> lightly loaded (7.1Mbps) scenario above, the RTT range is about 12 ms, and
> the Mean and the Median are approximately the same.
> irtt also supplies Inter-Packet Delay Variation (IPDV), showing that
> packets are occasionally delayed by much more than the sending interval
> (Max of 6.87ms).
> irtt measurements indicate very low packet loss: no congestion at 7Mbps.
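>
> (A note on the jitter definitions, since IPDV-vs-PDV comes up again later
> in this thread: IPDV differences each packet's delay against the previous
> packet, while PDV per RFC 5481 references each delay to the minimum. A
> minimal sketch in Go, with made-up delay values:
>
> package main
>
> import "fmt"
>
> func main() {
> 	// hypothetical one-way delays in ms, for illustration only
> 	delays := []float64{25.8, 26.0, 27.1, 25.9, 30.4}
> 	min := delays[0]
> 	for _, d := range delays {
> 		if d < min {
> 			min = d
> 		}
> 	}
> 	for i, d := range delays {
> 		fmt.Printf("pkt %d  PDV %.1fms", i, d-min) // variation above minimum
> 		if i > 0 {
> 			fmt.Printf("  IPDV %+.1fms", d-delays[i-1]) // vs previous packet
> 		}
> 		fmt.Println()
> 	}
> }
>
> IPDV hovers around zero and can be negative, while PDV accumulates all
> queueing above the delay floor; that is why the two summaries can read so
> differently under load.)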
>
> For the opposite end of the congestion spectrum, I ran irtt with UDPST
> (RFC 9097) running in parallel (and using the Type C search algorithm). We
> pick up much higher RTTs and a wider RTT range in this scenario:
>
> irtt with udpst using Type C search = max load:
>
>                          Min     Mean   Median      Max   Stddev
>                          ---     ----   ------      ---   ------
>                 RTT  47.58ms    118ms  56.53ms  301.6ms  90.28ms
>          send delay  24.05ms  94.85ms  33.38ms  278.5ms  90.26ms
>       receive delay  22.99ms  23.17ms  23.13ms  25.42ms    156µs
>
>       IPDV (jitter)    162ns   1.04ms    733µs   6.36ms   1.02ms
>           send IPDV   3.81µs   1.01ms    697µs   6.24ms   1.02ms
>        receive IPDV     88ns     93µs   49.8µs   1.48ms    145µs
>
>      send call time   4.28µs   39.3µs             903µs   32.4µs
>         timer error     86ns    287µs            6.13ms    214µs
>   server proc. time    670ns   3.59µs            19.3µs   2.26µs
>
>                 duration: 10.9s (wait 904.8ms)
>    packets sent/received: 8305/2408 (71.01% loss)
>  server packets received: 2408/8305 (71.01%/0.00% loss up/down)
>      bytes sent/received: 10381250/3010000
>        send/receive rate: 8.31 Mbps / 2.47 Mbps
>            packet length: 1250 bytes
>              timer stats: 1695/10000 (16.95%) missed, 28.75% error
> acm at acm-ubuntu1804-1:~/goirtt/irtt$
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> The irtt measurement of the RTT range is now about 250ms in this
> maximally-loaded scenario.
> One key difference is the Mean RTT, which is now roughly twice the Median
> with working load added by UDPST.
> Congestion is evident in the irtt loss measurement: 71%. However, the
> competing traffic did not break irtt's process or measurements.
> UDPST's measurements were ~22Mbps Capacity (steady-state), with high
> loss, and a similar RTT range of 251 ms.
>
>
> Additional tests included load from UDPST at fixed rates: 14Mbps and
> 20Mbps. We compare the RTT range for all four conditions in the table below:
>
> Load with irtt   irtt RTT range    UDPST RTT range
> ==================================================
> irtt alone            12ms               ---
> UDPST at 14Mbps       11ms               22ms
> UDPST at 20Mbps       14ms               46ms
> UDPST at MaxCap      250ms              251ms
>
> The unexpected result with irtt measurements is that the RTT range did not
> increase with load, whereas the UDPST RTT range increases with load. We're
> assuming that the majority of delay increases occur in the DOCSIS upstream
> queue, so both test streams should see a similar delay range, as they do
> with maximum load. Perhaps there are some differences owing to the
> periodic irtt stream and the bursty UDPST stream (both are UDP), but this
> is speculation.
>
> To check that the test path was operating similarly to the earlier tests,
> we ran a couple of NetworkQuality and UDPST tests as before:
>
> Comparison of NQ-vs, UDPST -A c, same columns as defined earlier.
> Working Latency & Capacity Summary (Upstream only)
> (capacity in Mbps, delays in ms, h and m are RPM categories, High and
> Medium)
>
> Net Qual                           UDPST (RFC 9097)
> UpCap     RPM    DelLD  DelMin     UpCap(stable)    RTTmin   RTTrange
> 22        276 m  217ms  11ms       23               28       0-252
> 22        291 m  206ms  11ms       ~22*             28*      0-251*
>
> * UDPST test result with the ~7Mbps irtt stream present.
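>
> As a cross-check on the columns above: RPM is round-trips per minute,
> i.e., 60000 divided by the loaded delay in ms. A minimal sketch in Go
> using the first row:
>
> package main
>
> import "fmt"
>
> func main() {
> 	delLD := 217.0 // Net Qual loaded delay (DelLD) in ms, first row above
> 	// RPM = 60000 / loaded delay in ms; prints 276, matching the table
> 	fmt.Printf("%.0f RPM\n", 60000/delLD)
> }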
>
> We found that uplink test results are similar to previous tests, and the
> new irtt results were collected under similar conditions.
>
> In conclusion, irtt provides a clear summary of the RTT distribution.
> Minimum, Mean, Median and Max RTT are useful individually and in
> combinations/comparisons. The irtt measurements compare favorably to those
> collected during IP-Layer Capacity tests with the UDPST utility.
>
> comments welcome,
> Al
>
> > -----Original Message-----
> > From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > Sent: Saturday, November 5, 2022 3:37 PM
> > To: Sebastian Moeller <moeller0 at gmx.de>
> > Cc: Rpm <rpm at lists.bufferbloat.net>; Will Hawkins <hawkinsw at obs.cr>;
> > ippm at ietf.org
> > Subject: Re: [ippm] [Rpm] Preliminary measurement comparison of "Working
> > Latency" metrics
> >
> >
> > Hi Sebastian, thanks for your comments and information.
> > I'll reply to a few key points in this top-post.
> >
> > Sebastian wrote:
> > >     [SM] DOCSIS ISPs traditionally provision higher peak rates than
> > > they advertise; with a DOCSIS modem/router with > 1 Gbps LAN capacity
> > > (even via bonded LAG), people in Germany routinely measure
> > > TCP/IPv4/HTTP goodput in the 1050 Mbps range. But typically gigabit
> > > ethernet limits the practically achievable throughput somewhat:
> > [acm]
> > Right, my ISP's CPE has Gig-E ports, so I won't see any benefit from
> > over-rate provisioning.
> >
> > Sebastian wrote:
> > ...
> > > IPv4/TCP/RFC1323 timestamps payload:  1000 * ((1500 - 12 - 20 - 20)/(1500 + 38 + 4)) = 939.04 Mbps
> > ...
> > > Speedtests tend to report IPv4/TCP; timestamps might be on depending
> > > on the OS. But on-line speedtests almost never return the simple
> > > average rate over the measurement duration; they play "clever" tricks
> > > to exclude the start-up phase and to aggregate over multiple flows,
> > > invariably ending with results that tend to exceed the hard limits
> > > shown above...
> > [acm]
> > My measurements with the Ookla desktop app on macOS (shared earlier
> > this week) are very consistent at ~940Mbps downstream, so the timestamps
> > calculation sounds right.
> > My ISP specifies their service speed at 940Mbps, as though they assume
> > subscribers will measure using Ookla or another TCP tool. In fact, our
> > team hasn't seen such consistent results from Ookla or any TCP-based
> > test in the 1Gbps range, and this makes me wonder if there might be some
> > test-recognition here.
> >
> > FYI - UDPST uses a max packet size which is sub-MTU, to avoid
> > fragmentation when various encapsulations are encountered. Also, the
> > definition of IP-Layer Capacity in the various standards (e.g., RFC
> > 9097) includes the bits of the IP header in the total Capacity.
> >
> > So, instead of:
> > > Ethernet payload rate +VLAN:  1000 * ((1500)/(1500 + 38 + 4)) = 972.76 Mbps
> > we have
> > Ethernet payload rate +VLAN:    1000 * ((1222)/(1222 + 38 + 4)) = 966.77 Mbps
> > which is why our Maximum IP-Layer Capacity measurements are ~967 Mbps.
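> >
> > A quick sketch of that arithmetic in Go (the helper name "rate" is mine,
> > not part of UDPST; the values come from the lines above):
> >
> > package main
> >
> > import "fmt"
> >
> > // rate: payload share of a 1000 Mbps link, given the payload bytes and
> > // the total on-wire bytes per packet (payload + framing overhead)
> > func rate(payload, onWire float64) float64 {
> > 	return 1000 * payload / onWire
> > }
> >
> > func main() {
> > 	fmt.Printf("1500B MTU +VLAN:   %.2f Mbps\n", rate(1500, 1500+38+4)) // 972.76
> > 	fmt.Printf("UDPST 1222B +VLAN: %.2f Mbps\n", rate(1222, 1222+38+4)) // 966.77
> > }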
> >
> > UDPST has an option (-T) to use datagrams at the traditional 1500 octet
> > MTU, but with this option a user could cause a lot of fragmentation on
> > links with encapsulations.
> >
> > acm wrote:
> > > > The comparison of networkQuality and goresponsiveness is somewhat
> > > > confounded by the need to use the Apple server infrastructure for
> > > > both these methods ...
> > Sebastian wrote:
> > >     [SM] Puzzled, I thought when comparing the two networkQuality
> > > variants, having the same back-end sort of helps by reducing the
> > > differences to the clients? But comparison to UDPST suffers somewhat.
> > [acm]
> > Yes, I was hoping to match up the client and server implementations.
> > Instead, this might be more of a Client-X and Server-Y interoperability
> > test (which could explain some results?), but that was not to be.
> >
> > Thanks for your suggested command line for gores.
> >
> > IIRC from one of your early messages, Sebastian, your macOS indicates
> > that 12.6 is the latest. It's the same for me: my sufficiently powerful
> > MacBook cannot upgrade to Ventura for the latest in networkQuality
> > versions. It would help those of us who are doing testing if the latest
> > version of networkQuality could be made installable for us, somehow...
> >
> > thanks again and regards,
> > Al
> >
> >
> > > -----Original Message-----
> > > From: Sebastian Moeller <moeller0 at gmx.de>
> > > Sent: Friday, November 4, 2022 3:10 PM
> > > To: MORTON JR., AL <acmorton at att.com>
> > > Cc: Dave Täht <dave.taht at gmail.com>; rjmcmahon <rjmcmahon at rjmcmahon.com>;
> > > Rpm <rpm at lists.bufferbloat.net>; ippm at ietf.org; Will Hawkins <hawkinsw at obs.cr>
> > > Subject: Re: [Rpm] [ippm] Preliminary measurement comparison of
> > > "Working Latency" metrics
> > >
> > > Hi Al,
> > >
> > >
> > > > On Nov 4, 2022, at 18:14, MORTON JR., AL via Rpm
> > > > <rpm at lists.bufferbloat.net> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I have been working through the threads on misery metrics and
> > > > lightweight sensing of bw and buffering, and with those insights and
> > > > gold-nuggets in mind, I'm iterating through testing the combinations
> > > > of tools that Dave Taht and Bob McMahon suggested.
> > > >
> > > > earlier this week, Dave wrote:
> > > >> How does networkQuality compare vs a vs your tool vs a vs
> > > >> goresponsiveness?
> > > >
> > > > goresponsiveness installed flawlessly - very nice instructions and
> > > > getting-started info.
> > > >
> > > > Comparison of NetQual (networkQuality -vs), UDPST -A c, and
> > > > gores/goresponsiveness
> > > >
> > > > Working Latency & Capacity Summary on DOCSIS 3.1 access with 1 Gbps
> > > > down-stream service
> > > > (capacity in Mbps, delays in ms, h and m are RPM categories, High and
> > > > Medium)
> > > >
> > > > NetQual                            UDPST (RFC 9097)             gores
> > > > DnCap     RPM    DelLD  DelMin     DnCap    RTTmin   RTTrange   DnCap  RPM     DelLD
> > > > 882       788 m  76ms   8ms        967      28       0-16       127    (1382)  43ms
> > > > 892      1036 h  58ms   8          966      27       0-18       128    (2124)  28ms
> > > > 887      1304 h  46ms   6          969      27       0-18       130    (1478)  41ms
> > > > 885      1008 h  60ms   8          967      28       0-22       127    (1490)  40ms
> > > > 894      1383 h  43ms   11         967      28       0-15       133    (2731)  22ms
> > > >
> > > > NetQual                            UDPST (RFC 9097)             gores
> > > > UpCap     RPM    DelLD  DelMin     UpCap    RTTmin   RTTrange   UpCap
> > > > 21        327 m  183ms  8ms        22 (51)  28       0-253      12
> > > > 21        413 m  145ms  8          22 (43)  28       0-255      15
> > > > 22        273 m  220ms  6          23 (53)  28       0-259      10
> > > > 21        377 m  159ms  8          23 (51)  28       0-250      10
> > > > 22        281 m  214ms  11         23 (52)  28       0-250       6
> > > >
> > > > These tests were conducted in a round-robin fashion to minimize the
> > > > possibility of network variations between measurements:
> > > > NetQual - rest - UDPST-Dn - rest - UDPST-Up - rest - gores - rest - repeat
> > > >
> > > > NetQual indicates the same reduced capacity in the downstream when
> > > > compared to UDPST (940Mbps is the max for TCP payloads, while 967-970
> > > > is the max for IP-layer capacity, depending on the VLAN tag).
> > >
> > >     [SM] DOCSIS ISPs traditionally provision higher peak rates than
> > > they advertise; with a DOCSIS modem/router with > 1 Gbps LAN capacity
> > > (even via bonded LAG), people in Germany routinely measure
> > > TCP/IPv4/HTTP goodput in the 1050 Mbps range. But typically gigabit
> > > ethernet limits the practically achievable throughput somewhat:
> > >
> > > Ethernet payload rate:                1000 * ((1500)/(1500 + 38))                = 975.29 Mbps
> > > Ethernet payload rate +VLAN:          1000 * ((1500)/(1500 + 38 + 4))            = 972.76 Mbps
> > > IPv4 payload (ethernet+VLAN):         1000 * ((1500 - 20)/(1500 + 38 + 4))       = 959.79 Mbps
> > > IPv6 payload (ethernet+VLAN):         1000 * ((1500 - 40)/(1500 + 38 + 4))       = 946.82 Mbps
> > > IPv4/TCP payload (ethernet+VLAN):     1000 * ((1500 - 20 - 20)/(1500 + 38 + 4))  = 946.82 Mbps
> > > IPv6/TCP payload (ethernet+VLAN):     1000 * ((1500 - 20 - 40)/(1500 + 38 + 4))  = 933.85 Mbps
> > > IPv4/TCP/RFC1323 timestamps payload:  1000 * ((1500 - 12 - 20 - 20)/(1500 + 38 + 4)) = 939.04 Mbps
> > > IPv6/TCP/RFC1323 timestamps payload:  1000 * ((1500 - 12 - 20 - 40)/(1500 + 38 + 4)) = 926.07 Mbps
> > >
> > >
> > > Speedtests tend to report IPv4/TCP; timestamps might be on depending
> > > on the OS. But on-line speedtests almost never return the simple
> > > average rate over the measurement duration; they play "clever" tricks
> > > to exclude the start-up phase and to aggregate over multiple flows,
> > > invariably ending with results that tend to exceed the hard limits
> > > shown above...
> > >
> > >
> > > > Upstream capacities are not very different (a factor that made TCP
> > > > methods more viable many years ago, when most consumer access speeds
> > > > were limited to 10's of Megabits).
> > >
> > >     [SM] My take on this is that this is partly due to the goal of
> > > ramping up very quickly and getting away with a short measurement
> > > duration. That causes imprecision. As Dave said, flent's RRUL test
> > > defaults to 60 seconds, and I often ran/run it for 5 to 10 minutes to
> > > get somewhat more reliable numbers (and for timecourses to look at and
> > > reason about).
> > >
> > > > gores reports significantly lower capacity in both downstream and
> > > > upstream measurements, a factor of 7 less than NetQual for
> > > > downstream. Interestingly, the reduced capacity (taken as the working
> > > > load) results in higher responsiveness: RPM measurements are higher
> > > > and loaded delays are lower for downstream.
> > >
> > >     [SM] Yepp, if you only fill up the queue partially you will
> > > harvest less queueing delay and hence retain more responsiveness,
> > > albeit this really just seems to be better interpreted as
> > > go-responsiveness failing to achieve "working-conditions".
> > >
> > > I tend to run gores like this:
> > > time ./networkQuality --config mensura.cdn-apple.com --port 443 --path
> > > /api/v1/gm/config  --sattimeout 60 --extended-stats --logger-filename
> > > go_networkQuality_$(date +%Y%m%d_%H%M%S)
> > >
> > > --sattimeout 60 extends the time out for the saturation measurement
> > > somewhat (before, I saw it simply failing on a 1 Gbps access link; it
> > > did give some diagnostic message though).
> > >
> > >
> > > >
> > > > The comparison of networkQuality and goresponsiveness is somewhat
> > > > confounded by the need to use the Apple server infrastructure for
> > > > both these methods (the documentation provides this option -
> > > > thanks!). I don't have admin access to our server at the moment. But
> > > > the measured differences are large despite the confounding factor.
> > >
> > >     [SM] Puzzled, I thought when comparing the two networkQuality
> > > variants, having the same back-end sort of helps by reducing the
> > > differences to the clients? But comparison to UDPST suffers somewhat.
> > >
> > > >
> > > > goresponsiveness has its own, very different output format than
> > > > networkQuality. There isn't a comparable "-v" option other than
> > > > -debug (which is extremely detailed). gores only reports RPM for the
> > > > downstream.
> > >
> > >     [SM] I agree, it would be nice if gores would grow a sequential
> > > mode as well.
> > >
> > >
> > > > I hope that these results will prompt more of the principal
> > > > evangelists and coders to weigh in.
> > >
> > >     [SM] No luck ;) but at least "amateur hour" (aka me) is ready to
> > > discuss.
> > >
> > > >
> > > > It's also worth noting that the RTTrange reported by UDPST is the
> > > > range above the minimum RTT, and represents the entire 10 second
> > > > test. The consistency of the maximum of the range (~255ms) seems to
> > > > indicate that UDPST has characterized the length of the upstream
> > > > buffer during measurements.
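> > > >
> > > > (A rough cross-check, assuming the ~22Mbps steady-state upstream
> > > > capacity drains that queue: 0.255 s x 22 Mbps / 8 is about 700
> > > > kBytes of buffering, a plausible DOCSIS upstream buffer size. This
> > > > arithmetic is an illustration, not a measured value.)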
> > >
> > >     [SM] thanks for explaining that.
> > >
> > > Regards
> > >     Sebastian
> > >
> > > >
> > > > I'm sure there is more to observe that is prompted by these
> measurements;
> > > comments welcome!
> > > >
> > > > Al
> > > >
> > > >
> > > >> -----Original Message-----
> > > >> From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > > >> Sent: Tuesday, November 1, 2022 10:51 AM
> > > >> To: Dave Taht <dave.taht at gmail.com>
> > > >> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
> > > >> Subject: Re: [ippm] Preliminary measurement comparison of "Working
> > > >> Latency" metrics
> > > >>
> > > >>
> > > >> Hi Dave,
> > > >> Thanks for trying UDPST (RFC 9097)!
> > > >>
> > > >> Something you might try with starlink:
> > > >> use the -X option and UDPST will generate random payloads.
> > > >>
> > > >> The measurements with -X will reflect the uncompressed rates that
> > > >> are possible. I tried this on a ship-board Internet access: the
> > > >> uncompressed rate was 100kbps.
> > > >>
> > > >> A few more quick replies below,
> > > >> Al
> > > >>
> > > >>> -----Original Message-----
> > > >>> From: Dave Taht <dave.taht at gmail.com>
> > > >>> Sent: Tuesday, November 1, 2022 12:22 AM
> > > >>> To: MORTON JR., AL <acmorton at att.com>
> > > >>> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
> > > >>> Subject: Re: [ippm] Preliminary measurement comparison of "Working
> > > >>> Latency" metrics
> > > >>>
> > > >>> Dear Al:
> > > >>>
> > > >>> OK, I took your udpst tool for a spin.
> > > >>>
> > > >>> NICE! 120k binary (I STILL work on machines with only 4MB of
> > > >>> flash), good feature set, VERY fast,
> > > >> [acm]
> > > >> Len Ciavattone (my partner in crime on several measurement
> > > >> projects) is the lead coder: he has implemented many measurement
> > > >> tools extremely efficiently, this one in C-lang.
> > > >>
> > > >>> and in very brief testing, seemed
> > > >>> to be accurate in the starlink case, though it's hard to tell with
> > > >>> them as the rate changes every 15s.
> > > >> [acm]
> > > >> Great! Our guiding principle developing UDPST has been to test the
> > > >> accuracy of measurements against a ground-truth. It pays off.
> > > >>
> > > >>>
> > > >>> I filed a couple bug reports on trivial stuff:
> > > >>> https://github.com/BroadbandForum/obudpst/issues/8
> > > >> [acm]
> > > >> much appreciated... We have an OB-UDPST project meeting this
> > > >> Friday, can discuss then.
> > > >>
> > > >>>
> > > >>> (Adding diffserv and ecn washing or marking detection would be a
> nice
> > > >>> feature to have)
> > > >>>
> > > >>> Aside from the sheer joy coming from the "it compiles! and runs!"
> > > >>> phase I haven't looked much further.
> > > >>>
> > > >>> I left a copy running on one of my starlink testbeds -
> > > >>> fremont.starlink.taht.net - if anyone wants to try it. It's
> > > >>> instrumented with netperf, flent, irtt, iperf2 (not quite the
> > > >>> latest version from bob, but close), and now udpst, and good to
> > > >>> about a gbit.
> > > >>>
> > > >>> nice tool!
> > > >> [acm]
> > > >> Thanks again!
> > > >>
> > > >>>
> > > >>> Has anyone here played with crusader?
> > > >>> (https://github.com/Zoxc/crusader)
> > > >>>
> > > >>> On Mon, Oct 31, 2022 at 4:30 PM Dave Taht <dave.taht at gmail.com> wrote:
> > > >>>>
> > > >>>> On Mon, Oct 31, 2022 at 1:41 PM MORTON JR., AL <acmorton at att.com> wrote:
> > > >>>>
> > > >>>>>> have you tried irtt? (https://github.com/heistp/irtt)
> > > >>>>> I have not. Seems like a reasonable tool for UDP testing. The
> > > >>>>> feature I didn't like in my scan of the documentation is the use
> > > >>>>> of Inter-Packet Delay Variation (IPDV) instead of Packet Delay
> > > >>>>> Variation (PDV): variation from the minimum (or reference) delay.
> > > >>>>> The morbidly curious can find my analysis in RFC 5481:
> > > >>>>> https://datatracker.ietf.org/doc/html/rfc5481
> > > >>>>
> > > >>>> irtt was meant to simulate high speed voip and one day
> > > >>>> videoconferencing. Please inspect the json output for other
> > > >>>> metrics. Due to OS limits it is typically only accurate to a 3ms
> > > >>>> interval. One thing it does admirably is begin to expose the
> > > >>>> sordid sump of L2 behaviors in 4g, 5g, wifi, and other wireless
> > > >>>> technologies, as well as request/grant systems like cable and
> > > >>>> gpon, especially when otherwise idle.
> > > >>>>
> > > >>>> Here is a highres plot of starlink's behaviors from last year:
> > > >>>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth-historic/108848/3238
> > > >>>>
> > > >>>> clearly showing them "optimizing for bandwidth" and changing next
> > > >>>> sat hop, and about a 40ms interval of buffering between these
> > > >>>> switches. I'd published elsewhere, if anyone cares, a preliminary
> > > >>>> study of what starlink's default behaviors did to cubic and BBR...
> > > >>>>
> > > >>>>>
> > > >>>>> irtt's use of IPDV means that the results won't compare with
> > > >>>>> UDPST, and possibly networkQuality. But I may give it a try
> > > >>>>> anyway...
> > > >>>>
> > > >>>> The more the merrier! Someday the "right" metrics will arrive.
> > > >>>>
> > > >>>> As a side note, this paper focuses on RAN uplink latency:
> > > >>>> https://dl.ifip.org/db/conf/itc/itc2021/1570740615.pdf
> > > >>>> which I think is a major barrier to most forms of 5G actually
> > > >>>> achieving good performance in an FPS game, if it is true for more
> > > >>>> RANs. I'd like more to be testing uplink latencies, idle and with
> > > >>>> load, on all technologies.
> > > >>>>
> > > >>>>>
> > > >>>>> thanks again, Dave.
> > > >>>>> Al
> > > >>>>>
> > > >>>>>> -----Original Message-----
> > > >>>>>> From: Dave Taht <dave.taht at gmail.com>
> > > >>>>>> Sent: Monday, October 31, 2022 12:52 PM
> > > >>>>>> To: MORTON JR., AL <acmorton at att.com>
> > > >>>>>> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
> > > >>>>>> Subject: Re: [ippm] Preliminary measurement comparison of
> > > >>>>>> "Working Latency" metrics
> > > >>>>>>
> > > >>>>>> Thank you very much for the steer to RFC9097. I'd completely
> > > >>>>>> missed that.
> > > >>>>>>
> > > >>>>>> On Mon, Oct 31, 2022 at 9:04 AM MORTON JR., AL
> > > >>>>>> <acmorton at att.com> wrote:
> > > >>>>>>>
> > > >>>>>>> (astute readers may have guessed that I pressed "send" too
> > > >>>>>>> soon on the previous message...)
> > > >>>>>>>
> > > >>>>>>> I also conducted upstream tests this time; here are the results:
> > > >>>>>>> (capacity in Mbps, delays in ms, h and m are RPM categories,
> > > >>>>>>> High and Medium)
> > > >>>>>>>
> > > >>>>>>> Net Qual                           UDPST (RFC9097)             Ookla
> > > >>>>>>> UpCap     RPM    DelLD  DelMin     UpCap    RTTmin   RTTrange   UpCap  Ping(no load)
> > > >>>>>>> 34        1821 h 33ms   11ms       23 (42)  28       0-252      22     8
> > > >>>>>>> 22         281 m 214ms  8ms        27 (52)  25       5-248      22     8
> > > >>>>>>> 22         290 m 207ms  8ms        27 (55)  28       0-253      22     9
> > > >>>>>>> 21         330 m 182ms  11ms       23 (44)  28       0-255      22     7
> > > >>>>>>> 22         334 m 180ms  9ms        33 (56)  25       0-255      22     9
> > > >>>>>>>
> > > >>>>>>> The Upstream capacity measurements reflect an interesting
> > > >>>>>>> feature that we can reliably and repeatably measure with UDPST.
> > > >>>>>>> The first ~3 seconds of upstream data experience a "turbo mode"
> > > >>>>>>> of ~50Mbps. UDPST displays this behavior in its 1 second
> > > >>>>>>> sub-interval measurements and has a bimodal reporting option
> > > >>>>>>> that divides the complete measurement interval into two time
> > > >>>>>>> intervals, to report an initial (turbo) max capacity and a
> > > >>>>>>> steady-state max capacity for the later intervals. The UDPST
> > > >>>>>>> capacity results present both measurements: steady-state first.
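> > > >>>>>>>
> > > >>>>>>> To make the bimodal split concrete, here is a minimal sketch in
> > > >>>>>>> Go; the per-second rates and the 3-second turbo window are
> > > >>>>>>> illustrative assumptions, not UDPST output or code:
> > > >>>>>>>
> > > >>>>>>> package main
> > > >>>>>>>
> > > >>>>>>> import "fmt"
> > > >>>>>>>
> > > >>>>>>> func main() {
> > > >>>>>>> 	// hypothetical 1-second sub-interval rates in Mbps
> > > >>>>>>> 	rates := []float64{50, 49, 48, 23, 22, 23, 22, 22, 23, 22}
> > > >>>>>>> 	maxOf := func(xs []float64) float64 {
> > > >>>>>>> 		m := xs[0]
> > > >>>>>>> 		for _, x := range xs {
> > > >>>>>>> 			if x > m {
> > > >>>>>>> 				m = x
> > > >>>>>>> 			}
> > > >>>>>>> 		}
> > > >>>>>>> 		return m
> > > >>>>>>> 	}
> > > >>>>>>> 	turbo, steady := rates[:3], rates[3:] // split at the turbo window
> > > >>>>>>> 	fmt.Printf("%.0f Mbps steady-state, %.0f Mbps initial (turbo)\n",
> > > >>>>>>> 		maxOf(steady), maxOf(turbo))
> > > >>>>>>> }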
> > > >>>>>>
> > > >>>>>> Certainly we can expect bi-modal distributions from many ISPs,
> > > >>>>>> as, for one thing, the "speedboost" concept remains popular,
> > > >>>>>> except that it's misnamed, as it should be called speed-subtract
> > > >>>>>> or speed-lose. Worse, it is often configured "sneakily", in that
> > > >>>>>> it doesn't kick in for the typical observed duration of the
> > > >>>>>> test; for some, they cut the available bandwidth about 20s in,
> > > >>>>>> others, 1 or 5 minutes.
> > > >>>>>>
> > > >>>>>> One of my biggest issues with the rpm spec so far is that it
> > > >>>>>> should, at least sometimes, run randomly longer than the overly
> > > >>>>>> short interval it runs for, and the tools should also allow for
> > > >>>>>> manual override of length.
> > > >>>>>>
> > > >>>>>> we caught a lot of tomfoolery with flent's rrul test running by
> > > >>>>>> default for 1m.
> > > >>>>>>
> > > >>>>>> Also, AQMs on the path can take a while to find the optimal
> > > >>>>>> drop or mark rate.
> > > >>>>>>
> > > >>>>>>>
> > > >>>>>>> The capacity processing in networkQuality and Ookla appears to
> > > >>>>>>> report the steady-state result.
> > > >>>>>>
> > > >>>>>> Ookla used to basically report the last result. Also it's not a
> > > >>>>>> good indicator of web traffic behavior at all, watching the
> > > >>>>>> curve go up much more slowly in their test on, say, fiber (2ms)
> > > >>>>>> vs starlink (40ms)....
> > > >>>>>>
> > > >>>>>> So adding another mode - how quickly is peak bandwidth actually
> > > >>>>>> reached, would be nice.
> > > >>>>>>
> > > >>>>>> I haven't poked into the current iteration of the
> > > >>>>>> goresponsiveness test at all:
> > > >>>>>> https://github.com/network-quality/goresponsiveness
> > > >>>>>> it would be good to try collecting more statistics and
> > > >>>>>> histograms and methods of analyzing the data in that
> > > >>>>>> libre-source version.
> > > >>>>>>
> > > >>>>>> How does networkQuality compare vs a vs your tool vs a vs
> > > >>>>>> goresponsiveness?
> > > >>>>>>
> > > >>>>>>> I watched the upstream capacity measurements on the Ookla app,
> > > >>>>>>> and could easily see the initial rise to 40-50Mbps, then the
> > > >>>>>>> drop to ~22Mbps for most of the test, which determined the
> > > >>>>>>> final result.
> > > >>>>>>
> > > >>>>>> I tend to get upset when I see ookla's new test flash a peak
> > > >>>>>> result in the first seconds and then settle on some lower number
> > > >>>>>> somehow. So far as I know they are only sampling the latency
> > > >>>>>> every 250ms.
> > > >>>>>>
> > > >>>>>>>
> > > >>>>>>> The working latency is about 200ms in networkQuality and about
> > > >>>>>>> 280ms as measured by UDPST (RFC9097). Note that the
> > > >>>>>>> networkQuality minimum delay is ~20ms lower than the UDPST
> > > >>>>>>> RTTmin, so this accounts for some of the difference in working
> > > >>>>>>> latency. Also, we used the very dynamic Type C load
> > > >>>>>>> adjustment/search algorithm in UDPST during all of this
> > > >>>>>>> testing, which could explain the higher working latency to
> > > >>>>>>> some degree.
> > > >>>>>>>
> > > >>>>>>> So, it's worth noting that the measurements needed for
> > > >>>>>>> assessing working latency/responsiveness are available in the
> > > >>>>>>> UDPST utility, and that the UDPST measurements are conducted
> > > >>>>>>> over UDP transport (used by a growing fraction of Internet
> > > >>>>>>> traffic).
> > > >>>>>>
> > > >>>>>> Thx, didn't know of this work til now!
> > > >>>>>>
> > > >>>>>> have you tried irtt?
> > > >>>>>>
> > > >>>>>>>
> > > >>>>>>> comments welcome of course,
> > > >>>>>>> Al
> > > >>>>>>>
> > > >>>>>>>> -----Original Message-----
> > > >>>>>>>> From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > > >>>>>>>> Sent: Sunday, October 30, 2022 8:09 PM
> > > >>>>>>>> To: ippm at ietf.org
> > > >>>>>>>> Subject: Re: [ippm] Preliminary measurement comparison of
> > > >>>>>>>> "Working Latency" metrics
> > > >>>>>>>>
> > > >>>>>>>>
> > > >>>>>>>> Hi again RPM friends and IPPM'ers,
> > > >>>>>>>>
> > > >>>>>>>> As promised, I repeated the tests shared last week, this time
> > > >>>>>>>> using both the verbose (-v) and sequential (-s) dwn/up test
> > > >>>>>>>> options of networkQuality. I followed Sebastian's calculations
> > > >>>>>>>> as well.
> > > >>>>>>>>
> > > >>>>>>>> Working Latency & Capacity Summary
> > > >>>>>>>>
> > > >>>>>>>> Net Qual                           UDPST                       Ookla
> > > >>>>>>>> DnCap     RPM    DelLD  DelMin     DnCap    RTTmin   RTTrange   DnCap  Ping(no load)
> > > >>>>>>>> 885       916 m  66ms   8ms        970      28       0-20       940    8
> > > >>>>>>>> 888      1355 h  44ms   8ms        966      28       0-23       940    8
> > > >>>>>>>> 891      1109 h  54ms   8ms        968      27       0-19       940    9
> > > >>>>>>>> 887      1141 h  53ms   11ms       966      27       0-18       937    7
> > > >>>>>>>> 884      1151 h  52ms   9ms        968      28       0-20       937    9
> > > >>>>>>>>
> > > >>>>>>>> With the sequential test option, I noticed that
> > > >>>>>>>> networkQuality achieved nearly the maximum capacity reported
> > > >>>>>>>> almost immediately at the start of a test. However, the
> > > >>>>>>>> reported capacities are low by about 60Mbps, especially when
> > > >>>>>>>> compared to the Ookla TCP measurements.
> > > >>>>>>>>
> > > >>>>>>>> The loaded delay (DelLD) is similar to the UDPST RTTmin + (the
> > > >>>>>>>> high end of the RTTrange), for example 54ms compared to
> > > >>>>>>>> (27+19=46). Most of the networkQuality RPM measurements were
> > > >>>>>>>> categorized as "High". There doesn't seem to be much buffering
> > > >>>>>>>> in the downstream direction.
> > > >>>>>>>>
> > > >>>>>>>>
> > > >>>>>>>>
> > > >>>>>>>>> -----Original Message-----
> > > >>>>>>>>> From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > > >>>>>>>>> Sent: Monday, October 24, 2022 6:36 PM
> > > >>>>>>>>> To: ippm at ietf.org
> > > >>>>>>>>> Subject: [ippm] Preliminary measurement comparison of
> > > >>>>>>>>> "Working Latency" metrics
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> Hi RPM friends and IPPM'ers,
> > > >>>>>>>>>
> > > >>>>>>>>> I was wondering what a comparison of some of the "working
> > > >>>>>>>>> latency" metrics would look like, so I ran some tests using a
> > > >>>>>>>>> service on DOCSIS 3.1, with the downlink provisioned for
> > > >>>>>>>>> 1Gbps.
> > > >>>>>>>>>
> > > >>>>>>>>> I intended to run apple's networkQuality, UDPST (RFC9097),
> > > >>>>>>>>> and Ookla Speedtest with as similar connectivity as possible
> > > >>>>>>>>> (but we know that the traffic will diverge to different
> > > >>>>>>>>> servers and we can't change that aspect).
> > > >>>>>>>>>
> > > >>>>>>>>> Here's a quick summary of yesterday's results:
> > > >>>>>>>>>
> > > >>>>>>>>> Working Latency & Capacity Summary
> > > >>>>>>>>>
> > > >>>>>>>>> Net Qual                UDPST                          Ookla
> > > >>>>>>>>> DnCap     RPM           DnCap    RTTmin   RTTVarRnge   DnCap  Ping(no load)
> > > >>>>>>>>> 878       62            970      28       0-19         941    6
> > > >>>>>>>>> 891       92            970      27       0-20         940    7
> > > >>>>>>>>> 891       120           966      28       0-22         937    9
> > > >>>>>>>>> 890       112           970      28       0-21         940    8
> > > >>>>>>>>> 903       70            970      28       0-16         935    9
> > > >>>>>>>>>
> > > >>>>>>>>> Note: all RPM values were categorized as Low.
> > > >>>>>>>>>
> > > >>>>>>>>> networkQuality downstream capacities are always on the low
> > > >>>>>>>>> side compared to others. We would expect about 940Mbps for
> > > >>>>>>>>> TCP, and that's mostly what Ookla achieved. I think that a
> > > >>>>>>>>> longer test duration might be needed to achieve the actual
> > > >>>>>>>>> 1Gbps capacity with networkQuality; intermediate values
> > > >>>>>>>>> observed were certainly headed in the right direction. (I
> > > >>>>>>>>> recently upgraded to Monterey 12.6 on my MacBook, so I should
> > > >>>>>>>>> have the latest version.)
> > > >>>>>>>>>
> > > >>>>>>>>> Also, as Sebastian Moeller's message to the list reminded
> > > >>>>>>>>> me, I should have run the tests with the -v option to help
> > > >>>>>>>>> with comparisons. I'll repeat this test when I can make time.
> > > >>>>>>>>>
> > > >>>>>>>>> The UDPST measurements of RTTmin (minimum RTT observed during
> > > >>>>>>>>> the test) and the range of variation above the minimum
> > > >>>>>>>>> (RTTVarRnge) add up to very reasonable responsiveness IMO, so
> > > >>>>>>>>> I'm not clear why RPM graded this access and path as "Low".
> > > >>>>>>>>> The UDPST server I'm using is in NJ, and I'm in Chicago
> > > >>>>>>>>> conducting tests, so the minimum 28ms is typical. UDPST
> > > >>>>>>>>> measurements were run on an Ubuntu VM in my MacBook.
> > > >>>>>>>>>
> > > >>>>>>>>> The big disappointment was that the Ookla desktop app I
> > > >>>>>>>>> updated over the weekend did not include the new
> > > >>>>>>>>> responsiveness metric! I included the ping results anyway,
> > > >>>>>>>>> and it was clearly using a server in the nearby area.
> > > >>>>>>>>>
> > > >>>>>>>>> So, I have some more work to do, but I hope this is
> > > >>>>>>>>> interesting enough to start some comparison discussions, and
> > > >>>>>>>>> bring out some suggestions.
> > > >>>>>>>>>
> > > >>>>>>>>> happy testing all,
> > > >>>>>>>>> Al
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> _______________________________________________
> > > >>>>>>>>> ippm mailing list
> > > >>>>>>>>> ippm at ietf.org
> > > >>>>>>>>> https://www.ietf.org/mailman/listinfo/ippm
> > > >>>>>>>>
> > > >>>>>>>> _______________________________________________
> > > >>>>>>>> ippm mailing list
> > > >>>>>>>> ippm at ietf.org
> > > >>>>>>>> https://www.ietf.org/mailman/listinfo/ippm
> > > >>>>>>>
> > > >>>>>>> _______________________________________________
> > > >>>>>>> ippm mailing list
> > > >>>>>>> ippm at ietf.org
> > > >>>>>>> https://www.ietf.org/mailman/listinfo/ippm
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> --
> > > >>>>>> This song goes out to all the folk that thought Stadia would work:
> > > >>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> > > >>>>>> Dave Täht CEO, TekLibre, LLC
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> --
> > > >>>> This song goes out to all the folk that thought Stadia would work:
> > > >>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> > > >>>> Dave Täht CEO, TekLibre, LLC
> > > >>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> This song goes out to all the folk that thought Stadia would work:
> > > >>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> > > >>> Dave Täht CEO, TekLibre, LLC
> > > >> _______________________________________________
> > > >> ippm mailing list
> > > >> ippm at ietf.org
> > > >> https://www.ietf.org/mailman/listinfo/ippm
> > > > _______________________________________________
> > > > Rpm mailing list
> > > > Rpm at lists.bufferbloat.net
> > > > https://lists.bufferbloat.net/listinfo/rpm
> >
> > _______________________________________________
> > ippm mailing list
> > ippm at ietf.org
> > https://www.ietf.org/mailman/listinfo/ippm

