[Rpm] [ippm] Preliminary measurement comparison of "Working Latency" metrics

MORTON JR., AL acmorton at att.com
Tue Nov 1 10:51:10 EDT 2022


Hi Dave, 
Thanks for trying UDPST (RFC 9097)! 

Something you might try with starlink:
use the -X option and UDPST will generate random payloads.

The measurements with -X will reflect the uncompressed rates that are possible.
I tried this on a ship-board Internet access: the uncompressed rate was 100kbps.
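
A rough Python sketch of why the random payloads matter (zlib here just stands
in for whatever compression a link might apply, and the 1200-byte payload size
is only illustrative):

    import os
    import zlib

    # An all-zero payload compresses to almost nothing, so a compressing link
    # can report an inflated transfer rate.
    fixed = bytes(1200)
    # A random payload (what -X produces) is essentially incompressible, so the
    # measured rate reflects the true uncompressed capacity.
    random_payload = os.urandom(1200)

    print(len(zlib.compress(fixed)))           # a few dozen bytes
    print(len(zlib.compress(random_payload)))  # ~1200 bytes or slightly more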

A few more quick replies below,
Al

> -----Original Message-----
> From: Dave Taht <dave.taht at gmail.com>
> Sent: Tuesday, November 1, 2022 12:22 AM
> To: MORTON JR., AL <acmorton at att.com>
> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
> Subject: Re: [ippm] Preliminary measurement comparison of "Working Latency"
> metrics
> 
> Dear Al:
> 
> OK, I took your udpst tool for a spin.
> 
> NICE! 120k binary (I STILL work on machines with only 4MB of flash),
> good feature set, VERY fast, 
[acm] 
Len Ciavattone (my partner in crime on several measurement projects) is the lead coder: he has implemented many measurement tools extremely efficiently, this one in C-lang.

> and in very brief testing, seemed
> to be accurate in the starlink case, though it's hard to tell with
> them as the rate changes every 15s.
[acm] 
Great! Our guiding principle in developing UDPST has been to test the accuracy of measurements against ground truth. It pays off.

> 
> I filed a couple bug reports on trivial stuff:
> https://github.com/BroadbandForum/obudpst/issues/8
[acm] 
Much appreciated... We have an OB-UDPST project meeting this Friday; we can discuss them then.

> 
> (Adding diffserv and ecn washing or marking detection would be a nice
> feature to have)
> 
> Aside from the sheer joy coming from the "it compiles! and runs!"
> phase I haven't looked much further.
> 
> I left a copy running on one of my starlink testbeds -
> fremont.starlink.taht.net - if anyone wants to try it. It's
> instrumented with netperf, flent, irtt, iperf2 (not quite the latest
> version from bob, but close), and now udpst, and good to about a gbit.
> 
> nice tool!
[acm] 
Thanks again!

> 
> Has anyone here played with crusader? (https://github.com/Zoxc/crusader)
> 
> On Mon, Oct 31, 2022 at 4:30 PM Dave Taht <dave.taht at gmail.com> wrote:
> >
> > On Mon, Oct 31, 2022 at 1:41 PM MORTON JR., AL <acmorton at att.com> wrote:
> >
> > > > have you tried irtt? (https://github.com/heistp/irtt)
> > > I have not. Seems like a reasonable tool for UDP testing. The feature I
> > > didn't like in my scan of the documentation is the use of inter-packet delay
> > > variation (IPDV) instead of packet delay variation (PDV): variation from the
> > > minimum (or reference) delay. The morbidly curious can find my analysis in
> > > RFC 5481: https://datatracker.ietf.org/doc/html/rfc5481
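[acm] 
For concreteness, a small sketch of the two metrics (definitions per RFC 5481;
the delay values are made up for illustration):

    delays_ms = [28.0, 31.5, 29.2, 40.1, 28.4]  # one-way delays, illustrative only

    # IPDV: delay difference between consecutive packets (what irtt reports)
    ipdv = [later - earlier for earlier, later in zip(delays_ms, delays_ms[1:])]

    # PDV (RFC 5481): delay variation relative to the minimum (reference) delay
    d_min = min(delays_ms)
    pdv = [d - d_min for d in delays_ms]

    print([round(v, 1) for v in ipdv])  # [3.5, -2.3, 10.9, -11.7]
    print([round(v, 1) for v in pdv])   # [0.0, 3.5, 1.2, 12.1, 0.4]
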
> >
> > irtt was meant to simulate high speed voip and one day
> > videoconferencing. Please inspect the json output
> > for other metrics. Due to OS limits it is typically only accurate to a
> > 3ms interval. One thing it does admirably is begin to expose the
> > sordid sump of L2 behaviors in 4g, 5g, wifi, and other wireless
> > technologies, as well as request/grant systems like cable and gpon,
> > especially when otherwise idle.
> >
> > Here is a highres plot of starlink's behaviors from last year:
> > https://forum.openwrt.org/t/cake-w-adaptive-bandwidth-historic/108848/3238
> >
> > clearly showing them "optimizing for bandwidth" and changing next sat
> > hop, and about a 40ms interval of buffering between these switches.
> > I'd published elsewhere, if anyone cares, a preliminary study of what
> > starlink's default behaviors did to cubic and BBR...
> >
> > >
> > > irtt's use of IPDV means that the results won't compare with UDPST, and
> > > possibly networkQuality. But I may give it a try anyway...
> >
> > The more the merrier! Someday the "right" metrics will arrive.
> >
> > As a side note, this paper focuses on RAN uplink latency:
> > https://dl.ifip.org/db/conf/itc/itc2021/1570740615.pdf
> > which I think is a major barrier to most forms of 5G actually achieving good
> > performance in an FPS game, if it is true for more RANs. I'd like more people
> > to be testing uplink latencies, idle and with load, on all technologies.
> >
> > >
> > > thanks again, Dave.
> > > Al
> > >
> > > > -----Original Message-----
> > > > From: Dave Taht <dave.taht at gmail.com>
> > > > Sent: Monday, October 31, 2022 12:52 PM
> > > > To: MORTON JR., AL <acmorton at att.com>
> > > > Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
> > > > Subject: Re: [ippm] Preliminary measurement comparison of "Working Latency"
> > > > metrics
> > > >
> > > > Thank you very much for the steer to RFC9097. I'd completely missed that.
> > > >
> > > > On Mon, Oct 31, 2022 at 9:04 AM MORTON JR., AL <acmorton at att.com> wrote:
> > > > >
> > > > > (astute readers may have guessed that I pressed "send" too soon on
> > > > > previous message...)
> > > > >
> > > > > I also conducted upstream tests this time, here are the results:
> > > > > (capacity in Mbps, delays in ms, h and m are RPM categories, High and Medium)
> > > > >
> > > > > Net Qual                           UDPST (RFC9097)              Ookla
> > > > > UpCap     RPM    DelLD  DelMin     UpCap    RTTmin   RTTrange   UpCap   Ping(no load)
> > > > > 34        1821 h 33ms   11ms       23 (42)  28       0-252      22      8
> > > > > 22         281 m 214ms  8ms        27 (52)  25       5-248      22      8
> > > > > 22         290 m 207ms  8ms        27 (55)  28       0-253      22      9
> > > > > 21         330 m 182ms  11ms       23 (44)  28       0-255      22      7
> > > > > 22         334 m 180ms  9ms        33 (56)  25       0-255      22      9
> > > > >
> > > > > The Upstream capacity measurements reflect an interesting feature that we
> > > > > can reliably and repeatably measure with UDPST. The first ~3 seconds of
> > > > > upstream data experience a "turbo mode" of ~50Mbps. UDPST displays this
> > > > > behavior in its 1-second sub-interval measurements and has a bimodal reporting
> > > > > option that divides the complete measurement interval into two time intervals
> > > > > to report an initial (turbo) max capacity and a steady-state max capacity for
> > > > > the later intervals. The UDPST capacity results present both measurements:
> > > > > steady-state first.
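[acm] 
As a rough sketch of that bimodal summary (the 3-second split and the
per-second rates below are only illustrative, not real UDPST output):

    # per-second upstream sub-interval rates in Mbps (illustrative numbers)
    rates_mbps = [50, 52, 48, 27, 25, 26, 24, 27, 26, 25]

    split_s = 3  # assume the "turbo" phase covers roughly the first 3 seconds

    turbo_max = max(rates_mbps[:split_s])    # initial (turbo) max capacity
    steady_max = max(rates_mbps[split_s:])   # steady-state max capacity

    # presented steady-state first, e.g. "27 (52)" as in the table above
    print(f"{steady_max} ({turbo_max})")
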
> > > >
> > > > Certainly we can expect bi-modal distributions from many ISPs, as, for
> > > > one thing, the "speedboost" concept remains popular, except that it's
> > > > misnamed, as it should be called speed-subtract or speed-lose. Worse,
> > > > it is often configured "sneakily", in that it doesn't kick in for the
> > > > typical observed duration of the test: some cut the available bandwidth
> > > > about 20s in, others after 1 or 5 minutes.
> > > >
> > > > One of my biggest issues with the rpm spec so far is that it should,
> > > > at least sometimes, run randomly longer than the overly short interval
> > > > it runs for, and that the tools should also allow a manual override of
> > > > the test length.
> > > >
> > > > We caught a lot of tomfoolery with flent's rrul test running by default
> > > > for 1m.
> > > >
> > > > Also, AQMs on the path can take a while to find the optimal drop or mark rate.
> > > >
> > > > >
> > > > > The capacity processing in networkQuality and Ookla appears to report the
> > > > > steady-state result.
> > > >
> > > > Ookla used to basically report the last result. Also, it's not a good
> > > > indicator of web traffic behavior at all; you can watch the curve
> > > > go up much more slowly in their test on, say, fiber (2ms) vs starlink
> > > > (40ms)...
> > > >
> > > > So adding another mode - how quickly peak bandwidth is actually
> > > > reached - would be nice.
> > > >
> > > > I haven't poked into the current iteration of the goresponsiveness
> > > > test at all: https://github.com/network-quality/goresponsiveness
> > > > It would be good to try collecting more statistics and histograms, and
> > > > more methods of analyzing the data, in that libre-source version.
> > > >
> > > > How does networkQuality compare vis-a-vis your tool and vis-a-vis
> > > > goresponsiveness?
> > > >
> > > > > I watched the upstream capacity measurements on the Ookla app, and could
> > > > > easily see the initial rise to 40-50Mbps, then the drop to ~22Mbps for most of
> > > > > the test, which determined the final result.
> > > >
> > > > I tend to get upset when I see ookla's new test flash a peak result in
> > > > the first seconds and then settle on some lower number somehow.
> > > > So far as I know they are only sampling the latency every 250ms.
> > > >
> > > > >
> > > > > The working latency is about 200ms in networkQuality and about 280ms as
> > > > > measured by UDPST (RFC9097). Note that the networkQuality minimum delay is
> > > > > ~20ms lower than the UDPST RTTmin, so this accounts for some of the difference
> > > > > in working latency. Also, we used the very dynamic Type C load
> > > > > adjustment/search algorithm in UDPST during all of this testing, which could
> > > > > explain the higher working latency to some degree.
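[acm] 
A quick sanity check on how working latency maps to RPM (RPM is round trips
per minute, so roughly 60000 divided by the working latency in ms; the actual
networkQuality computation aggregates several latency probes, so this is only
approximate):

    def approx_rpm(working_latency_ms: float) -> float:
        # round trips per minute at the given working latency
        return 60_000 / working_latency_ms

    print(round(approx_rpm(200)))  # ~300, in line with the ~280-330 "m" RPM results above
    print(round(approx_rpm(280)))  # ~214, the UDPST-measured working latency expressed as RPM
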
> > > > >
> > > > > So, it's worth noting that the measurements needed for assessing working
> > > > > latency/responsiveness are available in the UDPST utility, and that the UDPST
> > > > > measurements are conducted on UDP transport (used by a growing fraction of
> > > > > Internet traffic).
> > > >
> > > > Thx, didn't know of this work til now!
> > > >
> > > > have you tried irtt?
> > > >
> > > > >
> > > > > comments welcome of course,
> > > > > Al
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > > > > > Sent: Sunday, October 30, 2022 8:09 PM
> > > > > > To: ippm at ietf.org
> > > > > > Subject: Re: [ippm] Preliminary measurement comparison of "Working Latency"
> > > > > > metrics
> > > > > >
> > > > > >
> > > > > > Hi again RPM friends and IPPM'ers,
> > > > > >
> > > > > > As promised, I repeated the tests shared last week, this time using both
> > > > > > the verbose (-v) and sequential (-s) dwn/up test options of networkQuality.
> > > > > > I followed Sebastian's calculations as well.
> > > > > >
> > > > > > Working Latency & Capacity Summary
> > > > > >
> > > > > > Net Qual                           UDPST                        Ookla
> > > > > > DnCap     RPM    DelLD  DelMin     DnCap    RTTmin   RTTrange   DnCap   Ping(no load)
> > > > > > 885       916 m  66ms   8ms        970      28       0-20       940     8
> > > > > > 888      1355 h  44ms   8ms        966      28       0-23       940     8
> > > > > > 891      1109 h  54ms   8ms        968      27       0-19       940     9
> > > > > > 887      1141 h  53ms   11ms       966      27       0-18       937     7
> > > > > > 884      1151 h  52ms   9ms        968      28       0-20       937     9
> > > > > >
> > > > > > With the sequential test option, I noticed that networkQuality achieved
> > > > > > nearly the maximum capacity reported almost immediately at the start of a
> > > > > > test. However, the reported capacities are low by about 60Mbps, especially
> > > > > > when compared to the Ookla TCP measurements.
> > > > > >
> > > > > > The loaded delay (DelLD) is similar to the UDPST RTTmin + (the high end of
> > > > > > the RTTrange), for example 54ms compared to (27+19=46). Most of the
> > > > > > networkQuality RPM measurements were categorized as "High". There doesn't
> > > > > > seem to be much buffering in the downstream direction.
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > > > > > > Sent: Monday, October 24, 2022 6:36 PM
> > > > > > > To: ippm at ietf.org
> > > > > > > Subject: [ippm] Preliminary measurement comparison of "Working Latency"
> > > > > > > metrics
> > > > > > >
> > > > > > >
> > > > > > > Hi RPM friends and IPPM'ers,
> > > > > > >
> > > > > > > I was wondering what a comparison of some of the "working latency" metrics
> > > > > > > would look like, so I ran some tests using a service on DOCSIS 3.1, with the
> > > > > > > downlink provisioned for 1Gbps.
> > > > > > >
> > > > > > > I intended to run apple's networkQuality, UDPST (RFC9097), and Ookla Speedtest
> > > > > > > with as similar connectivity as possible (but we know that the traffic will
> > > > > > > diverge to different servers and we can't change that aspect).
> > > > > > >
> > > > > > > Here's a quick summary of yesterday's results:
> > > > > > >
> > > > > > > Working Latency & Capacity Summary
> > > > > > >
> > > > > > > Net Qual                UDPST                        Ookla
> > > > > > > DnCap     RPM           DnCap    RTTmin   RTTVarRnge DnCap    Ping(no load)
> > > > > > > 878       62            970      28       0-19       941      6
> > > > > > > 891       92            970      27       0-20       940      7
> > > > > > > 891       120           966      28       0-22       937      9
> > > > > > > 890       112           970      28       0-21       940      8
> > > > > > > 903       70            970      28       0-16       935      9
> > > > > > >
> > > > > > > Note: all RPM values were categorized as Low.
> > > > > > >
> > > > > > > networkQuality downstream capacities are always on the low side compared
> > > > > > > to others. We would expect about 940Mbps for TCP, and that's mostly what
> > > > > > > Ookla achieved. I think that a longer test duration might be needed to
> > > > > > > achieve the actual 1Gbps capacity with networkQuality; intermediate values
> > > > > > > observed were certainly headed in the right direction. (I recently upgraded
> > > > > > > to Monterey 12.6 on my MacBook, so should have the latest version.)
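[acm] 
For reference, the ~940Mbps expectation follows from the usual per-frame
overhead arithmetic on a gigabit Ethernet-framed path (this assumes a 1500-byte
MTU, IPv4, and TCP timestamps; adjust if your path differs):

    line_rate_mbps = 1000
    ether_overhead = 7 + 1 + 12 + 14 + 4   # preamble, SFD, inter-frame gap, MAC header, FCS
    mtu = 1500
    tcp_ip_headers = 20 + 20 + 12          # IPv4 + TCP + timestamps option
    payload = mtu - tcp_ip_headers         # 1448 bytes of goodput per frame
    wire_bytes = mtu + ether_overhead      # 1538 bytes on the wire per frame

    print(round(line_rate_mbps * payload / wire_bytes, 1))  # -> 941.5
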
> > > > > > >
> > > > > > > Also, as Sebastian Moeller's message to the list reminded me, I should have
> > > > > > > run the tests with the -v option to help with comparisons. I'll repeat this
> > > > > > > test when I can make time.
> > > > > > >
> > > > > > > The UDPST measurements of RTTmin (minimum RTT observed during the test) and
> > > > > > > the range of variation above the minimum (RTTVarRnge) add up to very
> > > > > > > reasonable responsiveness IMO, so I'm not clear why RPM graded this access and
> > > > > > > path as "Low". The UDPST server I'm using is in NJ, and I'm in Chicago
> > > > > > > conducting tests, so the minimum 28ms is typical. UDPST measurements were run
> > > > > > > on an Ubuntu VM in my MacBook.
> > > > > > >
> > > > > > > The big disappointment was that the Ookla desktop app I updated over the
> > > > > > > weekend did not include the new responsiveness metric! I included the ping
> > > > > > > results anyway, and it was clearly using a server in the nearby area.
> > > > > > >
> > > > > > > So, I have some more work to do, but I hope this is interesting enough to
> > > > > > > start some comparison discussions, and bring out some suggestions.
> > > > > > >
> > > > > > > happy testing all,
> > > > > > > Al
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > _______________________________________________
> > > > > > > ippm mailing list
> > > > > > > ippm at ietf.org
> > > > > > > https://www.ietf.org/mailman/listinfo/ippm
> > > > > >
> > > > > > _______________________________________________
> > > > > > ippm mailing list
> > > > > > ippm at ietf.org
> > > > > > https://www.ietf.org/mailman/listinfo/ippm
> > > > >
> > > > > _______________________________________________
> > > > > ippm mailing list
> > > > > ippm at ietf.org
> > > > > https://www.ietf.org/mailman/listinfo/ippm
> > > >
> > > >
> > > >
> > > > --
> > > > This song goes out to all the folk that thought Stadia would work:
> > > > https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> > > > Dave Täht CEO, TekLibre, LLC
> >
> >
> >
> > --
> > This song goes out to all the folk that thought Stadia would work:
> > https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> > Dave Täht CEO, TekLibre, LLC
> 
> 
> 
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC

