[LibreQoS] Fwd: [ippm] Preliminary measurement comparison of "Working Latency" metrics
Dave Taht
dave.taht at gmail.com
Mon Oct 31 19:10:31 EDT 2022
I'd be rather interested in a result from
https://github.com/BroadbandForum/obudpst
over a libreqos'd network. The IPPM meeting at the IETF was rather
contentious, I'm told.
A bit more below:
---------- Forwarded message ---------
From: MORTON JR., AL <acmorton at att.com>
Date: Mon, Oct 31, 2022 at 1:41 PM
Subject: RE: [ippm] Preliminary measurement comparison of "Working
Latency" metrics
To: Dave Taht <dave.taht at gmail.com>
Cc: ippm at ietf.org <ippm at ietf.org>, Rpm <rpm at lists.bufferbloat.net>
Thanks for your read/reply/suggestions, Dave!
Allow me to pull a few of your comments forward to try for a more cogent reply.
Dave wrote:
> Thank you very much for the steer to RFC9097. I'd completely missed that.
You're quite welcome! We all have a hand on the elephant with eyes
closed, and only learn the whole story when we talk to each other...
The repo for UDPST is here: https://github.com/BroadbandForum/obudpst
We are also working to standardize the protocol that UDPST uses to
make RFC 9097 measurements:
https://datatracker.ietf.org/doc/html/draft-ietf-ippm-capacity-protocol-03
and, potentially, many other aspects of network and application performance.
Dave wrote:
> Certainly we can expect bi-modal distributions ... should be called ... speed-lose.
Agreed. I'm glad we added the bimodal analysis feature in UDPST. It
works within our max test duration (60s; we didn't want people to run
capacity tests ad nauseam), but we won't be able to detect speed-lose
behavior that kicks in beyond that.
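To make the split concrete, here is a minimal sketch in Python (illustrative only, not the actual UDPST C code) of a bimodal report over per-second sub-interval rates; the 3-second boundary and all names are assumptions for the example:

# Illustrative sketch only -- not the UDPST implementation.
# Split per-second sub-interval rates (Mbps) into an initial window and a
# steady-state window, then report the max capacity seen in each.
def bimodal_split(subinterval_mbps, initial_secs=3):
    initial = subinterval_mbps[:initial_secs]
    steady = subinterval_mbps[initial_secs:]
    return max(initial), max(steady)

# Example: ~50 Mbps "turbo" for the first ~3 s, ~22 Mbps afterwards.
rates = [52, 49, 50, 23, 22, 22, 21, 23, 22, 22]
turbo_max, steady_max = bimodal_split(rates)
print(f"initial (turbo) max: {turbo_max} Mbps, steady-state max: {steady_max} Mbps")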
Dave wrote:
> One of my biggest issues with the rpm spec so far is that it should,
> at least, sometimes, run randomly longer than the overly short interval <now> ...
We don't have adaptable duration either. Another perspective on
duration comes from folks who test paths with mobile access: they
prefer 5-7 second duration, and the Type C algo helps.
Dave wrote:
> So adding another mode - how quickly is peak bandwidth actually
> reached, would be nice.
I think we report this in our prolific JSON-formatted output (the
number of the sub-interval in which the Max Capacity occurred).
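And if someone wants a "time to reach peak" number out of that kind of per-sub-interval output, a sketch like the one below would do it. Note the JSON field names ("Subintervals", "RateMbps", "Index") are hypothetical placeholders for illustration, not the actual UDPST output schema:

# Sketch only: the field names are hypothetical, not the UDPST JSON schema.
import json

def time_to_peak(json_text, fraction=0.95):
    """Return (sub-interval index, peak rate): the first sub-interval whose
    measured rate reaches `fraction` of the maximum observed rate."""
    subs = json.loads(json_text)["Subintervals"]
    peak = max(s["RateMbps"] for s in subs)
    for s in subs:
        if s["RateMbps"] >= fraction * peak:
            return s["Index"], peak

sample = '{"Subintervals": [{"Index": 1, "RateMbps": 40}, {"Index": 2, "RateMbps": 52}, {"Index": 3, "RateMbps": 51}]}'
print(time_to_peak(sample))   # -> (2, 52)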
Dave wrote:
> How does networkQuality compare vis-à-vis your tool vis-à-vis goresponsiveness?
I'll try to install goresponsiveness later this week, so that we have
a view of this.
Dave wrote:
> have you tried irtt? (https://github.com/heistp/irtt )
I have not. Seems like a reasonable tool for UDP testing. The feature
I didn't like in my scan of the documentation is the use of
inter-packet delay variation (IPDV) instead of packet delay variation
(PDV), i.e., variation from the minimum (or reference) delay. The
morbidly curious can find my analysis in RFC 5481:
https://datatracker.ietf.org/doc/html/rfc5481
irtt's use of IPDV means that its results won't compare directly with
UDPST's, and possibly not with networkQuality's either. But I may give it a try anyway...
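For the curious, the difference is easy to show in a few lines. This is only an illustrative sketch (not code from irtt or UDPST), with made-up delay samples:

# Illustrative delay samples in ms for one test stream (made-up numbers).
delays = [30.1, 30.4, 35.0, 30.2, 42.7, 30.3]

# IPDV: each packet's delay minus the previous packet's delay (signed values).
ipdv = [round(b - a, 1) for a, b in zip(delays, delays[1:])]

# PDV: each packet's delay minus the minimum (reference) delay, per RFC 5481.
ref = min(delays)
pdv = [round(d - ref, 1) for d in delays]

print("IPDV:", ipdv)   # [0.3, 4.6, -4.8, 12.5, -12.4]
print("PDV: ", pdv)    # [0.0, 0.3, 4.9, 0.1, 12.6, 0.2]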
thanks again, Dave.
Al
> -----Original Message-----
> From: Dave Taht <dave.taht at gmail.com>
> Sent: Monday, October 31, 2022 12:52 PM
> To: MORTON JR., AL <acmorton at att.com>
> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
> Subject: Re: [ippm] Preliminary measurement comparison of "Working Latency"
> metrics
>
> Thank you very much for the steer to RFC9097. I'd completely missed that.
>
> On Mon, Oct 31, 2022 at 9:04 AM MORTON JR., AL <acmorton at att.com> wrote:
> >
> > (astute readers may have guessed that I pressed "send" too soon on previous message...)
> >
> > I also conducted upstream tests this time, here are the results:
> > (capacity in Mbps, delays in ms, h and m are RPM categories, High and Medium)
> >
> >        Net Qual                        UDPST (RFC9097)          Ookla
> > UpCap  RPM     DelLD   DelMin   UpCap    RTTmin  RTTrange   UpCap  Ping(no load)
> > 34     1821 h  33ms    11ms     23 (42)  28      0-252      22     8
> > 22     281 m   214ms   8ms      27 (52)  25      5-248      22     8
> > 22     290 m   207ms   8ms      27 (55)  28      0-253      22     9
> > 21     330 m   182ms   11ms     23 (44)  28      0-255      22     7
> > 22     334 m   180ms   9ms      33 (56)  25      0-255      22     9
> >
> > The Upstream capacity measurements reflect an interesting feature that we can
> > reliably and repeatably measure with UDPST. The first ~3 seconds of upstream data
> > experience a "turbo mode" of ~50Mbps. UDPST displays this behavior in its 1-second
> > sub-interval measurements and has a bimodal reporting option that divides the
> > complete measurement interval into two time intervals, to report an initial (turbo)
> > max capacity and a steady-state max capacity for the later intervals. The UDPST
> > capacity results present both measurements: steady-state first.
>
> Certainly we can expect bi-modal distributions from many ISPs, as, for
> one thing, the "speedboost" concept remains popular, except that it's
> misnamed, as it should be called speed-subtract or speed-lose. Worse,
> it is often configured "sneakily", in that it doesn't kick in for the
> typical observed duration of the test: some ISPs cut the available
> bandwidth about 20s in, others after 1 or 5 minutes.
>
> One of my biggest issues with the RPM spec so far is that it should,
> at least sometimes, run randomly longer than the overly short
> interval it runs for, and that the tools should also allow manual override of the length.
>
> We caught a lot of tomfoolery with flent's rrul test running by default for 1 minute.
>
> Also, AQMs on the path can take a while to find the optimal drop or mark rate.
>
> >
> > The capacity processing in networkQuality and Ookla appears to report the
> > steady-state result.
>
> Ookla used to basically report the last result. It's also not a good
> indicator of web traffic behavior at all; you can watch the curve
> go up much more slowly in their test on, say, fiber (2ms) vs. Starlink
> (40ms)....
>
> So adding another mode - how quickly is peak bandwidth actually
> reached, would be nice.
>
> I haven't poked into the current iteration of the goresponsiveness
> test at all: https://github.com/network-quality/goresponsiveness
> It would be good to try collecting more statistics and histograms and
> methods of analyzing the data in that libre-source version.
>
> How does networkQuality compare vis-à-vis your tool vis-à-vis goresponsiveness?
>
> > I watched the upstream capacity measurements on the Ookla app, and could
> > easily see the initial rise to 40-50Mbps, then the drop to ~22Mbps for most of
> > the test, which determined the final result.
>
> I tend to get upset when I see Ookla's new test flash a peak result in
> the first seconds and then settle on some lower number somehow.
> So far as I know they are only sampling the latency every 250ms.
>
> >
> > The working latency is about 200ms in networkQuality and about 280ms as measured
> > by UDPST (RFC9097). Note that the networkQuality minimum delay is ~20ms lower than
> > the UDPST RTTmin, so this accounts for some of the difference in working latency.
> > Also, we used the very dynamic Type C load adjustment/search algorithm in UDPST
> > during all of this testing, which could explain the higher working latency to some
> > degree.
> >
> > So, it's worth noting that the measurements needed for assessing working
> > latency/responsiveness are available in the UDPST utility, and that the UDPST
> > measurements are conducted on UDP transport (used by a growing fraction of
> > Internet traffic).
>
> Thx, didn't know of this work til now!
>
> have you tried irtt?
>
> >
> > comments welcome of course,
> > Al
> >
> > > -----Original Message-----
> > > From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > > Sent: Sunday, October 30, 2022 8:09 PM
> > > To: ippm at ietf.org
> > > Subject: Re: [ippm] Preliminary measurement comparison of "Working Latency"
> > > metrics
> > >
> > >
> > > Hi again RPM friends and IPPM'ers,
> > >
> > > As promised, I repeated the tests shared last week, this time using both the
> > > verbose (-v) and sequential (-s) dwn/up test options of networkQuality. I
> > > followed Sebastian's calculations as well.
> > >
> > > Working Latency & Capacity Summary
> > >
> > >        Net Qual                      UDPST                     Ookla
> > > DnCap  RPM     DelLD  DelMin    DnCap  RTTmin  RTTrange   DnCap  Ping(no load)
> > > 885    916 m   66ms   8ms       970    28      0-20       940    8
> > > 888    1355 h  44ms   8ms       966    28      0-23       940    8
> > > 891    1109 h  54ms   8ms       968    27      0-19       940    9
> > > 887    1141 h  53ms   11ms      966    27      0-18       937    7
> > > 884    1151 h  52ms   9ms       968    28      0-20       937    9
> > >
> > > With the sequential test option, I noticed that networkQuality achieved nearly
> > > the maximum capacity reported almost immediately at the start of a test.
> > > However, the reported capacities are low by about 60Mbps, especially when
> > > compared to the Ookla TCP measurements.
> > >
> > > The loaded delay (DelLD) is similar to the UDPST RTTmin + (the high end of the
> > > RTTrange), for example 54ms compared to (27+19=46). Most of the networkQuality
> > > RPM measurements were categorized as "High". There doesn't seem to be much
> > > buffering in the downstream direction.
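A rough sanity check on those RPM numbers: responsiveness in RPM is round trips per minute at the working latency, so a single loaded-delay sample maps to roughly 60000 / DelLD in ms (e.g. 60000/66 is about 909, close to the reported 916). The actual networkQuality metric aggregates several probe measurements, so treat this only as an approximation; a minimal sketch of the check, using values from the table above:

# Back-of-the-envelope only: assumes RPM ~= 60000 / loaded delay in ms.
# (networkQuality's actual RPM aggregates several probe measurements.)
for del_ld_ms, reported_rpm in [(66, 916), (44, 1355), (54, 1109)]:
    approx = 60000 / del_ld_ms
    print(f"DelLD {del_ld_ms}ms -> ~{approx:.0f} RPM (reported {reported_rpm})")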
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
> > > > Sent: Monday, October 24, 2022 6:36 PM
> > > > To: ippm at ietf.org
> > > > Subject: [ippm] Preliminary measurement comparison of "Working Latency"
> > > > metrics
> > > >
> > > >
> > > > Hi RPM friends and IPPM'ers,
> > > >
> > > > I was wondering what a comparison of some of the "working latency" metrics
> > > > would look like, so I ran some tests using a service on DOCSIS 3.1, with the
> > > > downlink provisioned for 1Gbps.
> > > >
> > > > I intended to run Apple's networkQuality, UDPST (RFC9097), and Ookla Speedtest
> > > > with as similar connectivity as possible (but we know that the traffic will
> > > > diverge to different servers and we can't change that aspect).
> > > >
> > > > Here's a quick summary of yesterday's results:
> > > >
> > > > Working Latency & Capacity Summary
> > > >
> > > >    Net Qual      UDPST                        Ookla
> > > > DnCap  RPM    DnCap  RTTmin  RTTVarRnge    DnCap  Ping(no load)
> > > > 878    62     970    28      0-19          941    6
> > > > 891    92     970    27      0-20          940    7
> > > > 891    120    966    28      0-22          937    9
> > > > 890    112    970    28      0-21          940    8
> > > > 903    70     970    28      0-16          935    9
> > > >
> > > > Note: all RPM values were categorized as Low.
> > > >
> > > > networkQuality downstream capacities are always on the low side compared to
> > > > others. We would expect about 940Mbps for TCP, and that's mostly what Ookla
> > > > achieved. I think that a longer test duration might be needed to achieve the
> > > > actual 1Gbps capacity with networkQuality; intermediate values observed were
> > > > certainly headed in the right direction. (I recently upgraded to Monterey 12.6
> > > > on my MacBook, so should have the latest version.)
> > > >
> > > > Also, as Sebastian Moeller's message to the list reminded me, I should have
> > > > run the tests with the -v option to help with comparisons. I'll repeat this
> > > > test when I can make time.
> > > >
> > > > The UDPST measurements of RTTmin (minimum RTT observed during the test) and
> > > > the range of variation above the minimum (RTTVarRnge) add up to very reasonable
> > > > responsiveness IMO, so I'm not clear why RPM graded this access and path as
> > > > "Low". The UDPST server I'm using is in NJ, and I'm in Chicago conducting
> > > > tests, so the minimum 28ms is typical. UDPST measurements were run on an
> > > > Ubuntu VM in my MacBook.
> > > >
> > > > The big disappointment was that the Ookla desktop app I updated over the
> > > > weekend did not include the new responsiveness metric! I included the ping
> > > > results anyway, and it was clearly using a server in the nearby area.
> > > >
> > > > So, I have some more work to do, but I hope this is interesting enough to
> > > > start some comparison discussions, and bring out some suggestions.
> > > >
> > > > happy testing all,
> > > > Al
> > > >
> > > >
> > > >
> > > >
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC