[Rpm] [ippm] Preliminary measurement comparison of "Working Latency" metrics
Sebastian Moeller
moeller0 at gmx.de
Fri Nov 4 15:10:03 EDT 2022
Hi Al,
> On Nov 4, 2022, at 18:14, MORTON JR., AL via Rpm <rpm at lists.bufferbloat.net> wrote:
>
> Hi all,
>
> I have been working through the threads on misery metrics and lightweight sensing of bandwidth and buffering, and with those insights and gold-nuggets in mind, I'm iterating through testing the combinations of tools that Dave Taht and Bob McMahon suggested.
>
> earlier this week, Dave wrote:
>> How does networkQuality compare vs your tool vs goresponsiveness?
>
> goresponsiveness installed flawlessly - very nice instructions and getting-started info.
>
> Comparison of NetQual (networkQuality -vs), UDPST -A c, gores/goresponsiveness
>
> Working Latency & Capacity Summary on DOCSIS 3.1 access with 1 Gbps down-stream service
> (capacity in Mbps, delays in ms, h and m are RPM categories, High and Medium)
>
> NetQual                          UDPST (RFC 9097)            gores
> DnCap  RPM     DelLD  DelMin    DnCap  RTTmin  RTTrange    DnCap  RPM     DelLD
> 882     788 m   76ms   8ms      967    28      0-16        127    (1382)  43ms
> 892    1036 h   58ms   8        966    27      0-18        128    (2124)  28ms
> 887    1304 h   46ms   6        969    27      0-18        130    (1478)  41ms
> 885    1008 h   60ms   8        967    28      0-22        127    (1490)  40ms
> 894    1383 h   43ms  11        967    28      0-15        133    (2731)  22ms
>
> NetQual                          UDPST (RFC 9097)            gores
> UpCap  RPM     DelLD  DelMin    UpCap    RTTmin  RTTrange  UpCap
> 21      327 m  183ms   8ms      22 (51)  28      0-253     12
> 21      413 m  145ms   8        22 (43)  28      0-255     15
> 22      273 m  220ms   6        23 (53)  28      0-259     10
> 21      377 m  159ms   8        23 (51)  28      0-250     10
> 22      281 m  214ms  11        23 (52)  28      0-250      6
>
> These tests were conducted in a round-robin fashion to minimize the possibility of network variations between measurements:
> NetQual - rest - UDPST-Dn - rest- UDPST-Up - rest - gores - rest - repeat
>
> NetQual indicates the same reduced capacity in Downstream when compared to UDPST (940Mbps is the max for TCP payloads, while 967-970 is max for IP-layer capacity, dep. on VLAN tag).
[SM] DOCSIS ISPs traditionally provision higher peak rates than they advertise; with a DOCSIS modem/router offering > 1 Gbps LAN capacity (even via bonded LAG), people in Germany routinely measure TCP/IPv4/HTTP goodput in the 1050 Mbps range. Typically, though, gigabit ethernet limits the practically achievable throughput somewhat:
Ethernet payload rate: 1000 * ((1500)/(1500 + 38)) = 975.29 Mbps
Ethernet payload rate +VLAN: 1000 * ((1500)/(1500 + 38 + 4)) = 972.76 Mbps
IPv4 payload (ethernet+VLAN): 1000 * ((1500 - 20)/(1500 + 38 + 4)) = 959.79 Mbps
IPv6 payload (ethernet+VLAN): 1000 * ((1500 - 40)/(1500 + 38 + 4)) = 946.82 Mbps
IPv4/TCP payload (ethernet+VLAN): 1000 * ((1500 - 20 - 20)/(1500 + 38 + 4)) = 946.82 Mbps
IPv6/TCP payload (ethernet+VLAN): 1000 * ((1500 - 20 - 40)/(1500 + 38 + 4)) = 933.85 Mbps
IPv4/TCP/RFC1323 timestamps payload: 1000 * ((1500 - 12 - 20 - 20)/(1500 + 38 + 4)) = 939.04 Mbps
IPv6/TCP/RFC1323 timestamps payload: 1000 * ((1500 - 12 - 20 - 40)/(1500 + 38 + 4)) = 926.07 Mbps
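For convenience, here is a small Python sketch (my own, not part of any of the tools discussed) that reproduces these numbers and makes it easy to plug in other header stacks:

# Per-layer payload share of a 1000 Mbps gigabit ethernet link.
# Fixed L1/L2 overhead per 1500 byte MTU frame: preamble 8 + MAC header 14
# + FCS 4 + interframe gap 12 = 38 bytes, plus an optional 4 byte VLAN tag.
LINE_RATE = 1000.0  # Mbps
MTU = 1500
ETH_OVERHEAD = 38
VLAN_TAG = 4

def payload_rate(header_bytes, vlan=True):
    wire_bytes = MTU + ETH_OVERHEAD + (VLAN_TAG if vlan else 0)
    return LINE_RATE * (MTU - header_bytes) / wire_bytes

print(f"{payload_rate(0, vlan=False):.2f}")   # 975.29, ethernet payload
print(f"{payload_rate(20 + 20 + 12):.2f}")    # 939.04, IPv4/TCP + RFC1323 timestamps
print(f"{payload_rate(40 + 20 + 12):.2f}")    # 926.07, IPv6/TCP + RFC1323 timestamps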
Speedtests tend to report IPv4/TCP goodput (TCP timestamps may or may not be on, depending on the OS), but on-line speedtests almost never return the simple average rate over the measurement duration; they play "clever" tricks to exclude the start-up phase and to aggregate over multiple flows, invariably ending with results that tend to exceed the hard limits shown above...
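A toy illustration (made-up numbers, not any vendor's actual algorithm) of why excluding the start-up phase inflates the reported rate:

# One-second goodput samples (Mbps) of a 10 s transfer with a slow-start ramp:
samples = [100, 400, 800, 940, 940, 940, 940, 940, 940, 940]
whole_test = sum(samples) / len(samples)        # 788.0, plain average over the test
trimmed = sum(samples[3:]) / len(samples[3:])   # 940.0, "clever" ramp-up-excluded average
print(whole_test, trimmed)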
> Upstream capacities are not very different (a factor that made TCP methods more viable many years ago when most consumer access speeds were limited to 10's of Megabits).
[SM] My take is that this is partly due to the goal of ramping up very quickly and getting away with a short measurement duration; that causes imprecision. As Dave said, flent's RRUL test defaults to 60 seconds, and I often ran/run it for 5 to 10 minutes to get somewhat more reliable numbers (and timecourses to look at and reason about).
> gores reports significantly lower capacity in both downstream and upstream measurements, a factor of 7 less than NetQual for downstream. Interestingly, the reduced capacity (taken as the working load) results in higher responsiveness: RPM meas are higher and loaded delays are lower for downstream.
[SM] Yepp, if you only fill the queue partially you will harvest less queueing delay and hence retain more responsiveness, although this really seems better interpreted as goresponsiveness failing to achieve "working conditions".
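For intuition: to a first approximation RPM is just round trips per minute, i.e. 60 divided by the loaded RTT in seconds, and the tables above bear that out:

# RPM from loaded delay, checked against two rows of the downstream table:
print(60 / 0.076)  # ~789, matches NetQual's 788 RPM at 76ms DelLD
print(60 / 0.022)  # ~2727, matches gores' (2731) RPM at 22ms DelLD

So halving the queueing delay roughly doubles the RPM, regardless of whether that came from an emptier queue or a genuinely shorter path.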
I tend to run gores like this:
time ./networkQuality --config mensura.cdn-apple.com --port 443 --path /api/v1/gm/config --sattimeout 60 --extended-stats --logger-filename go_networkQuality_$(date +%Y%m%d_%H%M%S)
--sattimeout 60 extends the timeout for the saturation measurement somewhat (before using it I saw gores simply failing on a 1 Gbps access link, though it did give a diagnostic message).
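If one wants repeated runs (e.g. for a round-robin comparison like Al's), a trivial wrapper works; the command line is the one above (I dropped the --logger-filename argument here, add it back per run if logs are wanted), and the run count and rest interval are arbitrary choices of mine:

import subprocess, time

CMD = ["./networkQuality", "--config", "mensura.cdn-apple.com", "--port", "443",
       "--path", "/api/v1/gm/config", "--sattimeout", "60", "--extended-stats"]

for run in range(5):
    subprocess.run(CMD, check=False)  # one gores measurement, output to stdout
    time.sleep(60)                    # rest between runs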
>
> The comparison of networkQuality and goresponsiveness is somewhat confounded by the need to use the Apple server infrastructure for both these methods (the documentation provides this option - thanks!). I don't have admin access to our server at the moment. But the measured differences are large despite the confounding factor.
[SM] Puzzled; I thought that when comparing the two networkQuality variants, having the same back-end sort of helps by reducing the differences to the clients? The comparison to UDPST, however, suffers somewhat.
>
> goresponsiveness has its own, very different output format than networkQuality. There isn't a comparable "-v" option other than -debug (which is extremely detailed). gores only reports RPM for the downstream.
[SM] I agree; it would be nice if gores grew a sequential mode as well.
> I hope that these results will prompt more of the principal evangelists and coders to weigh in.
[SM] No luck ;) but at least "amateur hour" (aka me) is ready to discuss.
>
> It's also worth noting that the RTTrange reported by UDPST is the range above the minimum RTT, and represents the entire 10 second test. The consistency of the maximum of the range (~255ms) seems to indicate that UDPST has characterized the length of the upstream buffer during measurements.
[SM] thanks for explaining that.
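A back-of-the-envelope check of that reading (assuming the ~22 Mbps upstream bottleneck is where the queue drains):

# Implied buffer size = max extra delay * bottleneck rate
delay_s = 0.255      # top of the RTTrange above RTTmin
rate_mbit_s = 22.0   # measured upstream capacity
print(f"{delay_s * rate_mbit_s / 8:.2f} MB")  # ~0.70 MB of buffered upstream data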
Regards
Sebastian
>
> I'm sure there is more to observe that is prompted by these measurements; comments welcome!
>
> Al
>
>
>> -----Original Message-----
>> From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
>> Sent: Tuesday, November 1, 2022 10:51 AM
>> To: Dave Taht <dave.taht at gmail.com>
>> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
>> Subject: Re: [ippm] Preliminary measurement comparison of "Working Latency"
>> metrics
>>
>>
>> Hi Dave,
>> Thanks for trying UDPST (RFC 9097)!
>>
>> Something you might try with starlink:
>> use the -X option and UDPST will generate random payloads.
>>
>> The measurements with -X will reflect the uncompressed rates that are possible.
>> I tried this on a ship-board Internet access: uncompressed rate was 100kbps.
>>
>> A few more quick replies below,
>> Al
>>
>>> -----Original Message-----
>>> From: Dave Taht <dave.taht at gmail.com>
>>> Sent: Tuesday, November 1, 2022 12:22 AM
>>> To: MORTON JR., AL <acmorton at att.com>
>>> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
>>> Subject: Re: [ippm] Preliminary measurement comparison of "Working Latency"
>>> metrics
>>>
>>> Dear Al:
>>>
>>> OK, I took your udpst tool for a spin.
>>>
>>> NICE! 120k binary (I STILL work on machines with only 4MB of flash),
>>> good feature set, VERY fast,
>> [acm]
>> Len Ciavattone (my partner in crime on several measurement projects) is the
>> lead coder: he has implemented many measurement tools extremely efficiently,
>> this one in C-lang.
>>
>>> and in very brief testing, seemed
>>> to be accurate in the starlink case, though it's hard to tell with
>>> them as the rate changes every 15s.
>> [acm]
>> Great! Our guiding principle developing UDPST has been to test the accuracy of
>> measurements against a ground truth. It pays off.
>>
>>>
>>> I filed a couple bug reports on trivial stuff:
>>>
>>> https://github.com/BroadbandForum/obudpst/issues/8
>> [acm]
>> much appreciated... We have an OB-UDPST project meeting this Friday, can
>> discuss then.
>>
>>>
>>> (Adding diffserv and ecn washing or marking detection would be a nice
>>> feature to have)
>>>
>>> Aside from the sheer joy coming from the "it compiles! and runs!"
>>> phase I haven't looked much further.
>>>
>>> I left a copy running on one of my starlink testbeds -
>>> fremont.starlink.taht.net - if anyone wants to try it. It's
>>> instrumented with netperf, flent, irtt, iperf2 (not quite the latest
>>> version from bob, but close), and now udpst, and good to about a gbit.
>>>
>>> nice tool!
>> [acm]
>> Thanks again!
>>
>>>
>>> Has anyone here played with crusader?
>>> (https://github.com/Zoxc/crusader)
>>>
>>> On Mon, Oct 31, 2022 at 4:30 PM Dave Taht <dave.taht at gmail.com> wrote:
>>>>
>>>> On Mon, Oct 31, 2022 at 1:41 PM MORTON JR., AL <acmorton at att.com> wrote:
>>>>
>>>>>> have you tried irtt? (https://github.com/heistp/irtt)
>>>>> I have not. Seems like a reasonable tool for UDP testing. The feature I
>>> didn't like in my scan of the documentation is the use of Inter-packet delay
>>> variation (IPDV) instead of packet delay variation (PDV): variation from the
>>> minimum (or reference) delay. The morbidly curious can find my analysis in
>>> RFC 5481: https://datatracker.ietf.org/doc/html/rfc5481
>>>>
>>>> irtt was meant to simulate high speed voip and one day
>>>> videoconferencing. Please inspect the json output
>>>> for other metrics. Due to OS limits it is typically only accurate to a
>>>> 3ms interval. One thing it does admirably is begin to expose the
>>>> sordid sump of L2 behaviors in 4g, 5g, wifi, and other wireless
>>>> technologies, as well as request/grant systems like cable and gpon,
>>>> especially when otherwise idle.
>>>>
>>>> Here is a highres plot of starlink's behaviors from last year:
>>>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth-historic/108848/3238
>>>>
>>>> clearly showing them "optimizing for bandwidth" and changing next sat
>>>> hop, and about a 40ms interval of buffering between these switches.
>>>> I'd published elsewhere, if anyone cares, a preliminary study of what
>>>> starlink's default behaviors did to cubic and BBR...
>>>>
>>>>>
>>>>> irtt's use of IPDV means that the results won’t compare with UDPST, and
>>> possibly networkQuality. But I may give it a try anyway...
>>>>
>>>> The more the merrier! Someday the "right" metrics will arrive.
>>>>
>>>> As a side note, this paper focuses on RAN uplink latency
>>>> https://dl.ifip.org/db/conf/itc/itc2021/1570740615.pdf which I think
>>>> is a major barrier to most forms of 5G actually achieving good
>>>> performance in a FPS game, if it is true for more RANs. I'd like more
>>>> to be testing uplink latencies idle and with load, on all
>>>> technologies.
>>>>
>>>>>
>>>>> thanks again, Dave.
>>>>> Al
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Dave Taht <dave.taht at gmail.com>
>>>>>> Sent: Monday, October 31, 2022 12:52 PM
>>>>>> To: MORTON JR., AL <acmorton at att.com>
>>>>>> Cc: ippm at ietf.org; Rpm <rpm at lists.bufferbloat.net>
>>>>>> Subject: Re: [ippm] Preliminary measurement comparison of "Working
>>> Latency"
>>>>>> metrics
>>>>>>
>>>>>> Thank you very much for the steer to RFC9097. I'd completely missed
>>> that.
>>>>>>
>>>>>> On Mon, Oct 31, 2022 at 9:04 AM MORTON JR., AL <acmorton at att.com>
>> wrote:
>>>>>>>
>>>>>>> (astute readers may have guessed that I pressed "send" too soon on
>>> previous
>>>>>> message...)
>>>>>>>
>>>>>>> I also conducted upstream tests this time, here are the results:
>>>>>>> (capacity in Mbps, delays in ms, h and m are RPM categories, High
>> and
>>>>>> Medium)
>>>>>>>
>>>>>>> Net Qual                        UDPST (RFC9097)            Ookla
>>>>>>> UpCap  RPM     DelLD  DelMin    UpCap    RTTmin  RTTrange  UpCap  Ping(no load)
>>>>>>> 34     1821 h   33ms  11ms      23 (42)  28      0-252     22     8
>>>>>>> 22      281 m  214ms   8ms      27 (52)  25      5-248     22     8
>>>>>>> 22      290 m  207ms   8ms      27 (55)  28      0-253     22     9
>>>>>>> 21      330 m  182ms  11ms      23 (44)  28      0-255     22     7
>>>>>>> 22      334 m  180ms   9ms      33 (56)  25      0-255     22     9
>>>>>>>
>>>>>>> The Upstream capacity measurements reflect an interesting feature that we
>>>>>>> can reliably and repeatably measure with UDPST. The first ~3 seconds of
>>>>>>> upstream data experience a "turbo mode" of ~50Mbps. UDPST displays this
>>>>>>> behavior in its 1-second sub-interval measurements and has a bimodal
>>>>>>> reporting option that divides the complete measurement interval into two
>>>>>>> time intervals, to report an initial (turbo) max capacity and a
>>>>>>> steady-state max capacity for the later intervals. The UDPST capacity
>>>>>>> results present both measurements: steady-state first.
>>>>>>
>>>>>> Certainly we can expect bi-modal distributions from many ISPs, as, for
>>>>>> one thing, the "speedboost" concept remains popular, except that it's
>>>>>> misnamed, as it should be called speed-subtract or speed-lose. Worse,
>>>>>> it is often configured "sneakily", in that it doesn't kick in for the
>>>>>> typical observed duration of the test: some cut the available
>>>>>> bandwidth about 20s in, others after 1 or 5 minutes.
>>>>>>
>>>>>> One of my biggest issues with the rpm spec so far is that it should,
>>>>>> at least sometimes, run randomly longer than the overly short
>>>>>> interval it runs for, and that the tools should allow manual override
>>>>>> of the length.
>>>>>>
>>>>>> we caught a lot of tomfoolery with flent's rrul test running by
>>>>>> default for 1m.
>>>>>>
>>>>>> Also, AQMs on the path can take a while to find the optimal drop or
>>>>>> mark rate.
>>>>>>
>>>>>>>
>>>>>>> The capacity processing in networkQuality and Ookla appears to report
>>>>>>> the steady-state result.
>>>>>>
>>>>>> Ookla used to basically report the last result. Also it's not a good
>>>>>> indicator of web traffic behavior at all, watching the curve
>>>>>> go up much more slowly in their test on, say, fiber (2ms) vs starlink
>>>>>> (40ms)....
>>>>>>
>>>>>> So adding another mode - how quickly is peak bandwidth actually
>>>>>> reached, would be nice.
>>>>>>
>>>>>> I haven't poked into the current iteration of the goresponsiveness
>>>>>> test at all: https://github.com/network-quality/goresponsiveness it
>>>>>> would be good to try collecting more statistics and histograms and
>>>>>> methods of analyzing the data in that libre-source version.
>>>>>>
>>>>>> How does networkQuality compare vs your tool vs goresponsiveness?
>>>>>>
>>>>>>> I watched the upstream capacity measurements on the Ookla app, and could
>>>>>>> easily see the initial rise to 40-50Mbps, then the drop to ~22Mbps for most
>>>>>>> of the test, which determined the final result.
>>>>>>
>>>>>> I tend to get upset when I see ookla's new test flash a peak result in
>>>>>> the seconds and then settle on some lower number somehow.
>>>>>> So far as I know they are only sampling the latency every 250ms.
>>>>>>
>>>>>>>
>>>>>>> The working latency is about 200ms in networkQuality and about 280ms as
>>>>>>> measured by UDPST (RFC9097). Note that the networkQuality minimum delay is
>>>>>>> ~20ms lower than the UDPST RTTmin, so this accounts for some of the
>>>>>>> difference in working latency. Also, we used the very dynamic Type C load
>>>>>>> adjustment/search algorithm in UDPST during all of this testing, which could
>>>>>>> explain the higher working latency to some degree.
>>>>>>>
>>>>>>> So, it's worth noting that the measurements needed for assessing working
>>>>>>> latency/responsiveness are available in the UDPST utility, and that the
>>>>>>> UDPST measurements are conducted on UDP transport (used by a growing
>>>>>>> fraction of Internet traffic).
>>>>>>
>>>>>> Thx, didn't know of this work til now!
>>>>>>
>>>>>> have you tried irtt?
>>>>>>
>>>>>>>
>>>>>>> comments welcome of course,
>>>>>>> Al
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
>>>>>>>> Sent: Sunday, October 30, 2022 8:09 PM
>>>>>>>> To: ippm at ietf.org
>>>>>>>> Subject: Re: [ippm] Preliminary measurement comparison of "Working
>>>>>> Latency"
>>>>>>>> metrics
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi again RPM friends and IPPM'ers,
>>>>>>>>
>>>>>>>> As promised, I repeated the tests shared last week, this time using both
>>>>>>>> the verbose (-v) and sequential (-s) dwn/up test options of networkQuality.
>>>>>>>> I followed Sebastian's calculations as well.
>>>>>>>>
>>>>>>>> Working Latency & Capacity Summary
>>>>>>>>
>>>>>>>> Net Qual                        UDPST                      Ookla
>>>>>>>> DnCap  RPM     DelLD  DelMin    DnCap  RTTmin  RTTrange    DnCap  Ping(no load)
>>>>>>>> 885     916 m   66ms   8ms      970    28      0-20        940    8
>>>>>>>> 888    1355 h   44ms   8ms      966    28      0-23        940    8
>>>>>>>> 891    1109 h   54ms   8ms      968    27      0-19        940    9
>>>>>>>> 887    1141 h   53ms  11ms      966    27      0-18        937    7
>>>>>>>> 884    1151 h   52ms   9ms      968    28      0-20        937    9
>>>>>>>>
>>>>>>>> With the sequential test option, I noticed that networkQuality achieved
>>>>>>>> nearly the maximum capacity reported almost immediately at the start of a
>>>>>>>> test. However, the reported capacities are low by about 60Mbps, especially
>>>>>>>> when compared to the Ookla TCP measurements.
>>>>>>>>
>>>>>>>> The loaded delay (DelLD) is similar to the UDPST RTTmin + (the high end of
>>>>>>>> the RTTrange), for example 54ms compared to (27+19=46). Most of the
>>>>>>>> networkQuality RPM measurements were categorized as "High". There doesn't
>>>>>>>> seem to be much buffering in the downstream direction.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: ippm <ippm-bounces at ietf.org> On Behalf Of MORTON JR., AL
>>>>>>>>> Sent: Monday, October 24, 2022 6:36 PM
>>>>>>>>> To: ippm at ietf.org
>>>>>>>>> Subject: [ippm] Preliminary measurement comparison of "Working
>>> Latency"
>>>>>>>>> metrics
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi RPM friends and IPPM'ers,
>>>>>>>>>
>>>>>>>>> I was wondering what a comparison of some of the "working latency" metrics
>>>>>>>>> would look like, so I ran some tests using a service on DOCSIS 3.1, with the
>>>>>>>>> downlink provisioned for 1Gbps.
>>>>>>>>>
>>>>>>>>> I intended to run apple's networkQuality, UDPST (RFC9097), and Ookla
>>>>>>>>> Speedtest with as similar connectivity as possible (but we know that the
>>>>>>>>> traffic will diverge to different servers and we can't change that aspect).
>>>>>>>>>
>>>>>>>>> Here's a quick summary of yesterday's results:
>>>>>>>>>
>>>>>>>>> Working Latency & Capacity Summary
>>>>>>>>>
>>>>>>>>> Net Qual      UDPST                       Ookla
>>>>>>>>> DnCap  RPM    DnCap  RTTmin  RTTVarRnge   DnCap  Ping(no load)
>>>>>>>>> 878     62    970    28      0-19         941    6
>>>>>>>>> 891     92    970    27      0-20         940    7
>>>>>>>>> 891    120    966    28      0-22         937    9
>>>>>>>>> 890    112    970    28      0-21         940    8
>>>>>>>>> 903     70    970    28      0-16         935    9
>>>>>>>>>
>>>>>>>>> Note: all RPM values were categorized as Low.
>>>>>>>>>
>>>>>>>>> networkQuality downstream capacities are always on the low side compared to
>>>>>>>>> others. We would expect about 940Mbps for TCP, and that's mostly what Ookla
>>>>>>>>> achieved. I think that a longer test duration might be needed to achieve the
>>>>>>>>> actual 1Gbps capacity with networkQuality; intermediate values observed were
>>>>>>>>> certainly headed in the right direction. (I recently upgraded to Monterey
>>>>>>>>> 12.6 on my MacBook, so should have the latest version.)
>>>>>>>>>
>>>>>>>>> Also, as Sebastian Moeller's message to the list reminded me, I should have
>>>>>>>>> run the tests with the -v option to help with comparisons. I'll repeat this
>>>>>>>>> test when I can make time.
>>>>>>>>>
>>>>>>>>> The UDPST measurements of RTTmin (minimum RTT observed during the test) and
>>>>>>>>> the range of variation above the minimum (RTTVarRnge) add up to very
>>>>>>>>> reasonable responsiveness IMO, so I'm not clear why RPM graded this access
>>>>>>>>> and path as "Low". The UDPST server I'm using is in NJ, and I'm in Chicago
>>>>>>>>> conducting tests, so the minimum 28ms is typical. UDPST measurements were
>>>>>>>>> run on an Ubuntu VM in my MacBook.
>>>>>>>>>
>>>>>>>>> The big disappointment was that the Ookla desktop app I updated over the
>>>>>>>>> weekend did not include the new responsiveness metric! I included the ping
>>>>>>>>> results anyway, and it was clearly using a server in the nearby area.
>>>>>>>>>
>>>>>>>>> So, I have some more work to do, but I hope this is interesting enough to
>>>>>>>>> start some comparison discussions, and bring out some suggestions.
>>>>>>>>>
>>>>>>>>> happy testing all,
>>>>>>>>> Al
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> ippm mailing list
>>>>>>>>> ippm at ietf.org
>>>>>>>>> https://www.ietf.org/mailman/listinfo/ippm
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> ippm mailing list
>>>>>>>> ippm at ietf.org
>>>>>>>> https://www.ietf.org/mailman/listinfo/ippm
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> ippm mailing list
>>>>>>> ippm at ietf.org
>>>>>>> https://www.ietf.org/mailman/listinfo/ippm
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> This song goes out to all the folk that thought Stadia would work:
>>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>
>>>>
>>>>
>>>> --
>>>> This song goes out to all the folk that thought Stadia would work:
>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>> Dave Täht CEO, TekLibre, LLC
>>>
>>>
>>>
>>> --
>>> This song goes out to all the folk that thought Stadia would work:
>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> ippm mailing list
>> ippm at ietf.org
>> https://www.ietf.org/mailman/listinfo/ippm
> _______________________________________________
> Rpm mailing list
> Rpm at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm