[Rpm] [LibreQoS] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA

rjmcmahon rjmcmahon at rjmcmahon.com
Mon Jan 9 16:00:11 EST 2023


The target audience for iperf 2 latency metrics is network engineers and 
not end users. My belief is that a latency complaint from an end user is 
a defect escape, i.e. it should have been caught earlier by experts in 
our industry. That's part of the reason why I think open source tooling 
that is accurate and trustworthy is critical to our industry moving 
forward & improving. Minimize barriers to measuring & understanding 
issues, so to speak.
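
As a concrete illustration, here's a minimal sketch of the kind of 
short, engineer-facing latency test iperf 2 can run, wrapped in Python 
(this assumes iperf 2.0.14 or later for --trip-times, which also needs 
the client and server clocks synchronized; the server hostname is a 
placeholder):

  import subprocess

  # Short iperf 2 client run with enhanced latency reporting.
  # --trip-times stamps writes so one-way delay can be reported; it
  # requires synchronized clocks between client and server (e.g. PTP).
  # "server.example.net" is a placeholder, not a real test server.
  cmd = [
      "iperf", "-c", "server.example.net",
      "-e",            # enhanced reports (RTT, latency, etc.)
      "-i", "1",       # report every second
      "-t", "2",       # keep the test short
      "--trip-times",  # enable one-way latency measurement
  ]
  print(subprocess.run(cmd, capture_output=True, text=True).stdout)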

I do hope one day we move to segment routing where latency telemetry 
drives forwarding planes. The early days of the internet were about 
connectivity. Then came capacity as demand grew. Now we need to improve 
the speed of causality across what has become a massively distributed 
computer system owned by no single entity.

https://www.segment-routing.net/tutorials/2018-03-06-sr-delay-measurement/

Unfortunately, e2e latency suffers a form of tragedy of the commons, as 
each segment tends to be unaware of the full path and of its own 
relative contribution.

The ancient Greek philosopher Aristotle pointed out the problem with 
common resources: ‘What is common to many is taken least care of, for 
all men have greater regard for what is their own than for what they 
possess in common with others.’

Bob
> I'm not offering a complete solution here....  I'm not so keen on
> speed tests.  It's akin to testing your car's performance by flooring
> it till you hit the governor and hard braking till you stop *while in
> traffic*.   That doesn't demonstrate the utility of the car.
> 
> Data is already being transferred, so let's measure that.  Run some
> simple routine tests intentionally during low-, mid-, and
> high-congestion periods to see how the service is actually performing
> for the end user.  You don't need to generate the traffic on a link to
> measure how much traffic a link can handle.  And a fairly rudimentary
> way to determine congestion on a service would be frequent latency
> tests to a 'known good' service, i.e. high-capacity services that are
> unlikely to experience congestion.
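>
> A minimal sketch of what such a rudimentary latency probe could look
> like, in Python (the 'known good' hostname is a placeholder, not a
> real service):
>
>   import socket, statistics, time
>
>   # Estimate path latency by timing TCP handshakes to a "known good",
>   # high-capacity endpoint that is unlikely to be congested.
>   HOST, PORT, SAMPLES = "known-good.example.net", 443, 5
>
>   def probe_ms():
>       t0 = time.monotonic()
>       with socket.create_connection((HOST, PORT), timeout=2):
>           pass  # connect time approximates one round trip
>       return (time.monotonic() - t0) * 1000
>
>   rtts = [probe_ms() for _ in range(SAMPLES)]
>   print(f"median {statistics.median(rtts):.1f} ms over {SAMPLES} probes")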
> 
> There are few use cases that match a 2-minute speed test outside of
> 'wonder what my internet connection can do'.  And in those few use
> cases, such as a big file download, a routine latency test is a really
> great measure of the quality of a service.  Sure, troubleshooting by
> the ISP might include a full-bore multi-minute speed test, but that's
> really not useful for the consumer.
> 
> Further, exposing this data to the end users, IMO, is likely better as
> a chart of congestion and flow durations and some scoring.  I.e.,
> slice out 7-8pm: during this segment you were able to pull 427 Mbps
> without congestion, and Netflix or another streaming service used
> approximately 6% of capacity.  Your service was busy for 100% of this
> time (likely measuring bufferbloat).  Express it as a pretty chart
> with consumer-friendly language.
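>
> As a toy sketch, that slice summary might be computed like this (all
> numbers are invented to match the example above):
>
>   # Per-interval throughput seen during the 7-8pm slice, in Mbps;
>   # 427 is the peak uncongested pull, ~25 Mbps is one HD stream.
>   mbps_samples = [427, 25, 30, 424]
>   capacity = max(mbps_samples)              # best uncongested pull
>   busy = sum(m > 1 for m in mbps_samples) / len(mbps_samples)
>   stream_share = 25 / capacity              # the streaming flow's share
>   print(f"capacity ~{capacity} Mbps, busy {busy:.0%}, "
>         f"streaming ~{stream_share:.0%} of capacity")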
> 
> 
> When you guys are talking about per-segment latency testing, you're
> really talking about metrics for operators to be concerned with, not
> end users.  It's useless information for them.  I had a woman about 2
> months ago complain about her frame rates because her internet
> connection was "15 emm ess's" and that was terrible and I needed to
> fix it.  (A slow computer was the problem, obviously.)  But that data
> from speedtest.net didn't actually help her at all; it just confused
> her.
> 
> Running timed speed tests at 3am (Eero, I'm looking at you) is pretty
> pointless.  Running speed tests during busy hours is a little bit
> harmful overall, considering it pushes into the oversold capacity of
> every ISP.
> 
> I could talk endlessly about how useless speed tests are to end user 
> experience.
> 
> 
> On Mon, Jan 9, 2023 at 12:20 PM rjmcmahon via LibreQoS
> <libreqos at lists.bufferbloat.net> wrote:
>> 
>> User-based, long-duration tests seem fundamentally flawed. QoE for
>> users is driven by user expectations. And if a user won't wait on a
>> long test, they for sure aren't going to wait minutes for a web page
>> download. If it's a long-duration use case, e.g. a file download, then
>> latency isn't typically driving QoE.
>> 
>> Note: Even for internal tests, we try to keep our automated tests
>> down to 2 seconds. There are reasons to test for minutes (things like
>> PHY calibrations in our chips), but that's more the exception than the
>> rule.
>> 
>> Bob
>> >> 0) None of the tests last long enough.
>> >
>> > The user-initiated ones tend to be shorter - likely because the
>> > average user does not want to wait several minutes for a test to
>> > complete. But IMO this is where a test platform like SamKnows, Ookla's
>> > embedded client, NetMicroscope, and others can come in - since they
>> > run in the background on some randomized schedule w/o user
>> > intervention. Thus, the user's time-sensitivity is no longer a factor
>> > and a longer duration test can be performed.
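>> >
>> > As a toy sketch, such a randomized background schedule could be as
>> > simple as the following (the test body is a placeholder, not any
>> > vendor's client):
>> >
>> >   import random, time
>> >
>> >   def run_short_test():
>> >       # placeholder for any short (1-2 second) measurement
>> >       print("measuring at", time.strftime("%H:%M:%S"))
>> >
>> >   # one test at a uniformly random offset within each hour, so the
>> >   # user never has to wait on it
>> >   while True:
>> >       time.sleep(random.uniform(0, 3600))
>> >       run_short_test()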
>> >
>> >> 1) Not testing up + down + ping at the same time
>> >
>> > You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in
>> > IPPM...
>> >
>> > JL
>> >

