[Starlink] [Rpm] [LibreQoS] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA

Sebastian Moeller moeller0 at gmx.de
Mon Mar 13 06:02:15 EDT 2023


Hi Dan,


> On Jan 9, 2023, at 20:56, dan via Rpm <rpm at lists.bufferbloat.net> wrote:
> 
> I'm not offering a complete solution here....  I'm not so keen on
> speed tests.  It's akin to testing your car's performance by flooring
> it till you hit the governor and hard braking till you stop *while in
> traffic*.   That doesn't demonstrate the utility of the car.
> 
> Data is already being transferred, let's measure that.  

	[SM] For a home link that means you need to measure on the router, as end-hosts will only ever see the fraction of traffic they sink/source themselves...

>  Doing some
> routine simple tests intentionally during low, mid, high congestion
> periods to see how the service is actually performing for the end
> user.

	[SM] No ISP I know of publishes which periods are low, mid, or high congestion, so end-users will need to make some assumptions here (e.g. by looking at the per-day load graphs of big traffic exchanges like DE-CIX: https://www.de-cix.net/en/locations/frankfurt/statistics ).
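One way to approximate such periods oneself is to run a background latency prober and bucket the results by hour of day. Here is a rough sketch of the classification step only; the thresholds and the sample RTT values are illustrative assumptions, not measured data:

```python
# Sketch: classify hours of the day into low/mid/high congestion
# buckets from per-hour RTT medians collected by a background prober.
# Thresholds and sample values below are illustrative assumptions.

def classify_hours(rtt_by_hour_ms, baseline_ms):
    """Label each hour relative to an uncongested baseline RTT."""
    labels = {}
    for hour, rtt in rtt_by_hour_ms.items():
        delta = rtt - baseline_ms
        if delta < 5:        # within ~5 ms of baseline: low congestion
            labels[hour] = "low"
        elif delta < 20:     # moderate queueing delay: mid
            labels[hour] = "mid"
        else:                # sustained bufferbloat: high
            labels[hour] = "high"
    return labels

if __name__ == "__main__":
    # Hypothetical per-hour medians: quiet at 03:00, busy at 20:00.
    samples = {3: 12.0, 12: 18.0, 20: 45.0}
    print(classify_hours(samples, baseline_ms=11.0))
```

The baseline would come from measurements in a known-quiet period (see the 3am discussion further down).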


>  You don't need to generate the traffic on a link to measure how
> much traffic a link can handle.

	[SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
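For reference, the arithmetic behind packet-pair estimation is simple enough; the imprecision comes from cross-traffic perturbing the measured gap, not from the formula. A minimal sketch with illustrative numbers:

```python
# Sketch of the packet-pair idea: two back-to-back packets are
# serialized by the bottleneck link, so the receiver-side gap
# (dispersion) encodes the bottleneck capacity:
#     capacity = packet_size / dispersion
# Real measurements are noisy because cross-traffic can widen or
# compress that gap; numbers below are illustrative only.

def packet_pair_capacity_bps(packet_size_bytes, dispersion_s):
    """Estimate bottleneck capacity in bit/s from one packet pair."""
    return packet_size_bytes * 8 / dispersion_s

if __name__ == "__main__":
    # A 1500-byte packet and a 120 microsecond gap imply ~100 Mbit/s.
    est = packet_pair_capacity_bps(1500, 120e-6)
    print(f"{est / 1e6:.0f} Mbit/s")
```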


>  And determining congestion on a
> service in a fairly rudimentary way would be frequent latency tests to
> 'known good' service ie high capacity services that are unlikely to
> experience congestion.

	[SM] Yes, that sort of works, see e.g. https://github.com/lynxthecat/cake-autorate for a home-made approach by non-networking people to estimate whether the immediate load is at capacity or not, and using that information to control a traffic shaper to "bound" latency under load.
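The control loop at the heart of that approach can be sketched in a few lines. This is NOT cake-autorate's actual algorithm, just an illustration of the idea (compare current RTT against a baseline; back off the shaper on a latency spike, probe upward otherwise); all constants are invented for the example:

```python
# Illustrative sketch of latency-driven shaper control: multiplicative
# decrease on a latency spike, additive increase otherwise (AIMD-like).
# All rates/thresholds are made-up example values, not recommendations.

def next_shaper_rate(rate_mbps, rtt_ms, baseline_ms,
                     floor_mbps=10.0, ceiling_mbps=100.0,
                     delta_thresh_ms=15.0, step=0.9):
    """Return the next shaper rate given one RTT sample."""
    if rtt_ms - baseline_ms > delta_thresh_ms:
        # Latency spike: assume the link is saturated, back off.
        return max(floor_mbps, rate_mbps * step)
    # Latency fine: probe upward toward the link's ceiling.
    return min(ceiling_mbps, rate_mbps + 1.0)

if __name__ == "__main__":
    rate = 100.0
    for rtt in (40.0, 40.0, 12.0, 12.0):  # two spikes, then recovery
        rate = next_shaper_rate(rate, rtt, baseline_ms=10.0)
        print(f"rtt={rtt} ms -> shaper={rate:.1f} Mbit/s")
```

In a real deployment the RTT samples would come from pings to several "known good" reflectors and the output would be fed to the traffic shaper (e.g. cake's bandwidth parameter).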

> 
> There are few use cases that match a 2 minute speed test outside of
> 'wonder what my internet connection can do'.

	[SM] I would have agreed some months ago, but ever since the kids started to play more modern games than tetris/minecraft, long-duration multi-flow downloads have become a staple in our network. OK, no one really cares about the intra-flow latency of these download flows, but we do care that the rest of our traffic stays responsive.


>  And in those few use
> cases such as a big file download, a routine latency test is a really
> great measure of the quality of a service.  Sure, troubleshooting by
> the ISP might include a full bore multi-minute speed test but that's
> really not useful for the consumer.

	[SM] I mildly disagree: if it is informative for the ISP's technicians, it is also informative for the end-customers. Not all ISPs are so enlightened that they pro-actively solve issues for their customers (but some are!), so occasionally it helps to be able to do such diagnostic measurements oneself.


> 
> Further, exposing this data to the end users, IMO, is likely better as
> a chart of congestion and flow durations and some scoring.  ie, slice
> out 7-8pm, during this segment you were able to pull 427Mbps without
> congestion, netflix or streaming service use approximately 6% of
> capacity.  Your service was busy for 100% of this time ( likely
> measuring buffer bloat ).    Expressed as a pretty chart with consumer
> friendly language.

	[SM] Sounds nice.


> When you guys are talking about per segment latency testing, you're
> really talking about metrics for operators to be concerned with, not
> end users.  It's useless information for them.

	[SM] Well, is it really useless? If I know the to-be-expected latency-under-load increase, I can eyeball e.g. how far away a server can be while still allowing me to interact with it in a "snappy" way.


>  I had a woman about 2
> months ago complain about her frame rates because her internet
> connection was 15 emm ess's and that was terrible and I needed to fix
> it.  (slow computer was the problem, obviously) but that data from
> speedtest.net didn't actually help her at all, it just confused her.

	[SM] The solution to a lack of knowledge, IMHO, should be to teach people what they need to know, not to hide information that could be misinterpreted (because that applies to all information).


> 
> Running timed speed tests at 3am (Eero, I'm looking at you) is pretty
> pointless.  

	[SM] I would argue that this is likely a decent period to establish baseline values for uncongested conditions (that is, uncongested by traffic sources other than the measuring network itself).

> Running speed tests during busy hours is a little bit
> harmful overall considering it's pushing into oversells on every ISP.

	[SM] Oversell, or under-provisioning, IMHO is a viable technique to reduce costs, but it is no excuse for short-changing one's customers. If an ISP advertises and sells X Mbps, it needs to be willing to actually deliver that, independently of how "active" a given shared segment is. By this I do NOT mean that the contracted speed needs to be available 100% of the time, but that there is a reasonably high chance of getting close to the contracted rates. That may mean increasing prices to match cost targets, reducing the maximum advertised rates, or moving to a completely different kind of contract (say, 1/Nth of a Gigabit link with equitable sharing among all N users on the link). Under-provisioning is fine as an optimization to increase profitability, but IMHO it is no excuse for not delivering on one's contract.
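The "1/Nth with equitable sharing" idea corresponds to max-min fair allocation: users who want less than their fair share keep their demand, and the leftover is re-split among the rest. A small sketch with made-up demands:

```python
# Sketch: max-min fair allocation of a shared link. Users demanding
# less than the current equal share are satisfied first; the freed-up
# capacity is re-split among the remaining users. Demands are
# illustrative example values.

def max_min_fair(capacity, demands):
    """Return per-user allocations (same order as demands)."""
    remaining = capacity
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    alloc = [0.0] * len(demands)
    left = len(demands)
    for i in order:           # satisfy smallest demands first
        share = remaining / left
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
        left -= 1
    return alloc

if __name__ == "__main__":
    # 1000 Mbit/s shared by four users: the two light users get their
    # full demand, the two heavy users split the remainder equally.
    print(max_min_fair(1000, [100, 200, 800, 900]))
```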

> I could talk endlessly about how useless speed tests are to end user experience.

	[SM] My take on this is that a satisfied customer is unlikely to make a big fuss, and delivering great responsiveness is a great way for an ISP to make end-customers care less about achievable throughput. Yes, some will still care, e.g. gamers that insist on loading multi-gigabyte updates just before playing instead of overnight (a strategy I have some sympathy for: shutting down power consumers fully over night instead of wasting watts on "stand-by" of some sort is a more reliable way to save power/cost).

Regards
	Sebastian


> 
> 
> On Mon, Jan 9, 2023 at 12:20 PM rjmcmahon via LibreQoS
> <libreqos at lists.bufferbloat.net> wrote:
>> 
>> User based, long duration tests seem fundamentally flawed. QoE for users
>> is driven by user expectations. And if a user won't wait on a long test
>> they for sure aren't going to wait minutes for a web page download. If
>> it's a long duration use case, e.g. a file download, then latency isn't
>> typically driving QoE.
>> 
>> Note: Even for internal tests, we try to keep our automated tests down to
>> 2 seconds. There are reasons to test for minutes (things like phy cals
>> in our chips) but it's more of the exception than the rule.
>> 
>> Bob
>>>> 0) None of the tests last long enough.
>>> 
>>> The user-initiated ones tend to be shorter - likely because the
>>> average user does not want to wait several minutes for a test to
>>> complete. But IMO this is where a test platform like SamKnows, Ookla's
>>> embedded client, NetMicroscope, and others can come in - since they
>>> run in the background on some randomized schedule w/o user
>>> intervention. Thus, the user's time-sensitivity is no longer a factor
>>> and a longer duration test can be performed.
>>> 
>>>> 1) Not testing up + down + ping at the same time
>>> 
>>> You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in
>>> IPPM...
>>> 
>>> JL
>>> 
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> _______________________________________________
>> LibreQoS mailing list
>> LibreQoS at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/libreqos



More information about the Starlink mailing list