[Starlink] Researchers Seeking Probe Volunteers in USA
David Fernández
davidfdzp at gmail.com
Wed Jan 4 05:07:29 EST 2023
AFAIK, quality of service (QoS) refers to network characteristics that
can be measured quantitatively, without human opinion being involved:
throughput, latency and packet loss, and also availability (MTBF/(MTBF +
MTTR)). Quality of experience (QoE), on the other hand, refers to what
users actually experience. It is subjective, it must be measured using
subjects who are not engineers or telecom technicians, and the ITU
defines it as the MOS (Mean Opinion Score) in Recommendation ITU-T P.800.1.
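As a small illustration of the availability formula above, here is a
sketch (the MTBF/MTTR figures are made up, not from any real network):

```python
# Availability from MTBF and MTTR, the standard reliability formula
# mentioned above: A = MTBF / (MTBF + MTTR).
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. a link that fails every 1000 h and takes 2 h to repair
# is up about 99.8% of the time:
print(f"availability = {availability(1000.0, 2.0):.4%}")
```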
As I am in Europe, I cannot get a NetForecast device, but I suspect
that NetForecast, like SamKnows and others, measures QoS and perhaps
provides a dashboard, as SamKnows does with their whiteboxes. On the
SamKnows website I can check the measurements my whitebox takes on my
Internet access: upload and download speed, measured every hour or so
(when the access is idle, they claim). I can also see measured latency
(not under load, unfortunately) and packet loss. The SamKnows dashboard
also shows periods of time when the access was not working
(disconnections), but you cannot tell whether someone decided to
reboot the router for some reason or something happened on the ISP side.
Then, regarding the measurement of QoE, I recently read about the case
of video broadcasters: not streamers, but the good old TV, either
terrestrial or via satellite. They still, nowadays, have people
staring at screens to check that the broadcast is going out well. They
use a lot of redundancy for critical live events, so the signal will
not be lost if something fails, and they measure quality of service
parameters, but in the end the only way to check that everything is
going well, the QoE, is to have somebody watching the screens and to
assume that your users are seeing the same as your operators.
As MOS is complex to measure, some estimators based on QoS
measurements exist, such as the good old E-Model (ITU-T G.107) for
voice calls. For video, Netflix invented VMAF, for example.
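For a flavour of how such estimators work, here is the final step of
the E-Model: the mapping from the rating factor R (which G.107
computes from many impairment parameters, omitted here) to an
estimated MOS. Only this R-to-MOS conversion is shown; the example R
value is the default for an unimpaired narrowband call:

```python
# R-factor to MOS mapping from the E-Model (ITU-T G.107).
# Computing R itself requires many impairment parameters (delay,
# packet loss, codec, echo...); this sketch only shows the last step.
def r_to_mos(r: float) -> float:
    if r < 0.0:
        return 1.0
    if r > 100.0:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# The G.107 default (unimpaired narrowband) R is 93.2:
print(round(r_to_mos(93.2), 2))  # ~4.41
```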
Interestingly, with the recent advances in AI, there is a company that
claims it can replace the humans with AI that analyzes the videos and
provides a MOS: https://www.video-mos.com
This is being considered by the Spanish national TV broadcaster for
all channels, including streaming over the Internet to any device, not
only TVs.
Maybe in the future NetForecast and SamKnows will use AI to provide a
QoE measure too, not only QoS.
Anyway, there is the question of how to relate QoS (what you can
provision in the network by tuning the equipment configuration) to
QoE. QoE degrades sharply when the QoS is insufficient at some point,
unless the application or the transport detects this and mitigates it
by reducing the bitrate or buffering (not too much). On the other
hand, provisioning too much QoS (e.g. in terms of throughput) is a
waste, as beyond a certain point it no longer improves the QoE. Users
are happy when their websites load in under two seconds, but most of
them cannot appreciate sub-second loading times.
To make things more complex, applications are adaptive, trying to
provide the best quality they can according to the network conditions
they measure, because, well, it is better to reduce the video
resolution momentarily than to lose the image completely.
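That adaptation logic can be sketched in a few lines: pick the highest
bitrate rung that fits under the measured throughput with a safety
margin. The ladder and the margin here are hypothetical values, not
taken from any real player:

```python
# Minimal adaptive-bitrate (ABR) rung selection sketch.
# Ladder rungs and the 0.8 safety margin are illustrative only.
LADDER_KBPS = [400, 1200, 2500, 5000, 8000]

def pick_rung(measured_kbps: float, margin: float = 0.8) -> int:
    """Highest rung whose bitrate fits under margin * throughput."""
    budget = measured_kbps * margin
    best = LADDER_KBPS[0]  # never drop below the lowest rung
    for rung in LADDER_KBPS:
        if rung <= budget:
            best = rung
    return best

print(pick_rung(3000.0))  # 2400 kbps budget -> 1200 kbps rung
```

Real players add hysteresis and buffer-level awareness on top of this,
precisely to avoid oscillating between rungs when conditions wobble.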
The measurement of QoE and the QoS to QoE relationship is still a
research area, a fascinating one, I think, involving the whole stack,
from image and audio codecs and web technologies to the physical
layer, passing through transport protocols and the data link
technologies.
Regards,
David
> Date: Tue, 3 Jan 2023 17:58:32 -0500 (EST)
> From: "David P. Reed" <dpreed at deepplum.com>
> To: starlink at lists.bufferbloat.net
> Subject: Re: [Starlink] Researchers Seeking Probe Volunteers in USA
> Message-ID: <1672786712.106922180 at apps.rackspace.com>
> Content-Type: text/plain; charset="utf-8"
>
>
> A serious question:
>
>
>> ... The QMap Probe is
>> preconfigured to automatically test various network metrics with little
>> burden on
>> your available bandwidth.
>
> Those of us here, like me and Dave Taht, who have measured the big elephants
> in the room (esp. for Starlink) like "lag under load" and "fairness with
> respect to competing traffic on the same <link>" probably were not
> consulted, if the goal is "little burden on your available bandwidth".
>
> I've spent many years reading papers about "low cost metrics" for lag and
> fairness in the Internet context, starting with the old "packet-pair"
> techniques, and basically they are almost impossible to interpret and
> compare between service provider architectures. It's hard enough to compare
> DOCSIS 3 setups with other DOCSIS 3 CMTS's in other cities, or LTE data
> networks between different cities.
>
> Frankly, I expect the results will be treated like other "quality metrics" -
> J.D. Power comes to mind from consulting experience in the automotive
> industry - and be cherry-picked to distort the results.
>
>> This data is used to estimate the quality of internet
>> service.
>
>
> I have NEVER seen a technical definition of "quality of service" that made
> sense in terms of how the users experience their use of the Internet. It
> would be wonderful if such a definition actually existed. So how does
> relative "estimation" of an undefined concept work?
>
> What questions really matter to users, in particular? Well, "availability"
> might matter, but availability is relative to "momentary need". A network
> that is 99.9% available, but the 0.1% of the time that the user NEEDS it is
> what matters to the user, not the rest of the 86,400 seconds each day.
>
> [As an aside, in a proceeding I participated under the CRTC, Canada's "FCC"
> regulator on Network Management that focused on "quality", we queried that
> since none of the operators in Canada actually measured "response time" of
> their networks in any way, so how could they know that they were improving
> service? The response on the record from some of the largest Broadband ISPs
> in Canada was discouraging. They said I was wrong, and that they constantly
> measured "utilization" of the network capacity at every router, and the
> *average* utilization was almost always < 85%. They then invoked Little's
> Lemma in queueing theory to say that proved that the quality of service was
> *perfect*.
> This was in a legal regulatory proceeding, *under oath*. I just cannot
> understand how folks technical enough to invoke Little's Lemma could be so
> ignorant. Little's Lemma isn't at all good at converting "average
> utilization" to "user experienced lag under load", it's mathematically
> *invalid*. But what is worse is that they had no idea of what user
> experienced "quality" was. It's like a software vendor saying they have no
> bugs, that they know about, when they have no way for users to report bugs
> at all. OR in the modern context, where reporting bugs is a hassle and
> there's no "bug bounty" or expectation that the company will fix a reported
> bug in a timely manner.]
>
>
>
>
>> A public report of our findings will be published on our website in 2023.
>
> By all means participate if you want, but I suspect that the "raw data" will
> not be made available, and looking at the existing reports, it will be hard
> to extract meaningful comparisons relevant to real user experience at the
> test sites.
>
>
>> Please check out some of our existing reports to get a better feel of what
>> we
>> measure:
>> https://www.netforecast.com/audit-reports/.
>>
>> The QMap Probe requires no regular maintenance or monitoring by you. It
>> may
>> occasionally need rebooting (turning off and on), and we would contact you
>> via
>> email or text to request this. The device sends various test packets to
>> the
>> internet and records their response characteristics. Our device has no
>> knowledge
>> of -- and does not communicate out -- any information about you or any
>> devices in
>> your home.
>>
>> We will include a prepaid shipping label for returning the QMap Probe in
>> the
>> original box (please keep this!). Once we receive the device back, we will
>> send
>> you a $200 Amazon gift card (one per household).
>>
>> To volunteer, please fill out this relatively painless survey:
>> https://www.surveymonkey.com/r/8VZSB3M.
>> Thank you!
>>
>> ///END///