<p style="margin:0in"><span style="font-size:10.0pt;font-family:"Arial",sans-serif">> Those of us here, like me and Dave Taht, who have measured the big elephants in the room (esp. for Starlink) like "lag under load" and "fairness with respect to competing
traffic on the same <link>" probably were not consulted, if the goal is "little burden on your available bandwidth".<o:p></o:p></span></p>
<p style="margin:0in;overflow-wrap: break-word"><span style="font-size:10.0pt;font-family:"Arial",sans-serif"> <o:p></o:p></span></p>
<p style="margin:0in;overflow-wrap: break-word"><span style="font-size:10.0pt;font-family:"Arial",sans-serif">I don’t have specifics for their test config, but most of the platforms would determine ‘little burden’ by looking for cross traffic (aka user demand
on the connection) and if it is non-existent/low then running tests that can highly utilize the link capacity – whether for a working latency test or whatever.
<o:p></o:p></span></p>
<p style="margin:0in"><span style="font-size:10.0pt;font-family:"Arial",sans-serif"> <o:p></o:p></span></p>
<p style="margin:0in;overflow-wrap: break-word"><span style="font-size:10.0pt;font-family:"Arial",sans-serif">> Frankly, I expect the results will be treated like other "quality metrics" - J.D. Power comes to mind from consulting experience in the automotive
industry - and be cherry-picked to distort the results.<o:p></o:p></span></p>
<p style="margin:0in;overflow-wrap: break-word"><span style="font-size:10.0pt;font-family:"Arial",sans-serif"><o:p> </o:p></span></p>
<p style="margin:0in"><span style="font-size:14.0pt">I dunno – I think the research & measurement community seems to be coalescing around certain types of working latency / responsiveness measures as being pretty good & predictive of real end user application
QoE. <o:p></o:p></span></p>
<p style="margin:0in"><span style="font-size:10.0pt;font-family:"Arial",sans-serif"> <o:p></o:p></span></p>
<p style="margin:0in;overflow-wrap: break-word"><span style="font-size:10.0pt;font-family:"Arial",sans-serif">> By all means participate if you want, but I suspect that the "raw data" will not be made available, and looking at the existing reports, it will
be hard to extract meaningful comparisons relevant to real user experience at the test sites.<o:p></o:p></span></p>
<p style="margin:0in"><span style="font-size:14.0pt"><o:p> </o:p></span></p>
<p style="margin:0in"><span style="font-size:14.0pt">Not sure if the raw data will be available. Even if not, they may publish the parameters of the tests themselves.
<o:p></o:p></span></p>
<p style="margin:0in"><span style="font-size:14.0pt"><o:p> </o:p></span></p>
<p style="margin:0in"><span style="font-size:14.0pt">JL<o:p></o:p></span></p>
<p style="margin:0in"><span style="font-size:10.0pt;font-family:"Arial",sans-serif"><o:p></o:p></span></p>
</div>
</div>
</body>
</html>