* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
@ 2023-01-04 10:07 David Fernández
2023-01-09 14:50 ` Livingood, Jason
0 siblings, 1 reply; 11+ messages in thread
From: David Fernández @ 2023-01-04 10:07 UTC (permalink / raw)
To: starlink
AFAIK, quality of service (QoS) refers to network characteristics you
can measure quantitatively, without human opinion being involved:
throughput, latency and packet loss, plus availability (MTBF/(MTBF +
MTTR)). Quality of experience (QoE), on the other hand, refers to what
users actually experience. It is subjective, it must be assessed using
subjects who are not engineers or telecom technicians, and the ITU
defines it as the MOS (Mean Opinion Score), in Recommendation ITU-T
P.800.1.
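As a quick illustration of that availability formula (a sketch with
made-up numbers, not data from any real ISP):

```python
# Availability = MTBF / (MTBF + MTTR), as in the definition above.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures: a link failing every 1000 h, repaired in 2 h
# on average, is available about 99.8% of the time.
print(f"{availability(1000, 2):.4%}")
```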
As I am in Europe, I cannot get a NetForecast device, but I suspect
that NetForecast, like SamKnows and others, measures QoS and perhaps
provides a dashboard, as SamKnows does with their whiteboxes. On the
SamKnows website I can check the measurements my whitebox makes on my
Internet access: upload and download speed, measured every hour or so
(when the Internet access is not otherwise in use, they claim). I can
also see measured latency (not under load, unfortunately) and packet
loss. The SamKnows dashboard also shows periods when the access was
not working (disconnections), but you cannot tell whether someone
decided to reboot the router for some reason or whether something
happened at the ISP.
Then, regarding the measurement of QoE, I recently read about the case
of video broadcasters: yes, not streamers, but the good old TV, either
terrestrial or via satellite. Even nowadays, they have people stare at
screens to verify that the broadcast is going out well. They use a lot
of redundancy for critical live events, so the signal will not be lost
if something fails, and they measure quality of service parameters,
but in the end the only way to check that everything is going well,
the QoE, is to have somebody watching the screens and to assume that
your users are seeing the same as your operators.
As MOS is complex to measure, estimators based on QoS measurements
exist, such as the good old E-model for voice calls. For video,
Netflix invented VMAF, for example.
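For reference, the E-model's standard conversion from its transmission
rating factor R to an estimated MOS (ITU-T G.107) can be sketched like
this; the formula is the published one, but this is only the final
R-to-MOS step, not a full E-model implementation:

```python
def r_to_mos(r: float) -> float:
    """Convert an E-model R factor to an estimated MOS (ITU-T G.107)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# The G.107 default parameter set yields R = 93.2, i.e. a MOS of about 4.4.
print(round(r_to_mos(93.2), 2))
```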
Interestingly, with the recent advances in AI, there is a company that
claims it can replace the humans with AI that analyzes the videos and
provides a MOS: https://www.video-mos.com
This is being considered by the Spanish national TV broadcaster for
all its channels, including streaming over the Internet to any device,
not only TVs.
Maybe in the future NetForecast and SamKnows will use AI to provide a
QoE measure too, not only QoS.
Anyway, there is still the question of how to relate QoS (what you can
provision in the network by tuning the equipment configuration) to
QoE. QoE degrades sharply once QoS becomes insufficient at some point,
unless the application or the transport detects this and mitigates it,
e.g. by reducing the bitrate or by buffering (not too much). On the
other hand, providing too much QoS (e.g. in terms of throughput) is a
waste, because beyond a certain point, reached quite soon, it no
longer improves QoE. Users are happy when their websites load in under
two seconds, but most of them cannot appreciate sub-second loading
times.
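A toy model makes the diminishing returns concrete (my own
illustrative numbers, not a validated QoE model): once protocol round
trips dominate page load time, extra throughput barely helps.

```python
def page_load_time(size_mb: float, throughput_mbps: float,
                   rtt_ms: float, rtts_needed: int = 10) -> float:
    """Crude model: serialization time plus sequential round trips
    (DNS, TCP/TLS handshakes, dependent requests)."""
    transfer_s = size_mb * 8 / throughput_mbps
    round_trips_s = rtts_needed * rtt_ms / 1000.0
    return transfer_s + round_trips_s

# A 2 MB page at 50 ms RTT: going from 100 to 1000 Mbps saves little,
# because round trips, not bandwidth, dominate.
for mbps in (10, 100, 1000):
    print(mbps, round(page_load_time(2, mbps, 50), 2))
```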
To make things more complex, applications are adaptive: they try to
provide the best quality they can, given the network conditions they
measure, because, well, it is better to reduce video resolution
momentarily than to lose the image completely.
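That adaptation can be sketched as a simple bitrate-ladder selection,
roughly what DASH/HLS players do; the ladder rungs and safety margin
below are invented for illustration:

```python
LADDER_KBPS = [235, 750, 1750, 4300, 7500]  # hypothetical encoding rungs

def pick_bitrate(measured_kbps: float, safety: float = 0.8) -> int:
    """Pick the highest rung fitting within a safety margin of measured
    throughput; fall back to the lowest rung rather than lose the image."""
    budget = measured_kbps * safety
    fitting = [b for b in LADDER_KBPS if b <= budget]
    return max(fitting) if fitting else LADDER_KBPS[0]

print(pick_bitrate(6000))  # link sustains ~6 Mbps: pick a high rung
print(pick_bitrate(300))   # degraded link: step down, keep playing
```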
The measurement of QoE, and the relationship between QoS and QoE, is
still a research area, and a fascinating one, I think, involving the
whole stack: from image and audio codecs and web technologies down to
the physical layer, passing through transport protocols and data link
technologies.
Regards,
David
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-04 10:07 [Starlink] Researchers Seeking Probe Volunteers in USA David Fernández
@ 2023-01-09 14:50 ` Livingood, Jason
2023-01-09 15:45 ` Doc Searls
0 siblings, 1 reply; 11+ messages in thread
From: Livingood, Jason @ 2023-01-09 14:50 UTC (permalink / raw)
To: David Fernández, starlink
> AFAIK, quality of service (QoS) refers to network characteristics you
> can measure quantitatively without human opinion being involved, i.e.:
> throughput, latency and packet losses, also availability (MTBF/(MTBF +
> MTTR)). Then, quality of experience (QoE) refers to what the users
> experience, it is subjective, it must be done using subjects that are
> not engineers or telecom technicians, and it is defined by the ITU as
> the MOS (Mean Opinion Score), in Recommendation ITU-T P.800.1.
ISTM that everyone has a different view of QoS & QoE. My view is that QoS refers to DSCP marking and such (so best effort, priority, less than best effort, etc.) and/or some metric that the *network* is configured to deliver. But these are all proxies for end-user QoE, which used to be difficult to measure individually but is now easy and affordable to do at scale. IMO all that really matters is the end-user experience, and that can be measured quantitatively (link capacity at peak hour, responsiveness/working latency, uptime) and qualitatively. After all, the end user does not care about what the network is in theory configured to deliver, only about their actual experience using the Internet.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-09 14:50 ` Livingood, Jason
@ 2023-01-09 15:45 ` Doc Searls
2023-01-09 17:11 ` Sebastian Moeller
0 siblings, 1 reply; 11+ messages in thread
From: Doc Searls @ 2023-01-09 15:45 UTC (permalink / raw)
To: Livingood, Jason; +Cc: David Fernández, starlink
Experience is also based on expectation, and nearly all the ISPs advertise downstream speed, and compete on that. This state of things reminds me of the TV business in the 50s and 60s, when RCA, GE and Zenith competed on picture size (21 inches was tops) more than picture quality. (Sony changed the game with Trinitron in 1968.) So everybody naturally assumes that the quality of their Internet service is almost entirely a matter of downstream speed.
While there is now a widespread understanding that fiber is best, some ISPs talk a fiber game but actually do hybrid fiber coax, delivering essentially coax's asymmetrical speeds. My sister has that with her "fiber" AT&T service in North Carolina, and I have it here in Santa Barbara with Cox. Neither is bad, but neither is FTTH.
Until the ISPs begin to promote and compete on some kind of normative metric for QoE (or other initialism), customers will continue to think by default that downstream speed is the whole game.
An interesting thing with Starlink is that people in rural areas migrating off the likes of HughesNet care more about latency (or the experience of its relative absence) than any other factor. Example: https://www.reddit.com/r/Starlink/comments/t5rx0s/switching_from_hughesnet/
Doc
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-09 15:45 ` Doc Searls
@ 2023-01-09 17:11 ` Sebastian Moeller
0 siblings, 0 replies; 11+ messages in thread
From: Sebastian Moeller @ 2023-01-09 17:11 UTC (permalink / raw)
To: Doc Searls; +Cc: Livingood, Jason, starlink, David Fernández
Hi Doc,
> On Jan 9, 2023, at 16:45, Doc Searls via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> Experience is also based on expectation, and nearly all the ISPs advertise downstream speed, and compete on that.
[SM] My experience when in southern CA was that there was little true competition; in my case only Charter delivered anything above ADSL at all. But sure, they advertised "up to XX Mbps".
> This state of things reminds me of the TV business in the 50s and 60s, when RCA, GE and Zenith competed on picture size (21 inches was tops) more than picture quality. (Sony changed the game with Trinitron in 1968.) So everybody naturally assumes that the quality of their Internet service is almost entirely a matter of downstream speed.
>
> While there is now a widespread understanding that fiber is best, some ISPs talk a fiber game but actually do hybrid fiber coax, delivering essentially coax's asymmetrical speeds. My sister has that with her "fiber" AT&T service in North Carolina, and I have it here in Santa Barbara with Cox.Neither are bad, but neither are FTTH.
[SM] I am less discriminating, if an ISP can deliver sufficiently low latency/jitter and high enough throughput, I could not care less whether this is via photons in glass or via rfc1149 avian carriers.
>
> Until the ISPs begin to promote and compete on some kind of normative metric for QoE (or other initialism), customers will continue to think by default that downstream speed is the whole game.
[SM] That is an issue best not left to the ISPs... in Germany the national network regulatory agency (based on EU rules) defined a mandatory set of numbers ISPs need to give end users pre-sale, and created a method with which consumers can check whether the contracted rates are actually delivered. These numbers contain three quality grades each for upload and download (of the three, the most important is the "normally available data transfer rate"). However, where they have so far dropped the ball completely is latency. (To illustrate how badly: the same agency recently defined the minimum internet quality consumers are "guaranteed", but somehow considered RTTs to the agency's reference servers in Frankfurt of <= 150 ms acceptable)...
> An interesting thing with Starlink is that people in rural areas migrating off the likes of HughesNet care more about latency (or the experience of its relative absence) than any other factor. Example: https://www.reddit.com/r/Starlink/comments/t5rx0s/switching_from_hughesnet/
[SM] Given the large propagation delay of geostationary orbits, as well as the prices and volume caps, I am not amazed that, at least for some current GEO users, LEO seems $DEITY-sent.
Regards
Sebastian
^ permalink raw reply [flat|nested] 11+ messages in thread
[parent not found: <mailman.2651.1672779463.1281.starlink@lists.bufferbloat.net>]
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
[not found] <mailman.2651.1672779463.1281.starlink@lists.bufferbloat.net>
@ 2023-01-03 22:58 ` David P. Reed
2023-01-09 14:44 ` Livingood, Jason
0 siblings, 1 reply; 11+ messages in thread
From: David P. Reed @ 2023-01-03 22:58 UTC (permalink / raw)
To: starlink
[-- Attachment #1: Type: text/plain, Size: 9233 bytes --]
A serious question:
> ... The QMap Probe is
> preconfigured to automatically test various network metrics with little burden on
> your available bandwidth.
Those of us here, like me and Dave Taht, who have measured the big elephants in the room (esp. for Starlink) like "lag under load" and "fairness with respect to competing traffic on the same <link>" probably were not consulted, if the goal is "little burden on your available bandwidth".
I've spent many years reading papers about "low cost metrics" for lag and fairness in the Internet context, starting with the old "packet-pair" techniques, and basically they are almost impossible to interpret and compare between service provider architectures. It's hard enough to compare DOCSIS 3 setups with other DOCSIS 3 CMTS's in other cities, or LTE data networks between different cities.
Frankly, I expect the results will be treated like other "quality metrics" - J.D. Power comes to mind from consulting experience in the automotive industry - and be cherry-picked to distort the results.
> This data is used to estimate the quality of internet
> service.
I have NEVER seen a technical definition of "quality of service" that made sense in terms of how the users experience their use of the Internet. It would be wonderful if such a definition actually existed. So how does relative "estimation" of an undefined concept work?
What questions really matter to users, in particular? Well, "availability" might matter, but availability is relative to "momentary need". A network may be 99.9% available, but what matters to the user is the 0.1% of the time when the user NEEDS it, not the rest of the 86,400 seconds each day.
[As an aside, in a proceeding I participated in under the CRTC, Canada's "FCC" regulator, on Network Management that focused on "quality", we asked: since none of the operators in Canada actually measured the "response time" of their networks in any way, how could they know that they were improving service? The response on the record from some of the largest broadband ISPs in Canada was discouraging. They said I was wrong, and that they constantly measured "utilization" of the network capacity at every router, and the *average* utilization was almost always < 85%. They then invoked Little's Lemma from queueing theory to say that this proved that the quality of service was *perfect*.
This was in a legal regulatory proceeding, *under oath*. I just cannot understand how folks technical enough to invoke Little's Lemma could be so ignorant. Little's Lemma isn't at all good at converting "average utilization" to "user experienced lag under load", it's mathematically *invalid*. But what is worse is that they had no idea of what user experienced "quality" was. It's like a software vendor saying they have no bugs, that they know about, when they have no way for users to report bugs at all. OR in the modern context, where reporting bugs is a hassle and there's no "bug bounty" or expectation that the company will fix a reported bug in a timely manner.]
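The point can be made concrete with the textbook M/M/1 queue (an
editorial sketch, not anything from the CRTC record): Little's law
relates averages (L = lambda * W), but it says nothing about how
sharply delay grows as utilization approaches 1, nor about bursts
above the average.

```python
def mm1_mean_wait(service_rate: float, utilization: float) -> float:
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    assert 0 <= utilization < 1
    arrival_rate = utilization * service_rate
    return 1.0 / (service_rate - arrival_rate)

# Service rate 1000 packets/s. "Average utilization < 85%" still means
# mean delay ~6.7x the near-idle delay, and far worse during bursts.
for rho in (0.10, 0.50, 0.85, 0.95):
    print(rho, round(mm1_mean_wait(1000, rho) * 1000, 2), "ms")
```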
> A public report of our findings will be published on our website in 2023.
By all means participate if you want, but I suspect that the "raw data" will not be made available, and looking at the existing reports, it will be hard to extract meaningful comparisons relevant to real user experience at the test sites.
> Please check out some of our existing reports to get a better feel of what we
> measure:
> https://www.netforecast.com/audit-reports/<https://urldefense.com/v3/__https:/www.netforecast.com/audit-reports/__;!!CQl3mcHX2A!FrL2Yijo-63gS4PMToq0adfntj2fhza8ekyba1EbS8-tCgsQpg5MsIAYAvP5xUzLdDRa667bslUTtw_s0WpvvBpJEFKpAJeQfQ$>.
>
> The QMap Probe requires no regular maintenance or monitoring by you. It may
> occasionally need rebooting (turning off and on), and we would contact you via
> email or text to request this. The device sends various test packets to the
> internet and records their response characteristics. Our device has no knowledge
> of -- and does not communicate out -- any information about you or any devices in
> your home.
>
> We will include a prepaid shipping label for returning the QMap Probe in the
> original box (please keep this!). Once we receive the device back, we will send
> you a $200 Amazon gift card (one per household).
>
> To volunteer, please fill out this relatively painless survey:
> https://www.surveymonkey.com/r/8VZSB3M<https://urldefense.com/v3/__https:/www.surveymonkey.com/r/8VZSB3M__;!!CQl3mcHX2A!FrL2Yijo-63gS4PMToq0adfntj2fhza8ekyba1EbS8-tCgsQpg5MsIAYAvP5xUzLdDRa667bslUTtw_s0WpvvBpJEFJ_EHoUnA$>.
> Thank you!
>
> ///END///
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> <https://lists.bufferbloat.net/pipermail/starlink/attachments/20230103/aa1e6019/attachment-0001.html>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 3 Jan 2023 15:57:30 -0500
> From: Vint Cerf <vint@google.com>
> To: "Livingood, Jason" <Jason_Livingood@comcast.com>
> Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Researchers Seeking Probe Volunteers in USA
> Message-ID:
> <CAHxHgge6sTffAqaMLv7z1k0ZnYtWw7s+OgXBJOBnm5zAwHjR+w@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> netforecast was started by a good friend of mine - they are first rate.
>
> v
>
>
> On Tue, Jan 3, 2023 at 3:53 PM Livingood, Jason via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
> > Forwarding on from a group doing some Starlink research. I am aware of at
> > least one other researcher that will soon do the same and will forward that
> > later (guessing a week or two).
> >
> >
> >
> > Jason
> >
> > ///FORWARD///
> >
> > We need volunteers! NetForecast, a leader in measuring the quality of
> > internet service, is conducting a performance study of several types of
> > internet delivery technologies, including low-earth-orbit satellite like
> > Starlink.
> >
> >
> >
> > As a volunteer, you will host one of our proprietary QMap Probes in your
> > home network. This will connect to the internet via an available ethernet
> > port on your gateway/router and plug into a standard power outlet. The QMap
> > Probe is preconfigured to automatically test various network metrics with
> > little burden on your available bandwidth. This data is used to estimate
> > the quality of internet service. A public report of our findings will be
> > published on our website in 2023. Please check out some of our existing
> > reports to get a better feel of what we measure:
> > https://www.netforecast.com/audit-reports/
> >
> <https://urldefense.com/v3/__https:/www.netforecast.com/audit-reports/__;!!CQl3mcHX2A!FrL2Yijo-63gS4PMToq0adfntj2fhza8ekyba1EbS8-tCgsQpg5MsIAYAvP5xUzLdDRa667bslUTtw_s0WpvvBpJEFKpAJeQfQ$>
> > .
> >
> >
> >
> > The QMap Probe requires no regular maintenance or monitoring by you. It
> > may occasionally need rebooting (turning off and on), and we would contact
> > you via email or text to request this. The device sends various test
> > packets to the internet and records their response characteristics. Our
> > device has no knowledge of -- and does not communicate out -- any
> > information about you or any devices in your home.
> >
> >
> >
> > We will include a prepaid shipping label for returning the QMap Probe in
> > the original box (please keep this!). Once we receive the device back, we
> > will send you a $200 Amazon gift card (one per household).
> >
> >
> >
> > To volunteer, please fill out this relatively painless survey:
> > https://www.surveymonkey.com/r/8VZSB3M
> >
> <https://urldefense.com/v3/__https:/www.surveymonkey.com/r/8VZSB3M__;!!CQl3mcHX2A!FrL2Yijo-63gS4PMToq0adfntj2fhza8ekyba1EbS8-tCgsQpg5MsIAYAvP5xUzLdDRa667bslUTtw_s0WpvvBpJEFJ_EHoUnA$>.
> > Thank you!
> >
> >
> >
> > ///END///
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
> >
>
>
> --
> Please send any postal/overnight deliveries to:
> Vint Cerf
> Google, LLC
> 1900 Reston Metro Plaza, 16th Floor
> Reston, VA 20190
> +1 (571) 213 1346
>
>
> until further notice
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> <https://lists.bufferbloat.net/pipermail/starlink/attachments/20230103/e49cfc9f/attachment.html>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: smime.p7s
> Type: application/pkcs7-signature
> Size: 3995 bytes
> Desc: S/MIME Cryptographic Signature
> URL:
> <https://lists.bufferbloat.net/pipermail/starlink/attachments/20230103/e49cfc9f/attachment.bin>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
> ------------------------------
>
> End of Starlink Digest, Vol 22, Issue 6
> ***************************************
>
[-- Attachment #2: Type: text/html, Size: 13560 bytes --]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-03 22:58 ` David P. Reed
@ 2023-01-09 14:44 ` Livingood, Jason
2023-01-09 15:26 ` Dave Taht
0 siblings, 1 reply; 11+ messages in thread
From: Livingood, Jason @ 2023-01-09 14:44 UTC (permalink / raw)
To: David P. Reed, starlink; +Cc: mike.reynolds
[-- Attachment #1: Type: text/plain, Size: 1428 bytes --]
> Those of us here, like me and Dave Taht, who have measured the big elephants in the room (esp. for Starlink) like "lag under load" and "fairness with respect to competing traffic on the same <link>" probably were not consulted, if the goal is "little burden on your available bandwidth".
I don’t have specifics for their test config, but most of the platforms determine ‘little burden’ by looking for cross traffic (aka user demand on the connection); if it is non-existent or low, they run tests that can highly utilize the link capacity – whether for a working-latency test or whatever.
> Frankly, I expect the results will be treated like other "quality metrics" - J.D. Power comes to mind from consulting experience in the automotive industry - and be cherry-picked to distort the results.
I dunno – I think the research & measurement community seems to be coalescing around certain types of working latency / responsiveness measures as being pretty good & predictive of real end user application QoE.
> By all means participate if you want, but I suspect that the "raw data" will not be made available, and looking at the existing reports, it will be hard to extract meaningful comparisons relevant to real user experience at the test sites.
Not sure if the raw data will be available. Even if not, they may publish the parameters of the tests themselves.
JL
[-- Attachment #2: Type: text/html, Size: 4222 bytes --]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-09 14:44 ` Livingood, Jason
@ 2023-01-09 15:26 ` Dave Taht
2023-01-09 17:00 ` Sebastian Moeller
0 siblings, 1 reply; 11+ messages in thread
From: Dave Taht @ 2023-01-09 15:26 UTC (permalink / raw)
To: Livingood, Jason
Cc: David P. Reed, starlink, mike.reynolds, Rpm, bloat, libreqos
I have many kvetches about the new latency under load tests being
designed and distributed over the past year. I am delighted! that they
are happening, but most really need third party evaluation, and
calibration, and a solid explanation of what network pathologies they
do and don't cover. Also a RED team attitude towards them, as well as
thinking hard about what you are not measuring (operations research).
I actually rather love the new cloudflare speedtest, because it tests
a single TCP connection rather than dozens, even while folk complain
that it doesn't find the actual "speed!". Yet the test itself more
closely emulates a user experience than speedtest.net does. I am
personally pretty convinced that the fewer flows a web page opens, the
better the likelihood of a good user experience, but I lack data on
it.
To try to tackle the evaluation and calibration part, I've reached out
to all the new test designers in the hope that we could get together
and produce a report of what each new test is actually doing. I've
tweeted, linkedin'd, emailed, and spammed every measurement list I
know of, with only some response. Please reach out to other test
designer folks and have them join the rpm email list?
My principal kvetches in the new tests so far are:
0) None of the tests last long enough.
Ideally there should be a mode where they at least run to "time of
first loss", or periodically just run longer than the
industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
there! It's really bad science to optimize the internet for 20
seconds. It's like optimizing a car to handle well for just 20
seconds.
1) Not testing up + down + ping at the same time
None of the new tests actually test the same thing that the infamous
rrul test does - all the others still test up, then down, and ping. It
was/remains my hope that the simpler parts of the flent test suite -
such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
tests would provide calibration to the test designers.
we've got zillions of flent results in the archive published here:
https://blog.cerowrt.org/post/found_in_flent/
The new tests have all added up + ping and down + ping, but not up +
down + ping. Why??
The behaviors in that case are really non-intuitive, I
know, but... it's just one more phase to add to any one of those new
tests. I'd be deliriously happy if someone(s) new to the field
started doing that, even optionally, and boggled at how it defeated
their assumptions.
Among other things that would show...
It's the home router industry's dirty secret that darn few "gigabit"
home routers can actually forward in both directions at a gigabit. I'd
like to smash that perception thoroughly, but given that our starting
point, a "gigabit" router, was really just a gigabit switch (and
historically something that couldn't even forward at 200Mbit), we have
a long way to go there.
Only in the past year have non-x86 home routers appeared that could
actually do a gbit in both directions.
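Some back-of-the-envelope arithmetic (my numbers, assuming standard Ethernet framing) shows what "a gigabit in both directions" actually demands of a forwarding plane:

```python
# Rough packet-rate math for bidirectional gigabit forwarding.
# Assumes standard Ethernet framing: 1500-byte MTU plus 38 bytes of
# overhead (preamble, header, FCS, inter-frame gap) = 1538 bytes on the wire.

LINK_BPS = 1_000_000_000
WIRE_BYTES_LARGE = 1538   # full-size frames
WIRE_BYTES_SMALL = 84     # 64-byte minimum frames plus framing overhead

pps_large = LINK_BPS / (WIRE_BYTES_LARGE * 8)   # pps per direction, big packets
pps_small = LINK_BPS / (WIRE_BYTES_SMALL * 8)   # pps per direction, small packets

print(f"full-size frames: {pps_large:,.0f} pps/direction, "
      f"{2 * pps_large:,.0f} pps bidirectional")
print(f"64-byte frames:   {pps_small:,.0f} pps/direction, "
      f"{2 * pps_small:,.0f} pps bidirectional")
```

Roughly 160 kpps bidirectional at full-size frames, but nearly 3 Mpps in the small-packet worst case, which is where most home-router CPUs fall over without hardware offload.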
2) Few are actually testing within-stream latency
Apple's rpm project is making a stab in that direction. It looks
highly likely that, with a little more work, crusader and
go-responsiveness can finally start sampling the TCP RTT, loss, and
markings more directly. As for the rest... sampling TCP_INFO on
Windows and Linux, at least, always appeared simple to me, but I'm
discovering how hard it is by delving deep into the Rust behind
crusader.
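For the curious, a minimal sketch of what TCP_INFO sampling looks like on Linux in Python (a hypothetical illustration, not code from crusader; the offset follows the struct tcp_info layout in linux/tcp.h):

```python
import socket
import struct

# Layout of struct tcp_info (linux/tcp.h): 8 bytes of u8 state/flag
# fields, then a run of u32 counters; tcpi_rtt (smoothed RTT in
# microseconds) is the 16th u32, i.e. byte offset 68.
TCPI_RTT_OFFSET = 68

def parse_rtt_us(tcp_info_buf: bytes) -> int:
    """Extract the smoothed RTT (microseconds) from a raw TCP_INFO buffer."""
    (rtt_us,) = struct.unpack_from("=I", tcp_info_buf, TCPI_RTT_OFFSET)
    return rtt_us

def sample_rtt(sock: socket.socket) -> int:
    """Sample within-stream RTT on a connected TCP socket (Linux only)."""
    buf = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    return parse_rtt_us(buf)
```

Loss and retransmit counters sit at nearby offsets in the same buffer, so a single getsockopt() per sample covers RTT, loss, and retransmits for that one flow.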
the goresponsiveness thing is also IMHO running WAY too many streams
at the same time, I guess motivated by an attempt to have the test
complete quickly?
B) To try and tackle the validation problem:
In the libreqos.io project we've established a testbed where tests can
be plunked through various ISP plan network emulations. It's here:
https://payne.taht.net (run bandwidth test for what's currently hooked
up)
We could rather use an AS number and at least an IPv4 /24 and an IPv6
/48 to leverage with that, so I don't have to NAT the various
emulations. (And funding; anyone got funding?) Or, as the code is
GPLv2-licensed, we'd like to see more test designers set up a testbed
like this to calibrate their own stuff.
Presently we're able to test:
flent
netperf
iperf2
iperf3
speedtest-cli
crusader
the Broadband Forum UDP-based test:
https://github.com/BroadbandForum/obudpst
trexx
There's also a virtual machine setup that we can remotely drive a web
browser from (but I didn't want to nat the results to the world) to
test other web services.
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-09 15:26 ` Dave Taht
@ 2023-01-09 17:00 ` Sebastian Moeller
0 siblings, 0 replies; 11+ messages in thread
From: Sebastian Moeller @ 2023-01-09 17:00 UTC (permalink / raw)
To: Dave Täht
Cc: Livingood, Jason, Rpm, mike.reynolds, libreqos, David P. Reed,
starlink, bloat
Hi Dave,
just a data point: Apple's networkQuality on Monterey (12.6.2, x86) defaults to bi-directionally saturating traffic. Your argument about the duration still holds, though; the test is really short. While I understand the motivation behind that, I think it would do the internet much good if all such tests randomly offered users an extended test duration of, say, a minute. Users would need to opt in, but that would at least collect some longer-duration data. Now, I have no idea whether Apple actually keeps results on their server side (Ookla certainly does, but given Apple's laudable privacy stance they might not); if not, it would do little good to run extended tests, but for "players" like Ookla that do keep some logs, interspersing longer-running tests would offer a great way to test ISPs outside the "magic 20 seconds".
> On Jan 9, 2023, at 16:26, Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> I have many kvetches about the new latency under load tests being
> designed and distributed over the past year. I am delighted! that they
> are happening, but most really need third party evaluation, and
> calibration, and a solid explanation of what network pathologies they
> do and don't cover. Also a RED team attitude towards them, as well as
> thinking hard about what you are not measuring (operations research).
[SM] RED as in RED/BLUE team or as in random early detection? ;)
>
> I actually rather love the new cloudflare speedtest, because it tests
> a single TCP connection, rather than dozens, and at the same time folk
> are complaining that it doesn't find the actual "speed!".
[SM] Ookla's online test can be toggled between multi- and single-flow mode (which is good; the default is multi), but e.g. the official macOS client application from Ookla does not offer this toggle and defaults to multi-flow (which is less good). Fast.com can be configured for single-flow tests, but defaults to multi-flow.
> yet... the
> test itself more closely emulates a user experience than speedtest.net
> does.
[SM] I like the separate reporting for transfer rates for objects of different sizes. I would argue that both single and multi-flow tests have merit, but I agree with you that if only one test is performed a single-flow test seems somewhat better.
> I am personally pretty convinced that the fewer numbers of flows
> that a web page opens improves the likelihood of a good user
> experience, but lack data on it.
>
> To try to tackle the evaluation and calibration part, I've reached out
> to all the new test designers in the hope that we could get together
> and produce a report of what each new test is actually doing.
[SM] +1; and, probably part of your questionnaire already, which measures are actually reported back to the user.
> I've
> tweeted, linked in, emailed, and spammed every measurement list I know
> of, with only some response. Please reach out to other test-designer
> folks and have them join the rpm email list?
>
> My principal kvetches in the new tests so far are:
>
> 0) None of the tests last long enough.
>
> Ideally there should be a mode where they at least run to "time of
> first loss", or periodically, just run longer than the
> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
> there! It's really bad science to optimize the internet for 20
> seconds. It's like optimizing a car, to handle well, for just 20
> seconds.
[SM] ++1
> 1) Not testing up + down + ping at the same time
>
> None of the new tests actually test the same thing that the infamous
> rrul test does - all the others still test up, then down, and ping. It
> was/remains my hope that the simpler parts of the flent test suite -
> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
> tests would provide calibration to the test designers.
>
> we've got zillions of flent results in the archive published here:
> https://blog.cerowrt.org/post/found_in_flent/
>
> The new tests have all added up + ping and down + ping, but not up +
> down + ping. Why??
[SM] I think at least on Monterey Apple's networkQuality does bidirectional tests (I just confirmed that via packet capture, and it is already visible in iftop, though hobbled by iftop's relatively high default hysteresis). You actually need to intervene manually to get a sequential test:
laptop:~ user$ networkQuality -h
USAGE: networkQuality [-C <configuration_url>] [-c] [-h] [-I <interfaceName>] [-s] [-v]
-C: override Configuration URL
-c: Produce computer-readable output
-h: Show help (this message)
-I: Bind test to interface (e.g., en0, pdp_ip0,...)
-s: Run tests sequentially instead of parallel upload/download
-v: Verbose output
laptop:~ user $ networkQuality -v
==== SUMMARY ====
Upload capacity: 194.988 Mbps
Download capacity: 894.162 Mbps
Upload flows: 16
Download flows: 12
Responsiveness: High (2782 RPM)
Base RTT: 8
Start: 1/9/23, 17:45:57
End: 1/9/23, 17:46:12
OS Version: Version 12.6.2 (Build 21G320)
laptop:~ user $ networkQuality -v -s
==== SUMMARY ====
Upload capacity: 641.206 Mbps
Download capacity: 883.787 Mbps
Upload flows: 16
Download flows: 12
Upload Responsiveness: High (3529 RPM)
Download Responsiveness: High (1939 RPM)
Base RTT: 8
Start: 1/9/23, 17:46:17
End: 1/9/23, 17:46:41
OS Version: Version 12.6.2 (Build 21G320)
(this is alas not my home link...)
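An aside for readers new to the RPM unit: responsiveness is working latency expressed as round trips per minute, so converting back to milliseconds is a one-liner (a simplification; Apple's actual metric aggregates several delay probes):

```python
def rpm_to_latency_ms(rpm: float) -> float:
    """Round trips per minute -> milliseconds per round trip."""
    return 60_000.0 / rpm

def latency_ms_to_rpm(ms: float) -> float:
    """Milliseconds per round trip -> round trips per minute."""
    return 60_000.0 / ms

# The parallel run above reported 2782 RPM: ~21.6 ms of working latency.
print(round(rpm_to_latency_ms(2782), 1))   # ~21.6
# The sequential run's download responsiveness of 1939 RPM: ~30.9 ms.
print(round(rpm_to_latency_ms(1939), 1))   # ~30.9
```

Against the reported 8 ms base RTT, those figures put the latency added under load in the 14-23 ms range, which is what the "High" rating is telling you.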
>
> The behaviors of what happens in that case are really non-intuitive, I
> know, but... it's just one more phase to add to any one of those new
> tests. I'd be deliriously happy if someone(s) new to the field
> started doing that, even optionally, and boggled at how it defeated
> their assumptions.
[SM] Someone at Apple apparently listened ;)
>
> Among other things that would show...
>
> It's the home router industry's dirty secret that darn few "gigabit"
> home routers can actually forward in both directions at a gigabit.
[SM] That is going to be remedied in the near future. The first batch of nominal gigabit links were mostly asymmetric, e.g. often something like 1000/50 over DOCSIS or 1000/500 over GPON (reflecting the asymmetric nature of these media in the field). But with symmetric XGS-PON being deployed by more and more ISPs (still a low absolute number), symmetric performance is going to move into the spotlight. However, my guess is that the first few generations of home routers for these speed grades will rely heavily on accelerator engines.
> I'd
> like to smash that perception thoroughly, but given that our starting
> point, a "gigabit" router, was really just a gigabit switch (and
> historically something that couldn't even forward at 200Mbit), we have
> a long way to go there.
>
> Only in the past year have non-x86 home routers appeared that could
> actually do a gbit in both directions.
>
> 2) Few are actually testing within-stream latency
>
> Apple's rpm project is making a stab in that direction. It looks
> highly likely, that with a little more work, crusader and
> go-responsiveness can finally start sampling the tcp RTT, loss and
> markings, more directly. As for the rest... sampling TCP_INFO on
> windows, and Linux, at least, always appeared simple to me, but I'm
> discovering how hard it is by delving deep into the rust behind
> crusader.
[SM] I think go-responsiveness looks at TCP_INFO already (on request) but will report an aggregate info block over all flows, which can get interesting, as in my testing I often see a mix of IPv4 and IPv6 flows within individual tests, with noticeably different numbers for e.g. MSS. (Yes, MSS is not what you are asking for here, but I think flent does it right by diligently reporting all such measures flow by flow, though that will explode pretty quickly if, say, a test uses 32/32 flows per direction.)
>
> the goresponsiveness thing is also IMHO running WAY too many streams
> at the same time, I guess motivated by an attempt to have the test
> complete quickly?
[SM] I can only guess, but the goal is to saturate the link persistently (and to get to that state fast), and for that goal parallel flows seem OK, especially as they will reduce the server load for each of these flows a bit, no?
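That intuition fits a toy slow-start model (hypothetical figures; it ignores pacing, losses, and receive-window limits): with N flows each doubling a 10-segment initial window per RTT, the ramp to saturation shrinks only logarithmically in N:

```python
import math

def rtts_to_saturate(link_bps: float, rtt_s: float, n_flows: int,
                     init_window_pkts: int = 10, pkt_bytes: int = 1500) -> float:
    """Toy model: RTTs until N slow-starting flows fill the pipe."""
    # Packets that must be in flight to saturate the link (the BDP).
    bdp_pkts = link_bps * rtt_s / (8 * pkt_bytes)
    # Aggregate window starts at n_flows * init_window_pkts and doubles per RTT.
    return max(0.0, math.log2(bdp_pkts / (n_flows * init_window_pkts)))

LINK, RTT = 1e9, 0.020     # 1 Gbit/s, 20 ms base RTT (assumed figures)
for n in (1, 4, 16):
    t_ms = rtts_to_saturate(LINK, RTT, n) * RTT * 1000
    print(f"{n:2d} flows: ~{t_ms:.0f} ms to saturation")
```

Each doubling of the flow count saves only one RTT of ramp time, so a handful of flows already buys most of the speedup that dozens would.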
>
> B) To try and tackle the validation problem:
>
> In the libreqos.io project we've established a testbed where tests can
> be plunked through various ISP plan network emulations. It's here:
> https://payne.taht.net (run bandwidth test for what's currently hooked
> up)
>
> We could rather use an AS number and at least an IPv4 /24 and an IPv6
> /48 to leverage with that, so I don't have to NAT the various
> emulations.
> (and funding, anyone got funding?) Or, as the code is GPLv2 licensed,
> to see more test designers setup a testbed like this to calibrate
> their own stuff.
>
> Presently we're able to test:
> flent
> netperf
> iperf2
> iperf3
> speedtest-cli
> crusader
> the Broadband Forum UDP-based test:
> https://github.com/BroadbandForum/obudpst
> trexx
>
> There's also a virtual machine setup that we can remotely drive a web
> browser from (but I didn't want to nat the results to the world) to
> test other web services.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
* [Starlink] Researchers Seeking Probe Volunteers in USA
@ 2023-01-03 20:53 Livingood, Jason
2023-01-03 20:57 ` Vint Cerf
2023-01-03 21:23 ` Eric
0 siblings, 2 replies; 11+ messages in thread
From: Livingood, Jason @ 2023-01-03 20:53 UTC (permalink / raw)
To: Dave Taht via Starlink
Forwarding on from a group doing some Starlink research. I am aware of at least one other researcher that will soon do the same and will forward that later (guessing a week or two).
Jason
///FORWARD///
We need volunteers! NetForecast, a leader in measuring the quality of internet service, is conducting a performance study of several types of internet delivery technologies, including low-earth-orbit satellite like Starlink.
As a volunteer, you will host one of our proprietary QMap Probes in your home network. This will connect to the internet via an available ethernet port on your gateway/router and plug into a standard power outlet. The QMap Probe is preconfigured to automatically test various network metrics with little burden on your available bandwidth. This data is used to estimate the quality of internet service. A public report of our findings will be published on our website in 2023. Please check out some of our existing reports to get a better feel of what we measure: https://www.netforecast.com/audit-reports/.
The QMap Probe requires no regular maintenance or monitoring by you. It may occasionally need rebooting (turning off and on), and we would contact you via email or text to request this. The device sends various test packets to the internet and records their response characteristics. Our device has no knowledge of -- and does not communicate out -- any information about you or any devices in your home.
We will include a prepaid shipping label for returning the QMap Probe in the original box (please keep this!). Once we receive the device back, we will send you a $200 Amazon gift card (one per household).
To volunteer, please fill out this relatively painless survey: https://www.surveymonkey.com/r/8VZSB3M. Thank you!
///END///
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-03 20:53 Livingood, Jason
@ 2023-01-03 20:57 ` Vint Cerf
2023-01-03 21:23 ` Eric
1 sibling, 0 replies; 11+ messages in thread
From: Vint Cerf @ 2023-01-03 20:57 UTC (permalink / raw)
To: Livingood, Jason; +Cc: Dave Taht via Starlink
netforecast was started by a good friend of mine - they are first rate.
v
On Tue, Jan 3, 2023 at 3:53 PM Livingood, Jason via Starlink <
starlink@lists.bufferbloat.net> wrote:
> Forwarding on from a group doing some Starlink research. I am aware of at
> least one other researcher that will soon do the same and will forward that
> later (guessing a week or two).
>
>
>
> Jason
>
> ///FORWARD///
>
> We need volunteers! NetForecast, a leader in measuring the quality of
> internet service, is conducting a performance study of several types of
> internet delivery technologies, including low-earth-orbit satellite like
> Starlink.
>
>
>
> As a volunteer, you will host one of our proprietary QMap Probes in your
> home network. This will connect to the internet via an available ethernet
> port on your gateway/router and plug into a standard power outlet. The QMap
> Probe is preconfigured to automatically test various network metrics with
> little burden on your available bandwidth. This data is used to estimate
> the quality of internet service. A public report of our findings will be
> published on our website in 2023. Please check out some of our existing
> reports to get a better feel of what we measure:
> https://www.netforecast.com/audit-reports/.
>
>
>
> The QMap Probe requires no regular maintenance or monitoring by you. It
> may occasionally need rebooting (turning off and on), and we would contact
> you via email or text to request this. The device sends various test
> packets to the internet and records their response characteristics. Our
> device has no knowledge of -- and does not communicate out -- any
> information about you or any devices in your home.
>
>
>
> We will include a prepaid shipping label for returning the QMap Probe in
> the original box (please keep this!). Once we receive the device back, we
> will send you a $200 Amazon gift card (one per household).
>
>
>
> To volunteer, please fill out this relatively painless survey:
> https://www.surveymonkey.com/r/8VZSB3M
> Thank you!
>
>
>
> ///END///
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
--
Please send any postal/overnight deliveries to:
Vint Cerf
Google, LLC
1900 Reston Metro Plaza, 16th Floor
Reston, VA 20190
+1 (571) 213 1346
until further notice
* Re: [Starlink] Researchers Seeking Probe Volunteers in USA
2023-01-03 20:53 Livingood, Jason
2023-01-03 20:57 ` Vint Cerf
@ 2023-01-03 21:23 ` Eric
1 sibling, 0 replies; 11+ messages in thread
From: Eric @ 2023-01-03 21:23 UTC (permalink / raw)
To: Livingood, Jason; +Cc: Dave Taht via Starlink
Jason,
(Replying on-list in case anyone else has the same question.)
Questionnaire #4:
Is the "port available on the router" a hard requirement, or is a port on a switch acceptable? My gateway router only has three ports, so one to WAN, other two to LAN switches (which connect to servers, workstations, WAPs and another switch). Would connecting the probe to a single-hop switch cause any issues?
Thanks,
Eric
------- Original Message -------
On Tuesday, January 3rd, 2023 at 12:53, Livingood, Jason via Starlink <starlink@lists.bufferbloat.net> wrote:
> Forwarding on from a group doing some Starlink research. I am aware of at least one other researcher that will soon do the same and will forward that later (guessing a week or two).
>
> Jason
>
> ///FORWARD///
>
> We need volunteers! NetForecast, a leader in measuring the quality of internet service, is conducting a performance study of several types of internet delivery technologies, including low-earth-orbit satellite like Starlink.
>
> As a volunteer, you will host one of our proprietary QMap Probes in your home network. This will connect to the internet via an available ethernet port on your gateway/router and plug into a standard power outlet. The QMap Probe is preconfigured to automatically test various network metrics with little burden on your available bandwidth. This data is used to estimate the quality of internet service. A public report of our findings will be published on our website in 2023. Please check out some of our existing reports to get a better feel of what we measure: https://www.netforecast.com/audit-reports/.
>
> The QMap Probe requires no regular maintenance or monitoring by you. It may occasionally need rebooting (turning off and on), and we would contact you via email or text to request this. The device sends various test packets to the internet and records their response characteristics. Our device has no knowledge of -- and does not communicate out -- any information about you or any devices in your home.
>
> We will include a prepaid shipping label for returning the QMap Probe in the original box (please keep this!). Once we receive the device back, we will send you a $200 Amazon gift card (one per household).
>
> To volunteer, please fill out this relatively painless survey: https://www.surveymonkey.com/r/8VZSB3M. Thank you!
>
> ///END///
end of thread, other threads:[~2023-01-09 17:12 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-04 10:07 [Starlink] Researchers Seeking Probe Volunteers in USA David Fernández
2023-01-09 14:50 ` Livingood, Jason
2023-01-09 15:45 ` Doc Searls
2023-01-09 17:11 ` Sebastian Moeller
[not found] <mailman.2651.1672779463.1281.starlink@lists.bufferbloat.net>
2023-01-03 22:58 ` David P. Reed
2023-01-09 14:44 ` Livingood, Jason
2023-01-09 15:26 ` Dave Taht
2023-01-09 17:00 ` Sebastian Moeller
-- strict thread matches above, loose matches on Subject: below --
2023-01-03 20:53 Livingood, Jason
2023-01-03 20:57 ` Vint Cerf
2023-01-03 21:23 ` Eric