* Re: [Starlink] It's still the starlink latency...
@ 2022-09-29 9:10 David Fernández
2022-09-29 19:34 ` Eugene Chang
0 siblings, 1 reply; 56+ messages in thread
From: David Fernández @ 2022-09-29 9:10 UTC (permalink / raw)
To: starlink
I made this video some time ago to illustrate the impact of latency on
what you can achieve during a web browsing session, depending on the
latency you have, in this case a MEO vs. a GEO satellite connection
(delay emulated with netem).
https://www.youtube.com/watch?v=WEl_ud4ME4E
In the end, the latencies accumulate into a delay that makes you take
more time to achieve the same result.
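For reference, delay emulation of this kind can be reproduced with netem on a Linux box in the forwarding path; the interface name and delay/jitter values below are illustrative, not the project's actual settings:

```shell
# GEO-like path: ~250 ms added one-way delay with a little jitter.
# "eth0" and the numbers are illustrative values.
tc qdisc add dev eth0 root netem delay 250ms 10ms

# MEO-like path instead: ~65 ms one-way delay.
# tc qdisc change dev eth0 root netem delay 65ms 5ms

# Remove the emulation when done:
# tc qdisc del dev eth0 root
```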
This was done as part of the ESA MTAILS project:
https://artes.esa.int/projects/mtails
Off-topic: people's poor performance must first be addressed; if it does
not improve because of a bad attitude, then you lay them off. But there
is a whole spectrum of capabilities among people, and everybody should
be entitled to contribute as much as they can (and get a reward in
proportion). Otherwise, nobody would employ old people or people with
disabilities, just sharp and smart people in their thirties. Half of
the population starts suffering cognitive decline after the age of
50... Very cruel.
Regards,
David
> Date: Mon, 26 Sep 2022 19:59:04 +0000
> From: Eugene Chang <eugene.chang@alum.mit.edu>
> To: Bruce Perens <bruce@perens.com>
> Cc: Eugene Chang <eugene.chang@alum.mit.edu>, Dave Taht
> <dave.taht@gmail.com>, Dave Taht via Starlink
> <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It's still the starlink latency...
> Message-ID: <07C46DD5-7359-410E-8820-82B319944618@alum.mit.edu>
> Content-Type: text/plain; charset="utf-8"
>
> The key issue is most people don’t understand why latency matters. They
> don’t see it or feel its impact.
>
> First, we have to help people see the symptoms of latency and how it impacts
> something they care about.
> - gamers care but most people may think it is frivolous.
> - musicians care but that is mostly for a hobby.
> - business should care because of productivity but they don’t know how to
> “see” the impact.
>
> Second, there needs to be an “OMG, I have been seeing the effects of latency
> all this time and never knew it! I was being shafted” moment. Once you have this
> awakening, you can get all the press you want for free.
>
> Most of the time when business apps are developed, “we” hide the impact of
> poor performance (aka latency) or they hide from the discussion because the
> developers don’t have a way to fix the latency. Maybe businesses don’t care
> because any employees affected are just considered poor performers. (In bad
> economic times, the poor performers are just laid off.) For employees, if
> they happen to be at a location with bad latency, they don’t know that
> latency is hurting them. Unfair but most people don’t know the issue is
> latency.
>
> Talking and explaining why latency is bad is not as effective as showing why
> latency is bad. Showing has to be done with something that has a personal impact.
>
> Gene
> -----------------------------------
> Eugene Chang
> eugene.chang@alum.mit.edu
> +1-781-799-0233 (in Honolulu)
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-29 9:10 [Starlink] It's still the starlink latency David Fernández
@ 2022-09-29 19:34 ` Eugene Chang
0 siblings, 0 replies; 56+ messages in thread
From: Eugene Chang @ 2022-09-29 19:34 UTC (permalink / raw)
To: David Fernández; +Cc: Eugene Chang, starlink
[-- Attachment #1.1: Type: text/plain, Size: 5346 bytes --]
It is common for a lot of business apps to be built on a web platform.
If you look at the network traffic of these web pages, there are a lot of packet exchanges for each retrieval and update of a web page. Each of these exchanges experiences the latency. It makes a big difference if the latency is 20 ms vs. 100 ms.
Subjectively, productivity is dramatically different if the user has a snappy interaction with the web app vs. one with 100-200 ms of delay. Remember, each packet exchange pays this latency tax.
Quantitatively, we can look at the complexity of each web page and update. The total delay for each interaction can be computed. This latency tax is part of why moving the web app closer to the user with a CDN helps. That part is under the control of the app deployment. The poor user stuck with high latency will still be penalized.
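As a rough sketch of that computation (the round-trip counts are made-up illustrative numbers, not measurements):

```python
# Rough model: network wait per interaction = sequential round trips x RTT.

def interaction_delay_s(round_trips: int, rtt_s: float) -> float:
    """Seconds spent waiting on the network for one page interaction."""
    return round_trips * rtt_s

# Assume a moderately complex page update needs ~25 sequential
# request/response exchanges (DNS, TLS, HTML, then dependent API calls).
TRIPS = 25

snappy = interaction_delay_s(TRIPS, 0.020)  # 20 ms RTT: ~0.5 s waiting
laggy = interaction_delay_s(TRIPS, 0.200)   # 200 ms RTT: ~5 s waiting
print(f"20 ms RTT: {snappy:.1f} s, 200 ms RTT: {laggy:.1f} s")
```

The same page feels an order of magnitude slower purely from the latency tax, with no change in bandwidth.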
The performance penalty is not an old vs. young issue. Yes, some people are mentally quicker and sharper. But the latency tax is imposed on the user, and the lower performance it causes will likely be blamed on the user. As the world expands deployment to the underserved, there is very low awareness that some of these deployments have longer latency. These new network users will be penalized by the latency. Most of the world will assume they are slower because of their low digital literacy.
There is another group of people that also pays a high latency tax: subscribers in the less wealthy sections of the city. There I see symptoms of digital redlining: networks configured to be highly oversubscribed (I say, under-provisioned). These subscribers pay a high latency tax. And yes, we can then blame these subscribers for being less productive because of their personal attitudes.
Sorry, this is my latency and digital equity peeve.
Gene
-----------------------------------
Eugene Chang
eugene.chang@alum.mit.edu
+1-781-799-0233 (in Honolulu)
> On Sep 28, 2022, at 11:10 PM, David Fernández via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> I made this video some time ago to illustrate the impact of latency on
> what you can achieve during a web browsing session, depending on the
> latency you have, in this case a MEO vs. a GEO satellite connection
> (delay emulated with netem).
>
> https://www.youtube.com/watch?v=WEl_ud4ME4E
>
> In the end, the latencies accumulate into a delay that makes you take
> more time to achieve the same result.
>
> This was done as part of the ESA MTAILS project:
> https://artes.esa.int/projects/mtails
>
> Off-topic: people's poor performance must first be addressed; if it does
> not improve because of a bad attitude, then you lay them off. But there
> is a whole spectrum of capabilities among people, and everybody should
> be entitled to contribute as much as they can (and get a reward in
> proportion). Otherwise, nobody would employ old people or people with
> disabilities, just sharp and smart people in their thirties. Half of
> the population starts suffering cognitive decline after the age of
> 50... Very cruel.
>
> Regards,
>
> David
>
>> Date: Mon, 26 Sep 2022 19:59:04 +0000
>> From: Eugene Chang <eugene.chang@alum.mit.edu>
>> To: Bruce Perens <bruce@perens.com>
>> Cc: Eugene Chang <eugene.chang@alum.mit.edu>, Dave Taht
>> <dave.taht@gmail.com>, Dave Taht via Starlink
>> <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It's still the starlink latency...
>> Message-ID: <07C46DD5-7359-410E-8820-82B319944618@alum.mit.edu>
>> Content-Type: text/plain; charset="utf-8"
>>
>> The key issue is most people don’t understand why latency matters. They
>> don’t see it or feel its impact.
>>
>> First, we have to help people see the symptoms of latency and how it impacts
>> something they care about.
>> - gamers care but most people may think it is frivolous.
>> - musicians care but that is mostly for a hobby.
>> - business should care because of productivity but they don’t know how to
>> “see” the impact.
>>
>> Second, there needs to be an “OMG, I have been seeing the effects of latency
>> all this time and never knew it! I was being shafted” moment. Once you have this
>> awakening, you can get all the press you want for free.
>>
>> Most of the time when business apps are developed, “we” hide the impact of
>> poor performance (aka latency) or they hide from the discussion because the
>> developers don’t have a way to fix the latency. Maybe businesses don’t care
>> because any employees affected are just considered poor performers. (In bad
>> economic times, the poor performers are just laid off.) For employees, if
>> they happen to be at a location with bad latency, they don’t know that
>> latency is hurting them. Unfair but most people don’t know the issue is
>> latency.
>>
>> Talking and explaining why latency is bad is not as effective as showing why
>> latency is bad. Showing has to be done with something that has a personal impact.
>>
>> Gene
>> -----------------------------------
>> Eugene Chang
>> eugene.chang@alum.mit.edu
>> +1-781-799-0233 (in Honolulu)
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #1.2: Type: text/html, Size: 17484 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-10-17 13:50 David Fernández
@ 2022-10-17 16:53 ` David Fernández
0 siblings, 0 replies; 56+ messages in thread
From: David Fernández @ 2022-10-17 16:53 UTC (permalink / raw)
To: starlink
Another report, linked from the MIT one previously mentioned, is also
very interesting. On page 29 it compiles a list of latency
measurement tools, including flent, which has been mentioned on this
list before:
https://www.bitag.org/latency-explained.php
2022-10-17 15:50 GMT+02:00, David Fernández <davidfdzp@gmail.com>:
> That MIT paper is really interesting, yes, thank you.
>
> Have you ever heard about the Network Performance Score (NPS), based
> on ETSI TR 103 559?
>
> That could be the kind of nutriscore equivalent for networks that they
> are searching for in that paper.
>
> The NPS is proposed for 5G networks, but well, Starlink could be
> scored too, as 5G networks are mobile wireless Internet access
> networks, like Starlink.
>
> https://www.rohde-schwarz.com/nl/solutions/test-and-measurement/mobile-network-testing/network-performance-score/network-performance-score_250678.html
>
> Unfortunately, I am not aware of any tool, besides that one from R&S,
> to measure the NPS or the interactivity score of a network:
> https://www.rohde-schwarz.com/uk/solutions/test-and-measurement/mobile-network-testing/stories-insights/article-interactivity-test-concept-and-kpis-part-2-_253245.html
>
> I am aware only of similar tools to measure interactivity, such as
> these for TWAMP (RFC 5357):
> https://github.com/nokia/twampy
> https://github.com/tcaine/twamp
> https://github.com/emirica/twamp-protocol
>
> Then, the UDP ping: http://perform.wpi.edu/downloads/#udp
>
> Regards,
>
> David
>
>> Date: Thu, 22 Sep 2022 08:26:08 -0700
>> From: Dave Taht <dave.taht@gmail.com>
>> To: warren ponder <wponder11@gmail.com>
>> Cc: Mike Puchol <mike@starlink.sx>, Dave Täht via Starlink
>> <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] It's still the starlink latency...
>> Message-ID:
>> <CAA93jw4V_3qmdcwMnTrRw0x18DSwds1OG1sh0aY3+aL+niSVEA@mail.gmail.com>
>> Content-Type: text/plain; charset="UTF-8"
>>
>> This MIT paper went by today. It's really good.
>>
>> https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4178804
>>
>>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
@ 2022-10-17 13:50 David Fernández
2022-10-17 16:53 ` David Fernández
0 siblings, 1 reply; 56+ messages in thread
From: David Fernández @ 2022-10-17 13:50 UTC (permalink / raw)
To: starlink
That MIT paper is really interesting, yes, thank you.
Have you ever heard about the Network Performance Score (NPS), based
on ETSI TR 103 559?
That could be the kind of nutriscore equivalent for networks that they
are searching for in that paper.
The NPS is proposed for 5G networks, but well, Starlink could be
scored too, as 5G networks are mobile wireless Internet access
networks, like Starlink.
https://www.rohde-schwarz.com/nl/solutions/test-and-measurement/mobile-network-testing/network-performance-score/network-performance-score_250678.html
Unfortunately, I am not aware of any tool, besides that one from R&S,
to measure the NPS or the interactivity score of a network:
https://www.rohde-schwarz.com/uk/solutions/test-and-measurement/mobile-network-testing/stories-insights/article-interactivity-test-concept-and-kpis-part-2-_253245.html
I am aware only of similar tools to measure interactivity, such as
these for TWAMP (RFC 5357):
https://github.com/nokia/twampy
https://github.com/tcaine/twamp
https://github.com/emirica/twamp-protocol
Then, the UDP ping: http://perform.wpi.edu/downloads/#udp
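Since the thread is about measuring interactivity, the essence of such a UDP ping is small enough to sketch. This is a self-contained toy that spins up a local echo server so it can be run anywhere; it is an illustration, not a TWAMP (RFC 5357) implementation:

```python
# Minimal UDP round-trip-time probe in the spirit of the tools above.
import socket
import threading
import time

def udp_echo_server(sock: socket.socket) -> None:
    """Echo a single UDP datagram back to its sender."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

def udp_ping(host: str, port: int, timeout: float = 1.0) -> float:
    """Return the round-trip time in seconds for one UDP probe."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(b"ping", (host, port))
        s.recvfrom(1024)
        return time.monotonic() - start

# Stand up a throwaway local echo server on an OS-assigned port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

rtt = udp_ping("127.0.0.1", port)
print(f"RTT: {rtt * 1000:.3f} ms")  # loopback, so typically well under 1 ms
server.close()
```

Pointing the same probe at a remote echo responder instead of the local thread gives a crude interactivity measurement under load.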
Regards,
David
> Date: Thu, 22 Sep 2022 08:26:08 -0700
> From: Dave Taht <dave.taht@gmail.com>
> To: warren ponder <wponder11@gmail.com>
> Cc: Mike Puchol <mike@starlink.sx>, Dave Täht via Starlink
> <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] It's still the starlink latency...
> Message-ID:
> <CAA93jw4V_3qmdcwMnTrRw0x18DSwds1OG1sh0aY3+aL+niSVEA@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> This MIT paper went by today. It's really good.
>
> https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4178804
>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-28 9:54 ` Sebastian Moeller
@ 2022-09-28 18:49 ` Eugene Y Chang
0 siblings, 0 replies; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-28 18:49 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Eugene Chang, Bruce Perens, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 21932 bytes --]
FYI, I believe the old 150 ms was the threshold at which a person would hear the telephony far-end echo. At the (approx.) 250 ms from the GEOs, the far-end echo would interfere with speaking.
What is interesting is that the speed of sound in air (at STP) is about 1 ms per foot (about 34 cm per ms). On a big stage (say 60 feet across) that is about 60 ms from stage left to stage right. The internet transporting sound would be awesome without jitter.
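The arithmetic behind that comparison, as a quick sketch (the 60-foot stage width is an assumed figure):

```python
# Acoustic propagation delay over distance; sound travels ~343 m/s in
# air near room temperature, i.e. roughly 1 ms per foot.
SPEED_OF_SOUND_M_PER_S = 343.0

def acoustic_delay_ms(distance_m: float) -> float:
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

# A big stage, assumed ~18 m (about 60 ft) from stage left to right:
stage_delay = acoustic_delay_ms(18.0)
print(f"{stage_delay:.0f} ms")  # roughly 52 ms, in line with the 1 ms/ft rule
```

So a low-jitter network path in the tens of milliseconds is comparable to the acoustic delay musicians already tolerate on a large stage.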
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 27, 2022, at 11:54 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Gene,
>
>> On Sep 28, 2022, at 00:46, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>
>> Sebastian,
>> Good to know. I haven’t had time to follow all the work going on globally.
>> I was only commenting on how the ISP support team behaves.
>> How do we bring this better attitude to the US.
>
> [SM] Good question. In the EU it was actually an official EU regulation by the European Council and Parliament (https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32015R2120&from=de) that is the basis for the national regulators to act.
>
>> Sadly the FCC seems stuck accommodating the big ISPs.
>
> [SM] Similarly, in Germany the regulator seems to see itself both as a representative of the end-user and at the same time as a partner of the ISPs, and since ISPs have better communication channels to the regulators, they also often seem to accommodate the big ISPs. However, if the relevant law is clear and unambiguous, they act according to it.
> Not sure though whether "going through the politicians" to improve the directives of the FCC is a better approach, given the state of US politics. However, it should be conceptually easy to convince politicians of the issue; it is not as if they do not use the internet themselves, and they might be open to a demonstration (nicely balanced between the two sides of the aisle to avoid making this a partisan issue).
>
>> Of course, we want the FCC, the ISPs, and the networking world to go beyond speedtest.
>
> [SM] +1; for all my praise for the local regulator's actions, they have also dropped the ball on acceptable latency; they just defined the minimum internet access people living in Germany are entitled to by law (something like 10/1.7 Mbps, but up to 150 ms latency, all measured against the regulator's testing systems in the internet). The rates, while not great, seem OK (as an absolute floor), but 150 ms latency? What were they thinking? I bet this comes from the ITU's old characterization of mouth-to-ear latencies <= 150 ms being OK, ignoring that mouth-to-ear contains more than pure network delay, and that if two such users actually try a VoIP call they end up with 300 ms delay, which is deeply in awkward territory IIRC.
>
> Regards
> Sebastian
>
>
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>>> On Sep 26, 2022, at 9:09 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Gene,
>>>
>>>
>>>> On Sep 27, 2022, at 05:50, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>>
>>>> Of course…. But the ISP’s maxim is "don’t believe the customer’s speedtest numbers if it is not with their host".
>>>
>>> [SM] In an act of reasonable regulation, the EU gave national regulatory agencies the right to define and set up national "speed tests" (located outside of the access ISPs' networks) which ISPs effectively need to accept. In Germany the local NRA (Bundesnetzagentur, BNetzA) created a speedtest application against servers operated on its behalf and will, if a customer demonstrates the ISP falling short of the contracted rates (according to somewhat complicated definition and measurement rules), make ISPs follow the law and either release customers from their contracts immediately or lower the price in proportion to the under-fulfillment. (Unfortunately all related official web pages are in German only.)
>>> This put a stop to the practice of gaming speedtests, like DOCSIS ISPs measuring against an in-segment speedtest server, which conveniently hides whether a segment's uplink is congested... Not all ISPs gamed their speedtests, and even the in-segment speedtest can be justified for some measurements (e.g. when trying to figure out whether congestion is in-segment or out-of-segment), but the temptation must have been large not to set up the most objective speedtest. (We have a saying along the lines of "making a goat your gardener", which generally is considered a sub-optimal approach.)
>>>
>>> Regards
>>> Sebastian
>>>
>>>
>>>>
>>>>
>>>> Gene
>>>> ----------------------------------------------
>>>> Eugene Chang
>>>> IEEE Senior Life Member
>>>> eugene.chang@ieee.org
>>>> 781-799-0233 (in Honolulu)
>>>>
>>>>
>>>>
>>>>> On Sep 26, 2022, at 11:44 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>
>>>>> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
>>>>>
>>>>> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>> Thank you for the dialog.
>>>>> This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>>>>>
>>>>> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as consequences of significant under-provisioning. It doesn’t matter that the ISP is selling a fiber drop if (parts of) their network is under-provisioned. Two end points can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>>>>>
>>>>> Another observation: running speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
>>>>>
>>>>> Anyway, just observing.
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>
>>>>>> Hi Gene,
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>>>>>
>>>>>>> Comments inline below.
>>>>>>>
>>>>>>> Gene
>>>>>>> ----------------------------------------------
>>>>>>> Eugene Chang
>>>>>>> IEEE Senior Life Member
>>>>>>> eugene.chang@ieee.org
>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>>>
>>>>>>>> Hi Eugene,
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>
>>>>>>>>> Ok, we are getting into the details. I agree.
>>>>>>>>>
>>>>>>>>> Every node in the path has to implement this to be effective.
>>>>>>>>
>>>>>>>> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short the problem are over-sized and under-managed buffers, and one of the best solution is better/smarter buffer management).
>>>>>>>>
>>>>>>>
>>>>>>> This is not completely true.
>>>>>>
>>>>>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>>>>>
>>>>>>
>>>>>>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>>>>>
>>>>>> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>>>>>
>>>>>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>>>>>
>>>>>>> The hardest part is getting competing ISPs to implement and coordinate.
>>>>>>
>>>>>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>>>>>
>>>>>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>>>>
>>>>>> [SM] Well, we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
>>>>>>
>>>>>> Regards
>>>>>> Sebastian
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Sebastian
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Gene
>>>>>>>>> ----------------------------------------------
>>>>>>>>> Eugene Chang
>>>>>>>>> IEEE Senior Life Member
>>>>>>>>> eugene.chang@ieee.org
>>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>>>>>
>>>>>>>>>> software updates can do far more than just improve recovery.
>>>>>>>>>>
>>>>>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>>>>>>>>>
>>>>>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>>>>>
>>>>>>>>>> When buffers become excessivly large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>>>>>
>>>>>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>>>>>
>>>>>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real time traffic. Historically this has required the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process), the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>>>>>
>>>>>>>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
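[Editor's note: as a concrete illustration of the fq_codel/cake approach described above, on a Linux router the queue discipline can be switched with tc; the interface name and shaped rate here are illustrative values, not a recommendation for any particular link.]

```shell
# Shape just below the known bottleneck rate so the queue forms where
# cake can manage it. "eth0" and "20mbit" are illustrative values.
tc qdisc replace dev eth0 root cake bandwidth 20mbit

# Without a known rate, flow queueing + AQM alone still helps:
# tc qdisc replace dev eth0 root fq_codel
```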
>>>>>>>>>>
>>>>>>>>>> David Lang
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>>>>>
>>>>>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>>>>>
>>>>>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>>>>>>
>>>>>>>>>>> Gene
>>>>>>>>>>> ----------------------------------------------
>>>>>>>>>>> Eugene Chang
>>>>>>>>>>> IEEE Senior Life Member
>>>>>>>>>>> eugene.chang@ieee.org
>>>>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>>>>>
>>>>>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>>>>>
>>>>>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the time it takes to get a packet through the network during a connection, which is inevitable in satellite networks, but it will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
>>>>>>>>>>>>
>>>>>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu> wrote:
>>>>>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>>>>>>
>>>>>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>>>>>
>>>>>>>>>>>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>>>>>>>
>>>>>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>>>>>
>>>>>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>>>>>>>>>>>
>>>>>>>>>>>> Gene
>>>>>>>>>>>> -----------------------------------
>>>>>>>>>>>> Eugene Chang
>>>>>>>>>>>> eugene.chang@alum.mit.edu
>>>>>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>
>>>>>>>>>>>>> Bruce
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>>>>>> ad in the wapo or NYT riffing off of It's the latency, Stupid!",
>>>>>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>
>>>>>
>>>>> --
>>>>> Bruce Perens K6BP
>>>>
>>>
>>
>
[-- Attachment #1.2: Type: text/html, Size: 26590 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 22:46 ` Eugene Y Chang
@ 2022-09-28 9:54 ` Sebastian Moeller
2022-09-28 18:49 ` Eugene Y Chang
0 siblings, 1 reply; 56+ messages in thread
From: Sebastian Moeller @ 2022-09-28 9:54 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Bruce Perens, Dave Taht via Starlink
Hi Gene,
> On Sep 28, 2022, at 00:46, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>
> Sebastian,
> Good to know. I haven’t had time to follow all the work going on globally.
> I was only commenting on how the ISP support team behaves.
> How do we bring this better attitude to the US?
[SM] Good question. In the EU it was actually an official EU regulation by the European Council and the Parliament (https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32015R2120&from=de) that is the basis for the national regulators to act.
> Sadly the FCC seems stuck accommodating the big ISPs.
[SM] Similarly, in Germany the regulator seems to see itself both as representative of the end-user and at the same time as partner of the ISPs, and since ISPs have better communication channels to the regulator, it often seems to accommodate the big ISPs. However, if the relevant law is clear and unambiguous, the regulator acts according to it.
Not sure though whether "going through the politicians" to improve the directives of the FCC is a better approach, given the state of USian politics. However, it should be conceptually easy to convince politicians of the issue; it is not as if they do not use the internet themselves, and they might be open to a demonstration (nicely balanced between the two sides of the aisle to avoid making this a partisan issue).
> Of course, we want the FCC, the ISPs, and the networking world to go beyond speedtest.
[SM] +1; for all my praise for the local regulator's actions, they also dropped the ball on acceptable latency; they just defined the minimum internet access people living in Germany are entitled to by law (something like 10/1.7 Mbps but up to 150ms latency, all measured against the regulator's testing systems in the internet). The rates, while not great, seem OK as an absolute floor, but 150ms latency? What were they thinking? I bet this comes from the ITU's old characterization of mouth-to-ear latencies <= 150ms being OK, ignoring that mouth-to-ear contains more than pure network delay, and that if two such users actually try a VoIP call they end up with 300ms delay, which is deeply in the awkward territory IIRC.
Regards
Sebastian
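Sebastian's mouth-to-ear arithmetic above can be sketched numerically. The codec, jitter-buffer, and processing figures below are illustrative assumptions, not measurements; the point is only that the ITU target budgets the whole mouth-to-ear path, not just the network:

```python
# Rough mouth-to-ear delay budget for a VoIP call. ITU-T G.114 targets
# <= 150 ms mouth-to-ear, which includes far more than network delay.
CODEC_MS = 20          # packetization delay (assumed 20 ms frames)
JITTER_BUFFER_MS = 40  # receiver de-jitter buffer (assumed)
PROCESSING_MS = 10     # encode/decode, OS, sound card (assumed)

def mouth_to_ear(network_one_way_ms):
    """Total one-way delay from speaker's mouth to listener's ear."""
    return CODEC_MS + JITTER_BUFFER_MS + PROCESSING_MS + network_one_way_ms

# If the regulator tolerates 150 ms of pure one-way *network* delay:
print(mouth_to_ear(150))  # 220 ms, already well past the 150 ms target
```

So even a link that just meets the regulatory 150ms figure leaves no budget at all for the rest of the audio chain.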
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 9:09 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Gene,
>>
>>
>>> On Sep 27, 2022, at 05:50, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>
>>> Of course…. But the ISP’s maxim is "don’t believe the customer’s speedtest numbers if it is not with their host".
>>
>> [SM] In an act of reasonable regulation, the EU gave national regulatory agencies the right to define and set up national "speed tests" (located outside of the access ISPs' networks) which ISPs effectively need to accept. In Germany the local NRA (Bundes-Netz-Agentur, BNetzA) created a speedtest application against servers operated on their behalf and will, if a customer demonstrates the ISP falling short of the contracted rates (according to somewhat complicated definition and measurement rules), compel ISPs to follow the law and either release customers from their contracts immediately or lower the price in proportion to the under-fulfillment. (Unfortunately all related official web pages are in German only.)
>> This put a halt to the practice of gaming speedtests, like DOCSIS ISPs measuring against an in-segment speedtest server, which conveniently hides whether a segment's uplink is congested... Not all ISPs gamed their speedtests, and even the in-segment speedtest can be justified for some measurements (e.g. when trying to figure out whether congestion is in-segment or out-of-segment), but the temptation must have been large not to set up the most objective speedtest. (We have a saying along the lines of "making a goat your gardener", which generally is considered a sub-optimal approach.)
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 11:44 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>
>>>> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
>>>>
>>>> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>> Thank you for the dialog.
>>>> This discussion with regards to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>>>>
>>>> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn't matter that the ISP is selling a fiber drop if (parts of) their network is under-provisioned. Two end points can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>>>>
>>>> Another observation: running a speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
>>>>
>>>> Anyway, just observing.
>>>>
>>>> Gene
>>>> ----------------------------------------------
>>>> Eugene Chang
>>>> IEEE Senior Life Member
>>>> eugene.chang@ieee.org
>>>> 781-799-0233 (in Honolulu)
>>>>
>>>>
>>>>
>>>>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>
>>>>> Hi Gene,
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>>>>
>>>>>> Comments inline below.
>>>>>>
>>>>>> Gene
>>>>>> ----------------------------------------------
>>>>>> Eugene Chang
>>>>>> IEEE Senior Life Member
>>>>>> eugene.chang@ieee.org
>>>>>> 781-799-0233 (in Honolulu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>>
>>>>>>> Hi Eugene,
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>
>>>>>>>> Ok, we are getting into the details. I agree.
>>>>>>>>
>>>>>>>> Every node in the path has to implement this to be effective.
>>>>>>>
>>>>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>>>>>
>>>>>>
>>>>>> This is not completely true.
>>>>>
>>>>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>>>>
>>>>>
>>>>>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>>>>
>>>>> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>>>>
>>>>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>>>>
>>>>>> The hardest part is getting competing ISPs to implement and coordinate.
>>>>>
>>>>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>>>>
>>>>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>>>
>>>>> [SM] Well, we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
>>>>>
>>>>> Regards
>>>>> Sebastian
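A minimal sketch of the sqm-scripts setup Sebastian suggests, in OpenWrt UCI syntax. The interface name and rates are assumptions; measure your own link first and shape to roughly 85-90% of the measured rates:

```shell
# Hypothetical sqm-scripts configuration via UCI on OpenWrt.
# Assumes the default queue section created by the sqm package exists.
uci set sqm.@queue[0].interface='wan'
uci set sqm.@queue[0].download='85000'   # kbit/s, ~85% of measured downstream
uci set sqm.@queue[0].upload='9000'      # kbit/s, ~90% of measured upstream
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci set sqm.@queue[0].enabled='1'
uci commit sqm
/etc/init.d/sqm restart
```

This is a configuration fragment for a live OpenWrt router, not something runnable elsewhere; the same effect can be had through the LuCI SQM page.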
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Sebastian
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> ----------------------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> IEEE Senior Life Member
>>>>>>>> eugene.chang@ieee.org
>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>>>>
>>>>>>>>> software updates can do far more than just improve recovery.
>>>>>>>>>
>>>>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>>>>>>>>
>>>>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>>>>
>>>>>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>>>>
>>>>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>>>>
>>>>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required that the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>>>>
>>>>>>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>>>>>>
>>>>>>>>> David Lang
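David's FIFO-vs-flow-queueing point above can be made concrete with a toy simulation. The scheduler below is a plain per-flow round-robin sketch, not fq_codel itself, and the single-packet-per-tick link is an assumption for illustration:

```python
from collections import deque

def voice_departure_tick(scheduler, bulk=100):
    """Tick at which the single voice packet leaves the bottleneck.
    `bulk` bulk-transfer packets and one voice packet all arrive at
    tick 0; the link transmits exactly one packet per tick."""
    if scheduler == "fifo":
        queue = deque(["bulk"] * bulk + ["voice"])  # strict arrival order
        for tick in range(1, bulk + 2):
            if queue.popleft() == "voice":
                return tick
    else:  # per-flow queues served round-robin (the fq_* family's core idea)
        flows = {"bulk": deque(["b"] * bulk), "voice": deque(["v"])}
        order = deque(["bulk", "voice"])
        for tick in range(1, bulk + 2):
            while not flows[order[0]]:      # skip flows with nothing queued
                order.rotate(-1)
            flow = order.popleft()
            flows[flow].popleft()           # transmit one packet
            order.append(flow)
            if flow == "voice":
                return tick

print(voice_departure_tick("fifo"))  # 101: voice waits behind the whole burst
print(voice_departure_tick("fq"))    # 2: the bulk burst no longer blocks voice
```

The FIFO voice packet eats the entire bulk backlog as queueing delay, while under flow queueing its delay is independent of how much the bulk flow dumped into the buffer.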
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>>>>
>>>>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>>>>
>>>>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>>>>>
>>>>>>>>>> Gene
>>>>>>>>>> ----------------------------------------------
>>>>>>>>>> Eugene Chang
>>>>>>>>>> IEEE Senior Life Member
>>>>>>>>>> eugene.chang@ieee.org
>>>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>>>>
>>>>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>>>>
>>>>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>>>>>
>>>>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>>>>>
>>>>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>>>>
>>>>>>>>>>> Second, there needs to be an “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>>>>>>
>>>>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>>>>
>>>>>>>>>>> Talking about and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to involve something that has a personal impact.
>>>>>>>>>>>
>>>>>>>>>>> Gene
>>>>>>>>>>> -----------------------------------
>>>>>>>>>>> Eugene Chang
>>>>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>>>>
>>>>>>>>>>>> Right now I am concerned that Starlink's latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>>
>>>>>>>>>>>> Bruce
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>>>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>>>>
>>>>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>>>
>>>> --
>>>> Bruce Perens K6BP
>>>
>>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 13:55 ` Dave Taht
@ 2022-09-28 0:20 ` Eugene Y Chang
0 siblings, 0 replies; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-28 0:20 UTC (permalink / raw)
To: Dave Taht; +Cc: Eugene Chang, Sebastian Moeller, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 18511 bytes --]
Yup, all true.
I got lots of things to rant about too.
Now, Mr. ISP, you signed me up for 1Gbps service. You told me it would fix my performance problem. You bundled a kit with the Wi-Fi gateway (router). And you give me bad support if I provide my own router. So why should I have to know the stuff that Dave is telling me?
Yeah, and when I first complained about sub-300Mbps service on my 1Gbps service, after 3 months of helping the ISP support escalate the problem, they eventually reported “oh, we didn’t notice the OLT was overloaded”. Eh, don’t they have equipment monitoring and all that basic stuff? Then they reported it will take 6 months to get a new OLT shelf added. …. So lame. So sad. Deliberately incompetent because it saves them money.
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 27, 2022, at 3:55 AM, Dave Taht <dave.taht@gmail.com> wrote:
>
> On Mon, Sep 26, 2022 at 11:36 PM Sebastian Moeller via Starlink
> <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>
>> Worse, I waited 9 months for them to resolve why I was only getting 300Mbps on my 1Gbps service (oops, sorry, my 700Mbps service).
>
> Nothing I've seen to date below $400 can actually push a gbit
> in both directions at the same time, with
> NAT, firewalling, etc. A lot of gear can't even push a gbit in one direction.
>
> The long-accepted fallacy that a gigabit "router" is just a gigabit
> switch bothers me, but it is just one piece of a long line of
> Orwellian doublethink about how the net works, one so pervasive that
> we ended up publishing:
>
> https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305 <https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305>
>
> I've seen an ostensibly serious proposal that budgeted about $4k/sub
> for a symmetric gig fiber rollout that budgeted...
> wait for it... $75 for the router, and other proposals that
> wanted to punt the firewalling etc to a more centralized server, and
> just put a switch at the sub.
>
>> [SM] I guess mass market internet access is a low margin business, where expert debugging time is precious. I hope they at least gave you a rebate for that time.
>
> Why is it so few take packet captures anymore? (I think I will rant on
> that at more length elsewhere)
>
>>
>> Regards
>> Sebastian
>>
>>
>>
>>>
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 11:29 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>
>>>> Hi David,
>>>>
>>>>> On Sep 26, 2022, at 23:22, David Lang <david@lang.hm> wrote:
>>>>>
>>>>> On Mon, 26 Sep 2022, Eugene Y Chang wrote:
>>>>>
>>>>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>>
>>>>>>> Hi Eugene,
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>
>>>>>>>> Ok, we are getting into the details. I agree.
>>>>>>>>
>>>>>>>> Every node in the path has to implement this to be effective.
>>>>>>>
>>>>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>>>>>
>>>>>>
>>>>>> This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>>>>
>>>>> only if node N and node N-1 handle the same traffic with the same link speeds. In practice this is almost never the case.
>>>>
>>>> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below the true bottleneck (thereby creating a new, true but artificial bottleneck, so the queue develops at a point where we can control it).
>>>> Also, if the difference between the "true structural" and the artificial bottleneck is small in comparison to the traffic inrush, we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved links that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
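The artificial-bottleneck idea Sebastian describes can be sketched with standard Linux tc commands. The interface names and the 90mbit figure are assumptions; the rate must sit somewhat below the measured true bottleneck for the queue to form where cake can manage it:

```shell
# Hypothetical ingress-shaping sketch: redirect inbound traffic on eth0
# through an IFB pseudo-device, then shape it to ~90% of the measured
# downstream rate so queueing happens here instead of upstream.
ip link add ifb0 type ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root cake bandwidth 90mbit ingress
```

These commands require root on a Linux router with the ifb and sch_cake modules; sqm-scripts automates essentially this setup.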
>>>>
>>>>
>>>> Until you get to gigabit last-mile links, the last mile is almost always the bottleneck from both sides, so implementing cake on the home router makes a huge improvement (and if you can get it on the last-mile ISP router, even better). Once you get into the Internet fabric, bottlenecks are fairly rare; they do happen, but ISPs carefully watch for those and add additional paths and/or increase bandwidth to avoid them.
>>>>
>>>> [SM] Well, sometimes such links are congested too for economic reasons...
>>>>
>>>> Regards
>>>> Sebastian
>>>>
>>>>
>>>>>
>>>>> David Lang
>>>>>
>>>>>>>
>>>>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>>>>
>>>>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>>>>
>>>>>> The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Sebastian
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> ----------------------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> IEEE Senior Life Member
>>>>>>>> eugene.chang@ieee.org
>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>>>>
>>>>>>>>> software updates can do far more than just improve recovery.
>>>>>>>>>
>>>>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>>>>>>>>
>>>>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>>>>
>>>>>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>>>>
>>>>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>>>>
>>>>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>>>>
>>>>>>>>> The one thing that Cake needs to work really well is knowledge of the available data rate. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
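The FIFO-vs-flow-queueing behaviour David Lang describes can be illustrated with a toy discrete-time simulation. This is a sketch only: real fq_codel/cake do byte-based DRR plus CoDel AQM, none of which is modelled here, and all the numbers are invented.

```python
from collections import deque

# Toy link that serves one packet per tick. A bulk flow dumps a burst at
# t=0; a "voice" flow offers one packet every 5 ticks. We compare the
# voice packets' queueing delay under plain FIFO vs. per-flow round-robin
# (the core idea behind flow queueing, heavily simplified).

BULK_BURST = 30      # packets dumped into the buffer at t=0
VOICE_PERIOD = 5     # one voice packet every 5 ticks
SIM_TICKS = 60

def arrivals(t):
    pkts = [("bulk", t)] * BULK_BURST if t == 0 else []
    if t % VOICE_PERIOD == 0:
        pkts.append(("voice", t))
    return pkts

def run(scheduler):
    fifo = deque()
    flows = {"voice": deque(), "bulk": deque()}
    rr = ["voice", "bulk"]          # round-robin order over flows
    voice_delays = []
    for t in range(SIM_TICKS):
        for pkt in arrivals(t):
            fifo.append(pkt)
            flows[pkt[0]].append(pkt)
        # serve exactly one packet this tick
        pkt = None
        if scheduler == "fifo":
            if fifo:
                pkt = fifo.popleft()
        else:  # alternate between flows that have packets queued
            for _ in range(len(rr)):
                f = rr.pop(0)
                rr.append(f)
                if flows[f]:
                    pkt = flows[f].popleft()
                    break
        if pkt and pkt[0] == "voice":
            voice_delays.append(t - pkt[1])
    return voice_delays

fifo_delays = run("fifo")
fq_delays = run("fq")
print("FIFO voice delays (ticks):", fifo_delays)
print("FQ   voice delays (ticks):", fq_delays)
```

Under FIFO the first voice packet waits behind the entire bulk burst; under per-flow round-robin every voice packet goes out immediately while the bulk transfer drains in between, which is exactly the "video call chunk gets in sooner" effect described in the message.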
>>>>>>>>>
>>>>>>>>> David Lang
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>>>>
>>>>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>>>>
>>>>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>>>>>>>>>>
>>>>>>>>>> Gene
>>>>>>>>>> ----------------------------------------------
>>>>>>>>>> Eugene Chang
>>>>>>>>>> IEEE Senior Life Member
>>>>>>>>>> eugene.chang@ieee.org
>>>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>>>>
>>>>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>>>>
>>>>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just a matter of using a newer version, but there are some tunable parameters. Jitter is variation, during a connection, in how long packets take to get through the network; some jitter is inevitable in satellite networks, but it will be reduced by the bufferbloat-fighting software, and probably by the addition of more satellites.
>>>>>>>>>>>
>>>>>>>>>>> "We've done all of the work; SpaceX just needs to adopt it by upgrading their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>>>>> Open Source luminary Bruce Perens said: "Sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. The military is experimenting with remote control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies."
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>>>>>
>>>>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>>>>
>>>>>>>>>>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>>>>>>
>>>>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>>>>
>>>>>>>>>>> Talking about and explaining why latency is bad is not as effective as showing why latency is bad. The showing has to involve something that has a personal impact.
>>>>>>>>>>>
>>>>>>>>>>> Gene
>>>>>>>>>>> -----------------------------------
>>>>>>>>>>> Eugene Chang
>>>>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>>>>
>>>>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>>
>>>>>>>>>>>> Bruce
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>>>>> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>>>>
>>>>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/ <https://blog.cerowrt.org/post/state_of_fq_codel/>
> Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 41823 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 7:09 ` Sebastian Moeller
@ 2022-09-27 22:46 ` Eugene Y Chang
2022-09-28 9:54 ` Sebastian Moeller
0 siblings, 1 reply; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-27 22:46 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Eugene Chang, Bruce Perens, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 18775 bytes --]
Sebastian,
Good to know. I haven’t had time to follow all the work going on globally.
I was only commenting on how the ISP support team behaves.
How do we bring this better attitude to the US?
Sadly the FCC seems stuck accommodating the big ISPs.
Of course, we want the FCC, the ISPs, and the networking world to go beyond speedtest.
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 9:09 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Gene,
>
>
>> On Sep 27, 2022, at 05:50, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>
>> Of course…. But the ISP’s maxim is "don’t believe the customer’s speedtest numbers if it is not with their host".
>
> [SM] In an act of reasonable regulation, the EU gave national regulatory agencies the right to define and set up national "speed tests" (located outside the access ISPs' networks) which ISPs effectively need to accept. In Germany the local NRA (Bundes-Netz-Agentur, BNetzA) created a speedtest application against servers operated on their behalf and will, if a customer demonstrates that the ISP falls short of the contracted rates (according to somewhat complicated definition and measurement rules), compel the ISP to follow the law and either release the customer from the contract immediately or lower the price in proportion to the under-fulfillment. (Unfortunately all related official web pages are in German only.)
> This put a hold on the practice of gaming speedtests, like DOCSIS ISPs measuring against an in-segment speedtest server, which conveniently hides whether a segment's uplink is congested... Not all ISPs gamed their speedtests, and even the in-segment speedtest can be justified for some measurements (e.g. when trying to figure out whether congestion is in-segment or out-of-segment), but the temptation must have been large not to set up the most objective speedtest. (We have a saying along the lines of "making a goat your gardener", which generally is considered a sub-optimal approach.)
>
> Regards
> Sebastian
>
>
>>
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>>> On Sep 26, 2022, at 11:44 AM, Bruce Perens <bruce@perens.com> wrote:
>>>
>>> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
>>>
>>> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> Thank you for the dialog.
>>> This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>>>
>>> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn’t matter that the ISP is selling a fiber drop if (parts of) their network is under-provisioned. Two end points can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>>>
>>> Another observation: running speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
>>>
>>> Anyway, just observing.
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>
>>>> Hi Gene,
>>>>
>>>>
>>>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>>>
>>>>> Comments inline below.
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>
>>>>>> Hi Eugene,
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>
>>>>>>> Ok, we are getting into the details. I agree.
>>>>>>>
>>>>>>> Every node in the path has to implement this to be effective.
>>>>>>
>>>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
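The cost of an over-sized, under-managed buffer at the bottleneck reduces to simple arithmetic: a full drop-tail buffer adds queued_bytes / link_rate of delay for everything behind it. A small sketch, with figures that are purely illustrative rather than measurements of any particular link:

```python
# Worst-case sojourn time of a full drop-tail buffer: every packet behind
# the queue waits  buffer_bytes / link_rate  seconds.

def queue_delay_ms(buffer_bytes, link_mbps):
    """Delay added by a full buffer in front of a link, in milliseconds."""
    link_bytes_per_s = link_mbps * 1e6 / 8
    return buffer_bytes / link_bytes_per_s * 1000

# An (assumed) 1 MB buffer in front of a 10 Mbit/s uplink adds ~800 ms
# when full:
print(queue_delay_ms(1_000_000, 10))

# The same buffer at 100 Mbit/s only adds ~80 ms. A fixed buffer size can
# be tolerable at one rate and disastrous at another, which matters for
# links (like Starlink's) whose rate varies constantly:
print(queue_delay_ms(1_000_000, 100))
```

This is why smarter buffer management at the bottleneck (keeping the standing queue short) pays off so much more than tweaks elsewhere in the path.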
>>>>>>
>>>>>
>>>>> This is not completely true.
>>>>
>>>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>>>
>>>>
>>>>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>>>
>>>> [SM] It is the node that builds up the queue that profits most from better queue management... (again I generalize: the node holding the queue itself probably does not care all that much, but the endpoints will profit if the queue-experiencing node deals with that queue more gracefully).
>>>>
>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>>>
>>>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>>>
>>>>> The hardest part is getting competing ISPs to implement and coordinate.
>>>>
>>>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>>>
>>>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>>
>>>> [SM] Well, we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
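The cake-autorate idea mentioned here boils down to a feedback loop: shrink the shaper rate when measured RTT rises well above the idle baseline, grow it slowly otherwise. A heavily simplified sketch follows; the real project runs as scripts on OpenWrt with multiple reflectors, filtering and hysteresis, and every constant below is invented for illustration.

```python
# Toy autorate controller: back the shaper off when RTT indicates
# bufferbloat, probe upward gently when the link looks healthy.

BASELINE_RTT_MS = 40.0
RTT_THRESH_MS = 15.0        # delay above baseline that signals bloat
DECREASE_FACTOR = 0.9
INCREASE_FACTOR = 1.02
MIN_RATE, MAX_RATE = 5.0, 100.0   # shaper bounds in Mbit/s

def next_rate(current_rate, measured_rtt_ms):
    if measured_rtt_ms > BASELINE_RTT_MS + RTT_THRESH_MS:
        rate = current_rate * DECREASE_FACTOR   # bloat: back off
    else:
        rate = current_rate * INCREASE_FACTOR   # healthy: probe upward
    return max(MIN_RATE, min(MAX_RATE, rate))

# Feed it a bloat episode: RTT spikes, the controller backs off, then
# recovers as the RTT returns to baseline.
rate = 50.0
for rtt in [42, 45, 120, 150, 90, 48, 44, 43]:
    rate = next_rate(rate, rtt)
    print(f"rtt={rtt:5.0f} ms -> shaper={rate:6.2f} Mbit/s")
```

The point of the sketch is the control structure, not the constants: on a variable-rate link like Starlink, continuously steering the shaper from delay measurements is what keeps cake's rate assumption close to reality.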
>>>>
>>>> Regards
>>>> Sebastian
>>>>
>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Sebastian
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Gene
>>>>>>> ----------------------------------------------
>>>>>>> Eugene Chang
>>>>>>> IEEE Senior Life Member
>>>>>>> eugene.chang@ieee.org
>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>>>
>>>>>>>> software updates can do far more than just improve recovery.
>>>>>>>>
>>>>>>>> In practice, large data transfers are less sensitive to latency than smaller ones (e.g. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>>>>>>
>>>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>>>
>>>>>>>> When buffers become excessively large, you have the situation where a video call generates a small amount of data at a regular interval, but a bulk data transfer can dump a huge amount of data into the buffer instantly.
>>>>>>>>
>>>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>>>
>>>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>>>
>>>>>>>> The one thing that Cake needs to work really well is knowledge of the available data rate. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
>>>>>>>>
>>>>>>>> David Lang
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>>>
>>>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>>>
>>>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>>>>>>>>>
>>>>>>>>> Gene
>>>>>>>>> ----------------------------------------------
>>>>>>>>> Eugene Chang
>>>>>>>>> IEEE Senior Life Member
>>>>>>>>> eugene.chang@ieee.org
>>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>>>
>>>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>>>
>>>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just a matter of using a newer version, but there are some tunable parameters. Jitter is variation, during a connection, in how long packets take to get through the network; some jitter is inevitable in satellite networks, but it will be reduced by the bufferbloat-fighting software, and probably by the addition of more satellites.
>>>>>>>>>>
>>>>>>>>>> "We've done all of the work; SpaceX just needs to adopt it by upgrading their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>>>> Open Source luminary Bruce Perens said: "Sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. The military is experimenting with remote control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies."
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>>>>
>>>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>>>
>>>>>>>>>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>>>>>
>>>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>>>
>>>>>>>>>> Talking about and explaining why latency is bad is not as effective as showing why latency is bad. The showing has to involve something that has a personal impact.
>>>>>>>>>>
>>>>>>>>>> Gene
>>>>>>>>>> -----------------------------------
>>>>>>>>>> Eugene Chang
>>>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>>>
>>>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>>
>>>>>>>>>>> Bruce
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>>>> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>>>
>>>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Bruce Perens K6BP
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>> --
>>> Bruce Perens K6BP
>>
>
[-- Attachment #1.2: Type: text/html, Size: 23138 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:28 ` warren ponder
2022-09-26 21:34 ` Bruce Perens
@ 2022-09-27 16:14 ` Dave Taht
1 sibling, 0 replies; 56+ messages in thread
From: Dave Taht @ 2022-09-27 16:14 UTC (permalink / raw)
To: warren ponder; +Cc: Eugene Y Chang, Dave Taht via Starlink
On Mon, Sep 26, 2022 at 2:28 PM warren ponder <wponder11@gmail.com> wrote:
>
> Dave what do you need in order to add sites to the data collection. Feel free to reply separate or link to a previous thread
Hey, thx for being willing to wade in! Simplest place to start is with
packet captures of a long running iperf/iperf3/netperf, or even
speedtest.net, up and down, and tear 'em apart in wireshark or tcptrace
-G & xplot.org
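A minimal version of the resulting analysis is just comparing idle RTT against RTT under load. The sketch below uses invented sample values; in practice you would collect them with ping or irtt while an iperf3 transfer saturates the link:

```python
import statistics

# Latency-under-load in its simplest form: the difference between the
# median RTT of an idle link and the median RTT while the link is
# saturated is the delay the bottleneck queue is adding. Sample values
# here are made up for illustration.

idle_rtt_ms = [38, 41, 39, 40, 42, 38, 40]
loaded_rtt_ms = [180, 220, 250, 300, 210, 270, 240]

idle_med = statistics.median(idle_rtt_ms)
loaded_med = statistics.median(loaded_rtt_ms)
bloat_ms = loaded_med - idle_med

print(f"idle median RTT:   {idle_med:.0f} ms")
print(f"loaded median RTT: {loaded_med:.0f} ms")
print(f"latency under load adds ~{bloat_ms:.0f} ms of queueing delay")
```

Tools like flent automate exactly this comparison (and plot it over time); the number that matters is the delta, not either RTT on its own.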
Here's some history for y'all:
https://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/
We've discussed a lot of tools on this list, my preferred one is
flent, which lets you see details without needing packet captures. We
have some great plots of irtt with a 3ms interval that I'd love to be
doing more regularly, worldwide.
Over here I'd listed a bunch of (to me at least) useful research
topics that I'd hoped would attract a student or three to pursue more
fully. This was the draft plan we'd had tackling ALL the problems as
we saw them then. Finding some aspect of this we could work on together
as a group would be great.
https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit#
> Thx
>
> WP
>
> On Mon, Sep 26, 2022, 2:10 PM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> I tend to cite rfc7567 (published 2015) a lot, which replaces rfc2309
>> (published 1998!).
>>
>> Thing is, long before that, I'd come to the conclusion that fair
>> queuing was a requirement for
>> sustaining the right throughput for low rate flows in wildly variable
>> bandwidth. At certain places in
>> LTE/5g/starlink networks the payload is encrypted and the header info
>> required is unavailable, and my advocacy of fq is certainly not shared by
>> everyone.
>>
>> We don't know enough about the actual points of congestion in starlink
>> to know if fq could be applied,
>> and although aqm is a very good idea everywhere, is also largely
>> undeployed where it would matter most.
>>
>> I focused my initial analysis of starlink on just uplink congestion,
>> which I believe can be easily improved given about 20 minutes with a
>> cross compiler for the dishy. We have a very good proof of concept as
>> to how to improve Starlink's behavior over here:
>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379/87 and
>> ironically the same script could be run on their router as it is based
>> on a 6 year old version of openwrt in the first place.
>>
>> I have plenty of data later than this (
>> https://docs.google.com/document/d/1puRjUVxJ6cCv-rgQ_zn-jWZU9ae0jZbFATLf4PQKblM/edit
>> ) but I would like to be collecting it from at least six sites around
>> the world.
>>
>> On Mon, Sep 26, 2022 at 1:54 PM Eugene Y Chang via Starlink
>> <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Ok, we are getting into the details. I agree.
>> >
>> > Every node in the path has to implement this to be effective.
>> > In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>> >
>> > Gene
>> > ----------------------------------------------
>> > Eugene Chang
>> > IEEE Senior Life Member
>> > eugene.chang@ieee.org
>> > 781-799-0233 (in Honolulu)
>> >
>> >
>> >
>> > On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>> >
>> > software updates can do far more than just improve recovery.
>> >
>> > In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs. a video conference), so software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>> >
>> > (the example below is not completely accurate, but I think it gets the point across)
>> >
>> > When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>> >
>> > If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>> >
>> > But the software can prevent the one app from hogging so much of the connection and let the chunk of video call through sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>> >
>> > The one thing that Cake needs to work really well is to know what data rate is available. With Starlink this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
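David Lang's FIFO-vs-fair-queuing example above can be sketched as a toy model. Packet counts and service times here are invented for illustration, and real fq_codel/cake behavior is considerably more subtle:

```python
# Toy model of the example above: a periodic "call" packet competing with
# a bulk flow that dumps a burst into the queue. All numbers illustrative.

LINK_US_PER_PKT = 1000           # service time per packet, microseconds

def first_call_delay_us(queue_order):
    """Queuing delay (us) of the first 'call' packet in the service order."""
    for served, pkt in enumerate(queue_order):
        if pkt == "call":
            return served * LINK_US_PER_PKT
    return None

bulk_burst = ["bulk"] * 500       # bulk sender dumps 500 packets at once
fifo = bulk_burst + ["call"]      # FIFO: the call packet waits behind the burst
fq = ["bulk", "call"] + bulk_burst[1:]  # per-flow round robin lets it jump ahead

print(first_call_delay_us(fifo))  # -> 500000 (half a second behind the burst)
print(first_call_delay_us(fq))    # -> 1000   (one packet's worth of wait)
```

The point is not the exact numbers but the shape: under FIFO the interactive flow's delay scales with the bulk flow's burst size; under flow queuing it does not.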
>> >
>> > David Lang
>> >
>> >
>> > On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>> >
>> > You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>> >
>> > If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>> >
>> > Gene
>> > ----------------------------------------------
>> > Eugene Chang
>> > IEEE Senior Life Member
>> > eugene.chang@ieee.org
>> > 781-799-0233 (in Honolulu)
>> >
>> >
>> >
>> > On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>> >
>> > Please help to explain. Here's a draft to start with:
>> >
>> > Starlink Performance Not Sufficient for Military Applications, Say Scientists
>> >
>> > The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection; it is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
>> >
>> > We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>> > Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>> >
>> > On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>> > The key issue is most people don't understand why latency matters. They don't see it or feel its impact.
>> >
>> > First, we have to help people see the symptoms of latency and how it impacts something they care about.
>> > - gamers care but most people may think it is frivolous.
>> > - musicians care but that is mostly for a hobby.
>> > - business should care because of productivity but they don’t know how to “see” the impact.
>> >
>> > Second, there needs to be a “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
>> >
>> > Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>> >
>> > Talking and explaining why latency is bad is not as effective as showing why latency is bad. The showing has to be with something that has a personal impact.
>> >
>> > Gene
>> > -----------------------------------
>> > Eugene Chang
>> > eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>> > +1-781-799-0233 (in Honolulu)
>> >
>> >
>> >
>> >
>> >
>> > On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>> >
>> > If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>> >
>> > Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>> >
>> > Thanks
>> >
>> > Bruce
>> >
>> > On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>> > These days, if you want attention, you gotta buy it. A 50k half page
>> > ad in the wapo or NYT riffing off of It's the latency, Stupid!",
>> > signed by the kinds of luminaries we got for the fcc wifi fight, would
>> > go a long way towards shifting the tide.
>> >
>> > On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>> >
>> >
>> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> > <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>> >
>> >
>> > The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>> >
>> >
>> > That's a great idea. I have visions of crashing the washington
>> > correspondents dinner, but perhaps
>> > there is some set of gatherings journalists regularly attend?
>> >
>> >
>> > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>> >
>> > I still find it remarkable that reporters are still missing the
>> > meaning of the huge latencies for starlink, under load.
>> >
>> >
>> >
>> >
>> > --
>> > FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>> > Dave Täht CEO, TekLibre, LLC
>> >
>> >
>> >
>> >
>> > --
>> > FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>> > Dave Täht CEO, TekLibre, LLC
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> > https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>> >
>> >
>> > --
>> > Bruce Perens K6BP
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> > https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>> >
>> >
>> >
>> >
>> > --
>> > Bruce Perens K6BP
>> >
>> >
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 6:36 ` Sebastian Moeller
@ 2022-09-27 13:55 ` Dave Taht
2022-09-28 0:20 ` Eugene Y Chang
0 siblings, 1 reply; 56+ messages in thread
From: Dave Taht @ 2022-09-27 13:55 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Eugene Y Chang, Dave Taht via Starlink
On Mon, Sep 26, 2022 at 11:36 PM Sebastian Moeller via Starlink
<starlink@lists.bufferbloat.net> wrote:
> Worst, I waited 9 months for them to resolve why I was only getting 300Mbps on my 1Gbps service (oops, sorry my 700Mbps service).
Nothing I've seen to date below $400 can actually push a gbit
in both directions at the same time with NAT, firewalling, etc. A lot
of gear can't even push a gbit in one direction.
The long-accepted fallacy that a gigabit "router" is just a gigabit
switch bothers me, but it is just one piece of a long line of
orwellian doublethink about how the net works, so pervasive that we
ended up publishing:
https://forum.openwrt.org/t/so-you-have-500mbps-1gbps-fiber-and-need-a-router-read-this-first/90305
I've seen an ostensibly serious proposal for a symmetric gig fiber
rollout that budgeted about $4k/sub and...
wait for it... $75 for the router, and other proposals that
wanted to punt the firewalling etc. to a more centralized server and
just put a switch at the sub.
> [SM] I guess mass market internet access is a low margin business, where expert debugging time is precious. I hope they at least gave you a rebate for that time.
Why is it that so few people take packet captures anymore? (I think I
will rant on that at more length elsewhere.)
>
> Regards
> Sebastian
>
>
>
> >
> >
> >Gene
> >----------------------------------------------
> >Eugene Chang
> >IEEE Senior Life Member
> >eugene.chang@ieee.org
> >781-799-0233 (in Honolulu)
> >
> >
> >
> >> On Sep 26, 2022, at 11:29 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> >>
> >> Hi David,
> >>
> >>> On Sep 26, 2022, at 23:22, David Lang <david@lang.hm> wrote:
> >>>
> >>> On Mon, 26 Sep 2022, Eugene Y Chang wrote:
> >>>
> >>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> >>>>>
> >>>>> Hi Eugene,
> >>>>>
> >>>>>
> >>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
> >>>>>>
> >>>>>> Ok, we are getting into the details. I agree.
> >>>>>>
> >>>>>> Every node in the path has to implement this to be effective.
> >>>>>
> >>>>> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short the problem are over-sized and under-managed buffers, and one of the best solution is better/smarter buffer management).
> >>>>>
> >>>>
> >>>> This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
> >>>
> >>> only if node N and node N-1 handle the same traffic with the same link speeds. In practice this is almost never the case.
> >>
> >> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below the true bottleneck rate (thereby creating a new, true-but-artificial bottleneck, so the queue develops at a point where we can control it).
> >> Also, if the difference between the "true structural" and the artificial bottleneck is small in comparison to the traffic inrush, we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved traffic that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
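Sebastian's point about moving the queue can be sketched with toy rates (all numbers illustrative): as long as the artificial bottleneck is set below the structural one, overload queues at the shaper we control rather than in the dumb upstream buffer.

```python
# Sketch: shaping ingress to just below the true bottleneck rate moves the
# standing queue to a box we control. Rates in Mbit/s, all illustrative.

bottleneck = 100.0   # ISP's true structural bottleneck
shaper = 95.0        # our artificial bottleneck, set slightly below it
inrush = 120.0       # senders briefly overshoot at this aggregate rate

# Queue growth rate (Mbit per second of overload) at each box:
at_shaper = inrush - shaper                        # queue builds here...
at_bottleneck = min(inrush, shaper) - bottleneck   # ...not downstream

print(at_shaper)      # -> 25.0
print(at_bottleneck)  # -> -5.0 (the under-managed upstream buffer stays empty)
```

The back-spill case Sebastian mentions is when the inrush is so large relative to the 5% margin that the shaper's own queue overflows upstream anyway.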
> >>
> >>
> >>> Until you get to gigabit last-mile links, the last mile is almost always the bottleneck from both sides, so implementing cake on the home router makes a huge improvement (and if you can get it on the last-mile ISP router, even better). Once you get into the Internet fabric, bottlenecks are fairly rare, they do happen, but ISPs carefully watch for those and add additional paths and/or increase bandwith to avoid them.
> >>
> >> [SM] Well, sometimes such links are congested too for economic reasons...
> >>
> >> Regards
> >> Sebastian
> >>
> >>
> >>>
> >>> David Lang
> >>>
> >>>>>
> >>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
> >>>>>
> >>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
> >>>>
> >>>> The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
> >>>>
> >>>>
> >>>>
> >>>>>
> >>>>> Regards
> >>>>> Sebastian
> >>>>>
> >>>>>
> >>>>>>
> >>>>>> Gene
> >>>>>> ----------------------------------------------
> >>>>>> Eugene Chang
> >>>>>> IEEE Senior Life Member
> >>>>>> eugene.chang@ieee.org
> >>>>>> 781-799-0233 (in Honolulu)
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
> >>>>>>>
> >>>>>>> software updates can do far more than just improve recovery.
> >>>>>>>
> >>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs. a video conference), so software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
> >>>>>>>
> >>>>>>> (the example below is not completely accurate, but I think it gets the point across)
> >>>>>>>
> >>>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
> >>>>>>>
> >>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
> >>>>>>>
> >>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call through sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
> >>>>>>>
> >>>>>>> The one thing that Cake needs to work really well is to know what data rate is available. With Starlink this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
> >>>>>>>
> >>>>>>> David Lang
> >>>>>>>
> >>>>>>>
> >>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
> >>>>>>>
> >>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
> >>>>>>>>
> >>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
> >>>>>>>>
> >>>>>>>> Gene
> >>>>>>>> ----------------------------------------------
> >>>>>>>> Eugene Chang
> >>>>>>>> IEEE Senior Life Member
> >>>>>>>> eugene.chang@ieee.org
> >>>>>>>> 781-799-0233 (in Honolulu)
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
> >>>>>>>>>
> >>>>>>>>> Please help to explain. Here's a draft to start with:
> >>>>>>>>>
> >>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
> >>>>>>>>>
> >>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection; it is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
> >>>>>>>>>
> >>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
> >>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
> >>>>>>>>>
> >>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
> >>>>>>>>> The key issue is most people don't understand why latency matters. They don't see it or feel its impact.
> >>>>>>>>>
> >>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
> >>>>>>>>> - gamers care but most people may think it is frivolous.
> >>>>>>>>> - musicians care but that is mostly for a hobby.
> >>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
> >>>>>>>>>
> >>>>>>>>> Second, there needs to be a “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
> >>>>>>>>>
> >>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
> >>>>>>>>>
> >>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. The showing has to be with something that has a personal impact.
> >>>>>>>>>
> >>>>>>>>> Gene
> >>>>>>>>> -----------------------------------
> >>>>>>>>> Eugene Chang
> >>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
> >>>>>>>>> +1-781-799-0233 (in Honolulu)
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
> >>>>>>>>>>
> >>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
> >>>>>>>>>>
> >>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
> >>>>>>>>>>
> >>>>>>>>>> Thanks
> >>>>>>>>>>
> >>>>>>>>>> Bruce
> >>>>>>>>>>
> >>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
> >>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
> >>>>>>>>>> ad in the wapo or NYT riffing off of It's the latency, Stupid!",
> >>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
> >>>>>>>>>> go a long way towards shifting the tide.
> >>>>>>>>>>
> >>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> >>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
> >>>>>>>>>>>
> >>>>>>>>>>> That's a great idea. I have visions of crashing the washington
> >>>>>>>>>>> correspondents dinner, but perhaps
> >>>>>>>>>>> there is some set of gatherings journalists regularly attend?
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> I still find it remarkable that reporters are still missing the
> >>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> --
> >>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
> >>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> --
> >>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
> >>>>>>>>>> Dave Täht CEO, TekLibre, LLC
> >>>>>>>>>> _______________________________________________
> >>>>>>>>>> Starlink mailing list
> >>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> >>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> --
> >>>>>>>>>> Bruce Perens K6BP
> >>>>>>>>>> _______________________________________________
> >>>>>>>>>> Starlink mailing list
> >>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> >>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> --
> >>>>>>>>> Bruce Perens K6BP
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> Starlink mailing list
> >>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> >>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
> >
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 3:50 ` Eugene Y Chang
@ 2022-09-27 7:09 ` Sebastian Moeller
2022-09-27 22:46 ` Eugene Y Chang
0 siblings, 1 reply; 56+ messages in thread
From: Sebastian Moeller @ 2022-09-27 7:09 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Bruce Perens, Dave Taht via Starlink
Hi Gene,
> On Sep 27, 2022, at 05:50, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>
> Of course…. But the ISP’s maxim is "don’t believe the customer’s speedtest numbers if it is not with their host".
[SM] In an act of reasonable regulation, the EU gave national regulatory agencies the right to define and set up national "speed tests" (located outside the access ISPs' networks) which ISPs effectively need to accept. In Germany the NRA (Bundesnetzagentur, BNetzA) created a speedtest application, run against servers operated on its behalf, and will, if a customer demonstrates that the ISP falls short of the contracted rates (according to a somewhat complicated set of definition and measurement rules), convince the ISP to follow the law and either release the customer from the contract immediately or lower the price in proportion to the under-fulfillment. (Unfortunately all related official web pages are in German only.)
This put a stop to the practice of gaming speedtests, like DOCSIS ISPs measuring against an in-segment speedtest server, which conveniently hides whether a segment's uplink is congested... Not all ISPs gamed their speedtests, and even the in-segment speedtest can be justified for some measurements (e.g. when trying to figure out whether congestion is in-segment or out-of-segment), but the temptation not to set up the most objective speedtest must have been large. (We have a saying along the lines of "making the goat your gardener", which generally is considered a sub-optimal approach.)
Regards
Sebastian
>
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 11:44 AM, Bruce Perens <bruce@perens.com> wrote:
>>
>> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
>>
>> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>> Thank you for the dialog,.
>> This discussion with regards to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>>
>> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn't matter that the ISP is selling a fiber drop if parts of their network are under-provisioned. Two endpoints can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>>
>> Another observation. Running speedtest appears to go from the edge of the network by layer 2 to the speedtest host operated by the ISP. Yup, bypasses the (suspected overloaded) routers.
>>
>> Anyway, just observing.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Gene,
>>>
>>>
>>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>>
>>>> Comments inline below.
>>>>
>>>> Gene
>>>> ----------------------------------------------
>>>> Eugene Chang
>>>> IEEE Senior Life Member
>>>> eugene.chang@ieee.org
>>>> 781-799-0233 (in Honolulu)
>>>>
>>>>
>>>>
>>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>
>>>>> Hi Eugene,
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>
>>>>>> Ok, we are getting into the details. I agree.
>>>>>>
>>>>>> Every node in the path has to implement this to be effective.
>>>>>
>>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g., for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>>>
>>>>
>>>> This is not completely true.
>>>
>>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>>
>>>
>>>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>>
>>> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>>>
>>>
>>>>
>>>>
>>>>>
>>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>>
>>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>>
>>>> The hardest part is getting competing ISPs to implement and coordinate.
>>>
>>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>>
>>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>
>>> [SM] Well, we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
>>>
>>> Regards
>>> Sebastian
>>>
>>>
>>>>
>>>>
>>>>
>>>>>
>>>>> Regards
>>>>> Sebastian
>>>>>
>>>>>
>>>>>>
>>>>>> Gene
>>>>>> ----------------------------------------------
>>>>>> Eugene Chang
>>>>>> IEEE Senior Life Member
>>>>>> eugene.chang@ieee.org
>>>>>> 781-799-0233 (in Honolulu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>>
>>>>>>> software updates can do far more than just improve recovery.
>>>>>>>
>>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>>>>>
>>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>>
>>>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>>
>>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>>
>>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process); the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>>
>>>>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>>>>
>>>>>>> David Lang
>>>>>>>
>>>>>>>
>>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>>
>>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>>
>>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> ----------------------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> IEEE Senior Life Member
>>>>>>>> eugene.chang@ieee.org
>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>>
>>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>>
>>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>>
>>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>>>
>>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>>>
>>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>>
>>>>>>>>> Second, there needs to be a “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
>>>>>>>>>
>>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>>
>>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a person impact.
>>>>>>>>>
>>>>>>>>> Gene
>>>>>>>>> -----------------------------------
>>>>>>>>> Eugene Chang
>>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>
>>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>>
>>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>> Bruce
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>>> ad in the wapo or NYT riffing off of It's the latency, Stupid!",
>>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>>
>>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Starlink mailing list
>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Starlink mailing list
>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Bruce Perens K6BP
>>>>>>
>>>>>> _______________________________________________
>>>>>> Starlink mailing list
>>>>>> Starlink@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>> --
>> Bruce Perens K6BP
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 3:47 ` Eugene Y Chang
@ 2022-09-27 6:36 ` Sebastian Moeller
2022-09-27 13:55 ` Dave Taht
0 siblings, 1 reply; 56+ messages in thread
From: Sebastian Moeller @ 2022-09-27 6:36 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Eugene Chang, David Lang, Dave Taht via Starlink
Hi Gene,
On 27 September 2022 05:47:48 CEST, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below true bottleneck (thereby creating a new true but artificial bottleneck so the queue develops at a point where we can control it).
>> Also, if the difference between the "true structural" and the artificial bottleneck is small in comparison to the traffic inrush, we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved links that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
>
>Perhaps I am overthinking this. In general, I don’t think this really works.
[SM] Yes, this method's effectiveness depends on having control over the bottleneck, and that in turn requires seeing all traffic traversing the bottleneck link. It turns out that for a competent ISP the relevant bottleneck is often the actual internet access link itself, which is why sqm on a home router often works very well in practice, even though in theory it looks unlikely to do so.
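To make the "artificial bottleneck" idea concrete, here is a minimal sketch of the kind of setup sqm-scripts automates on an OpenWrt-style router. This is illustration only: the interface name (eth0), the IFB device name, and the rates are placeholder assumptions, and in practice one would use sqm-scripts rather than hand-typed tc.

```
# Egress: shape upload directly on the WAN interface to just below the
# contracted uplink rate, so the queue forms here under CAKE's control.
tc qdisc replace dev eth0 root cake bandwidth 35mbit

# Ingress: redirect incoming traffic through an IFB device and shape it
# to ~90% of the contracted downlink, creating the artificial bottleneck
# on our side of the access link.
ip link add name ifb0 type ifb
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root cake bandwidth 90mbit ingress
```

The 90% figure is the usual rule of thumb: sacrificing a little throughput moves the queue from the ISP's oversized buffer into one CAKE can manage.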
Consider a router with M ports from the network edge and N ports toward the network core. You are only trying to influence one of the M ports. Even if N=1, what actually happens with the buffering at N depends on all the traffic, not just the traffic you are shaping.
[SM] Not that uncommon a situation, thinking e.g. of a DSLAM, OLT or CMTS where the aggregate of access speeds exceeds the uplink capacity and the ISP misjudged the usage and under-provisioned the uplink*. The more remote the bottleneck is, and the fewer of the traversing flows we can control, the more problematic the debloating exercise gets...
*) I use this phrasing because these uplinks are effectively always smaller than the aggregate marketed rates on such a device, aka the ISP oversubscribes the unit, but that is a theoretical problem as long as the actual aggregated traffic does not exceed the unit's uplink capacity. Dimensioning such uplinks for the worst case would likely mean either really high prices or really low contracted rates. So oversubscription per se is fine with me, as long as the ISP manages to handle the real traffic most of the time, but I digress.
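The oversubscription arithmetic in that footnote can be sketched with a small worked example; the port count, sold rate, and uplink capacity below are made-up illustration, not any particular device:

```python
# Hypothetical access device: 48 subscriber ports sold at 1 Gbps each,
# fed by a single 10 Gbps uplink.
ports = 48
sold_rate_gbps = 1.0
uplink_gbps = 10.0

# Marketed aggregate vs. physical uplink: 4.8:1 oversubscription.
oversubscription = (ports * sold_rate_gbps) / uplink_gbps

def uplink_congested(active_users: int, avg_demand_gbps: float) -> bool:
    """Congestion only appears once *actual* concurrent demand exceeds
    the uplink, regardless of the marketed aggregate rate."""
    return active_users * avg_demand_gbps > uplink_gbps

print(oversubscription)           # 4.8
print(uplink_congested(20, 0.3))  # ~6 Gbps of real demand fits
print(uplink_congested(40, 0.3))  # ~12 Gbps exceeds the uplink
```

This is why oversubscription is unremarkable until real demand grows past the uplink, at which point the queue (and the bufferbloat) appears at that shared device.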
>
>> [SM] Well, sometimes such links are congested too for economic reasons…
>
>It is always economic. The network is always undersized because of some economic (or management) policy.
[SM] I was thinking about the specific case of a T1 ISP that notoriously lets its links to other T1s run hot during peak usage times, to incentivise content providers to buy direct access (technically they sell 'transit', but at an above-market price, so content providers are unlikely to use this link for traffic not intended for that ISP's AS or its single-homed customers).
>
>These days it is more and more true with the preference of taking fiber to the subscriber. It is physically possible to send more traffic to the network than the router can handle.
[SM] That depends on the router ;), but yes, with access speeds reaching close to 10 Gbps for some European ISPs, home networking gear tends to be out of its league, with hardware accelerators helping at least speed tests to limp along well enough.
My ISP happily markets 1Gbps service but the fine print of their contract says they don’t promise more than 700Mbps.
[SM] The question is what the consequence/remedy is if the ISP does not fulfill that promise. Over here, ISPs need to publish a set of rates in a standardized format, and in case of not delivering these rates customers can either cancel the contract or adjust the price they pay in proportion to the relative fulfillment of the contracted rates. The details, however, are still being worked out.
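A minimal sketch of that pay-in-proportion remedy; the proportional rule and the numbers are illustrative assumptions, since the actual details are still being settled by the regulator:

```python
def adjusted_price(monthly_price: float, contracted_mbps: float,
                   measured_mbps: float) -> float:
    """Scale the monthly price to the fraction of the contracted
    rate actually delivered (capped at 100% fulfillment)."""
    fulfillment = min(measured_mbps / contracted_mbps, 1.0)
    return round(monthly_price * fulfillment, 2)

# E.g. a plan marketed as "up to 1 Gbps" with 700 Mbps contractually
# promised, but only 300 Mbps measured:
print(adjusted_price(50.0, 700, 300))  # 21.43
```

Under such a rule, chronic under-delivery directly costs the ISP revenue instead of merely annoying the subscriber.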
Worse, I waited 9 months for them to resolve why I was only getting 300 Mbps on my 1 Gbps service (oops, sorry, my 700 Mbps service).
[SM] I guess mass market internet access is a low margin business, where expert debugging time is precious. I hope they at least gave you a rebate for that time.
Regards
Sebastian
>
>
>Gene
>----------------------------------------------
>Eugene Chang
>IEEE Senior Life Member
>eugene.chang@ieee.org
>781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 11:29 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi David,
>>
>>> On Sep 26, 2022, at 23:22, David Lang <david@lang.hm> wrote:
>>>
>>> On Mon, 26 Sep 2022, Eugene Y Chang wrote:
>>>
>>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>
>>>>> Hi Eugene,
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>
>>>>>> Ok, we are getting into the details. I agree.
>>>>>>
>>>>>> Every node in the path has to implement this to be effective.
>>>>>
>>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g., for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>>>
>>>>
>>>> This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>>
>>> only if node N and node N-1 handle the same traffic with the same link speeds. In practice this is almost never the case.
>>
>> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below true bottleneck (thereby creating a new true but artificial bottleneck so the queue develops at a point where we can control it).
>> Also, if the difference between the "true structural" and the artificial bottleneck is small in comparison to the traffic inrush, we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved links that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
>>
>>
>>> Until you get to gigabit last-mile links, the last mile is almost always the bottleneck from both sides, so implementing cake on the home router makes a huge improvement (and if you can get it on the last-mile ISP router, even better). Once you get into the Internet fabric, bottlenecks are fairly rare, they do happen, but ISPs carefully watch for those and add additional paths and/or increase bandwith to avoid them.
>>
>> [SM] Well, sometimes such links are congested too for economic reasons...
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>> David Lang
>>>
>>>>>
>>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>>
>>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>>
>>>> The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>>
>>>>
>>>>
>>>>>
>>>>> Regards
>>>>> Sebastian
>>>>>
>>>>>
>>>>>>
>>>>>> Gene
>>>>>> ----------------------------------------------
>>>>>> Eugene Chang
>>>>>> IEEE Senior Life Member
>>>>>> eugene.chang@ieee.org
>>>>>> 781-799-0233 (in Honolulu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>>
>>>>>>> software updates can do far more than just improve recovery.
>>>>>>>
>>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>>>>>
>>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>>
>>>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>>
>>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>>
>>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process); the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>>
>>>>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>>>>
>>>>>>> David Lang
>>>>>>>
>>>>>>>
>>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>>
>>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>>
>>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> ----------------------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> IEEE Senior Life Member
>>>>>>>> eugene.chang@ieee.org
>>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>>
>>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>>
>>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>>
>>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>>>
>>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>>>
>>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>>
>>>>>>>>> Second, there needs to be a “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
>>>>>>>>>
>>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>>
>>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a person impact.
>>>>>>>>>
>>>>>>>>> Gene
>>>>>>>>> -----------------------------------
>>>>>>>>> Eugene Chang
>>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>
>>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>>
>>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>> Bruce
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>>> ad in the wapo or NYT riffing off of It's the latency, Stupid!",
>>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>>
>>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Starlink mailing list
>>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Bruce Perens K6BP
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Bruce Perens K6BP
>>>>>>
>
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 0:35 ` Dave Taht
2022-09-27 0:55 ` Bruce Perens
@ 2022-09-27 4:06 ` Eugene Y Chang
1 sibling, 0 replies; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-27 4:06 UTC (permalink / raw)
To: Dave Taht; +Cc: Eugene Chang, Bruce Perens, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 20775 bytes --]
> Speedtest also does nothing to measure how well a given
> videoconference or voip session might go. There isn't a test (at least
> not when last I looked) in the FCC broadband measurements for just
> videoconferencing, and their latency under load test for many years
> now, is buried deep in the annual report.
> I have a new one - prototyped in some starlink tests so far, and
> elsewhere - called "SPOM" - steady packets over milliseconds, which,
> when run simultaneously with capacity seeking traffic, might be a
> better predictor of videoconferencing performance.
The challenge is that most people cannot observe when network behavior is causing problems. Even in a videoconference, it is usually one person speaking at a time, so timing problems are invisible. Since 2020, I have been using Jamulus, an app that lets 2-100+ users (typically 2-30) play music together in real time. The delay, and fluctuations in delay, are easily heard by the participants. Steady, reasonable latency (<60 ms) is manageable; highly fluctuating latency is disruptive. (And yes, all the traffic is time-sensitive UDP.)
SPOM is a good start. I would be interested in SPOM measurements taken while the video conference is stuttering or hanging. Note that often only one user in the conference is affected; if SPOM is not running on that user's link, you probably won't capture the interesting data.
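A SPOM-style analysis can be sketched in a few lines: send packets on a fixed tick, then report how much queueing delay each one saw. SPOM has no published spec, so the normalization to the minimum observed delay and the reported percentiles below are illustrative assumptions, not the actual design:

```python
import statistics

def spom_stats(send_times, recv_times):
    """Given matched send/receive timestamps (seconds) for an isochronous
    probe stream, return delay statistics in milliseconds.

    Clocks need not be synchronized: each delay is normalized to the
    minimum observed delay, so the numbers approximate *queueing* delay
    on top of the best-case path delay."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    base = min(delays)                       # best-case path delay
    queue_ms = sorted((d - base) * 1000 for d in delays)

    def pct(p):
        return queue_ms[min(len(queue_ms) - 1, int(p * len(queue_ms)))]

    return {
        "mean_ms": statistics.mean(queue_ms),
        "p50_ms": pct(0.50),
        "p99_ms": pct(0.99),
        "max_ms": queue_ms[-1],
    }

# Example: 1000 probes sent every 5 ms over a 20 ms path; a load burst
# adds 60 ms of standing queue for probes 400-599.
send = [i * 0.005 for i in range(1000)]
recv = [s + 0.020 + (0.060 if 400 <= i < 600 else 0.0)
        for i, s in enumerate(send)]
stats = spom_stats(send, recv)
```

The median looks fine here while the p99 exposes the 60 ms bloat episode, which is exactly the kind of number a single "speed" figure hides.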
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 2:35 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
> On Mon, Sep 26, 2022 at 2:45 PM Bruce Perens via Starlink
> <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>
>> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
>
> A network designed for speedtest.net is a network... designed for
> speedtest. Starlink seemingly was designed for speedtest - the 15
> second "cycle" to sense/change their bandwidth setting is just within
> the 20s cycle speedtest terminates at, and speedtest returns the last
> number for the bandwidth. It is a brutal test - using 8 or more flows
> - much harder on the network than your typical web page load, which,
> while it often uses 15 or so flows, rarely runs long enough to get out
> of slow start. At least part of qualifying for the RDOF money was
> achieving 100 Mbit/s down on "speedtest".
>
> A knowledgeable user concerned about web PLT should be looking at the
> first 3 s of a given test, and even then, once the bandwidth cracks
> 20 Mbit/s, it's of no help for most web traffic (we've been citing Mike
> Belshe's original work here a lot,
> and more recent measurements still show that).
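Belshe's result can be illustrated with a toy model: treat page load time as serialization time for the page bytes plus one RTT per required round trip (DNS, TLS, redirects, dependent requests). The page size and round-trip count below are made-up illustrative numbers, not Belshe's measurements, and the model ignores slow start, which only strengthens the point on real pages:

```python
def page_load_time(bandwidth_mbps, rtt_ms, page_bytes=1_600_000, round_trips=30):
    """Toy page-load model: serialization time + per-round-trip RTT cost.
    Returns seconds."""
    transfer_s = page_bytes * 8 / (bandwidth_mbps * 1e6)
    rtt_s = round_trips * rtt_ms / 1000
    return transfer_s + rtt_s

# Going from 20 to 100 Mbit/s at a fixed 50 ms RTT barely moves the needle...
t20 = page_load_time(20, rtt_ms=50)
t100 = page_load_time(100, rtt_ms=50)

# ...while halving the RTT at the same 20 Mbit/s helps more than 5x the bandwidth.
t20_fast = page_load_time(20, rtt_ms=25)
```

With these numbers, the RTT term dominates as soon as the link is fast enough to serialize the page in well under the accumulated round-trip cost.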
>
> Speedtest also does nothing to measure how well a given
> videoconference or voip session might go. There isn't a test (at least
> not when last I looked) in the FCC broadband measurements for just
> videoconferencing, and their latency under load test for many years
> now, is buried deep in the annual report.
>
> I hope that with both ookla and samknows more publicly recording and
> displaying latency under load (still, sigh, I think only displaying
> the last number and only sampling every 250ms) that we can shift the
> needle on this, but I started off this thread complaining nobody was
> picking up on those numbers... and neither service tests the worst
> case scenario of a simultaneous up/download, which was the principal
> scenario we explored with the flent "rrul" series of tests, which were
> originally designed to emulate and deeply understand what bittorrent
> was doing to networks, and our principal tool in designing new fq and
> aqm and transport CCs, along with the rtt_fair test for testing near
> and far destinations at the same time.
>
> My model has always been a family of four, one person uploading,
> another doing web, one doing videoconferencing,
> and another doing voip or gaming, and no test anyone has emulates
> that. With 16 wifi devices
> per household, the rrul scenario is actually not "worst case", but
> increasingly the state of things "normally".
>
> Another irony about speedtest is that users are inspired^Wtrained to
> use it when the "network feels slow", and self-initiate something that
> makes it worse, for both them and their portion of the network.
>
> Since the internet architecture board met last year, (
> https://www.iab.org/activities/workshops/network-quality/ <https://www.iab.org/activities/workshops/network-quality/> ) there
> seems to be an increasing amount of work on better metrics and tests
> for QoE, with stuff like apple's responsiveness test, etc.
>
> I have a new one - prototyped in some starlink tests so far, and
> elsewhere - called "SPOM" - steady packets over milliseconds, which,
> when run simultaneously with capacity seeking traffic, might be a
> better predictor of videoconferencing performance.
>
> There's also a really good "P99" conference coming up for those, that
> like me, are OCD about a few sigmas.
>
>>
>> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>> Thank you for the dialog.
>>> This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>>>
>>> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn’t matter that the ISP is selling a fiber drop if (parts of) their network is under-provisioned. Two end points can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>>>
>>> Another observation: running speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
>>>
>>> Anyway, just observing.
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Gene,
>>>
>>>
>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>
>>> Comments inline below.
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Eugene,
>>>
>>>
>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>> Ok, we are getting into the details. I agree.
>>>
>>> Every node in the path has to implement this to be effective.
>>>
>>>
>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g., for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>
>>>
>>> This is not completely true.
>>>
>>>
>>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>>
>>>
>>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>>
>>>
>>> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>>>
>>>
>>>
>>>
>>>
>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>
>>>
>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>
>>>
>>> The hardest part is getting competing ISPs to implement and coordinate.
>>>
>>>
>>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>>
>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>
>>>
>>> [SM] Well we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
>>>
>>> Regards
>>> Sebastian
>>>
>>>
>>>
>>>
>>>
>>>
>>> Regards
>>> Sebastian
>>>
>>>
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>
>>> software updates can do far more than just improve recovery.
>>>
>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>
>>> (the example below is not completely accurate, but I think it gets the point across)
>>>
>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>
>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>
>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
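The FIFO-vs-flow-queueing point above can be made concrete with a toy scheduler. The burst size, packet size, and link rate below are illustrative assumptions, and real fq_codel additionally does CoDel-style queue-delay control (dropping/marking) that this sketch deliberately omits:

```python
from collections import deque

PKT_SLOT_MS = 1.2  # ~1500 bytes per service slot at 10 Mbit/s (illustrative)

def service_delay(queues):
    """Serve one packet per non-empty queue per round-robin turn.
    Return, per packet, how many service slots elapsed before it was sent.
    With a single queue this degenerates to plain FIFO."""
    delay = {}
    slot = 0
    while any(q for q in queues.values()):
        for q in queues.values():
            if q:
                delay[q.popleft()] = slot
                slot += 1
    return delay

# A bulk transfer has already dumped a 100-packet burst into the buffer
# when a single video-call packet arrives.
bulk_pkts = [f"bulk{i}" for i in range(100)]

# FIFO: one shared queue, so the video packet sits behind the whole burst.
fifo = service_delay({"all": deque(bulk_pkts + ["video"])})

# Flow queueing: separate per-flow queues served round-robin.
fq = service_delay({"bulk": deque(bulk_pkts), "video": deque(["video"])})

fifo_ms = fifo["video"] * PKT_SLOT_MS   # waits for the entire burst
fq_ms = fq["video"] * PKT_SLOT_MS       # waits at most one bulk packet
```

The bulk flow finishes at essentially the same time either way; only the latency-sensitive packet's fate changes, which is why flow queueing is such a cheap win.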
>>>
>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>
>>> David Lang
>>>
>>>
>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>
>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>
>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>
>>> Please help to explain. Here's a draft to start with:
>>>
>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>
>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>
>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>
>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>
>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>> - gamers care but most people may think it is frivolous.
>>> - musicians care but that is mostly for a hobby.
>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>
>>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>
>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>
>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. The showing has to be with something that has a personal impact.
>>>
>>> Gene
>>> -----------------------------------
>>> Eugene Chang
>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>> +1-781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>
>>>
>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>
>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>
>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>
>>> Thanks
>>>
>>> Bruce
>>>
>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>> These days, if you want attention, you gotta buy it. A 50k half page
>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>> go a long way towards shifting the tide.
>>>
>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>
>>>
>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>
>>>
>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>
>>>
>>> That's a great idea. I have visions of crashing the washington
>>> correspondents dinner, but perhaps
>>> there is some set of gatherings journalists regularly attend?
>>>
>>>
>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>
>>> I still find it remarkable that reporters are still missing the
>>> meaning of the huge latencies for starlink, under load.
>>>
>>>
>>>
>>>
>>> --
>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>> Dave Täht CEO, TekLibre, LLC
>>>
>>>
>>>
>>>
>>> --
>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>> Dave Täht CEO, TekLibre, LLC
>>>
>>>
>>> --
>>> Bruce Perens K6BP
>>>
>>>
>>>
>>>
>>> --
>>> Bruce Perens K6BP
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Bruce Perens K6BP
>
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 72347 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:44 ` Bruce Perens
2022-09-27 0:35 ` Dave Taht
@ 2022-09-27 3:50 ` Eugene Y Chang
2022-09-27 7:09 ` Sebastian Moeller
1 sibling, 1 reply; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-27 3:50 UTC (permalink / raw)
To: Bruce Perens; +Cc: Eugene Chang, Sebastian Moeller, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 17895 bytes --]
Of course… but the ISP’s maxim is "don’t believe the customer’s speedtest numbers if they were not run against our host."
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 11:44 AM, Bruce Perens <bruce@perens.com> wrote:
>
> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
>
> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
> Thank you for the dialog.
> This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>
> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn’t matter that the ISP is selling a fiber drop if (parts of) their network is under-provisioned. Two end points can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>
> Another observation: running speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
>
> Anyway, just observing.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org <mailto:eugene.chang@ieee.org>
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de <mailto:moeller0@gmx.de>> wrote:
>>
>> Hi Gene,
>>
>>
>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org <mailto:eugene.chang@ieee.org>> wrote:
>>>
>>> Comments inline below.
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org <mailto:eugene.chang@ieee.org>
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de <mailto:moeller0@gmx.de>> wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>>
>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>
>>>>> Ok, we are getting into the details. I agree.
>>>>>
>>>>> Every node in the path has to implement this to be effective.
>>>>
>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g., for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>>
>>>
>>> This is not completely true.
>>
>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>
>>
>>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>
>> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>>
>>
>>>
>>>
>>>>
>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>
>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>
>>> The hardest part is getting competing ISPs to implement and coordinate.
>>
>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>
>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>
>> [SM] Well we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>>
>>>
>>>>
>>>> Regards
>>>> Sebastian
>>>>
>>>>
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org <mailto:eugene.chang@ieee.org>
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm <mailto:david@lang.hm>> wrote:
>>>>>>
>>>>>> software updates can do far more than just improve recovery.
>>>>>>
>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>>>>
>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>
>>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>
>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>
>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>
>>>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>>
>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>
>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>
>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>>>>>>>
>>>>>>> Gene
>>>>>>> ----------------------------------------------
>>>>>>> Eugene Chang
>>>>>>> IEEE Senior Life Member
>>>>>>> eugene.chang@ieee.org <mailto:eugene.chang@ieee.org>
>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com <mailto:bruce@perens.com>> wrote:
>>>>>>>>
>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>
>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>
>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>>
>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu> wrote:
>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel it’s impact.
>>>>>>>>
>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>
>>>>>>>> Second, there needs to be a “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
>>>>>>>>
>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>
>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a person impact.
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> -----------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> eugene.chang@alum.mit.edu
>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>
>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>
>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Bruce
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>> ad in the wapo or NYT riffing off of It's the latency, Stupid!",
>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>> <Jason_Livingood@comcast.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>
>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Bruce Perens K6BP
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Bruce Perens K6BP
>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>
>
> --
> Bruce Perens K6BP
[-- Attachment #1.2: Type: text/html, Size: 34046 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:29 ` Sebastian Moeller
@ 2022-09-27 3:47 ` Eugene Y Chang
2022-09-27 6:36 ` Sebastian Moeller
0 siblings, 1 reply; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-27 3:47 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Eugene Chang, David Lang, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 16413 bytes --]
> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below true bottleneck (thereby creating a new true but artificial bottleneck so the queue develops at a point where we can control it).
> Also if the difference between "true structural" and artificial bottleneck is small in comparison to the traffic inrush we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved links that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
Perhaps I am overthinking this. In general, I don’t think this really works. Consider a router with M ports from the network edge and N ports toward the network core. You are only trying to influence one of the M ports. Even if N=1, what actually happens with the buffering at N depends on all the traffic, not just the traffic you are shaping.
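The artificial-bottleneck idea being debated here can be made concrete with a toy fluid model (all rates, step sizes, and names below are invented for illustration, not drawn from any real deployment): shaping ingress to just below the true bottleneck rate moves the standing queue to the shaper, where smart queue management could act on it, instead of leaving it at an unmanaged upstream buffer.

```python
# Toy fluid model of ingress shaping (illustrative only). Traffic arrives
# at 120 Mbit/s against a 100 Mbit/s bottleneck. Without a shaper, the
# backlog builds at the bottleneck, which we cannot manage. With a shaper
# set to 95 Mbit/s in front of it, the bottleneck queue stays at zero and
# the backlog collects at the shaper instead.

def simulate(arrival_mbps, bottleneck_mbps, shaper_mbps=None, steps=1000):
    dt = 1e-3                  # 1 ms time step
    q_bottleneck = 0.0         # queued bits at the bottleneck
    q_shaper = 0.0             # queued bits at the shaper (if any)
    for _ in range(steps):
        rate_in = arrival_mbps * 1e6 * dt
        if shaper_mbps is not None:
            # the shaper admits at most shaper_mbps toward the bottleneck
            q_shaper += rate_in
            admitted = min(q_shaper, shaper_mbps * 1e6 * dt)
            q_shaper -= admitted
        else:
            admitted = rate_in
        q_bottleneck += admitted
        q_bottleneck = max(0.0, q_bottleneck - bottleneck_mbps * 1e6 * dt)
    return q_bottleneck, q_shaper

unmanaged, _ = simulate(120, 100)                       # no shaper
managed, at_shaper = simulate(120, 100, shaper_mbps=95) # shaped below bottleneck
print(f"no shaper:   {unmanaged / 1e6:.1f} Mbit queued at the bottleneck")
print(f"with shaper: {managed / 1e6:.1f} Mbit at bottleneck, {at_shaper / 1e6:.1f} Mbit at shaper")
```

The model deliberately ignores Gene's multi-port objection: with cross-traffic from other ports, the bottleneck queue is no longer fully determined by the one flow being shaped, which is exactly the "back-spill" caveat above.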
> [SM] Well, sometimes such links are congested too for economic reasons…
It is always economic. The network is always undersized because of some economic (or management) policy.
These days this is increasingly true, given the preference for taking fiber to the subscriber. It is physically possible to send more traffic to the network than the router can handle. My ISP happily markets 1Gbps service, but the fine print of their contract says they don't promise more than 700Mbps. Worse, I waited 9 months for them to resolve why I was only getting 300Mbps on my 1Gbps service (oops, sorry, my 700Mbps service).
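The under-provisioning complaint above is just oversubscription arithmetic; a back-of-envelope sketch (subscriber counts and link rates are invented, not any particular ISP's numbers):

```python
# Oversubscription arithmetic (illustrative numbers only): a router
# aggregating many subscriber ports onto a smaller uplink can only
# deliver the marketed rate while enough subscribers are idle.

def worst_case_per_sub_mbps(subscribers, sold_mbps, uplink_mbps):
    # oversubscription ratio: total sold capacity vs. actual uplink
    ratio = subscribers * sold_mbps / uplink_mbps
    # fair share per subscriber if everyone transmits at once
    return sold_mbps / ratio, ratio

rate, ratio = worst_case_per_sub_mbps(subscribers=48, sold_mbps=1000,
                                      uplink_mbps=10000)
print(f"oversubscription {ratio:.1f}:1 -> {rate:.0f} Mbit/s each if all are active")
```

Some oversubscription is normal engineering practice; the problem Gene describes is when the ratio is set so aggressively that congestion (and hence bufferbloat) becomes the steady state rather than the exception.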
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 11:29 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi David,
>
>> On Sep 26, 2022, at 23:22, David Lang <david@lang.hm> wrote:
>>
>> On Mon, 26 Sep 2022, Eugene Y Chang wrote:
>>
>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>>
>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>
>>>>> Ok, we are getting into the details. I agree.
>>>>>
>>>>> Every node in the path has to implement this to be effective.
>>>>
>>>> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short the problem are over-sized and under-managed buffers, and one of the best solution is better/smarter buffer management).
>>>>
>>>
>>> This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>
>> only if node N and node N-1 handle the same traffic with the same link speeds. In practice this is almost never the case.
>
> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below true bottleneck (thereby creating a new true but artificial bottleneck so the queue develops at a point where we can control it).
> Also if the difference between "true structural" and artificial bottleneck is small in comparison to the traffic inrush we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved links that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
>
>
>> Until you get to gigabit last-mile links, the last mile is almost always the bottleneck from both sides, so implementing cake on the home router makes a huge improvement (and if you can get it on the last-mile ISP router, even better). Once you get into the Internet fabric, bottlenecks are fairly rare, they do happen, but ISPs carefully watch for those and add additional paths and/or increase bandwith to avoid them.
>
> [SM] Well, sometimes such links are congested too for economic reasons...
>
> Regards
> Sebastian
>
>
>>
>> David Lang
>>
>>>>
>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>
>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>
>>> The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>>
>>>
>>>
>>>>
>>>> Regards
>>>> Sebastian
>>>>
>>>>
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>
>>>>>> software updates can do far more than just improve recovery.
>>>>>>
>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>>>>>
>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>
>>>>>> When buffers become excessivly large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>
>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>
>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real time traffic. Historically this has required the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process), the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>
>>>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>>
>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>
>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>
>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>>
>>>>>>> Gene
>>>>>>> ----------------------------------------------
>>>>>>> Eugene Chang
>>>>>>> IEEE Senior Life Member
>>>>>>> eugene.chang@ieee.org
>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>
>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>
>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>
>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientist's Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>>
>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel it’s impact.
>>>>>>>>
>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>
>>>>>>>> Second, there needs to be a “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
>>>>>>>>
>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>
>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a person impact.
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> -----------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>
>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>
>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Bruce
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>> ad in the wapo or NYT riffing off of It's the latency, Stupid!",
>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>
>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Bruce Perens K6BP
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Bruce Perens K6BP
>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
[-- Attachment #1.2: Type: text/html, Size: 30154 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 0:55 ` Bruce Perens
@ 2022-09-27 1:12 ` Dave Taht
0 siblings, 0 replies; 56+ messages in thread
From: Dave Taht @ 2022-09-27 1:12 UTC (permalink / raw)
To: Bruce Perens; +Cc: Eugene Y Chang, Dave Taht via Starlink
On Mon, Sep 26, 2022 at 5:55 PM Bruce Perens <bruce@perens.com> wrote:
>
> Why not write an RFC on internet metrics? Then evangelize customers to rely on metrics compliant with the RFC.
Glad you asked. :) Apple's current IETF IPPM draft is
here: https://datatracker.ietf.org/doc/draft-ietf-ippm-responsiveness/
github here: https://github.com/network-quality/draft-ietf-ippm-responsiveness
and distributed now as networkQuality on macOS and available under
developer settings for iOS. Given that the methods and metrics are not
final, it is not being pushed all that much except into the developer
userbase as yet.
https://github.com/network-quality has a Go version of the client,
which runs on everything, and instructions for setting up your own nginx server.
A nice thing about it is that it is designed to stress our more
common protocols (https, quic) and servers, unlike my flent tests,
which test tcp only. I think the present Go implementation actually
stresses a low-speed network too hard, and needs to sample the
TCP_INFO statistics more deeply and more often. It also really needs a
section for my most desired metric, a simultaneous up- and download,
to be at least available, if optional, but I haven't written that up
yet. A noted problem with a simultaneous up/download is that you are
also stressing the device drivers of the local system, which is
important to fix as well.
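The headline number of that responsiveness draft is round-trips per minute (RPM) under working conditions; the arithmetic reduces to 60,000 divided by the loaded latency in milliseconds. A toy computation follows, with invented latency samples and a trimmed mean standing in for the draft's own (percentile-based) aggregation:

```python
# Round-trips per minute (RPM): how many request/response exchanges fit
# into a minute at the latency observed while the link is under load.
# Latency samples below are invented for illustration; the IPPM draft
# specifies its own aggregation, so the trimmed mean is a simplification.

def rpm(latency_ms_samples):
    s = sorted(latency_ms_samples)
    k = len(s) // 10                      # trim the top/bottom 10%
    trimmed = s[k:len(s) - k] or s
    avg_ms = sum(trimmed) / len(trimmed)
    return 60_000 / avg_ms

idle = [20, 22, 21, 19, 23]               # ms, unloaded
loaded = [180, 220, 260, 240, 200]        # ms, under load (bufferbloat)
print(f"idle RPM:   {rpm(idle):.0f}")
print(f"loaded RPM: {rpm(loaded):.0f}")
```

The collapse from thousands of RPM idle to a few hundred under load is the bufferbloat signature the tools in this thread are trying to surface.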
There has been a LOT of other work on better metric-producing tools in
the past year - iperf2 gained a few new tests like "bounce-back", and
crusader - written in Rust and notable for its speed and portability -
appeared here: https://github.com/Zoxc/crusader
I am a lot behind on tracking all this stuff and putting it in my
cloud. Although I was deeply involved in the beginning, I spent most of
the last 9 months heads down fixing regression after regression in
openwrt's wifi code. I wanted to have a sterling example of how to do
wifi buffering more right, and only in late August got there.
> On Mon, Sep 26, 2022 at 5:36 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> On Mon, Sep 26, 2022 at 2:45 PM Bruce Perens via Starlink
>> <starlink@lists.bufferbloat.net> wrote:
>> >
>> > That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
>>
>> A network designed for speedtest.net, is a network... designed for
>> speedtest. Starlink seemingly was designed for speedtest - the 15
>> second "cycle" to sense/change their bandwidth setting is just within
>> the 20s cycle speedtest terminates at, and speedtest returns the last
>> number for the bandwidth. It is a brutal test - using 8 or more flows
>> - much harder on the network than your typical web page load, which
>> often opens 15 or so flows, most of which never run long enough to get
>> out of slow start. At least some of qualifying for the RDOF money was
>> achieving 100mbits down on "speedtest".
>>
>> A knowledgeable user concerned about web PLT should be looking at the
>> first 3 s of a given test, and even then once the bandwidth cracks
>> 20Mbit, it's of no help for most web traffic ( we've been citing mike
>> belshe's original work here a lot,
>> and more recent measurements still show that )
>>
>> Speedtest also does nothing to measure how well a given
>> videoconference or voip session might go. There isn't a test (at least
>> not when last I looked) in the FCC broadband measurements for just
>> videoconferencing, and their latency under load test for many years
>> now, is buried deep in the annual report.
>>
>> I hope that with both ookla and samknows more publicly recording and
>> displaying latency under load (still, sigh, I think only displaying
>> the last number and only sampling every 250ms) that we can shift the
>> needle on this, but I started off this thread complaining nobody was
>> picking up on those numbers... and neither service tests the worst
>> case scenario of a simultaneous up/download, which was the principal
>> scenario we explored with the flent "rrul" series of tests, which were
>> originally designed to emulate and deeply understand what bittorrent
>> was doing to networks, and our principal tool in designing new fq and
>> aqm and transport CCs, along with the rtt_fair test for testing near
>> and far destinations at the same time.
>>
>> My model has always been a family of four, one person uploading,
>> another doing web, one doing videoconferencing,
>> and another doing voip or gaming, and no test anyone has emulates
>> that. With 16 wifi devices
>> per household, the rrul scenario is actually not "worst case", but
>> increasingly the state of things "normally".
>>
>> Another irony about speedtest is that users are inspired^Wtrained to
>> use it when the "network feels slow", and self-initiate something that
>> makes it worse, for both them and their portion of the network.
>>
>> Since the internet architecture board met last year, (
>> https://www.iab.org/activities/workshops/network-quality/ ) there
>> seems to be an increasing amount of work on better metrics and tests
>> for QoE, with stuff like apple's responsiveness test, etc.
>>
>> I have a new one - prototyped in some starlink tests so far, and
>> elsewhere - called "SPOM" - steady packets over milliseconds, which,
>> when run simultaneously with capacity seeking traffic, might be a
>> better predictor of videoconferencing performance.
>>
>> There's also a really good "P99" conference coming up for those, that
>> like me, are OCD about a few sigmas.
>>
>> >
>> > On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >>
>> >> Thank you for the dialog,.
>> >> This discussion with regards to Starlink is interesting as it confirms my guesses about the gap between Starlinks overly simplified, over optimistic marketing and the reality as they acquire subscribers.
>> >>
>> >> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence from significant under provisioning. It doesn’t matter that the ISP is selling a fiber drop, if (parts) of their network is under provisioned. Two end points can be less than 5 mile apart and realize 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggest digital redlining. The older communities appear to have much more severe under provisioning.
>> >>
>> >> Another observation. Running speedtest appears to go from the edge of the network by layer 2 to the speedtest host operated by the ISP. Yup, bypasses the (suspected overloaded) routers.
>> >>
>> >> Anyway, just observing.
>> >>
>> >> Gene
>> >> ----------------------------------------------
>> >> Eugene Chang
>> >> IEEE Senior Life Member
>> >> eugene.chang@ieee.org
>> >> 781-799-0233 (in Honolulu)
>> >>
>> >>
>> >>
>> >> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>> >>
>> >> Hi Gene,
>> >>
>> >>
>> >> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>> >>
>> >> Comments inline below.
>> >>
>> >> Gene
>> >> ----------------------------------------------
>> >> Eugene Chang
>> >> IEEE Senior Life Member
>> >> eugene.chang@ieee.org
>> >> 781-799-0233 (in Honolulu)
>> >>
>> >>
>> >>
>> >> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>> >>
>> >> Hi Eugene,
>> >>
>> >>
>> >> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >>
>> >> Ok, we are getting into the details. I agree.
>> >>
>> >> Every node in the path has to implement this to be effective.
>> >>
>> >>
>> >> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>> >>
>> >>
>> >> This is not completely true.
>> >>
>> >>
>> >> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>> >>
>> >>
>> >> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>> >>
>> >>
>> >> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>> >>
>> >>
>> >> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity-sharing scheme, but because it is the least pessimal scheme, allowing all flows (or none) forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
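The FIFO-vs-flow-queueing contrast can be illustrated with a toy single-bottleneck simulation (all numbers here are made up for illustration, not measurements): a bulk flow dumps 100 packets into the buffer at t=0, one video-call packet arrives at t=20 ms, and the link serves one packet per millisecond. Under FIFO the video packet waits behind the whole backlog; under per-flow round robin it waits for at most one bulk packet.

```python
from collections import deque

SERVICE_MS = 1.0      # bottleneck link transmits one packet per millisecond
BULK_BACKLOG = 100    # bulk flow dumps 100 packets into the buffer at t=0
VIDEO_ARRIVAL = 20.0  # one video-call packet arrives at t=20 ms

def queueing_delay(fair: bool) -> float:
    """Delay (ms) seen by the video packet under FIFO or per-flow round robin."""
    queue = deque(("bulk", 0.0) for _ in range(BULK_BACKLOG))
    t, arrived, last = 0.0, False, "bulk"
    while True:
        if not arrived and t >= VIDEO_ARRIVAL:
            queue.append(("video", VIDEO_ARRIVAL))
            arrived = True
        if fair:
            # round robin between flows: a waiting video packet is served
            # after at most one bulk packet, not the whole backlog
            idx = next((i for i, (f, _) in enumerate(queue) if f == "video"), None)
            if idx is not None and last == "bulk":
                flow, arr = queue[idx]
                del queue[idx]
            else:
                flow, arr = queue.popleft()
        else:
            flow, arr = queue.popleft()  # strict FIFO: arrival order wins
        last = flow
        t += SERVICE_MS
        if flow == "video":
            return t - arr

print(queueing_delay(fair=False))  # → 81.0 ms: video waits behind the backlog
print(queueing_delay(fair=True))   # → 1.0 ms: near-immediate service
```

The point of the "least pessimal" framing shows up directly: fairness costs the bulk flow almost nothing (one packet slot) while changing the sparse flow's delay by nearly two orders of magnitude.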
>> >>
>> >>
>> >> The hardest part is getting competing ISPs to implement and coordinate.
>> >>
>> >>
>> >> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>> >>
>> >> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>> >>
>> >>
>> >> [SM] Well we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for say a week and let us know if/how your experience changed?
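For anyone wanting to try the sqm-scripts suggestion, the OpenWrt configuration amounts to roughly this fragment of /etc/config/sqm (the interface name and rates are assumptions; set the shaper rates a bit below your measured link speeds):

```
config queue 'wan_sqm'
        option enabled '1'
        option interface 'wan'            # your WAN device
        option download '90000'           # ingress shaping, kbit/s (~90% of link)
        option upload '18000'             # egress shaping, kbit/s (~90% of link)
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
```

Shaping slightly below the true link rate is what moves the bottleneck queue into the home gateway, where cake can manage it.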
>> >>
>> >> Regards
>> >> Sebastian
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> Regards
>> >> Sebastian
>> >>
>> >>
>> >>
>> >> Gene
>> >> ----------------------------------------------
>> >> Eugene Chang
>> >> IEEE Senior Life Member
>> >> eugene.chang@ieee.org
>> >> 781-799-0233 (in Honolulu)
>> >>
>> >>
>> >>
>> >> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>> >>
>> >> software updates can do far more than just improve recovery.
>> >>
>> >> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>> >>
>> >> (the example below is not completely accurate, but I think it gets the point across)
>> >>
>> >> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>> >>
>> >> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>> >>
>> >> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required that the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>> >>
>> >> The one thing that cake needs to work really well is to know what data rate is available. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
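The external workaround alluded to here is what cake-autorate does (the real tool is a shell script; this is only a sketch of the control logic, with made-up thresholds and step sizes): watch latency, shrink the cake shaper quickly when latency inflates, and probe back up slowly while the link is quiet.

```python
FLOOR_KBIT = 5_000    # never shape below this, or the link stalls
CEIL_KBIT = 100_000   # expected maximum rate of the link

def next_rate(rate_kbit: int, rtt_ms: float, base_rtt_ms: float) -> int:
    """Pick the next shaper rate from the latest latency measurement."""
    bloat_ms = rtt_ms - base_rtt_ms
    if bloat_ms > 15:                     # queue is building: back off fast
        rate_kbit = int(rate_kbit * 0.9)
    elif bloat_ms < 5:                    # link is clean: probe upward slowly
        rate_kbit = int(rate_kbit * 1.02)
    return max(FLOOR_KBIT, min(CEIL_KBIT, rate_kbit))

# Each new value would be applied with something like (hypothetical device):
#   tc qdisc change root dev wan cake bandwidth <rate>kbit
print(next_rate(50_000, rtt_ms=60.0, base_rtt_ms=40.0))  # → 45000 (backing off)
print(next_rate(50_000, rtt_ms=42.0, base_rtt_ms=40.0))  # → 51000 (probing up)
```

The asymmetry (fast down, slow up) is deliberate: latency spikes hurt immediately, while lost throughput from shaping a little low is barely noticeable.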
>> >>
>> >> David Lang
>> >>
>> >>
>> >> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>> >>
>> >> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>> >>
>> >> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>> >>
>> >> Gene
>> >> ----------------------------------------------
>> >> Eugene Chang
>> >> IEEE Senior Life Member
>> >> eugene.chang@ieee.org
>> >> 781-799-0233 (in Honolulu)
>> >>
>> >>
>> >>
>> >> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>> >>
>> >> Please help to explain. Here's a draft to start with:
>> >>
>> >> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>> >>
>> >> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is variation in the time it takes a packet to get through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
>> >>
>> >> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>> >> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>> >>
>> >> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>> >> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>> >>
>> >> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>> >> - gamers care but most people may think it is frivolous.
>> >> - musicians care but that is mostly for a hobby.
>> >> - business should care because of productivity but they don’t know how to “see” the impact.
>> >>
>> >> Second, there needs to be an “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” moment. Once you have this awakening, you can get all the press you want for free.
>> >>
>> >> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>> >>
>> >> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a personal impact.
>> >>
>> >> Gene
>> >> -----------------------------------
>> >> Eugene Chang
>> >> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>> >> +1-781-799-0233 (in Honolulu)
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>> >>
>> >> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>> >>
>> >> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>> >>
>> >> Thanks
>> >>
>> >> Bruce
>> >>
>> >> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>> >> These days, if you want attention, you gotta buy it. A 50k half page
>> >> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>> >> signed by the kinds of luminaries we got for the fcc wifi fight, would
>> >> go a long way towards shifting the tide.
>> >>
>> >> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>> >>
>> >>
>> >> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> >> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>> >>
>> >>
>> >> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>> >>
>> >>
>> >> That's a great idea. I have visions of crashing the washington
>> >> correspondents dinner, but perhaps
>> >> there is some set of gatherings journalists regularly attend?
>> >>
>> >>
>> >> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>> >>
>> >> I still find it remarkable that reporters are still missing the
>> >> meaning of the huge latencies for starlink, under load.
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>> >> Dave Täht CEO, TekLibre, LLC
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>> >> Dave Täht CEO, TekLibre, LLC
>> >> _______________________________________________
>> >> Starlink mailing list
>> >> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> >> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>> >>
>> >>
>> >> --
>> >> Bruce Perens K6BP
>> >> _______________________________________________
>> >> Starlink mailing list
>> >> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> >> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Bruce Perens K6BP
>> >>
>> >>
>> >> _______________________________________________
>> >> Starlink mailing list
>> >> Starlink@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/starlink
>> >>
>> >>
>> >> _______________________________________________
>> >> Starlink mailing list
>> >> Starlink@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/starlink
>> >
>> >
>> >
>> > --
>> > Bruce Perens K6BP
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>
>
>
> --
> Bruce Perens K6BP
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-27 0:35 ` Dave Taht
@ 2022-09-27 0:55 ` Bruce Perens
2022-09-27 1:12 ` Dave Taht
2022-09-27 4:06 ` Eugene Y Chang
1 sibling, 1 reply; 56+ messages in thread
From: Bruce Perens @ 2022-09-27 0:55 UTC (permalink / raw)
To: Dave Taht; +Cc: Eugene Y Chang, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 19667 bytes --]
Why not write an RFC on internet metrics? Then evangelize customers to rely
on metrics compliant with the RFC.
On Mon, Sep 26, 2022 at 5:36 PM Dave Taht <dave.taht@gmail.com> wrote:
> On Mon, Sep 26, 2022 at 2:45 PM Bruce Perens via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> >
> > That's a good maxim: Don't believe a speed test that is hosted by your
> own ISP.
>
> A network designed for speedtest.net is a network... designed for
> speedtest. Starlink seemingly was designed for speedtest - the 15
> second "cycle" to sense/change their bandwidth setting fits just within
> the 20 s window after which speedtest terminates, and speedtest reports
> the last number for the bandwidth. It is a brutal test - using 8 or more
> flows - much harder on the network than your typical web page load,
> which, while it often uses 15 or so flows, mostly never runs long enough
> to get out of slow start. At least part of qualifying for the RDOF money
> was achieving 100 Mbit/s down on "speedtest".
>
> A knowledgeable user concerned about web PLT should be looking at the
> first 3 s of a given test, and even then, once the bandwidth cracks
> 20 Mbit/s, it's of no help for most web traffic (we've been citing Mike
> Belshe's original work here a lot,
> and more recent measurements still show that).
>
> Speedtest also does nothing to measure how well a given
> videoconference or voip session might go. There isn't a test (at least
> not when last I looked) in the FCC broadband measurements for just
> videoconferencing, and their latency under load test for many years
> now, is buried deep in the annual report.
>
> I hope that with both ookla and samknows more publicly recording and
> displaying latency under load (still, sigh, I think only displaying
> the last number and only sampling every 250ms) that we can shift the
> needle on this, but I started off this thread complaining nobody was
> picking up on those numbers... and neither service tests the worst
> case scenario of a simultaneous up/download, which was the principal
> scenario we explored with the flent "rrul" series of tests, which were
> originally designed to emulate and deeply understand what bittorrent
> was doing to networks, and our principal tool in designing new fq and
> aqm and transport CCs, along with the rtt_fair test for testing near
> and far destinations at the same time.
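For reference, a typical invocation of that worst-case test looks like the following (the server hostname is a placeholder; point it at a netperf/flent server you are allowed to test against):

```shell
# 60-second RRUL run: 4 TCP uploads + 4 TCP downloads + latency probes,
# all simultaneously, plotted to a PNG.
flent rrul --length 60 --host netperf.example.net \
      --title-extra "starlink-baseline" --output rrul.png
```

It is the simultaneous up/download plus continuous latency sampling that distinguishes rrul from a speedtest-style run.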
>
> My model has always been a family of four, one person uploading,
> another doing web, one doing videoconferencing,
> and another doing voip or gaming, and no test anyone has emulates
> that. With 16 wifi devices
> per household, the rrul scenario is actually not "worst case", but
> increasingly the state of things "normally".
>
> Another irony about speedtest is that users are inspired^Wtrained to
> use it when the "network feels slow", and self-initiate something that
> makes it worse, for both them and their portion of the network.
>
> Since the internet architecture board met last year, (
> https://www.iab.org/activities/workshops/network-quality/ ) there
> seems to be an increasing amount of work on better metrics and tests
> for QoE, with stuff like apple's responsiveness test, etc.
>
> I have a new one - prototyped in some starlink tests so far, and
> elsewhere - called "SPOM" - steady packets over milliseconds, which,
> when run simultaneously with capacity seeking traffic, might be a
> better predictor of videoconferencing performance.
>
> There's also a really good "P99" conference coming up for those, that
> like me, are OCD about a few sigmas.
>
> >
> > On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <
> starlink@lists.bufferbloat.net> wrote:
> >>
> >> Thank you for the dialog.
> >> This discussion with regards to Starlink is interesting as it confirms
> my guesses about the gap between Starlink's overly simplified,
> over-optimistic marketing and the reality as they acquire subscribers.
> >>
> >> I am actually interested in a more perverse issue. I am seeing latency
> and bufferbloat as a consequence of significant under-provisioning. It
> doesn’t matter that the ISP is selling a fiber drop if parts of their
> network are under-provisioned. Two end points can be less than 5 miles
> apart and realize 120+ ms latency. Two Labor Days ago (a holiday) the max
> latency was 230+ ms. The pattern I see suggests digital redlining. The
> older communities appear to have much more severe under-provisioning.
> >>
> >> Another observation. Running speedtest appears to go from the edge of
> the network by layer 2 to the speedtest host operated by the ISP. Yup,
> bypasses the (suspected overloaded) routers.
> >>
> >> Anyway, just observing.
> >>
> >> Gene
> >> ----------------------------------------------
> >> Eugene Chang
> >> IEEE Senior Life Member
> >> eugene.chang@ieee.org
> >> 781-799-0233 (in Honolulu)
> >>
> >>
> >>
> >> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de>
> wrote:
> >>
> >> Hi Gene,
> >>
> >>
> >> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org>
> wrote:
> >>
> >> Comments inline below.
> >>
> >> Gene
> >> ----------------------------------------------
> >> Eugene Chang
> >> IEEE Senior Life Member
> >> eugene.chang@ieee.org
> >> 781-799-0233 (in Honolulu)
> >>
> >>
> >>
> >> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de>
> wrote:
> >>
> >> Hi Eugene,
> >>
> >>
> >> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <
> starlink@lists.bufferbloat.net> wrote:
> >>
> >> Ok, we are getting into the details. I agree.
> >>
> >> Every node in the path has to implement this to be effective.
> >>
> >>
> >> Amazingly the biggest bang for the buck is gotten by fixing those nodes
> that actually contain a network path's bottleneck. Often these are pretty
> stable. So yes, for fully guaranteed service quality all nodes would need
> to participate, but for improving things noticeably it is sufficient to
> improve the usual bottlenecks, e.g. for many internet access links the home
> gateway is a decent point to implement better buffer management. (In short,
> the problem is over-sized and under-managed buffers, and one of the best
> solutions is better/smarter buffer management.)
> >>
> >>
> >> This is not completely true.
> >>
> >>
> >> [SM] You are likely right, trying to summarize things leads to
> partially incorrect generalizations.
> >>
> >>
> >> Say the bottleneck is at node N. During the period of congestion, the
> upstream node N-1 will have to buffer. When node N recovers, the
> bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc.
> etc. Making node N better will reduce the extent of the backup at N-1, but
> N-1 should implement the better code.
> >>
> >>
> >> [SM] It is the node that builds up the queue that profits most from
> better queue management.... (again I generalize, the node with the queue
> itself probably does not care all that much, but the endpoints will profit
> if the queue experiencing node deals with that queue more gracefully).
> >>
> >>
> >>
> >>
> >>
> >> In fact, every node in the path has to have the same prioritization or
> the scheme becomes ineffective.
> >>
> >>
> >> Yes and no, one of the clearest winners has been flow queueing, IMHO
> not because it is the most optimal capacity-sharing scheme, but because it
> is the least pessimal scheme, allowing all flows (or none) forward
> progress. You can interpret that as a scheme in which flows below their
> capacity share are prioritized, but I am not sure that is the best way to
> look at these things.
> >>
> >>
> >> The hardest part is getting competing ISPs to implement and coordinate.
> >>
> >>
> >> [SM] Yes, but it turned out even with non-cooperating ISPs there is a
> lot end-users can do unilaterally on their side to improve both ingress and
> egress congestion. Admittedly especially ingress congestion would be even
> better handled with cooperation of the ISP.
> >>
> >> Bufferbloat and handoff between ISPs will be hard. The only way to fix
> this is to get the unwashed public to care. Then they can say “we don’t
> care about the technical issues, just fix it.” Until then …..
> >>
> >>
> >> [SM] Well we do this one home network at a time (not because that is
> efficient or ideal, but simply because it is possible). Maybe, if you have
> not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate
> in addition) on your home internet access link for say a week and let us
> know if/how your experience changed?
> >>
> >> Regards
> >> Sebastian
> >>
> >>
> >>
> >>
> >>
> >>
> >> Regards
> >> Sebastian
> >>
> >>
> >>
> >> Gene
> >> ----------------------------------------------
> >> Eugene Chang
> >> IEEE Senior Life Member
> >> eugene.chang@ieee.org
> >> 781-799-0233 (in Honolulu)
> >>
> >>
> >>
> >> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
> >>
> >> software updates can do far more than just improve recovery.
> >>
> >> In practice, large data transfers are less sensitive to latency than
> smaller data transfers (e.g. downloading a CD image vs. a video
> conference); software can ensure better fairness by preventing a bulk
> transfer from hurting the more latency-sensitive transfers.
> >>
> >> (the example below is not completely accurate, but I think it gets the
> point across)
> >>
> >> When buffers become excessively large, you have the situation where a
> video call is going to generate a small amount of data at a regular
> interval, but a bulk data transfer is able to dump a huge amount of data
> into the buffer instantly.
> >>
> >> If you just do FIFO, then you get a small chunk of video call, then
> several seconds worth of CD transfer, followed by the next small chunk of
> the video call.
> >>
> >> But the software can prevent the one app from hogging so much of the
> connection and let the chunk of video call in sooner, avoiding the impact
> to the real-time traffic. Historically this has required that the admin
> classify all traffic and configure equipment to implement different
> treatment based on the classification (and this requires trust in the
> classification process). The bufferbloat team has developed options
> (fq_codel and cake) that can ensure fairness between applications/servers
> with little or no configuration, and no trust in other systems to properly
> classify their traffic.
> >>
> >> The one thing that cake needs to work really well is to know what data
> rate is available. With Starlink, this changes frequently, and cake
> integrated into the Starlink dish/router software would be far better
> than anything that can be done externally, as the rate changes could be
> fed directly into the settings (currently they are only indirectly
> detected).
> >>
> >> David Lang
> >>
> >>
> >> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
> >>
> >> You already know this. Bufferbloat is a symptom and not the cause.
> Bufferbloat grows when there are (1) periods of low or no bandwidth or (2)
> periods of insufficient bandwidth (aka network congestion).
> >>
> >> If I understand this correctly, just a software update cannot make
> bufferbloat go away. It might improve the speed of recovery (e.g. throw
> away all time-sensitive UDP messages).
> >>
> >> Gene
> >> ----------------------------------------------
> >> Eugene Chang
> >> IEEE Senior Life Member
> >> eugene.chang@ieee.org
> >> 781-799-0233 (in Honolulu)
> >>
> >>
> >>
> >> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
> >>
> >> Please help to explain. Here's a draft to start with:
> >>
> >> Starlink Performance Not Sufficient for Military Applications, Say
> Scientists
> >>
> >> The problem is not availability: Starlink works where nothing but
> another satellite network would. It's not bandwidth, although others have
> questions about sustaining bandwidth as the customer base grows. It's
> latency and jitter. As load increases, latency, the time it takes for a
> packet to get through, increases more than it should. The scientists who
> have fought bufferbloat, a major cause of latency on the internet, know
> why. SpaceX needs to upgrade their system to use the scientists' Open
> Source modifications to Linux to fight bufferbloat, and thus reduce
> latency. This is mostly just using a newer version, but there are some
> tunable parameters. Jitter is variation in the time it takes a packet to
> get through the network during a connection, which is inevitable in
> satellite networks, but will be improved by making use of the
> bufferbloat-fighting software, and probably by the addition of more
> satellites.
> >>
> >> We've done all of the work, SpaceX just needs to adopt it by upgrading
> their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator
> and creator of the X Window System, chimed in: <fill in here please>
> >> Open Source luminary Bruce Perens said: sometimes Starlink's latency
> and jitter make it inadequate to remote-control my ham radio station. But
> the military is experimenting with remote-control of vehicles on the
> battlefield and other applications that can be demonstrated, but won't
> happen at scale without adoption of bufferbloat-fighting strategies.
> >>
> >> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <
> eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
> >> The key issue is most people don’t understand why latency matters. They
> don’t see it or feel its impact.
> >>
> >> First, we have to help people see the symptoms of latency and how it
> impacts something they care about.
> >> - gamers care but most people may think it is frivolous.
> >> - musicians care but that is mostly for a hobby.
> >> - business should care because of productivity but they don’t know how
> to “see” the impact.
> >>
> >> Second, there needs to be an “OMG, I have been seeing the action of
> latency all this time and never knew it! I was being shafted.” moment.
> Once you have this awakening, you can get all the press you want for free.
> >>
> >> Most of the time when business apps are developed, “we” hide the impact
> of poor performance (aka latency) or they hide from the discussion because
> the developers don’t have a way to fix the latency. Maybe businesses don’t
> care because any employees affected are just considered poor performers.
> (In bad economic times, the poor performers are just laid off.) For
> employees, if they happen to be at a location with bad latency, they don’t
> know that latency is hurting them. Unfair but most people don’t know the
> issue is latency.
> >>
> >> Talking and explaining why latency is bad is not as effective as
> showing why latency is bad. Showing has to be with something that has a
> personal impact.
> >>
> >> Gene
> >> -----------------------------------
> >> Eugene Chang
> >> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
> >> +1-781-799-0233 (in Honolulu)
> >>
> >>
> >>
> >>
> >>
> >> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <
> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
> wrote:
> >>
> >> If you want to get attention, you can get it for free. I can place
> articles with various press if there is something interesting to say. Did
> this all through the evangelism of Open Source. All we need to do is write,
> sign, and publish a statement. What they actually write is less relevant if
> they publish a link to our statement.
> >>
> >> Right now I am concerned that the Starlink latency and jitter are going
> to be a problem even for remote controlling my ham station. The US
> Military is interested in doing much more, which they have demonstrated,
> but I don't see that happening at scale without some technical work on the
> network. Being able to say this isn't ready for the government's
> application would be an attention-getter.
> >>
> >> Thanks
> >>
> >> Bruce
> >>
> >> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <
> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
> wrote:
> >> These days, if you want attention, you gotta buy it. A 50k half page
> >> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
> >> signed by the kinds of luminaries we got for the fcc wifi fight, would
> >> go a long way towards shifting the tide.
> >>
> >> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:
> dave.taht@gmail.com>> wrote:
> >>
> >>
> >> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> >> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>>
> wrote:
> >>
> >>
> >> The awareness & understanding of latency & impact on QoE is nearly
> unknown among reporters. IMO maybe there should be some kind of background
> briefings for reporters - maybe like a simple YouTube video explainer that
> is short & high level & visual? Otherwise reporters will just continue to
> focus on what they know...
> >>
> >>
> >> That's a great idea. I have visions of crashing the washington
> >> correspondents dinner, but perhaps
> >> there is some set of gatherings journalists regularly attend?
> >>
> >>
> >> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <
> starlink-bounces@lists.bufferbloat.net <mailto:
> starlink-bounces@lists.bufferbloat.net> on behalf of
> starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>>
> wrote:
> >>
> >> I still find it remarkable that reporters are still missing the
> >> meaning of the huge latencies for starlink, under load.
> >>
> >>
> >>
> >>
> >> --
> >> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/<
> https://blog.cerowrt.org/post/state_of_fq_codel/>
> >> Dave Täht CEO, TekLibre, LLC
> >>
> >>
> >>
> >>
> >> --
> >> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/<
> https://blog.cerowrt.org/post/state_of_fq_codel/>
> >> Dave Täht CEO, TekLibre, LLC
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> >> https://lists.bufferbloat.net/listinfo/starlink <
> https://lists.bufferbloat.net/listinfo/starlink>
> >>
> >>
> >> --
> >> Bruce Perens K6BP
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> >> https://lists.bufferbloat.net/listinfo/starlink <
> https://lists.bufferbloat.net/listinfo/starlink>
> >>
> >>
> >>
> >>
> >> --
> >> Bruce Perens K6BP
> >>
> >>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >>
> >>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >
> >
> >
> > --
> > Bruce Perens K6BP
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
>
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
>
--
Bruce Perens K6BP
[-- Attachment #2: Type: text/html, Size: 25826 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:44 ` Bruce Perens
@ 2022-09-27 0:35 ` Dave Taht
2022-09-27 0:55 ` Bruce Perens
2022-09-27 4:06 ` Eugene Y Chang
2022-09-27 3:50 ` Eugene Y Chang
1 sibling, 2 replies; 56+ messages in thread
From: Dave Taht @ 2022-09-27 0:35 UTC (permalink / raw)
To: Bruce Perens; +Cc: Eugene Y Chang, Dave Taht via Starlink
On Mon, Sep 26, 2022 at 2:45 PM Bruce Perens via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.
A network designed for speedtest.net is a network... designed for
speedtest. Starlink seemingly was designed for speedtest - the 15
second "cycle" on which it senses and changes its bandwidth setting
falls just within the 20 s cycle at which speedtest terminates, and
speedtest reports the last number it saw for the bandwidth. It is a
brutal test - using 8 or more flows - much harder on the network than
your typical web page load, which, while it often opens 15 or so
connections, mostly never runs long enough to get out of slow start.
At least part of qualifying for the RDOF money was achieving 100 Mbit
down on "speedtest".
A knowledgeable user concerned about web PLT should be looking at the
first 3 s of a given test, and even then, once the bandwidth cracks
20 Mbit, more is of no help for most web traffic (we've been citing
Mike Belshe's original work here a lot, and more recent measurements
still show that).
Speedtest also does nothing to measure how well a given
videoconference or voip session might go. There isn't a test (at least
not when I last looked) in the FCC broadband measurements for just
videoconferencing, and their latency under load test has, for many
years now, been buried deep in the annual report.
I hope that with both Ookla and SamKnows more publicly recording and
displaying latency under load (still, sigh, I think only displaying
the last number and only sampling every 250 ms) we can shift the
needle on this, but I started off this thread complaining that nobody
was picking up on those numbers... and neither service tests the worst
case scenario of a simultaneous up/download. That was the principal
scenario we explored with the flent "rrul" series of tests, which were
originally designed to emulate and deeply understand what bittorrent
was doing to networks, and which became our principal tool in
designing new fq, aqm and transport CCs, along with the rtt_fair test
for testing near and far destinations at the same time.
My model has always been a family of four: one person uploading,
another doing web, one doing videoconferencing, and another doing voip
or gaming - and no test anyone offers emulates that. With 16 wifi
devices per household, the rrul scenario is actually not "worst case",
but increasingly the state of things "normally".
Another irony about speedtest is that users are inspired^Wtrained to
use it when the "network feels slow", and self-initiate something that
makes it worse, for both them and their portion of the network.
Since the internet architecture board met last year, (
https://www.iab.org/activities/workshops/network-quality/ ) there
seems to be an increasing amount of work on better metrics and tests
for QoE, with stuff like apple's responsiveness test, etc.
I have a new one - prototyped in some starlink tests so far, and
elsewhere - called "SPOM" - steady packets over milliseconds, which,
when run simultaneously with capacity seeking traffic, might be a
better predictor of videoconferencing performance.
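One way to sketch the SPOM idea (this is one reading of it, run here over loopback, not an official tool - real use would put sender and receiver on opposite ends of the access link while a capacity-seeking transfer saturates it):

```python
import socket
import statistics
import threading
import time

# Sketch of a "steady packets over milliseconds" probe: send small UDP
# packets on a fixed cadence and record how each inter-arrival gap
# deviates from the schedule.
INTERVAL = 0.005   # send a probe every 5 ms
COUNT = 200

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2.0)
addr = recv_sock.getsockname()

arrivals = []

def receiver():
    while len(arrivals) < COUNT:
        try:
            recv_sock.recvfrom(64)
        except socket.timeout:
            break   # tolerate loss rather than hang
        arrivals.append(time.perf_counter())

t = threading.Thread(target=receiver)
t.start()
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(COUNT):
    send_sock.sendto(b"probe", addr)
    time.sleep(INTERVAL)
t.join()

# On a loaded link the *spread* of these gaps (not the mean) is what
# predicts how a videoconference will feel.
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
print(f"mean gap {statistics.mean(gaps) * 1000:.2f} ms, "
      f"worst gap {max(gaps) * 1000:.2f} ms")
```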
There's also a really good "P99" conference coming up for those who,
like me, are OCD about a few sigmas.
>
> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> Thank you for the dialog.
>> This discussion with regards to Starlink is interesting as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>>
>> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn’t matter that the ISP is selling a fiber drop, if (parts of) their network are under-provisioned. Two end points can be less than 5 miles apart and realize 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>>
>> Another observation. Running speedtest appears to go from the edge of the network by layer 2 to the speedtest host operated by the ISP. Yup, bypasses the (suspected overloaded) routers.
>>
>> Anyway, just observing.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Gene,
>>
>>
>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>
>> Comments inline below.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Eugene,
>>
>>
>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> Ok, we are getting into the details. I agree.
>>
>> Every node in the path has to implement this to be effective.
>>
>>
>> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management).
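Concretely, the control law behind "smarter buffer management" can be sketched like this (a simplification of the published CoDel algorithm - real CoDel also spaces successive drops by interval/sqrt(drop count); the 5 ms / 100 ms figures are the usual defaults, not tuned values):

```python
# Simplified CoDel-style dropper: drop once the queueing delay (the
# packet's "sojourn time") has stayed above TARGET for a full INTERVAL.
# Illustrative sketch only; real CoDel also escalates the drop rate
# while the delay stays high.
TARGET = 0.005     # 5 ms of standing queue is acceptable
INTERVAL = 0.100   # 100 ms grace period before the first drop

class CodelSketch:
    def __init__(self):
        self.first_above = None   # when sojourn first exceeded TARGET

    def should_drop(self, now, sojourn):
        if sojourn < TARGET:
            self.first_above = None   # queue drained below target: reset
            return False
        if self.first_above is None:
            self.first_above = now
            return False
        return (now - self.first_above) >= INTERVAL

c = CodelSketch()
# A bloated queue holds every packet for 20 ms; one dequeue per 1 ms.
drops = [c.should_drop(t / 1000, 0.020) for t in range(250)]
print(f"first drop at t={drops.index(True)} ms")
```

The point of the grace period is that short bursts (a web page load) never trigger drops; only a *standing* queue does.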
>>
>>
>> This is not completely true.
>>
>>
>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>
>>
>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>
>>
>> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>>
>>
>>
>>
>>
>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>
>>
>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>
>>
>> The hardest part is getting competing ISPs to implement and coordinate.
>>
>>
>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>
>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>
>>
>> [SM] Well we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for say a week and let us know if/how your experience changed?
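At its core, what sqm-scripts configures on the gateway boils down to a CAKE qdisc; a minimal sketch of the egress half (the interface name and the 18 Mbit figure are placeholders - use your own WAN interface and roughly 90% of your measured uplink rate):

```shell
# Shape egress slightly below the real uplink rate so the standing
# queue forms here, where CAKE manages it, instead of in the modem or
# at the ISP. Requires root.
tc qdisc replace dev eth0 root cake bandwidth 18Mbit diffserv3 nat
# Ingress shaping needs an ifb redirect; sqm-scripts automates that.
```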
>>
>> Regards
>> Sebastian
>>
>>
>>
>>
>>
>>
>> Regards
>> Sebastian
>>
>>
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>
>> software updates can do far more than just improve recovery.
>>
>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>
>> (the example below is not completely accurate, but I think it gets the point across)
>>
>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>
>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>
>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real time traffic. Historically this has required the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process), the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
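The FIFO-vs-flow-queueing point above can be shown with a toy model (assumptions: a ~1 Mbit/s bottleneck, a bulk flow that has already dumped 100 packets, and one small video-call packet arriving behind them):

```python
from collections import deque

SERIALIZATION = 0.012  # seconds to send one ~1500 B packet at ~1 Mbit/s

def fifo_wait(queue, target):
    # Single FIFO queue: target drains only after everything ahead of it.
    return (queue.index(target) + 1) * SERIALIZATION

def fq_wait(flows, target):
    # Per-flow queues served round-robin (the essence of fq_codel/cake).
    t = 0.0
    flows = {name: deque(q) for name, q in flows.items()}
    while True:
        for name, q in list(flows.items()):
            if not q:
                continue
            pkt = q.popleft()
            t += SERIALIZATION
            if pkt == target:
                return t

bulk = [f"bulk-{i}" for i in range(100)]
queue = bulk + ["call-0"]              # call packet arrives behind the burst
flows = {"bulk": bulk, "call": ["call-0"]}

print(f"FIFO: {fifo_wait(queue, 'call-0') * 1000:.0f} ms")
print(f"FQ:   {fq_wait(flows, 'call-0') * 1000:.0f} ms")
```

Under FIFO the call packet waits over a second; under flow queueing it goes out after at most one bulk packet already on the wire - with no classification or configuration needed.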
>>
>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>
>> David Lang
>>
>>
>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>
>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>
>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>
>> Please help to explain. Here's a draft to start with:
>>
>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>
>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>
>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>
>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>
>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>> - gamers care but most people may think it is frivolous.
>> - musicians care but that is mostly for a hobby.
>> - business should care because of productivity but they don’t know how to “see” the impact.
>>
>> Second, there needs to be an “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
>>
>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>
>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a personal impact.
>>
>> Gene
>> -----------------------------------
>> Eugene Chang
>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>> +1-781-799-0233 (in Honolulu)
>>
>>
>>
>>
>>
>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>
>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>
>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>
>> Thanks
>>
>> Bruce
>>
>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>> These days, if you want attention, you gotta buy it. A 50k half page
>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>> go a long way towards shifting the tide.
>>
>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>
>>
>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>
>>
>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>
>>
>> That's a great idea. I have visions of crashing the washington
>> correspondents dinner, but perhaps
>> there is some set of gatherings journalists regularly attend?
>>
>>
>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>
>> I still find it remarkable that reporters are still missing the
>> meaning of the huge latencies for starlink, under load.
>>
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>> Dave Täht CEO, TekLibre, LLC
>>
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>
>>
>> --
>> Bruce Perens K6BP
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>
>>
>>
>>
>> --
>> Bruce Perens K6BP
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
>
>
> --
> Bruce Perens K6BP
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:35 ` Eugene Y Chang
2022-09-26 21:44 ` David Lang
@ 2022-09-26 21:44 ` Bruce Perens
2022-09-27 0:35 ` Dave Taht
2022-09-27 3:50 ` Eugene Y Chang
1 sibling, 2 replies; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 21:44 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Sebastian Moeller, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 15022 bytes --]
That's a good maxim: Don't believe a speed test that is hosted by your own
ISP.
On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <
starlink@lists.bufferbloat.net> wrote:
> Thank you for the dialog.
> This discussion with regards to Starlink is interesting as it confirms my
> guesses about the gap between Starlink's overly simplified, over-optimistic
> marketing and the reality as they acquire subscribers.
>
> I am actually interested in a more perverse issue. I am seeing latency and
> bufferbloat as a consequence of significant under-provisioning. It
> doesn’t matter that the ISP is selling a fiber drop, if (parts) of their
> network is under-provisioned. Two end points can be less than 5 miles apart
> and realize 120+ ms latency. Two Labor Days ago (a holiday) the max latency
> was 230+ ms. The pattern I see suggests digital redlining. The older
> communities appear to have much more severe under-provisioning.
>
> Another observation. Running speedtest appears to go from the edge of the
> network by layer 2 to the speedtest host operated by the ISP. Yup, bypasses
> the (suspected overloaded) routers.
>
> Anyway, just observing.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Gene,
>
>
> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>
> Comments inline below.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Eugene,
>
>
> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
> Ok, we are getting into the details. I agree.
>
> Every node in the path has to implement this to be effective.
>
>
> Amazingly the biggest bang for the buck is gotten by fixing those nodes
> that actually contain a network path's bottleneck. Often these are pretty
> stable. So yes for fully guaranteed service quality all nodes would need to
> participate, but for improving things noticeably it is sufficient to
> improve the usual bottlenecks, e.g. for many internet access links the home
> gateway is a decent point to implement better buffer management. (In short
> the problem is over-sized and under-managed buffers, and one of the best
> solutions is better/smarter buffer management).
>
>
> This is not completely true.
>
>
> [SM] You are likely right, trying to summarize things leads to partially
> incorrect generalizations.
>
>
> Say the bottleneck is at node N. During the period of congestion, the
> upstream node N-1 will have to buffer. When node N recovers, the
> bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc.
> etc. Making node N better will reduce the extent of the backup at N-1, but
> N-1 should implement the better code.
>
>
> [SM] It is the node that builds up the queue that profits most from better
> queue management.... (again I generalize, the node with the queue itself
> probably does not care all that much, but the endpoints will profit if the
> queue experiencing node deals with that queue more gracefully).
>
>
>
>
>
> In fact, every node in the path has to have the same prioritization or the
> scheme becomes ineffective.
>
>
> Yes and no, one of the clearest winners has been flow queueing, IMHO not
> because it is the most optimal capacity sharing scheme, but because it is
> the least pessimal scheme, allowing all (or none) flows forward progress.
> You can interpret that as a scheme in which flows below their capacity
> share are prioritized, but I am not sure that is the best way to look at
> these things.
>
>
> The hardest part is getting competing ISPs to implement and coordinate.
>
>
> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot
> end-users can do unilaterally on their side to improve both ingress and
> egress congestion. Admittedly especially ingress congestion would be even
> better handled with cooperation of the ISP.
>
> Bufferbloat and handoff between ISPs will be hard. The only way to fix
> this is to get the unwashed public to care. Then they can say “we don’t
> care about the technical issues, just fix it.” Until then …..
>
>
> [SM] Well we do this one home network at a time (not because that is
> efficient or ideal, but simply because it is possible). Maybe, if you have
> not done so already try OpenWrt with sqm-scripts (and maybe cake-autorate
> in addition) on your home internet access link for say a week and let us
> know if/how your experience changed?
>
> Regards
> Sebastian
>
>
>
>
>
>
> Regards
> Sebastian
>
>
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>
> software updates can do far more than just improve recovery.
>
> In practice, large data transfers are less sensitive to latency than
> smaller data transfers (i.e. downloading a CD image vs a video conference),
> software can ensure better fairness in preventing a bulk transfer from
> hurting the more latency sensitive transfers.
>
> (the example below is not completely accurate, but I think it gets the
> point across)
>
> When buffers become excessively large, you have the situation where a video
> call is going to generate a small amount of data at a regular interval, but
> a bulk data transfer is able to dump a huge amount of data into the buffer
> instantly.
>
> If you just do FIFO, then you get a small chunk of video call, then
> several seconds worth of CD transfer, followed by the next small chunk of
> the video call.
>
> But the software can prevent the one app from hogging so much of the
> connection and let the chunk of video call in sooner, avoiding the impact
> to the real time traffic. Historically this has required the admin classify
> all traffic and configure equipment to implement different treatment based
> on the classification (and this requires trust in the classification
> process), the bufferbloat team has developed options (fq_codel and cake)
> that can ensure fairness between applications/servers with little or no
> configuration, and no trust in other systems to properly classify their
> traffic.
>
> The one thing that Cake needs to work really well is to be able to know
> what the data rate available is. With Starlink, this changes frequently and
> cake integrated into the starlink dish/router software would be far better
> than anything that can be done externally as the rate changes can be fed
> directly into the settings (currently they are only indirectly detected)
>
> David Lang
>
>
> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>
> You already know this. Bufferbloat is a symptom and not the cause.
> Bufferbloat grows when there are (1) periods of low or no bandwidth or (2)
> periods of insufficient bandwidth (aka network congestion).
>
> If I understand this correctly, just a software update cannot make
> bufferbloat go away. It might improve the speed of recovery (e.g. throw
> away all time sensitive UDP messages).
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>
> Please help to explain. Here's a draft to start with:
>
> Starlink Performance Not Sufficient for Military Applications, Say
> Scientists
>
> The problem is not availability: Starlink works where nothing but another
> satellite network would. It's not bandwidth, although others have questions
> about sustaining bandwidth as the customer base grows. It's latency and
> jitter. As load increases, latency, the time it takes for a packet to get
> through, increases more than it should. The scientists who have fought
> bufferbloat, a major cause of latency on the internet, know why. SpaceX
> needs to upgrade their system to use the scientists' Open Source
> modifications to Linux to fight bufferbloat, and thus reduce latency. This
> is mostly just using a newer version, but there are some tunable
> parameters. Jitter is a change in the speed of getting a packet through the
> network during a connection, which is inevitable in satellite networks, but
> will be improved by making use of the bufferbloat-fighting software, and
> probably with the addition of more satellites.
>
> We've done all of the work, SpaceX just needs to adopt it by upgrading
> their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator
> and creator of the X Window System, chimed in: <fill in here please>
> Open Source luminary Bruce Perens said: sometimes Starlink's latency and
> jitter make it inadequate to remote-control my ham radio station. But the
> military is experimenting with remote-control of vehicles on the
> battlefield and other applications that can be demonstrated, but won't
> happen at scale without adoption of bufferbloat-fighting strategies.
>
> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu
> <mailto:eugene.chang@alum.mit.edu>> wrote:
> The key issue is most people don’t understand why latency matters. They
> don’t see it or feel its impact.
>
> First, we have to help people see the symptoms of latency and how it
> impacts something they care about.
> - gamers care but most people may think it is frivolous.
> - musicians care but that is mostly for a hobby.
> - business should care because of productivity but they don’t know how to
> “see” the impact.
>
> Second, there needs to be an “OMG, I have been seeing the action of latency
> all this time and never knew it! I was being shafted.” Once you have this
> awakening, you can get all the press you want for free.
>
> Most of the time when business apps are developed, “we” hide the impact of
> poor performance (aka latency) or they hide from the discussion because the
> developers don’t have a way to fix the latency. Maybe businesses don’t care
> because any employees affected are just considered poor performers. (In bad
> economic times, the poor performers are just laid off.) For employees, if
> they happen to be at a location with bad latency, they don’t know that
> latency is hurting them. Unfair but most people don’t know the issue is
> latency.
>
> Talking and explaining why latency is bad is not as effective as showing
> why latency is bad. Showing has to be with something that has a personal
> impact.
>
> Gene
> -----------------------------------
> Eugene Chang
> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
> +1-781-799-0233 (in Honolulu)
>
>
>
>
>
> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <
> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
> wrote:
>
> If you want to get attention, you can get it for free. I can place
> articles with various press if there is something interesting to say. Did
> this all through the evangelism of Open Source. All we need to do is write,
> sign, and publish a statement. What they actually write is less relevant if
> they publish a link to our statement.
>
> Right now I am concerned that the Starlink latency and jitter is going to
> be a problem even for remote controlling my ham station. The US Military is
> interested in doing much more, which they have demonstrated, but I don't
> see that happening at scale without some technical work on the network. Being
> able to say this isn't ready for the government's application would be an
> attention-getter.
>
> Thanks
>
> Bruce
>
> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <
> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
> wrote:
> These days, if you want attention, you gotta buy it. A 50k half page
> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
> signed by the kinds of luminaries we got for the fcc wifi fight, would
> go a long way towards shifting the tide.
>
> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:
> dave.taht@gmail.com>> wrote:
>
>
> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>
>
> The awareness & understanding of latency & impact on QoE is nearly unknown
> among reporters. IMO maybe there should be some kind of background
> briefings for reporters - maybe like a simple YouTube video explainer that
> is short & high level & visual? Otherwise reporters will just continue to
> focus on what they know...
>
>
> That's a great idea. I have visions of crashing the washington
> correspondents dinner, but perhaps
> there is some set of gatherings journalists regularly attend?
>
>
> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <
> starlink-bounces@lists.bufferbloat.net <mailto:
> starlink-bounces@lists.bufferbloat.net> on behalf of
> starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>>
> wrote:
>
> I still find it remarkable that reporters are still missing the
> meaning of the huge latencies for starlink, under load.
>
>
>
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
>
>
>
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink
>
>
> --
> Bruce Perens K6BP
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink
>
>
>
>
> --
> Bruce Perens K6BP
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
--
Bruce Perens K6BP
[-- Attachment #2: Type: text/html, Size: 29208 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:35 ` Eugene Y Chang
@ 2022-09-26 21:44 ` David Lang
2022-09-26 21:44 ` Bruce Perens
1 sibling, 0 replies; 56+ messages in thread
From: David Lang @ 2022-09-26 21:44 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Sebastian Moeller, David Lang, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 15976 bytes --]
In some places that is a problem, but for most of the rest of us, the link
between us and the ISP is far more likely to be the bottleneck than links
further in the path.
Yes, in an ideal world, all devices would have active queue management, but in
practice, just getting it deployed to the edge makes life much nicer.
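The arithmetic behind why the edge bottleneck dominates is simple. The buffer size and link rate below are illustrative assumptions, not measurements of any particular ISP:

```python
# Queueing delay added by a full drop-tail buffer: drain time = queued bits / link rate.

def queue_delay_ms(buffer_bytes: int, rate_bits_per_s: float) -> float:
    """Milliseconds needed to drain a full buffer at the given link rate."""
    return buffer_bytes * 8 / rate_bits_per_s * 1000

# An unmanaged 1 MB buffer on a 10 Mbit/s uplink holds ~800 ms of traffic,
# so every packet behind it inherits that delay...
uplink_bloat_ms = queue_delay_ms(1_000_000, 10e6)
print(round(uplink_bloat_ms))  # -> 800

# ...while an AQM like CoDel aims to hold the standing queue near a 5 ms target,
# which is why fixing just this one hop removes most of the path's latency.
codel_target_ms = 5.0
print(round(uplink_bloat_ms / codel_target_ms))  # -> 160, roughly 160x less queueing delay
```

The same buffer on a faster core link drains proportionally quicker, which is why the slow access link, not the backbone, is where the queue (and the fix) usually lives.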
I have DSL, cable (Spectrum business), and Starlink, and I keep the work laptop
that I use for video calls on Starlink. I run into far more cases where the
call quality is impacted by things on the other end of the link than by
Starlink itself. So for all the doom-and-gloom and pointing out how it can be
better, it's already pretty good. I would have no hesitation in moving to an
off-grid location with Starlink while continuing to work remotely.
David Lang
On Mon, 26 Sep 2022, Eugene Y Chang wrote:
> Thank you for the dialog.
> This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>
> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn’t matter that the ISP is selling a fiber drop if parts of their network are under-provisioned. Two endpoints can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining: the older communities appear to have much more severe under-provisioning.
>
> Another observation: running speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
>
> Anyway, just observing.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Gene,
>>
>>
>>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>>
>>> Comments inline below.
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>>
>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>> Ok, we are getting into the details. I agree.
>>>>>
>>>>> Every node in the path has to implement this to be effective.
>>>>
>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>>
>>>
>>> This is not completely true.
>>
>> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>>
>>
>>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>
>> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>>
>>
>>>
>>>
>>>>
>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>
>>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>
>>> The hardest part is getting competing ISPs to implement and coordinate.
>>
>> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>>
>>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>
>> [SM] Well we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
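For anyone trying that, a minimal sqm-scripts setup on OpenWrt lives in /etc/config/sqm. The fragment below is an illustrative sketch, not a tested configuration: the interface name is a placeholder for the actual WAN-facing device, and the shaped rates should be set to roughly 85-95% of the link rates you actually measure:

```
config queue 'wan'
        option enabled '1'
        option interface 'eth0'           # placeholder: your WAN device
        option download '85000'           # ingress shape, kbit/s, below measured rate
        option upload '10000'             # egress shape, kbit/s, below measured rate
        option qdisc 'cake'
        option script 'piece_of_cake.qos'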
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>>
>>>
>>>>
>>>> Regards
>>>> Sebastian
>>>>
>>>>
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>>
>>>>>> software updates can do far more than just improve recovery.
>>>>>>
>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>>>>
>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>
>>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>
>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>
>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required that the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
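The FIFO-versus-flow-queueing contrast described above can be sketched in a few lines. This is a toy model under stated assumptions: a single link draining at a fixed rate, everything arriving at once, and made-up packet sizes; real qdiscs like fq_codel also hash flows and drop from queues, which this omits.

```python
from collections import deque

RATE = 1_000_000  # assumed link drain rate, bytes/second


def drain_fifo(packets):
    """Single shared queue: each flow's wait is the completion time of its
    first packet, after everything queued ahead of it has drained."""
    t, waits = 0.0, {}
    for flow, size in packets:
        t += size / RATE
        waits.setdefault(flow, t)
    return waits


def drain_fair(packets):
    """One queue per flow, served round-robin one packet at a time
    (a crude stand-in for flow queueing)."""
    queues = {}
    for flow, size in packets:
        queues.setdefault(flow, deque()).append(size)
    t, waits = 0.0, {}
    while any(queues.values()):
        for flow, q in queues.items():
            if q:
                t += q.popleft() / RATE
                waits.setdefault(flow, t)
    return waits


# A bulk transfer dumps 20 x 50 kB into the buffer; then one 1 kB
# video-call packet arrives behind it.
burst = [("bulk", 50_000)] * 20 + [("call", 1_000)]

print(round(drain_fifo(burst)["call"], 3))  # -> 1.001: call waits out the whole burst
print(round(drain_fair(burst)["call"], 3))  # -> 0.051: call served after one bulk packet
```

With FIFO the small real-time packet inherits the full second of queued bulk data; with per-flow round-robin it waits behind at most one bulk packet, which is the "least pessimal" property discussed earlier in the thread.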
>>>>>>
>>>>>> The one thing Cake needs to work really well is to know the available data rate. With Starlink this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>>
>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>
>>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>
>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>>
>>>>>>> Gene
>>>>>>> ----------------------------------------------
>>>>>>> Eugene Chang
>>>>>>> IEEE Senior Life Member
>>>>>>> eugene.chang@ieee.org
>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>>
>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>
>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>
>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>>
>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>>
>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>>
>>>>>>>> Second, there needs to be an “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>>>
>>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>>
>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> -----------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>
>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>
>>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Bruce
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>
>>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Bruce Perens K6BP
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Bruce Perens K6BP
>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:20 ` Sebastian Moeller
@ 2022-09-26 21:35 ` Eugene Y Chang
2022-09-26 21:44 ` David Lang
2022-09-26 21:44 ` Bruce Perens
0 siblings, 2 replies; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-26 21:35 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Eugene Chang, David Lang, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 15219 bytes --]
Thank you for the dialog.
This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn’t matter that the ISP is selling a fiber drop if parts of their network are under-provisioned. Two endpoints can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining: the older communities appear to have much more severe under-provisioning.
Another observation: running speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
Anyway, just observing.
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Gene,
>
>
>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>
>> Comments inline below.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Eugene,
>>>
>>>
>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>
>>>> Ok, we are getting into the details. I agree.
>>>>
>>>> Every node in the path has to implement this to be effective.
>>>
>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>
>>
>> This is not completely true.
>
> [SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
>
>
>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>
> [SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>
>
>>
>>
>>>
>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>
>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>
>> The hardest part is getting competing ISPs to implement and coordinate.
>
> [SM] Yes, but it turned out even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly especially ingress congestion would be even better handled with cooperation of the ISP.
>
>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>
> [SM] Well we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
>
> Regards
> Sebastian
>
>
>>
>>
>>
>>>
>>> Regards
>>> Sebastian
>>>
>>>
>>>>
>>>> Gene
>>>> ----------------------------------------------
>>>> Eugene Chang
>>>> IEEE Senior Life Member
>>>> eugene.chang@ieee.org
>>>> 781-799-0233 (in Honolulu)
>>>>
>>>>
>>>>
>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>
>>>>> software updates can do far more than just improve recovery.
>>>>>
>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>>>
>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>
>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>
>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>
>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required that the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>
>>>>> The one thing Cake needs to work really well is to know the available data rate. With Starlink this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
>>>>>
>>>>> David Lang
>>>>>
>>>>>
>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>
>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>
>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>
>>>>>> Gene
>>>>>> ----------------------------------------------
>>>>>> Eugene Chang
>>>>>> IEEE Senior Life Member
>>>>>> eugene.chang@ieee.org
>>>>>> 781-799-0233 (in Honolulu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>
>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>
>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>
>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>
>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>
>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>
>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>
>>>>>>> Second, there needs to be an “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>>
>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>
>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>>>>>>
>>>>>>> Gene
>>>>>>> -----------------------------------
>>>>>>> Eugene Chang
>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>
>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>
>>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> Bruce
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>
>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>
>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Bruce Perens K6BP
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Bruce Perens K6BP
>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #1.2: Type: text/html, Size: 32288 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:28 ` warren ponder
@ 2022-09-26 21:34 ` Bruce Perens
2022-09-27 16:14 ` Dave Taht
1 sibling, 0 replies; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 21:34 UTC (permalink / raw)
To: warren ponder; +Cc: Dave Taht, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 13227 bytes --]
I'm giving Dave a wireguard link and root on my off-grid host (which is
recoverable with PiKVM unless the hardware fails). It's in the container in
the photo.[image: Macdoel.jpg]
On Mon, Sep 26, 2022 at 2:28 PM warren ponder via Starlink <
starlink@lists.bufferbloat.net> wrote:
> Dave, what do you need in order to add sites to the data collection? Feel
> free to reply separately or link to a previous thread.
>
> Thx
>
> WP
>
> On Mon, Sep 26, 2022, 2:10 PM Dave Taht via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> I tend to cite rfc7567 (published 2015) a lot, which replaces rfc2309
>> (published 1998!).
>>
>> Thing is, long before that, I'd come to the conclusion that fair
>> queuing was a requirement for
>> sustaining the right throughput for low rate flows in wildly variable
>> bandwidth. At certain places in
>> LTE/5g/starlink networks the payload is encrypted and the header info
>> required is unavailable, and my advocacy of fq is certainly not shared by
>> everyone.
>>
>> We don't know enough about the actual points of congestion in starlink
>> to know if fq could be applied,
>> and although aqm is a very good idea everywhere, it is also largely
>> undeployed where it would matter most.
>>
>> I focused my initial analysis of starlink on just uplink congestion,
>> which I believe can be easily improved given about 20 minutes with a
>> cross compiler for the dishy. We have a very good proof of concept as
>> to how to improve Starlink's behavior over here:
>> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379/87 and
>> ironically the same script could be run on their router as it is based
>> on a 6 year old version of openwrt in the first place.
>>
>> I have plenty of data later than this (
>>
>> https://docs.google.com/document/d/1puRjUVxJ6cCv-rgQ_zn-jWZU9ae0jZbFATLf4PQKblM/edit
>> ) but I would like to be collecting it from at least six sites around
>> the world.
>>
>> On Mon, Sep 26, 2022 at 1:54 PM Eugene Y Chang via Starlink
>> <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Ok, we are getting into the details. I agree.
>> >
>> > Every node in the path has to implement this to be effective.
>> > In fact, every node in the path has to have the same prioritization or
>> the scheme becomes ineffective.
>> >
>> > Gene
>> > ----------------------------------------------
>> > Eugene Chang
>> > IEEE Senior Life Member
>> > eugene.chang@ieee.org
>> > 781-799-0233 (in Honolulu)
>> >
>> >
>> >
>> > On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>> >
>> > software updates can do far more than just improve recovery.
>> >
>> > In practice, large data transfers are less sensitive to latency than
>> smaller data transfers (e.g. downloading a CD image vs a video conference),
>> software can ensure better fairness in preventing a bulk transfer from
>> hurting the more latency sensitive transfers.
>> >
>> > (the example below is not completely accurate, but I think it gets the
>> point across)
>> >
>> > When buffers become excessively large, you have the situation where a
>> video call is going to generate a small amount of data at a regular
>> interval, but a bulk data transfer is able to dump a huge amount of data
>> into the buffer instantly.
>> >
>> > If you just do FIFO, then you get a small chunk of video call, then
>> several seconds worth of CD transfer, followed by the next small chunk of
>> the video call.
>> >
>> > But the software can prevent the one app from hogging so much of the
>> connection and let the chunk of video call in sooner, avoiding the impact
>> to the real time traffic. Historically this has required that the admin classify
>> all traffic and configure equipment to implement different treatment based
>> on the classification (and this requires trust in the classification
>> process). The bufferbloat team has developed options (fq_codel and cake)
>> that can ensure fairness between applications/servers with little or no
>> configuration, and no trust in other systems to properly classify their
>> traffic.
>> >
>> > The one thing that Cake needs to work really well is to be able to know
>> what the data rate available is. With Starlink, this changes frequently and
>> cake integrated into the starlink dish/router software would be far better
>> than anything that can be done externally as the rate changes can be fed
>> directly into the settings (currently they are only indirectly detected)
>> >
>> > David Lang
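[Editor's note: the FIFO-versus-fair-queueing contrast described above can be made concrete with a toy single-link simulation. This is illustrative only; the packet counts, tick length, and rates are made up, and the round-robin scheduler is a crude stand-in for fq_codel/cake.]

```python
from collections import deque

def mean_video_delay(fair: bool) -> float:
    """Toy single-link simulation: a 100-packet bulk burst is queued at t=0,
    while a 'video call' sends one packet every 10 ticks; the link transmits
    one packet per tick.  Returns mean queueing delay of the video packets."""
    bulk_left = 100          # bulk packets already sitting in the queue
    video_q = deque()        # arrival times of queued video packets
    delays = []
    serve_video = True       # round-robin pointer used in fair mode
    t = 0
    while bulk_left or video_q or t < 200:
        if t % 10 == 0 and t < 200:
            video_q.append(t)                    # a video packet arrives
        if fair:
            # alternate between the two flows (crude per-flow fairness)
            if video_q and (serve_video or not bulk_left):
                delays.append(t - video_q.popleft())
            elif bulk_left:
                bulk_left -= 1
            serve_video = not serve_video
        else:
            # FIFO: the bulk burst arrived first, so it drains first
            if bulk_left:
                bulk_left -= 1
            elif video_q:
                delays.append(t - video_q.popleft())
        t += 1
    return sum(delays) / len(delays)

print(f"FIFO mean video delay: {mean_video_delay(False):.1f} ticks")  # → 30.3
print(f"FQ   mean video delay: {mean_video_delay(True):.1f} ticks")   # → 0.0
```

Even in this crude model, the video packets behind a FIFO wait tens of ticks behind the burst, while round-robin service keeps their queueing delay at essentially zero.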
>> >
>> >
>> > On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>> >
>> > You already know this. Bufferbloat is a symptom and not the cause.
>> Bufferbloat grows when there are (1) periods of low or no bandwidth or (2)
>> periods of insufficient bandwidth (aka network congestion).
>> >
>> > If I understand this correctly, just a software update cannot make
>> bufferbloat go away. It might improve the speed of recovery (e.g. throw
>> away all time sensitive UDP messages).
>> >
>> > Gene
>> > ----------------------------------------------
>> > Eugene Chang
>> > IEEE Senior Life Member
>> > eugene.chang@ieee.org
>> > 781-799-0233 (in Honolulu)
>> >
>> >
>> >
>> > On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>> >
>> > Please help to explain. Here's a draft to start with:
>> >
>> > Starlink Performance Not Sufficient for Military Applications, Say
>> Scientists
>> >
>> > The problem is not availability: Starlink works where nothing but
>> another satellite network would. It's not bandwidth, although others have
>> questions about sustaining bandwidth as the customer base grows. It's
>> latency and jitter. As load increases, latency, the time it takes for a
>> packet to get through, increases more than it should. The scientists who
>> have fought bufferbloat, a major cause of latency on the internet, know
>> why. SpaceX needs to upgrade their system to use the scientists' Open
>> Source modifications to Linux to fight bufferbloat, and thus reduce
>> latency. This is mostly just using a newer version, but there are some
>> tunable parameters. Jitter is a change in the speed of getting a packet
>> through the network during a connection, which is inevitable in satellite
>> networks, but will be improved by making use of the bufferbloat-fighting
>> software, and probably with the addition of more satellites.
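[Editor's note: the jitter described in this draft can be quantified. Below is a hedged sketch of the standard interarrival-jitter estimator from RFC 3550, section 6.4.1; the transit-time samples are made up for illustration.]

```python
def rfc3550_jitter(transit_ms):
    """Interarrival jitter estimate (RFC 3550, sec. 6.4.1): a running
    average of transit-time differences, smoothed with gain 1/16."""
    j = 0.0
    for prev, cur in zip(transit_ms, transit_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

steady = [50.0] * 20            # constant 50 ms transit time: no jitter
bursty = [50.0, 120.0] * 10     # transit time alternating 50/120 ms
print(round(rfc3550_jitter(steady), 1))   # → 0.0
print(round(rfc3550_jitter(bursty), 1))   # large: tens of milliseconds
```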
>> >
>> > "We've done all of the work, SpaceX just needs to adopt it by upgrading
>> their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator
>> and creator of the X Window System, chimed in: <fill in here please>
>> > Open Source luminary Bruce Perens said: sometimes Starlink's latency
>> and jitter make it inadequate to remote-control my ham radio station. But
>> the military is experimenting with remote-control of vehicles on the
>> battlefield and other applications that can be demonstrated, but won't
>> happen at scale without adoption of bufferbloat-fighting strategies.
>> >
>> > On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <
>> eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>> > The key issue is most people don’t understand why latency matters. They
>> don’t see it or feel its impact.
>> >
>> > First, we have to help people see the symptoms of latency and how it
>> impacts something they care about.
>> > - gamers care but most people may think it is frivolous.
>> > - musicians care but that is mostly for a hobby.
>> > - business should care because of productivity but they don’t know how
>> to “see” the impact.
>> >
>> > Second, there needs to be an “OMG, I have been seeing the action of
>> latency all this time and never knew it! I was being shafted.” Once you
>> have this awakening, you can get all the press you want for free.
>> >
>> > Most of the time when business apps are developed, “we” hide the impact
>> of poor performance (aka latency) or they hide from the discussion because
>> the developers don’t have a way to fix the latency. Maybe businesses don’t
>> care because any employees affected are just considered poor performers.
>> (In bad economic times, the poor performers are just laid off.) For
>> employees, if they happen to be at a location with bad latency, they don’t
>> know that latency is hurting them. Unfair but most people don’t know the
>> issue is latency.
>> >
>> > Talking and explaining why latency is bad is not as effective as
>> showing why latency is bad. Showing has to be with something that has a
>> personal impact.
>> >
>> > Gene
>> > -----------------------------------
>> > Eugene Chang
>> > eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>> > +1-781-799-0233 (in Honolulu)
>> >
>> >
>> >
>> >
>> >
>> > On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <
>> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
>> wrote:
>> >
>> > If you want to get attention, you can get it for free. I can place
>> articles with various press if there is something interesting to say. Did
>> this all through the evangelism of Open Source. All we need to do is write,
>> sign, and publish a statement. What they actually write is less relevant if
>> they publish a link to our statement.
>> >
>> > Right now I am concerned that the Starlink latency and jitter are going
>> to be a problem even for remote controlling my ham station. The US Military
>> is interested in doing much more, which they have demonstrated, but I don't
>> see that happening at scale without some technical work on the network. Being
>> able to say this isn't ready for the government's application would be an
>> attention-getter.
>> >
>> > Thanks
>> >
>> > Bruce
>> >
>> > On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <
>> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
>> wrote:
>> > These days, if you want attention, you gotta buy it. A 50k half page
>> > ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>> > signed by the kinds of luminaries we got for the fcc wifi fight, would
>> > go a long way towards shifting the tide.
>> >
>> > On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:
>> dave.taht@gmail.com>> wrote:
>> >
>> >
>> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> > <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>>
>> wrote:
>> >
>> >
>> > The awareness & understanding of latency & impact on QoE is nearly
>> unknown among reporters. IMO maybe there should be some kind of background
>> briefings for reporters - maybe like a simple YouTube video explainer that
>> is short & high level & visual? Otherwise reporters will just continue to
>> focus on what they know...
>> >
>> >
>> > That's a great idea. I have visions of crashing the washington
>> > correspondents dinner, but perhaps
>> > there is some set of gatherings journalists regularly attend?
>> >
>> >
>> > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <
>> starlink-bounces@lists.bufferbloat.net <mailto:
>> starlink-bounces@lists.bufferbloat.net> on behalf of
>> starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>>
>> wrote:
>> >
>> > I still find it remarkable that reporters are still missing the
>> > meaning of the huge latencies for starlink, under load.
>> >
>> >
>> >
>> >
>> > --
>> > FQ World Domination pending:
>> https://blog.cerowrt.org/post/state_of_fq_codel/<
>> https://blog.cerowrt.org/post/state_of_fq_codel/>
>> > Dave Täht CEO, TekLibre, LLC
>> >
>> >
>> >
>> >
>> > --
>> > FQ World Domination pending:
>> https://blog.cerowrt.org/post/state_of_fq_codel/<
>> https://blog.cerowrt.org/post/state_of_fq_codel/>
>> > Dave Täht CEO, TekLibre, LLC
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> > https://lists.bufferbloat.net/listinfo/starlink <
>> https://lists.bufferbloat.net/listinfo/starlink>
>> >
>> >
>> > --
>> > Bruce Perens K6BP
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> > https://lists.bufferbloat.net/listinfo/starlink <
>> https://lists.bufferbloat.net/listinfo/starlink>
>> >
>> >
>> >
>> >
>> > --
>> > Bruce Perens K6BP
>> >
>> >
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>>
>> --
>> FQ World Domination pending:
>> https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
--
Bruce Perens K6BP
[-- Attachment #1.2: Type: text/html, Size: 18460 bytes --]
[-- Attachment #2: Macdoel.jpg --]
[-- Type: image/jpeg, Size: 2351751 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:22 ` David Lang
@ 2022-09-26 21:29 ` Sebastian Moeller
2022-09-27 3:47 ` Eugene Y Chang
0 siblings, 1 reply; 56+ messages in thread
From: Sebastian Moeller @ 2022-09-26 21:29 UTC (permalink / raw)
To: David Lang; +Cc: Eugene Y Chang, Dave Taht via Starlink
Hi David,
> On Sep 26, 2022, at 23:22, David Lang <david@lang.hm> wrote:
>
> On Mon, 26 Sep 2022, Eugene Y Chang wrote:
>
>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>
>>> Hi Eugene,
>>>
>>>
>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>
>>>> Ok, we are getting into the details. I agree.
>>>>
>>>> Every node in the path has to implement this to be effective.
>>>
>>> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short the problem are over-sized and under-managed buffers, and one of the best solution is better/smarter buffer management).
>>>
>>
>> This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>
> only if node N and node N-1 handle the same traffic with the same link speeds. In practice this is almost never the case.
[SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below the true bottleneck rate (thereby creating a new, artificial bottleneck so the queue develops at a point where we can control it).
Also, if the difference between the "true structural" and the artificial bottleneck is small in comparison to the traffic inrush, we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved links that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
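[Editor's note: a minimal dry-run sketch of this ingress-shaping idea with tc and cake. The interface name and rate are assumptions to substitute with your own measurements, and the commands are only printed rather than executed, since real use needs root and an IFB-capable kernel.]

```shell
#!/bin/sh
# Dry-run sketch: redirect ingress traffic through an IFB device and let
# cake shape it to ~90% of the measured bottleneck rate, so the queue
# forms at a point we control.  WAN and RATE_KBIT are assumptions.
WAN=eth0
RATE_KBIT=100000                    # measured download rate: 100 Mbit/s
SHAPED=$((RATE_KBIT * 90 / 100))    # artificial bottleneck at 90%

echo "ip link add ifb0 type ifb; ip link set ifb0 up"
echo "tc qdisc add dev $WAN handle ffff: ingress"
echo "tc filter add dev $WAN parent ffff: matchall action mirred egress redirect dev ifb0"
echo "tc qdisc add dev ifb0 root cake bandwidth ${SHAPED}kbit besteffort"
```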
> Until you get to gigabit last-mile links, the last mile is almost always the bottleneck from both sides, so implementing cake on the home router makes a huge improvement (and if you can get it on the last-mile ISP router, even better). Once you get into the Internet fabric, bottlenecks are fairly rare; they do happen, but ISPs carefully watch for those and add additional paths and/or increase bandwidth to avoid them.
[SM] Well, sometimes such links are congested too for economic reasons...
Regards
Sebastian
>
> David Lang
>
>>>
>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>
>>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>
>> The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>>
>>
>>
>>>
>>> Regards
>>> Sebastian
>>>
>>>
>>>>
>>>> Gene
>>>> ----------------------------------------------
>>>> Eugene Chang
>>>> IEEE Senior Life Member
>>>> eugene.chang@ieee.org
>>>> 781-799-0233 (in Honolulu)
>>>>
>>>>
>>>>
>>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>>
>>>>> software updates can do far more than just improve recovery.
>>>>>
>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>>>>
>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>
>>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>
>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>>
>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real time traffic. Historically this has required that the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>
>>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>>
>>>>> David Lang
>>>>>
>>>>>
>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>
>>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>
>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>>
>>>>>> Gene
>>>>>> ----------------------------------------------
>>>>>> Eugene Chang
>>>>>> IEEE Senior Life Member
>>>>>> eugene.chang@ieee.org
>>>>>> 781-799-0233 (in Honolulu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>>
>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>
>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>
>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>
>>>>>>> "We've done all of the work, SpaceX just needs to adopt it by upgrading their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>
>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>>
>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>> - gamers care but most people may think it is frivolous.
>>>>>>> - musicians care but that is mostly for a hobby.
>>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>>
>>>>>>> Second, there needs to be an “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
>>>>>>>
>>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>>
>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a personal impact.
>>>>>>>
>>>>>>> Gene
>>>>>>> -----------------------------------
>>>>>>> Eugene Chang
>>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>
>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>
>>>>>>>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> Bruce
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>
>>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>>> correspondents dinner, but perhaps
>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>>
>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Bruce Perens K6BP
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Bruce Perens K6BP
>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:10 ` Dave Taht
@ 2022-09-26 21:28 ` warren ponder
2022-09-26 21:34 ` Bruce Perens
2022-09-27 16:14 ` Dave Taht
0 siblings, 2 replies; 56+ messages in thread
From: warren ponder @ 2022-09-26 21:28 UTC (permalink / raw)
To: Dave Taht; +Cc: Eugene Y Chang, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 12437 bytes --]
Dave, what do you need in order to add sites to the data collection? Feel
free to reply separately or link to a previous thread.
Thx
WP
On Mon, Sep 26, 2022, 2:10 PM Dave Taht via Starlink <
starlink@lists.bufferbloat.net> wrote:
> I tend to cite rfc7567 (published 2015) a lot, which replaces rfc2309
> (published 1998!).
>
> Thing is, long before that, I'd come to the conclusion that fair
> queuing was a requirement for
> sustaining the right throughput for low rate flows in wildly variable
> bandwidth. At certain places in
> LTE/5g/starlink networks the payload is encrypted and the header info
> required is unavailable, and my advocacy of fq is certainly not shared by
> everyone.
>
> We don't know enough about the actual points of congestion in Starlink
> to know if FQ could be applied, and although AQM is a very good idea
> everywhere, it is also largely undeployed where it would matter most.
>
> I focused my initial analysis of Starlink on just uplink congestion,
> which I believe can be easily improved given about 20 minutes with a
> cross compiler for the dishy. We have a very good proof of concept as
> to how to improve Starlink's behavior over here:
> https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379/87 and,
> ironically, the same script could be run on their router, as it is based
> on a six-year-old version of OpenWrt in the first place.
>
> I have plenty of data more recent than this (
>
> https://docs.google.com/document/d/1puRjUVxJ6cCv-rgQ_zn-jWZU9ae0jZbFATLf4PQKblM/edit
> ) but I would like to be collecting it from at least six sites around
> the world.
>
> On Mon, Sep 26, 2022 at 1:54 PM Eugene Y Chang via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> >
> > Ok, we are getting into the details. I agree.
> >
> > Every node in the path has to implement this to be effective.
> > In fact, every node in the path has to have the same prioritization or
> the scheme becomes ineffective.
> >
> > Gene
> > ----------------------------------------------
> > Eugene Chang
> > IEEE Senior Life Member
> > eugene.chang@ieee.org
> > 781-799-0233 (in Honolulu)
> >
> >
> >
> > On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
> >
> > software updates can do far more than just improve recovery.
> >
> > In practice, large data transfers are less sensitive to latency than
> smaller data transfers (e.g. downloading a CD image vs. a video conference);
> software can ensure better fairness by preventing a bulk transfer from
> hurting the more latency-sensitive transfers.
> >
> > (the example below is not completely accurate, but I think it gets the
> point across)
> >
> > When buffers become excessively large, you have the situation where a
> video call is going to generate a small amount of data at a regular
> interval, but a bulk data transfer is able to dump a huge amount of data
> into the buffer instantly.
> >
> > If you just do FIFO, then you get a small chunk of video call, then
> several seconds worth of CD transfer, followed by the next small chunk of
> the video call.
> >
> > But the software can prevent the one app from hogging so much of the
> connection and let the chunk of video call in sooner, avoiding the impact
> to the real-time traffic. Historically this has required the admin to
> classify all traffic and configure equipment to implement different
> treatment based on the classification (and this requires trust in the
> classification process). The bufferbloat team has developed options
> (fq_codel and cake) that can ensure fairness between applications/servers
> with little or no configuration, and no trust in other systems to properly
> classify their traffic.
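David Lang's FIFO-vs-flow-queueing example above can be put in rough numbers with a toy model. Everything below is an illustrative assumption (link rate, packet sizes, backlog), not a Starlink measurement; it only shows the shape of the effect:

```python
# Toy model: a bulk flow has a large backlog queued at the bottleneck
# when a small voice packet arrives. Compare the voice packet's wait
# under FIFO vs. per-flow (fq_codel/cake style) scheduling.

LINK_BYTES_PER_S = 10_000_000 / 8   # assumed 10 Mbit/s bottleneck link
MTU = 1500                          # bulk packet size, bytes
BULK_BACKLOG = 2_000                # bulk packets already in the buffer

def fifo_wait_s(backlog_pkts: int) -> float:
    """FIFO: the voice packet drains only after every queued bulk byte."""
    return backlog_pkts * MTU / LINK_BYTES_PER_S

def fq_wait_s() -> float:
    """Per-flow queueing: the voice flow's own queue is empty, so it
    waits at most roughly one bulk packet currently in service."""
    return MTU / LINK_BYTES_PER_S

print(f"FIFO wait: {fifo_wait_s(BULK_BACKLOG) * 1000:.1f} ms")
print(f"FQ wait:   {fq_wait_s() * 1000:.2f} ms")
```

With these assumed numbers FIFO holds the voice packet for about 2.4 seconds, while flow queueing bounds its wait near a single packet's serialization time, which is the whole argument for fixing the bottleneck queue rather than the endpoints.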
> >
> > The one thing that cake needs to work really well is to know the
> available data rate. With Starlink, this changes frequently, and cake
> integrated into the Starlink dish/router software would be far better
> than anything that can be done externally, as the rate changes could be fed
> directly into the settings (currently they are only indirectly detected).
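The "feed the rate into the settings" point above is concrete in Linux terms. A hedged sketch of what that hook looks like with tc and cake (this is not SpaceX's actual software; the interface name "wan" and both rates are assumptions for illustration):

```shell
# Attach cake with an explicit bandwidth ceiling on an assumed WAN port.
tc qdisc replace dev wan root cake bandwidth 20mbit

# Later, when the link is known to sustain only ~8 Mbit/s, retune the
# shaper instead of letting packets pool in the device buffer:
tc qdisc change dev wan root cake bandwidth 8mbit
```

Integration inside the dish/router would exploit exactly this: the modem knows the instantaneous rate and could issue the equivalent of the `change` step directly, instead of an external script inferring the rate from latency measurements.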
> >
> > David Lang
> >
> >
> > On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
> >
> > You already know this. Bufferbloat is a symptom and not the cause.
> Bufferbloat grows when there are (1) periods of low or no bandwidth or (2)
> periods of insufficient bandwidth (aka network congestion).
> >
> > If I understand this correctly, just a software update cannot make
> bufferbloat go away. It might improve the speed of recovery (e.g. throw
> away all time-sensitive UDP messages).
> >
> > Gene
> > ----------------------------------------------
> > Eugene Chang
> > IEEE Senior Life Member
> > eugene.chang@ieee.org
> > 781-799-0233 (in Honolulu)
> >
> >
> >
> > On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
> >
> > Please help to explain. Here's a draft to start with:
> >
> > Starlink Performance Not Sufficient for Military Applications, Say
> Scientists
> >
> > The problem is not availability: Starlink works where nothing but
> another satellite network would. It's not bandwidth, although others have
> questions about sustaining bandwidth as the customer base grows. It's
> latency and jitter. As load increases, latency, the time it takes for a
> packet to get through, increases more than it should. The scientists who
> have fought bufferbloat, a major cause of latency on the internet, know
> why. SpaceX needs to upgrade their system to use the scientists' Open
> Source modifications to Linux to fight bufferbloat, and thus reduce
> latency. This is mostly just using a newer version, but there are some
> tunable parameters. Jitter is a change in the speed of getting a packet
> through the network during a connection, which is inevitable in satellite
> networks, but will be improved by making use of the bufferbloat-fighting
> software, and probably with the addition of more satellites.
> >
> > We've done all of the work, SpaceX just needs to adopt it by upgrading
> their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator
> and creator of the X Window System, chimed in: <fill in here please>
> > Open Source luminary Bruce Perens said: sometimes Starlink's latency and
> jitter make it inadequate to remote-control my ham radio station. But the
> military is experimenting with remote-control of vehicles on the
> battlefield and other applications that can be demonstrated, but won't
> happen at scale without adoption of bufferbloat-fighting strategies.
> >
> > On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu
> <mailto:eugene.chang@alum.mit.edu>> wrote:
> > The key issue is most people don’t understand why latency matters. They
> don’t see it or feel its impact.
> >
> > First, we have to help people see the symptoms of latency and how it
> impacts something they care about.
> > - gamers care but most people may think it is frivolous.
> > - musicians care but that is mostly for a hobby.
> > - business should care because of productivity but they don’t know how
> to “see” the impact.
> >
> > Second, there needs to be an “OMG, I have been seeing the action of
> latency all this time and never knew it! I was being shafted” moment. Once
> you have this awakening, you can get all the press you want for free.
> >
> > Most of the time when business apps are developed, “we” hide the impact
> of poor performance (aka latency) or they hide from the discussion because
> the developers don’t have a way to fix the latency. Maybe businesses don’t
> care because any employees affected are just considered poor performers.
> (In bad economic times, the poor performers are just laid off.) For
> employees, if they happen to be at a location with bad latency, they don’t
> know that latency is hurting them. Unfair but most people don’t know the
> issue is latency.
> >
> > Talking and explaining why latency is bad is not as effective as showing
> why latency is bad. Showing has to be done with something that has a
> personal impact.
> >
> > Gene
> > -----------------------------------
> > Eugene Chang
> > eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
> > +1-781-799-0233 (in Honolulu)
> >
> >
> >
> >
> >
> > On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <
> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
> wrote:
> >
> > If you want to get attention, you can get it for free. I can place
> articles with various press if there is something interesting to say. Did
> this all through the evangelism of Open Source. All we need to do is write,
> sign, and publish a statement. What they actually write is less relevant if
> they publish a link to our statement.
> >
> > Right now I am concerned that the Starlink latency and jitter is going
> to be a problem even for remote controlling my ham station. The US Military
> is interested in doing much more, which they have demonstrated, but I don't
> see it happening at scale without some technical work on the network. Being
> able to say this isn't ready for the government's application would be an
> attention-getter.
> >
> > Thanks
> >
> > Bruce
> >
> > On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <
> starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>>
> wrote:
> These days, if you want attention, you gotta buy it. A 50k half-page
> > ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
> > signed by the kinds of luminaries we got for the fcc wifi fight, would
> > go a long way towards shifting the tide.
> >
> > On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:
> dave.taht@gmail.com>> wrote:
> >
> >
> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> > <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>>
> wrote:
> >
> >
> > The awareness & understanding of latency & impact on QoE is nearly
> unknown among reporters. IMO maybe there should be some kind of background
> briefings for reporters - maybe like a simple YouTube video explainer that
> is short & high level & visual? Otherwise reporters will just continue to
> focus on what they know...
> >
> >
> > That's a great idea. I have visions of crashing the Washington
> > correspondents' dinner, but perhaps
> > there is some set of gatherings journalists regularly attend?
> >
> >
> > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <
> starlink-bounces@lists.bufferbloat.net <mailto:
> starlink-bounces@lists.bufferbloat.net> on behalf of
> starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>>
> wrote:
> >
> > I still find it remarkable that reporters are still missing the
> > meaning of the huge latencies for starlink, under load.
> >
> >
> >
> >
> > --
> > FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/<
> https://blog.cerowrt.org/post/state_of_fq_codel/>
> > Dave Täht CEO, TekLibre, LLC
> >
> >
> >
> >
> > --
> > FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/<
> https://blog.cerowrt.org/post/state_of_fq_codel/>
> > Dave Täht CEO, TekLibre, LLC
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> > https://lists.bufferbloat.net/listinfo/starlink <
> https://lists.bufferbloat.net/listinfo/starlink>
> >
> >
> > --
> > Bruce Perens K6BP
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> > https://lists.bufferbloat.net/listinfo/starlink <
> https://lists.bufferbloat.net/listinfo/starlink>
> >
> >
> >
> >
> > --
> > Bruce Perens K6BP
> >
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
>
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 17277 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:10 ` Eugene Y Chang
2022-09-26 21:20 ` Sebastian Moeller
@ 2022-09-26 21:22 ` David Lang
2022-09-26 21:29 ` Sebastian Moeller
1 sibling, 1 reply; 56+ messages in thread
From: David Lang @ 2022-09-26 21:22 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Sebastian Moeller, David Lang, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 12989 bytes --]
On Mon, 26 Sep 2022, Eugene Y Chang wrote:
>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Eugene,
>>
>>
>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>
>>> Ok, we are getting into the details. I agree.
>>>
>>> Every node in the path has to implement this to be effective.
>>
>> Amazingly the biggest bang for the buck is gotten by fixing those nodes
>> that actually contain a network path's bottleneck. Often these are pretty
>> stable. So yes for fully guaranteed service quality all nodes would need to
>> participate, but for improving things noticeably it is sufficient to improve
>> the usual bottlenecks, e.g. for many internet access links the home gateway
>> is a decent point to implement better buffer management. (In short, the
>> problem is over-sized and under-managed buffers, and one of the best
>> solutions is better/smarter buffer management.)
>>
>
> This is not completely true. Say the bottleneck is at node N. During the
> period of congestion, the upstream node N-1 will have to buffer. When node N
> recovers, the bufferbloat at N-1 will be blocking until the bufferbloat
> drains. Etc. etc. Making node N better will reduce the extent of the backup
> at N-1, but N-1 should implement the better code.
Only if node N and node N-1 handle the same traffic at the same link speeds.
In practice this is almost never the case.
Until you get to gigabit last-mile links, the last mile is almost always the
bottleneck from both sides, so implementing cake on the home router makes a huge
improvement (and if you can get it on the last-mile ISP router, even better).
Once you get into the Internet fabric, bottlenecks are fairly rare; they do
happen, but ISPs carefully watch for them and add additional paths and/or
increase bandwidth to avoid them.
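The asymmetry described above is easy to quantify: the worst-case delay a full buffer adds is just its size divided by the link rate, so the same buffer that is harmless in the core is crippling at the last mile. The buffer size and link rates below are illustrative assumptions:

```python
def buffer_delay_ms(buffer_bytes: int, link_bits_per_s: float) -> float:
    """Worst-case latency added by a full FIFO buffer feeding a link."""
    return buffer_bytes * 8 / link_bits_per_s * 1000

BUF = 1_000_000  # an (illustrative) 1 MB device buffer

print(buffer_delay_ms(BUF, 10e6))  # same buffer on a 10 Mbit/s last mile
print(buffer_delay_ms(BUF, 10e9))  # same buffer on a 10 Gbit/s core link
```

The slow link turns that buffer into hundreds of milliseconds of standing queue, while the core link drains it in under a millisecond, which is why fixing the usual bottleneck buys most of the improvement.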
David Lang
>>
>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>
>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>
> The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>
>
>
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>
>>>> software updates can do far more than just improve recovery.
>>>>
>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>>>
>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>
>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>
>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>
>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real time traffic. Historically this has required the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process), the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>
>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>
>>>> David Lang
>>>>
>>>>
>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>
>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>
>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>
>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>
>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>
>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>
>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>
>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>
>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>> - gamers care but most people may think it is frivolous.
>>>>>> - musicians care but that is mostly for a hobby.
>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>
>>>>>> Second, there needs to be an “OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>
>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>
>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>>>>>
>>>>>> Gene
>>>>>> -----------------------------------
>>>>>> Eugene Chang
>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>
>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>
>>>>>>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see it happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Bruce
>>>>>>>
>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>> go a long way towards shifting the tide.
>>>>>>>
>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>
>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>
>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>> correspondents dinner, but perhaps
>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>
>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Bruce Perens K6BP
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Bruce Perens K6BP
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:10 ` Eugene Y Chang
@ 2022-09-26 21:20 ` Sebastian Moeller
2022-09-26 21:35 ` Eugene Y Chang
2022-09-26 21:22 ` David Lang
1 sibling, 1 reply; 56+ messages in thread
From: Sebastian Moeller @ 2022-09-26 21:20 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Dave Taht via Starlink
Hi Gene,
> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
>
> Comments inline below.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Eugene,
>>
>>
>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>> Ok, we are getting into the details. I agree.
>>>
>>> Every node in the path has to implement this to be effective.
>>
>> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>
>
> This is not completely true.
[SM] You are likely right, trying to summarize things leads to partially incorrect generalizations.
> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
[SM] It is the node that builds up the queue that profits most from better queue management.... (again I generalize, the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue experiencing node deals with that queue more gracefully).
>
>
>>
>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>
>> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>
> The hardest part is getting competing ISPs to implement and coordinate.
[SM] Yes, but it turned out even with non-cooperating ISPs there is a lot that end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly, especially ingress congestion would be even better handled with cooperation of the ISP.
> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
[SM] Well, we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
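For anyone wanting to try the sqm-scripts suggestion, the setup on OpenWrt amounts to one queue section in /etc/config/sqm. A hedged example follows; the interface name is an assumed placeholder and the rates must be replaced with roughly 85-90% of your own measured link rates, so that the shaper, not the bloated upstream buffer, becomes the bottleneck:

```shell
config queue 'wan'
        option enabled '1'
        option interface 'wan'            # WAN-facing interface (assumed name)
        option download '85000'           # kbit/s, a bit below measured downlink
        option upload '18000'             # kbit/s, a bit below measured uplink
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
```

Restarting the sqm service then applies the settings; keeping the shaped rate slightly below the real link rate is what moves the queue to a box running fq_codel/cake, where it is managed.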
Regards
Sebastian
>
>
>
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>>>
>>>> software updates can do far more than just improve recovery.
>>>>
>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>>>
>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>
>>>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>
>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>>>
>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real time traffic. Historically this has required the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process), the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>
>>>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>>>
>>>> David Lang
>>>>
>>>>
>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>
>>>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>
>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>
>>>>>
>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>>>
>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>
>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>
>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>
>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>
>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>>>
>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>> - gamers care but most people may think it is frivolous.
>>>>>> - musicians care but that is mostly for a hobby.
>>>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>>>
>>>>>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>>>>>
>>>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>>>
>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>>>>>
>>>>>> Gene
>>>>>> -----------------------------------
>>>>>> Eugene Chang
>>>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>
>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>
>>>>>>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see it happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Bruce
>>>>>>>
>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>>>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>>>> go a long way towards shifting the tide.
>>>>>>>
>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>>>
>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>
>>>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>>>> correspondents dinner, but perhaps
>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>>>
>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Bruce Perens K6BP
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Bruce Perens K6BP
>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:54 ` Eugene Y Chang
` (2 preceding siblings ...)
2022-09-26 21:10 ` Dave Taht
@ 2022-09-26 21:17 ` David Lang
3 siblings, 0 replies; 56+ messages in thread
From: David Lang @ 2022-09-26 21:17 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Bruce Perens, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 11615 bytes --]
On Mon, 26 Sep 2022, Eugene Y Chang wrote:
> Ok, we are getting into the details. I agree.
>
> Every node in the path has to implement this to be effective.
> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
With traditional QoS you are correct, however what we have found is that if you
just do something on the bottleneck nodes, everything else can just forward
traffic without thinking about it. In practice (at least until you get up to
gigabit to the home), the vast majority of the time the bottleneck nodes are the
last-mile hop, with the uplink from the home almost always being a bottleneck.
So if you implement cake on the home router, you solve most of the problem.
David Lang
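[Editor's note: a rough back-of-the-envelope sketch of why the last-mile hop dominates. The buffer size and link rates below are illustrative assumptions, not measurements: the delay a full buffer adds is simply its size divided by the link rate, so the same buffer that is harmless in the core is crippling on a slow home uplink.]

```python
def queue_delay_ms(queue_bytes: int, link_rate_bps: float) -> float:
    """Time to drain a backlog of queue_bytes at link_rate_bps, in milliseconds."""
    return queue_bytes * 8 / link_rate_bps * 1000

# The same 256 KiB of buffered data:
core_ms = queue_delay_ms(256 * 1024, 1e9)    # ~2 ms at 1 Gbit/s in the core
uplink_ms = queue_delay_ms(256 * 1024, 5e6)  # ~420 ms on a 5 Mbit/s home uplink
print(f"core: {core_ms:.1f} ms, uplink: {uplink_ms:.1f} ms")
```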
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>
>> software updates can do far more than just improve recovery.
>>
>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), software can ensure better fairness in preventing a bulk transfer from hurting the more latency sensitive transfers.
>>
>> (the example below is not completely accurate, but I think it gets the point across)
>>
>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>
>> If you just do FIFO, then you get a small chunk of video call, then several seconds worth of CD transfer, followed by the next small chunk of the video call.
>>
>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real time traffic. Historically this has required the admin classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process), the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
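[Editor's note: the FIFO-vs-fairness contrast above can be made concrete with a toy scheduler. This is a deliberately simplified sketch with fixed packet sizes and one pass per packet, not the actual fq_codel/cake algorithm.]

```python
# One link serving fixed 1500-byte packets at 1 Mbit/s:
PKT_MS = 1500 * 8 * 1000 // 1_000_000   # 12 ms of link time per packet

bulk = ["bulk"] * 100    # a burst dumped into the buffer at once
video = ["video"]        # one real-time packet queued just behind it

# FIFO: the video packet waits behind the entire burst.
fifo_order = bulk + video
fifo_delay_ms = PKT_MS * (fifo_order.index("video") + 1)

def round_robin(flows):
    """Per-flow round robin: alternate between per-flow queues, so the
    lone video packet is sent second instead of 101st."""
    queues = [list(f) for f in flows]
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.pop(0))
    return order

fq_order = round_robin([bulk, video])
fq_delay_ms = PKT_MS * (fq_order.index("video") + 1)

print(f"FIFO: {fifo_delay_ms} ms, fair queueing: {fq_delay_ms} ms")
```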
>>
>> The one thing that Cake needs to work really well is to be able to know what the data rate available is. With Starlink, this changes frequently and cake integrated into the starlink dish/router software would be far better than anything that can be done externally as the rate changes can be fed directly into the settings (currently they are only indirectly detected)
>>
>> David Lang
>>
>>
>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:02 ` Bruce Perens
@ 2022-09-26 21:14 ` Dave Taht
0 siblings, 0 replies; 56+ messages in thread
From: Dave Taht @ 2022-09-26 21:14 UTC (permalink / raw)
To: Bruce Perens; +Cc: Eugene Y Chang, Dave Taht via Starlink
On Mon, Sep 26, 2022 at 2:02 PM Bruce Perens via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
>
>
> On Mon, Sep 26, 2022 at 1:54 PM Eugene Y Chang <eugene.chang@ieee.org> wrote:
>>
>> Every node in the path has to implement this to be effective.
>
>
> It would certainly be optimal if every node implemented it. But any node can detect the endpoint round-trip time, and how it degrades, and thus adjust how fast it feeds packets into the network. And a midpoint can detect when a host is feeding packets too fast for downstream nodes, and send explicit congestion notification, and failing that, drop some packets from that source.
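[Editor's note: the mark-then-drop behavior Bruce describes is roughly what a CoDel-style AQM does. The sketch below is heavily simplified from RFC 8289 (real CoDel spaces successive drops at an increasing rate and resets its interval after acting); the target and interval values are the RFC defaults, everything else is illustrative.]

```python
TARGET_MS = 5.0      # acceptable standing-queue delay (RFC 8289 default)
INTERVAL_MS = 100.0  # how long delay may exceed target before acting

def codel_like(sojourn_ms, dt_ms=1.0):
    """Per-packet decision: 'forward' or 'mark' (ECN; a real AQM falls
    back to dropping if the sender ignores marks). sojourn_ms[i] is how
    long packet i sat in the queue; packets are observed dt_ms apart."""
    actions, above_since = [], None
    for i, s in enumerate(sojourn_ms):
        t = i * dt_ms
        if s <= TARGET_MS:
            above_since = None          # queue is fine again; reset
            actions.append("forward")
        elif above_since is None:
            above_since = t             # delay just crossed the target
            actions.append("forward")
        elif t - above_since >= INTERVAL_MS:
            actions.append("mark")      # persistently bad: signal the sender
        else:
            actions.append("forward")
    return actions

acts = codel_like([8.0] * 150)          # 150 ms of 8 ms standing delay
print(acts.count("mark"))               # marking begins only after 100 ms
```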
yes.
Been working on libreqos.io lately, which is targetted at small ISPs,
and uses our latest XDP and cake code. It's getting marvelous - the
ipv6 code landed today, and we think we're good for 20gbit on 16 cores
on < $2k hw with purely free software.
Preseem (also using fq_codel) has been delivering a nice shaping
middlebox into the wisp market for going on 6 years.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 21:01 ` Sebastian Moeller
@ 2022-09-26 21:10 ` Eugene Y Chang
2022-09-26 21:20 ` Sebastian Moeller
2022-09-26 21:22 ` David Lang
0 siblings, 2 replies; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-26 21:10 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Eugene Chang, David Lang, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 12591 bytes --]
Comments inline below.
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Eugene,
>
>
>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>
>> Ok, we are getting into the details. I agree.
>>
>> Every node in the path has to implement this to be effective.
>
> Amazingly the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks, e.g. for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management).
>
This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
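[Editor's note: Gene's drain argument can be put in numbers. A backlog at node N-1 empties at the difference between its egress rate and the rate of new arrivals; the figures below are assumptions for illustration.]

```python
def drain_ms(backlog_bytes: int, egress_bps: float, ingress_bps: float = 0.0) -> float:
    """How long node N-1's backlog takes to empty while draining at
    egress_bps and still receiving ingress_bps of new traffic."""
    net = egress_bps - ingress_bps
    if net <= 0:
        return float("inf")  # arrivals match or exceed egress: never drains
    return backlog_bytes * 8 / net * 1000

# 1 MB of backlog, 50 Mbit/s egress:
print(drain_ms(1_000_000, 50e6))         # no new arrivals: 160 ms
print(drain_ms(1_000_000, 50e6, 40e6))   # 40 Mbit/s still arriving: 800 ms
```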
>
>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>
> Yes and no, one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity sharing scheme, but because it is the least pessimal scheme, allowing all (or none) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..
>
> Regards
> Sebastian
>
>
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:54 ` Eugene Y Chang
2022-09-26 21:01 ` Sebastian Moeller
2022-09-26 21:02 ` Bruce Perens
@ 2022-09-26 21:10 ` Dave Taht
2022-09-26 21:28 ` warren ponder
2022-09-26 21:17 ` David Lang
3 siblings, 1 reply; 56+ messages in thread
From: Dave Taht @ 2022-09-26 21:10 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Dave Taht via Starlink
I tend to cite RFC 7567 (published 2015) a lot, which replaces RFC 2309
(published 1998!).
Thing is, long before that, I'd come to the conclusion that fair
queuing was a requirement for sustaining the right throughput for
low-rate flows under wildly variable bandwidth. At certain places in
LTE/5G/Starlink networks the payload is encrypted and the required
header info is unavailable, so my advocacy of FQ is certainly not
shared by everyone.
We don't know enough about the actual points of congestion in Starlink
to know whether FQ could be applied, and although AQM is a very good
idea everywhere, it is also largely undeployed where it would matter
most.
I focused my initial analysis of Starlink on just uplink congestion,
which I believe could be easily improved given about 20 minutes with a
cross compiler for the dishy. We have a very good proof of concept of
how to improve Starlink's behavior over here:
https://forum.openwrt.org/t/cake-w-adaptive-bandwidth/135379/87 and
ironically the same script could be run on their router, as it is based
on a six-year-old version of OpenWrt in the first place.
I have plenty of data later than this (
https://docs.google.com/document/d/1puRjUVxJ6cCv-rgQ_zn-jWZU9ae0jZbFATLf4PQKblM/edit
) but I would like to be collecting it from at least six sites around
the world.
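The adaptive-bandwidth script linked above is essentially a latency-driven feedback loop around the shaper: probe the RTT, back off the shaped rate quickly when latency rises under load, and creep back up when there is headroom. Here is a minimal sketch of that control loop (the function name, thresholds, and rates are illustrative, not the actual cake-autorate code):

```python
# Sketch of a cake-autorate-style control loop (hypothetical values).
# Idea: keep the bottleneck queue short by shrinking the shaped rate
# whenever measured RTT exceeds baseline + target, and growing it
# slowly otherwise.

def adjust_rate(rate_kbps, rtt_ms, base_rtt_ms,
                floor_kbps=5000, ceiling_kbps=50000,
                target_ms=30, decrease=0.9, increase=1.02):
    """Return the new shaper rate given one RTT sample."""
    if rtt_ms > base_rtt_ms + target_ms:
        rate_kbps *= decrease          # bloat detected: back off fast
    else:
        rate_kbps *= increase          # headroom: creep back up
    return max(floor_kbps, min(ceiling_kbps, rate_kbps))

# Simulated samples: congestion pushes RTT up, then the link recovers.
rate = 20000
for rtt in [40, 45, 120, 150, 90, 42, 40]:
    rate = adjust_rate(rate, rtt, base_rtt_ms=40)
print(round(rate))
```

The multiplicative decrease/additive-ish increase mirrors how such scripts converge on a rate just below the varying link capacity instead of filling the dish's buffers.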
On Mon, Sep 26, 2022 at 1:54 PM Eugene Y Chang via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> Ok, we are getting into the details. I agree.
>
> Every node in the path has to implement this to be effective.
> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>
> software updates can do far more than just improve recovery.
>
> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>
> (the example below is not completely accurate, but I think it gets the point across)
>
> When buffers become excessively large, you have the situation where a video call generates a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>
> If you just do FIFO, then you get a small chunk of video call, then several seconds' worth of CD transfer, followed by the next small chunk of the video call.
>
> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process); the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>
> The one thing that Cake needs to work really well is to know what the available data rate is. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes can be fed directly into the settings (currently they are only indirectly detected).
>
> David Lang
>
>
> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>
> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>
> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>
> Please help to explain. Here's a draft to start with:
>
> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>
> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection; it is inevitable in satellite networks, but it will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
>
> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>
> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>
> First, we have to help people see the symptoms of latency and how it impacts something they care about.
> - gamers care, but most people may think gaming is frivolous.
> - musicians care, but mostly as a hobby.
> - businesses should care because of productivity, but they don’t know how to “see” the impact.
>
> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted.” moment. Once you have this awakening, you can get all the press you want for free.
>
> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency), or it is left out of the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) Employees who happen to be at a location with bad latency don’t know that latency is hurting them. Unfair, but most people don’t know the issue is latency.
>
> Talking about and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>
> Gene
> -----------------------------------
> Eugene Chang
> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
> +1-781-799-0233 (in Honolulu)
>
>
>
>
>
> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>
> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>
> Right now I am concerned that Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>
> Thanks
>
> Bruce
>
> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
> These days, if you want attention, you gotta buy it. A $50k half-page
> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
> signed by the kinds of luminaries we got for the fcc wifi fight, would
> go a long way towards shifting the tide.
>
> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>
>
> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>
>
> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>
>
> That's a great idea. I have visions of crashing the washington
> correspondents dinner, but perhaps
> there is some set of gatherings journalists regularly attend?
>
>
> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>
> I still find it remarkable that reporters are still missing the
> meaning of the huge latencies for starlink, under load.
>
>
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
> Dave Täht CEO, TekLibre, LLC
>
>
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>
>
> --
> Bruce Perens K6BP
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>
>
>
>
> --
> Bruce Perens K6BP
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:54 ` Eugene Y Chang
2022-09-26 21:01 ` Sebastian Moeller
@ 2022-09-26 21:02 ` Bruce Perens
2022-09-26 21:14 ` Dave Taht
2022-09-26 21:10 ` Dave Taht
2022-09-26 21:17 ` David Lang
3 siblings, 1 reply; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 21:02 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 526 bytes --]
On Mon, Sep 26, 2022 at 1:54 PM Eugene Y Chang <eugene.chang@ieee.org>
wrote:
> Every node in the path has to implement this to be effective.
>
It would certainly be optimal if every node implemented it. But any node
can detect the endpoint round-trip time, and how it degrades, and thus
adjust how fast it feeds packets into the network. And a midpoint can
detect when a host is feeding packets too fast for downstream nodes, and
send explicit congestion notification, and failing that, drop some packets
from that source.
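That midpoint behavior can be sketched as a tiny per-packet decision function (the threshold and names here are hypothetical; real AQMs such as CoDel track sojourn time over an interval rather than reacting to single packets):

```python
# Sketch of a midpoint AQM decision (CoDel-flavored, hypothetical
# threshold): ECN-mark packets whose queue sojourn time exceeds a
# target, and drop only when the sender cannot react to marks.

TARGET_MS = 5  # max acceptable standing-queue delay (illustrative)

def handle(packet_sojourn_ms, ect_capable):
    """Return the action a midpoint queue takes for one packet."""
    if packet_sojourn_ms <= TARGET_MS:
        return "forward"
    if ect_capable:
        return "mark"    # explicit congestion notification
    return "drop"        # last resort for non-ECN traffic

print(handle(2, True), handle(9, True), handle(9, False))
# prints: forward mark drop
```

The point is the ordering of remedies Bruce describes: forward when the queue is short, signal congestion explicitly when possible, and drop only as a fallback.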
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:54 ` Eugene Y Chang
@ 2022-09-26 21:01 ` Sebastian Moeller
2022-09-26 21:10 ` Eugene Y Chang
2022-09-26 21:02 ` Bruce Perens
` (2 subsequent siblings)
3 siblings, 1 reply; 56+ messages in thread
From: Sebastian Moeller @ 2022-09-26 21:01 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: David Lang, Dave Taht via Starlink
Hi Eugene,
> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> Ok, we are getting into the details. I agree.
>
> Every node in the path has to implement this to be effective.
	Amazingly, the biggest bang for the buck comes from fixing those nodes that actually contain a network path's bottleneck, and often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but to improve things noticeably it is sufficient to fix the usual bottlenecks; e.g., for many internet access links the home gateway is a decent place to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
	Yes and no; one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity-sharing scheme, but because it is the least pessimal scheme, allowing all flows (or none) to make forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
Regards
Sebastian
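The forward-progress property of flow queueing is easy to demonstrate with a toy simulation (illustrative only, not any real qdisc): a bulk flow dumps 100 packets into the queue at once, a video-call packet arrives last, and we compare how many packets are served before it under FIFO versus round-robin flow queueing.

```python
# Toy demonstration of why flow queueing is the "least pessimal"
# sharing scheme: under FIFO the video packet waits behind the whole
# bulk dump; under per-flow round-robin it is serviced almost at once.
from collections import deque

bulk = [("bulk", i) for i in range(100)]
video = ("video", 0)

# FIFO: a single queue, arrival order preserved.
fifo = deque(bulk + [video])
fifo_position = list(fifo).index(video)   # packets served before video

# FQ: one queue per flow, served round-robin.
flows = {"bulk": deque(bulk), "video": deque([video])}
served = []
while any(flows.values()):
    for q in flows.values():
        if q:
            served.append(q.popleft())
fq_position = served.index(video)

print(fifo_position, fq_position)
# FIFO serves all 100 bulk packets first; FQ serves video second.
```

Note that FQ never starves the bulk flow either: it still gets every remaining service slot, which is the "all flows make forward progress" point above.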
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>
>> software updates can do far more than just improve recovery.
>>
>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>
>> (the example below is not completely accurate, but I think it gets the point across)
>>
>> When buffers become excessively large, you have the situation where a video call generates a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>
>> If you just do FIFO, then you get a small chunk of video call, then several seconds' worth of CD transfer, followed by the next small chunk of the video call.
>>
>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process); the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>
>> The one thing that Cake needs to work really well is to know what the available data rate is. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes can be fed directly into the settings (currently they are only indirectly detected).
>>
>> David Lang
>>
>>
>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>
>>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>
>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>>
>>> Gene
>>> ----------------------------------------------
>>> Eugene Chang
>>> IEEE Senior Life Member
>>> eugene.chang@ieee.org
>>> 781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>>
>>>> Please help to explain. Here's a draft to start with:
>>>>
>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>
>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection; it is inevitable in satellite networks, but it will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
>>>>
>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>
>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu<mailto:eugene.chang@alum.mit.edu>> wrote:
>>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>>
>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>> - gamers care but most people may think it is frivolous.
>>>> - musicians care but that is mostly for a hobby.
>>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>>
>>>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted.” moment. Once you have this awakening, you can get all the press you want for free.
>>>>
>>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>>
>>>> Talking about and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>>>
>>>> Gene
>>>> -----------------------------------
>>>> Eugene Chang
>>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
>>>> +1-781-799-0233 (in Honolulu)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>
>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>
>>>>> Right now I am concerned that Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>
>>>>> Thanks
>>>>>
>>>>> Bruce
>>>>>
>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net<mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>> These days, if you want attention, you gotta buy it. A $50k half-page
>>>>> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
>>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>>> go a long way towards shifting the tide.
>>>>>
>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>>>>>>
>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>> <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>>>>>>>
>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>
>>>>>> That's a great idea. I have visions of crashing the washington
>>>>>> correspondents dinner, but perhaps
>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>
>>>>>>>
>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>>>>>>
>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/<https://blog.cerowrt.org/post/state_of_fq_codel/>
>>>>> Dave Täht CEO, TekLibre, LLC
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>>
>>>>>
>>>>> --
>>>>> Bruce Perens K6BP
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>>>
>>>>
>>>>
>>>> --
>>>> Bruce Perens K6BP
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:48 ` David Lang
@ 2022-09-26 20:54 ` Eugene Y Chang
2022-09-26 21:01 ` Sebastian Moeller
` (3 more replies)
0 siblings, 4 replies; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-26 20:54 UTC (permalink / raw)
To: David Lang; +Cc: Eugene Chang, Bruce Perens, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 11130 bytes --]
Ok, we are getting into the details. I agree.
Every node in the path has to implement this to be effective.
In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>
> software updates can do far more than just improve recovery.
>
> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs. a video conference); software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>
> (the example below is not completely accurate, but I think it gets the point across)
>
> When buffers become excessively large, you have the situation where a video call generates a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>
> If you just do FIFO, then you get a small chunk of video call, then several seconds' worth of CD transfer, followed by the next small chunk of the video call.
>
> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process); the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>
> The one thing that Cake needs to work really well is to know what the available data rate is. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes can be fed directly into the settings (currently they are only indirectly detected).
>
> David Lang
>
>
> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>
>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>
>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time sensitive UDP messages).
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>>
>>> Please help to explain. Here's a draft to start with:
>>>
>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>
>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection; it is inevitable in satellite networks, but it will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
>>>
>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>
>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu><mailto:eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>>> wrote:
>>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>>
>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>> - gamers care but most people may think it is frivolous.
>>> - musicians care but that is mostly for a hobby.
>>> - business should care because of productivity but they don’t know how to “see” the impact.
>>>
>>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted.” moment. Once you have this awakening, you can get all the press you want for free.
>>>
>>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>>
>>> Talking about and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>>
>>> Gene
>>> -----------------------------------
>>> Eugene Chang
>>> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu> <mailto:eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>>
>>> +1-781-799-0233 (in Honolulu)
>>>
>>>
>>>
>>>
>>>
>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net><mailto:starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>>> wrote:
>>>>
>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>
>>>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see it happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>
>>>> Thanks
>>>>
>>>> Bruce
>>>>
>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>> These days, if you want attention, you gotta buy it. A 50k half page
>>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>>> go a long way towards shifting the tide.
>>>>
>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>>>>>
>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>> <Jason_Livingood@comcast.com> wrote:
>>>>>>
>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>
>>>>> That's a great idea. I have visions of crashing the washington
>>>>> correspondents dinner, but perhaps
>>>>> there is some set of gatherings journalists regularly attend?
>>>>>
>>>>>>
>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>>>>>>
>>>>>> I still find it remarkable that reporters are still missing the
>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>> Dave Täht CEO, TekLibre, LLC
>>>>
>>>>
>>>>
>>>> --
>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>> Dave Täht CEO, TekLibre, LLC
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>>>
>>>> --
>>>> Bruce Perens K6BP
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>>
>>> --
>>> Bruce Perens K6BP
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:19 ` Eugene Y Chang
2022-09-26 20:28 ` Dave Taht
2022-09-26 20:35 ` Bruce Perens
@ 2022-09-26 20:48 ` David Lang
2022-09-26 20:54 ` Eugene Y Chang
2 siblings, 1 reply; 56+ messages in thread
From: David Lang @ 2022-09-26 20:48 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Bruce Perens, Dave Taht via Starlink
software updates can do far more than just improve recovery.
In practice, large data transfers are less sensitive to latency than small
ones (e.g. downloading a CD image vs. a video conference), so software can
enforce fairness and keep a bulk transfer from hurting the more
latency-sensitive transfers.
(the example below is not completely accurate, but I think it gets the point
across)
When buffers become excessively large, you get the situation where a video call
generates a small amount of data at a regular interval, while a bulk
data transfer can dump a huge amount of data into the buffer instantly.
With plain FIFO queuing, you get a small chunk of video call, then several
seconds' worth of CD transfer, followed by the next small chunk of the video
call.
But the software can prevent one app from hogging so much of the connection
and let the video-call chunk through sooner, avoiding the impact on the real-time
traffic. Historically this has required an admin to classify all traffic and
configure equipment to treat each class differently (which in turn requires
trusting the classification process). The bufferbloat team
has developed options (fq_codel and cake) that ensure fairness between
applications/servers with little or no configuration, and with no need to trust
other systems to classify their traffic properly.
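The FIFO-vs-fair-queueing contrast above can be sketched with a toy simulation. This is an illustrative model only, not fq_codel or cake themselves; the 1000 packets/sec bottleneck rate and the 5000-packet bulk backlog are made-up figures:

```python
from collections import deque

SERVICE_TIME = 0.001  # seconds per packet at an assumed 1000 pkt/s bottleneck


def drain_fifo(queue, target):
    """Serve packets strictly in arrival order; return the time `target` departs."""
    t = 0.0
    for pkt in queue:
        t += SERVICE_TIME
        if pkt == target:
            return t


def drain_round_robin(flows, target):
    """Serve one packet per flow per round (roughly the idea behind fq_codel/cake)."""
    t = 0.0
    queues = {name: deque(pkts) for name, pkts in flows.items()}
    while any(queues.values()):
        for q in queues.values():
            if q:
                t += SERVICE_TIME
                if q.popleft() == target:
                    return t


# A bulk flow dumps 5000 packets into the buffer, then one video-call packet arrives.
fifo_order = ["bulk"] * 5000 + ["call"]
print(drain_fifo(fifo_order, "call"))          # ~5 seconds stuck behind the CD image
print(drain_round_robin({"bulk": ["bulk"] * 5000,
                         "call": ["call"]}, "call"))  # ~2 ms with per-flow fairness
```

The point is the ratio, not the absolute numbers: under FIFO the call packet pays for the entire bulk backlog, while per-flow scheduling bounds its wait to roughly one packet per competing flow.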
The one thing cake needs to work really well is knowledge of the available
data rate. With Starlink, this changes frequently, so cake
integrated into the Starlink dish/router software would be far better than
anything that can be done externally, since the rate changes could be fed directly
into the shaper settings (currently they can only be detected indirectly).
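For concreteness, this is roughly how cake gets told the link rate today when run externally on a Linux router. It is a sketch, not Starlink's actual setup: the interface name `eth0` and the 20 Mbit/s uplink figure are assumptions, and the fixed number is exactly what Starlink's variable rate breaks:

```shell
# Shape egress to just under an assumed ~20 Mbit/s uplink so the queue
# forms where cake can manage it, rather than in the dish or modem.
tc qdisc replace dev eth0 root cake bandwidth 18mbit

# Inspect backlog, drops/marks, and per-tin stats.
tc -s qdisc show dev eth0
```

With cake inside the dish firmware, the `bandwidth` figure could track the real link rate directly instead of being a stale external guess.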
David Lang
On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>
> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
>
>
>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>
>> Please help to explain. Here's a draft to start with:
>>
>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>
>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>
>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>
>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu> wrote:
>> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>>
>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>> - gamers care but most people may think it is frivolous.
>> - musicians care but that is mostly for a hobby.
>> - business should care because of productivity but they don’t know how to “see” the impact.
>>
>> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>>
>> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>>
>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>
>> Gene
>> -----------------------------------
>> Eugene Chang
>> eugene.chang@alum.mit.edu
>> +1-781-799-0233 (in Honolulu)
>>
>>
>>
>>
>>
>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>
>>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see it happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>
>>> Thanks
>>>
>>> Bruce
>>>
>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>>> These days, if you want attention, you gotta buy it. A 50k half page
>>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>>> go a long way towards shifting the tide.
>>>
>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>>>>
>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>> <Jason_Livingood@comcast.com> wrote:
>>>>>
>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>
>>>> That's a great idea. I have visions of crashing the washington
>>>> correspondents dinner, but perhaps
>>>> there is some set of gatherings journalists regularly attend?
>>>>
>>>>>
>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>> I still find it remarkable that reporters are still missing the
>>>>> meaning of the huge latencies for starlink, under load.
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>> Dave Täht CEO, TekLibre, LLC
>>>
>>>
>>>
>>> --
>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>> Dave Täht CEO, TekLibre, LLC
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>> --
>>> Bruce Perens K6BP
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>>
>> --
>> Bruce Perens K6BP
>
>
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:24 ` Bruce Perens
2022-09-26 20:32 ` Dave Taht
@ 2022-09-26 20:47 ` Ben Greear
1 sibling, 0 replies; 56+ messages in thread
From: Ben Greear @ 2022-09-26 20:47 UTC (permalink / raw)
To: Bruce Perens; +Cc: Dave Taht via Starlink
On 9/26/22 1:24 PM, Bruce Perens wrote:
> On Mon, Sep 26, 2022 at 1:14 PM Ben Greear via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>
> I think that engineers telling other engineers (military) that something isn't
> sufficient is making a lot of assumptions that should not be made.
>
>
> I don't think we need quite that call to inaction :-) . I can certainly see the problem on my Starlink connection, and can classify the degradation of
> performance under load that should not be there. It's insufficient for a low latency video call, which I think is an easy definition of a
> lowest-common-denominator for anything involving vehicle control.
A call for other people to fix problems is not much of an action in itself. At my house here in the USA,
starlink is better than anything else available (Other option being 3Mbps/768 DSL, and maybe some sketchy
fixed-wireless if I put up a tower). I can and do make zoom/whatever calls on starlink. It isn't always great, probably
mostly due to some trees I don't want to cut, but it is also functional enough.
The military and anyone else with a real need is going to be able to make use of things that are better
than what is currently available. Let *them* tell spacex what they really need, it may be completely
different from what you think they need. Having 'Scientists' make proclamations about assumptions
strikes me as detrimental to everyone involved.
As for vehicle control, NASA can control vehicles on Mars, and you can fly a drone
by near instant line of sight feedback. There is a long continuum of required latency between those, with
more latency/jitter requiring more intelligent local control and/or more room for errors.
>
> And if you want to propose some solution, then define the metrics of that solution. First,
> what is max latency/jitter/whatever that the application can handle and still be useful?
> Why exactly is your ham thing failing, and what latency/jitter would resolve it. And/or, what mitigation
> in your software/procedures would solve it.
>
>
> My ham application is equivalent to a low-latency voice-only WebRTC call. There are diagnostics for them, and for the video call mentioned above. I would hope
> that Taht could put together numbers.
Like how low latency do you actually need? And are you trying to do this low-latency thing at same time you
are doing download/upload tests, or are you just doing minimal traffic and seeing excessive latency/jitter?
>
> I know that Dave & crew have made some improvements to the wifi stack, but it is far from
> solved even today. Maybe effort is better done on wifi where developers that are not @spacex
> can actually make changes and test results.
>
>
> This does seem to be a call to inaction, doesn't it? Dave and Co. have been working on WiFi for quite some time and have good papers.
It is a call to action for engineers to make progress where they can actually affect the
technology stack, not just complain that spacex should fix things itself.
Papers or not, wifi can still use plenty of work, and the cost of goods to build and test a network is
relatively low (though the barrier to learning the code well enough to make useful progress is high...
but not much to be done about that).
Thanks,
Ben
--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc http://www.candelatech.com
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:32 ` Dave Taht
@ 2022-09-26 20:36 ` Bruce Perens
0 siblings, 0 replies; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 20:36 UTC (permalink / raw)
To: Dave Taht; +Cc: Ben Greear, Dave Taht via Starlink
Figure out what you want to say, and I can get you the attention.
Thanks
Bruce
On Mon, Sep 26, 2022 at 1:32 PM Dave Taht <dave.taht@gmail.com> wrote:
> Getting more data over time is why I formed this list. I especially
> wanted to see the ISL stuff go up, and have multiple sites doing
> testing between themselves over ipv6...
>
> I like the idea of a call to action, but I'd like to pick on 5g, and
> also light a fire under the cable providers (other than comcast) that
> haven't enabled DOCSIS 3.1 PIE yet.
>
--
Bruce Perens K6BP
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:19 ` Eugene Y Chang
2022-09-26 20:28 ` Dave Taht
@ 2022-09-26 20:35 ` Bruce Perens
2022-09-26 20:48 ` David Lang
2 siblings, 0 replies; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 20:35 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Dave Taht, Dave Taht via Starlink
On Mon, Sep 26, 2022 at 1:19 PM Eugene Y Chang <eugene.chang@ieee.org>
wrote:
> You already know this. Bufferbloat is a symptom and not the cause.
> Bufferbloat grows when there are (1) periods of low or no bandwidth or (2)
> periods of insufficient bandwidth (aka network congestion).
>
> If I understand this correctly, just a software update cannot make
> bufferbloat go away. It might improve the speed of recovery (e.g. throw
> away all time-sensitive UDP messages).
>
This is not my understanding.
Bufferbloat is caused by too much buffering in your host, the endpoint, and
all intermediate nodes. As a result, they feed packets into the network
faster than all of the intermediate nodes can pass them on. And then your
latency-sensitive packet gets stuck at the end of those buffers because
nobody across the network honors quality-of-service markings in the packet
or even uses them honestly.
Dave can no doubt say more.
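The arithmetic behind "too much buffering" is simple enough to sketch. The figures below are illustrative, not Starlink measurements:

```python
# Once an oversized buffer ahead of a bottleneck fills, every packet behind
# it waits buffer_size / link_rate before it even reaches the link.
def queue_delay_ms(buffer_bytes: int, link_bits_per_s: float) -> float:
    """Worst-case added latency (ms) from a full buffer at a bottleneck."""
    return buffer_bytes * 8 * 1000 / link_bits_per_s


# A 1 MB buffer ahead of a 10 Mbit/s bottleneck adds ~800 ms when full,
# which is why the latency-sensitive packet "gets stuck at the end".
print(queue_delay_ms(1_000_000, 10e6))  # 800.0
```

This is also why a software-only fix helps: fq_codel/cake keep the standing queue short (and let latency-sensitive flows bypass it) instead of letting it fill.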
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:24 ` Bruce Perens
@ 2022-09-26 20:32 ` Dave Taht
2022-09-26 20:36 ` Bruce Perens
2022-09-26 20:47 ` Ben Greear
1 sibling, 1 reply; 56+ messages in thread
From: Dave Taht @ 2022-09-26 20:32 UTC (permalink / raw)
To: Bruce Perens; +Cc: Ben Greear, Dave Taht via Starlink
Getting more data over time is why I formed this list. I especially
wanted to see the ISL stuff go up, and have multiple sites doing
testing between themselves over ipv6...
I like the idea of a call to action, but I'd like to pick on 5g, and
also light a fire under the cable providers (other than comcast) that
haven't enabled DOCSIS 3.1 PIE yet.
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:28 ` Dave Taht
@ 2022-09-26 20:32 ` Bruce Perens
0 siblings, 0 replies; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 20:32 UTC (permalink / raw)
To: Dave Taht; +Cc: Eugene Y Chang, Dave Taht via Starlink
Yes, as an MVNO user, I get thrown off of T-Mobile when the non-MVNO users
want all of the system bandwidth. So we can moderate the statement. But we
definitely observe a problem *at our level.*
Thanks
* Bruce*
On Mon, Sep 26, 2022 at 1:28 PM Dave Taht <dave.taht@gmail.com> wrote:
> I appreciate you grabbing this bull by the horns bruce!
>
> However, it is entirely possible that for their business-class,
> military, and/or Ukraine-based services they've actually implemented at
> least some of the improvements we've been recommending to them for
> 3 years now. It would be smart if they had and just weren't talking about it!
>
> Much like how (last I heard) Verizon puts at-home users in the
> lowest 5G service class: if Starlink
> were trying to shed the first wave of residential users and retain,
> at a high price and with less service, those who don't care, they could
> hardly do it more effectively than by letting their baseline service degrade.
>
--
Bruce Perens K6BP
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:19 ` Eugene Y Chang
@ 2022-09-26 20:28 ` Dave Taht
2022-09-26 20:32 ` Bruce Perens
2022-09-26 20:35 ` Bruce Perens
2022-09-26 20:48 ` David Lang
2 siblings, 1 reply; 56+ messages in thread
From: Dave Taht @ 2022-09-26 20:28 UTC (permalink / raw)
To: Eugene Y Chang; +Cc: Bruce Perens, Dave Taht via Starlink
I appreciate you grabbing this bull by the horns bruce!
However, it is entirely possible that for their business-class,
military, and/or Ukraine-based services they've actually implemented at
least some of the improvements we've been recommending to them for
3 years now. It would be smart if they had and just weren't talking about it!
Much like how (last I heard) Verizon puts at-home users in the
lowest 5G service class: if Starlink
were trying to shed the first wave of residential users and retain,
at a high price and with less service, those who don't care, they could
hardly do it more effectively than by letting their baseline service degrade.
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:14 ` Ben Greear
@ 2022-09-26 20:24 ` Bruce Perens
2022-09-26 20:32 ` Dave Taht
2022-09-26 20:47 ` Ben Greear
0 siblings, 2 replies; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 20:24 UTC (permalink / raw)
To: Ben Greear; +Cc: Dave Taht via Starlink
On Mon, Sep 26, 2022 at 1:14 PM Ben Greear via Starlink <
starlink@lists.bufferbloat.net> wrote:
> I think that engineers telling other engineers (military) that something
> isn't
> sufficient is making a lot of assumptions that should not be made.
>
I don't think we need quite that call to inaction :-) . I can certainly see
the problem on my Starlink connection, and can classify the degradation of
performance under load that should not be there. It's insufficient for a
low latency video call, which I think is an easy definition of a
lowest-common-denominator for anything involving vehicle control.
And if you want to propose some solution, then define the metrics of that
> solution. First,
> what is max latency/jitter/whatever that the application can handle and
> still be useful?
> Why exactly is your ham thing failing, and what latency/jitter would
> resolve it. And/or, what mitigation
> in your software/procedures would solve it.
>
My ham application is equivalent to a low-latency voice-only WebRTC call.
There are diagnostics for them, and for the video call mentioned above. I
would hope that Taht could put together numbers.
> I know that Dave & crew have made some improvements to the wifi stack, but
> it is far from
> solved even today. Maybe effort is better done on wifi where developers
> that are not @spacex
> can actually make changes and test results.
>
This does seem to be a call to inaction, doesn't it? Dave and Co. have been
working on WiFi for quite some time and have good papers.
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:04 ` Bruce Perens
2022-09-26 20:14 ` Ben Greear
@ 2022-09-26 20:19 ` Eugene Y Chang
2022-09-26 20:28 ` Dave Taht
` (2 more replies)
1 sibling, 3 replies; 56+ messages in thread
From: Eugene Y Chang @ 2022-09-26 20:19 UTC (permalink / raw)
To: Bruce Perens; +Cc: Eugene Chang, Dave Taht, Dave Taht via Starlink
You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>
> Please help to explain. Here's a draft to start with:
>
> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>
> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>
> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>
> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu> wrote:
> The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>
> First, we have to help people see the symptoms of latency and how it impacts something they care about.
> - gamers care but most people may think it is frivolous.
> - musicians care but that is mostly for a hobby.
> - business should care because of productivity but they don’t know how to “see” the impact.
>
> Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.
>
> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
>
> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>
> Gene
> -----------------------------------
> Eugene Chang
> eugene.chang@alum.mit.edu
> +1-781-799-0233 (in Honolulu)
>
>
>
>
>
>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>
>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see it happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>
>> Thanks
>>
>> Bruce
>>
>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>> These days, if you want attention, you gotta buy it. A 50k half page
>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>> go a long way towards shifting the tide.
>>
>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>> >
>> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> > <Jason_Livingood@comcast.com> wrote:
>> > >
>> > > The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>> >
>> > That's a great idea. I have visions of crashing the washington
>> > correspondents dinner, but perhaps
>> > there is some set of gatherings journalists regularly attend?
>> >
>> > >
>> > > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>> > >
>> > > I still find it remarkable that reporters are still missing the
>> > > meaning of the huge latencies for starlink, under load.
>> > >
>> > >
>> >
>> >
>> > --
>> > FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/ <https://blog.cerowrt.org/post/state_of_fq_codel/>
>> > Dave Täht CEO, TekLibre, LLC
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/ <https://blog.cerowrt.org/post/state_of_fq_codel/>
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>
>>
>> --
>> Bruce Perens K6BP
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>
>
>
> --
> Bruce Perens K6BP
[-- Attachment #1.2: Type: text/html, Size: 18366 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 20:04 ` Bruce Perens
@ 2022-09-26 20:14 ` Ben Greear
2022-09-26 20:24 ` Bruce Perens
2022-09-26 20:19 ` Eugene Y Chang
1 sibling, 1 reply; 56+ messages in thread
From: Ben Greear @ 2022-09-26 20:14 UTC (permalink / raw)
To: starlink
[-- Attachment #1: Type: text/plain, Size: 9253 bytes --]
I think that engineers telling other engineers (the military) that something isn't
sufficient makes a lot of assumptions that should not be made.
And if you want to propose some solution, then define the metrics of that solution. First,
what is the maximum latency/jitter/whatever that the application can handle and still be useful?
Why exactly is your ham thing failing, and what latency/jitter would resolve it? And/or, what mitigation
in your software/procedures would solve it?
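To make that question concrete, here is one way to turn raw RTT samples into the kind of numbers a requirement could be written against. This is only a sketch: the sample values are invented, not Starlink measurements, and the jitter estimator is the RFC 3550-style smoothed mean deviation, which is just one of several reasonable definitions.

```python
# Sketch: reducing a list of RTT samples (milliseconds) to the metrics a
# requirement could name: mean, 99th percentile, and smoothed jitter.
# The sample values below are invented for illustration.

def rtt_metrics(rtts_ms):
    """Return (mean, p99, jitter) for RTT samples in milliseconds.

    Jitter is the RFC 3550-style running estimate: for each pair of
    consecutive samples, move 1/16 of the way toward their absolute
    difference."""
    ordered = sorted(rtts_ms)
    mean = sum(rtts_ms) / len(rtts_ms)
    p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    jitter = 0.0
    for prev, cur in zip(rtts_ms, rtts_ms[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return mean, p99, jitter

samples = [40, 42, 55, 41, 180, 43, 44, 39, 260, 45]
mean, p99, jitter = rtt_metrics(samples)
print(f"mean={mean:.0f} ms  p99={p99} ms  jitter={jitter:.1f} ms")
```

A statement like "remote keying needs p99 under 250 ms and jitter under 50 ms" is then testable, which is the point being made above.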
I know that Dave & crew have made some improvements to the wifi stack, but it is far from
solved even today. Maybe effort is better spent on wifi, where developers who are not @spacex
can actually make changes and test the results.
Thanks,
Ben
On 9/26/22 1:04 PM, Bruce Perens via Starlink wrote:
> Please help to explain. Here's a draft to start with:
>
> *Starlink Performance Not Sufficient for Military Applications, Say Scientists*
>
> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about
> sustaining bandwidth as the customer base grows. It's /latency/ and /jitter/. As load increases, latency, the time it takes for a packet to get through,
> increases more than it should. The scientists who have fought /bufferbloat/, a major cause of latency on the internet, know why. SpaceX needs to upgrade their
> system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but
> there are some tunable parameters. Jitter is a /change/ in the speed of getting a packet through the network during a connection, which is inevitable in
> satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>
> /We've done all of the work, SpaceX just needs to adopt it by upgrading their software,/ said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator
> of the X Window System, chimed in: <fill in here please>
> Open Source luminary Bruce Perens said: /sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military
> is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at *scale* without
> adoption of bufferbloat-fighting strategies./
>
> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>> wrote:
>
>     The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
>
> First, we have to help people see the symptoms of latency and how it impacts something they care about.
> - gamers care but most people may think it is frivolous.
> - musicians care but that is mostly for a hobby.
> - business should care because of productivity but they don’t know how to “see” the impact.
>
>     Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted.” Once you have this
> awakening, you can get all the press you want for free.
>
> Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the
> developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad
> economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency
> is hurting them. Unfair but most people don’t know the issue is latency.
>
>     Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>
> Gene
> -----------------------------------
> Eugene Chang
> eugene.chang@alum.mit.edu <mailto:eugene.chang@alum.mit.edu>
> +1-781-799-0233(in Honolulu)
>
>
>
>
>
>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>
>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all
>> through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they
>> publish a link to our statement.
>>
>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote controlling my ham station. The US Military is
>> interested in doing much more, which they have demonstrated, but I don't see that happening /at scale/ without some technical work on the network. Being able
>> to say this isn't ready for the government's application would be an attention-getter.
>>
>> Thanks
>>
>> Bruce
>>
>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>>
>> These days, if you want attention, you gotta buy it. A 50k half page
>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>> go a long way towards shifting the tide.
>>
>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
>> >
>> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> > <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
>> > >
>> > > The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background
>> briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to
>> focus on what they know...
>> >
>> > That's a great idea. I have visions of crashing the washington
>> > correspondents dinner, but perhaps
>> > there is some set of gatherings journalists regularly attend?
>> >
>> > >
>> > > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net
>> <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
>> > >
>> > > I still find it remarkable that reporters are still missing the
>> > > meaning of the huge latencies for starlink, under load.
>> > >
>> > >
>> >
>> >
>> > --
>> > FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> > Dave Täht CEO, TekLibre, LLC
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>>
>> --
>> Bruce Perens K6BP
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/starlink
>
>
>
> --
> Bruce Perens K6BP
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc http://www.candelatech.com
[-- Attachment #2: Type: text/html, Size: 24811 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 19:59 ` Eugene Chang
@ 2022-09-26 20:04 ` Bruce Perens
2022-09-26 20:14 ` Ben Greear
2022-09-26 20:19 ` Eugene Y Chang
0 siblings, 2 replies; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 20:04 UTC (permalink / raw)
To: Eugene Chang; +Cc: Dave Taht, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 6313 bytes --]
Please help to explain. Here's a draft to start with:
*Starlink Performance Not Sufficient for Military Applications, Say
Scientists*
The problem is not availability: Starlink works where nothing but another
satellite network would. It's not bandwidth, although others have questions
about sustaining bandwidth as the customer base grows. It's *latency* and
*jitter*. As load increases, latency, the time it takes for a packet to get
through, increases more than it should. The scientists who have fought
*bufferbloat*, a major cause of latency on the internet, know why. SpaceX
needs to upgrade their system to use the scientists' Open Source
modifications to Linux to fight bufferbloat, and thus reduce latency. This
is mostly just using a newer version, but there are some tunable
parameters. Jitter is a *change* in the speed of getting a packet through
the network during a connection, which is inevitable in satellite networks,
but will be improved by making use of the bufferbloat-fighting software,
and probably with the addition of more satellites.
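The mechanism the draft describes can be shown with a toy queue model. All rates and sizes below are illustrative assumptions, not SpaceX parameters: offer traffic slightly faster than a link can drain it, and an unmanaged deep buffer converts the excess into seconds of queueing delay, while the short, managed buffer that RFC 7567 recommends bounds it.

```python
# Toy model of bufferbloat: a link serving 1,000 packets/s offered 1,200
# packets/s. With an unbounded FIFO the backlog (and thus the delay) grows
# without limit; capping the standing queue, as AQMs like fq_codel
# effectively do, bounds the delay at the cost of some drops.
# All numbers are illustrative, not Starlink measurements.

def queue_delay_ms(arrival_pps, service_pps, seconds, max_queue=None):
    """Queueing delay in ms after a sustained overload, via Little's law."""
    backlog = 0  # packets waiting in the buffer
    for _ in range(seconds):
        backlog += arrival_pps - service_pps   # net queue growth per second
        if max_queue is not None:
            backlog = min(backlog, max_queue)  # AQM keeps the queue short
        backlog = max(backlog, 0)
    return 1000.0 * backlog / service_pps      # delay = backlog / drain rate

bloated = queue_delay_ms(1200, 1000, seconds=10)                # deep FIFO
managed = queue_delay_ms(1200, 1000, seconds=10, max_queue=50)  # capped queue
print(f"unmanaged buffer: {bloated:.0f} ms   managed buffer: {managed:.0f} ms")
```

Ten seconds of a 20% overload yields two full seconds of queueing delay in the unmanaged case, versus a bounded 50 ms with a capped queue, which is the "increases more than it should" behavior in miniature.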
*We've done all of the work, SpaceX just needs to adopt it by upgrading
their software,* said scientist Dave Taht. Jim Gettys, Taht's collaborator
and creator of the X Window System, chimed in: <fill in here please>
Open Source luminary Bruce Perens said: *sometimes Starlink's latency and
jitter make it inadequate to remote-control my ham radio station. But the
military is experimenting with remote-control of vehicles on the
battlefield and other applications that can be demonstrated, but won't
happen at scale without adoption of bufferbloat-fighting strategies.*
On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu>
wrote:
> The key issue is most people don’t understand why latency matters. They
> don’t see it or feel its impact.
>
> First, we have to help people see the symptoms of latency and how it
> impacts something they care about.
> - gamers care but most people may think it is frivolous.
> - musicians care but that is mostly for a hobby.
> - business should care because of productivity but they don’t know how to
> “see” the impact.
>
> Second, there needs to be an “OMG, I have been seeing the effects of latency
> all this time and never knew it! I was being shafted.” Once you have this
> awakening, you can get all the press you want for free.
>
> Most of the time when business apps are developed, “we” hide the impact of
> poor performance (aka latency) or they hide from the discussion because the
> developers don’t have a way to fix the latency. Maybe businesses don’t care
> because any employees affected are just considered poor performers. (In bad
> economic times, the poor performers are just laid off.) For employees, if
> they happen to be at a location with bad latency, they don’t know that
> latency is hurting them. Unfair but most people don’t know the issue is
> latency.
>
> Talking and explaining why latency is bad is not as effective as showing
> why latency is bad. Showing has to be done with something that has a personal
> impact.
>
> Gene
> -----------------------------------
> Eugene Chang
> eugene.chang@alum.mit.edu
> +1-781-799-0233 (in Honolulu)
>
>
>
>
>
> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
> If you want to get attention, you can get it for free. I can place
> articles with various press if there is something interesting to say. Did
> this all through the evangelism of Open Source. All we need to do is write,
> sign, and publish a statement. What they actually write is less relevant if
> they publish a link to our statement.
>
> Right now I am concerned that the Starlink latency and jitter are going to
> be a problem even for remote controlling my ham station. The US Military is
> interested in doing much more, which they have demonstrated, but I don't
> see that happening *at scale* without some technical work on the network.
> Being able to say this isn't ready for the government's application would
> be an attention-getter.
>
> Thanks
>
> Bruce
>
> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> These days, if you want attention, you gotta buy it. A 50k half page
>> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>> go a long way towards shifting the tide.
>>
>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>> >
>> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> > <Jason_Livingood@comcast.com> wrote:
>> > >
>> > > The awareness & understanding of latency & impact on QoE is nearly
>> unknown among reporters. IMO maybe there should be some kind of background
>> briefings for reporters - maybe like a simple YouTube video explainer that
>> is short & high level & visual? Otherwise reporters will just continue to
>> focus on what they know...
>> >
>> > That's a great idea. I have visions of crashing the washington
>> > correspondents dinner, but perhaps
>> > there is some set of gatherings journalists regularly attend?
>> >
>> > >
>> > > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <
>> starlink-bounces@lists.bufferbloat.net on behalf of
>> starlink@lists.bufferbloat.net> wrote:
>> > >
>> > > I still find it remarkable that reporters are still missing the
>> > > meaning of the huge latencies for starlink, under load.
>> > >
>> > >
>> >
>> >
>> > --
>> > FQ World Domination pending:
>> https://blog.cerowrt.org/post/state_of_fq_codel/
>> > Dave Täht CEO, TekLibre, LLC
>>
>>
>>
>> --
>> FQ World Domination pending:
>> https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>
>
> --
> Bruce Perens K6BP
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
>
--
Bruce Perens K6BP
[-- Attachment #2: Type: text/html, Size: 13754 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 16:32 ` Bruce Perens
@ 2022-09-26 19:59 ` Eugene Chang
2022-09-26 20:04 ` Bruce Perens
0 siblings, 1 reply; 56+ messages in thread
From: Eugene Chang @ 2022-09-26 19:59 UTC (permalink / raw)
To: Bruce Perens; +Cc: Eugene Chang, Dave Taht, Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 4738 bytes --]
The key issue is most people don’t understand why latency matters. They don’t see it or feel its impact.
First, we have to help people see the symptoms of latency and how it impacts something they care about.
- gamers care but most people may think it is frivolous.
- musicians care but that is mostly for a hobby.
- business should care because of productivity but they don’t know how to “see” the impact.
Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted.” Once you have this awakening, you can get all the press you want for free.
Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency) or they hide from the discussion because the developers don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair but most people don’t know the issue is latency.
Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
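One cheap way to do that showing, sketched below: treat a web page load as a chain of dependent round trips (DNS, the TCP and TLS handshakes, then fetches that cannot start until earlier ones finish) and multiply. The round-trip count and RTT figures are illustrative assumptions, not measurements.

```python
# Sketch of latency's personal impact: a page needing ~20 dependent round
# trips feels fine at fiber latency and painful at bufferbloated or GEO
# latency, even with identical bandwidth. All figures are illustrative.

def page_load_s(rtt_ms, dependent_rtts=20):
    """Rough lower bound on load time for a page needing N sequential RTTs."""
    return rtt_ms * dependent_rtts / 1000.0

for label, rtt in [("fiber", 15), ("LEO, idle", 50),
                   ("LEO under load", 300), ("GEO", 600)]:
    print(f"{label:>15}: {page_load_s(rtt):4.1f} s")
```

Watching the same page take one second versus six or twelve is the kind of "OMG" demonstration described above.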
Gene
-----------------------------------
Eugene Chang
eugene.chang@alum.mit.edu
+1-781-799-0233 (in Honolulu)
> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>
> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>
> Thanks
>
> Bruce
>
> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
> These days, if you want attention, you gotta buy it. A 50k half page
> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
> signed by the kinds of luminaries we got for the fcc wifi fight, would
> go a long way towards shifting the tide.
>
> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com <mailto:dave.taht@gmail.com>> wrote:
> >
> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> > <Jason_Livingood@comcast.com <mailto:Jason_Livingood@comcast.com>> wrote:
> > >
> > > The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
> >
> > That's a great idea. I have visions of crashing the washington
> > correspondents dinner, but perhaps
> > there is some set of gatherings journalists regularly attend?
> >
> > >
> > > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net <mailto:starlink-bounces@lists.bufferbloat.net> on behalf of starlink@lists.bufferbloat.net <mailto:starlink@lists.bufferbloat.net>> wrote:
> > >
> > > I still find it remarkable that reporters are still missing the
> > > meaning of the huge latencies for starlink, under load.
> > >
> > >
> >
> >
> > --
> > FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/ <https://blog.cerowrt.org/post/state_of_fq_codel/>
> > Dave Täht CEO, TekLibre, LLC
>
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/ <https://blog.cerowrt.org/post/state_of_fq_codel/>
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>
>
> --
> Bruce Perens K6BP
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #1.2: Type: text/html, Size: 17675 bytes --]
[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 16:21 ` Dave Taht
@ 2022-09-26 16:32 ` Bruce Perens
2022-09-26 19:59 ` Eugene Chang
0 siblings, 1 reply; 56+ messages in thread
From: Bruce Perens @ 2022-09-26 16:32 UTC (permalink / raw)
To: Dave Taht; +Cc: Livingood, Jason, Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 2636 bytes --]
If you want to get attention, you can get it for free. I can place articles
with various press if there is something interesting to say. Did this all
through the evangelism of Open Source. All we need to do is write, sign,
and publish a statement. What they actually write is less relevant if they
publish a link to our statement.
Right now I am concerned that the Starlink latency and jitter are going to
be a problem even for remote controlling my ham station. The US Military is
interested in doing much more, which they have demonstrated, but I don't
see that happening *at scale* without some technical work on the network.
Being able to say this isn't ready for the government's application would
be an attention-getter.
Thanks
Bruce
On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <
starlink@lists.bufferbloat.net> wrote:
> These days, if you want attention, you gotta buy it. A 50k half page
> ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
> signed by the kinds of luminaries we got for the fcc wifi fight, would
> go a long way towards shifting the tide.
>
> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
> >
> > On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> > <Jason_Livingood@comcast.com> wrote:
> > >
> > > The awareness & understanding of latency & impact on QoE is nearly
> unknown among reporters. IMO maybe there should be some kind of background
> briefings for reporters - maybe like a simple YouTube video explainer that
> is short & high level & visual? Otherwise reporters will just continue to
> focus on what they know...
> >
> > That's a great idea. I have visions of crashing the washington
> > correspondents dinner, but perhaps
> > there is some set of gatherings journalists regularly attend?
> >
> > >
> > > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <
> starlink-bounces@lists.bufferbloat.net on behalf of
> starlink@lists.bufferbloat.net> wrote:
> > >
> > > I still find it remarkable that reporters are still missing the
> > > meaning of the huge latencies for starlink, under load.
> > >
> > >
> >
> >
> > --
> > FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> > Dave Täht CEO, TekLibre, LLC
>
>
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
--
Bruce Perens K6BP
[-- Attachment #2: Type: text/html, Size: 3953 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 15:29 ` Dave Taht
2022-09-26 15:57 ` Sebastian Moeller
@ 2022-09-26 16:21 ` Dave Taht
2022-09-26 16:32 ` Bruce Perens
1 sibling, 1 reply; 56+ messages in thread
From: Dave Taht @ 2022-09-26 16:21 UTC (permalink / raw)
To: Livingood, Jason; +Cc: Dave Taht via Starlink
These days, if you want attention, you gotta buy it. A 50k half page
ad in the wapo or NYT riffing off of "It's the latency, Stupid!",
signed by the kinds of luminaries we got for the fcc wifi fight, would
go a long way towards shifting the tide.
On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>
> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> <Jason_Livingood@comcast.com> wrote:
> >
> > The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>
> That's a great idea. I have visions of crashing the washington
> correspondents dinner, but perhaps
> there is some set of gatherings journalists regularly attend?
>
> >
> > On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
> >
> > I still find it remarkable that reporters are still missing the
> > meaning of the huge latencies for starlink, under load.
> >
> >
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 15:29 ` Dave Taht
@ 2022-09-26 15:57 ` Sebastian Moeller
2022-09-26 16:21 ` Dave Taht
1 sibling, 0 replies; 56+ messages in thread
From: Sebastian Moeller @ 2022-09-26 15:57 UTC (permalink / raw)
To: Dave Täht, Dave Taht via Starlink, Daniel AJ Sokolov
Hi Dave,
> On Sep 26, 2022, at 17:29, Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> <Jason_Livingood@comcast.com> wrote:
>>
>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>
> That's a great idea. I have visions of crashing the washington
> correspondents dinner, but perhaps
> there is some set of gatherings journalists regularly attend?
I would assume the relevant tech-journalists might be easier to catch at PR events at trade shows, or tech-related PR events like CES.
However, starlink's own website does not seem to prominently advertise specific rates anyway, but does stress the latency advantage over geostationary satellite links.
The rest of the residential market, however, has been trained for decades that the most relevant number is the maximal throughput* (well, mostly downloads only), hence I can understand that articles will mention regressions of that number prominently.
I just stumbled over a German article (https://www.heise.de/news/Speedtests-Satelliteninternet-Starlink-teilweise-deutlich-langsamer-geworden-7275243.html) apparently presenting the same Ookla numbers for starlink, containing a single short paragraph about latency increases but without explaining their relevance, so exactly what you see in the other article as well; @Daniel, is there anything we could do to make your colleague Martin Holland more latency aware?
>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>>
>> I still find it remarkable that reporters are still missing the
>> meaning of the huge latencies for starlink, under load.
In their "defense" a similar level of latency-awareness is often displayed when talking about wired internet connections. (And from my perspective, I consider absolute latency somewhat less important than latency variation)
*) I am not trying to assign responsibility/blame here, I think both ISPs and customers jointly "selected" maximal contracted throughput as measure of choice to justify differential prices.
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-26 15:20 ` Livingood, Jason
@ 2022-09-26 15:29 ` Dave Taht
2022-09-26 15:57 ` Sebastian Moeller
2022-09-26 16:21 ` Dave Taht
0 siblings, 2 replies; 56+ messages in thread
From: Dave Taht @ 2022-09-26 15:29 UTC (permalink / raw)
To: Livingood, Jason; +Cc: Dave Taht via Starlink
On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
<Jason_Livingood@comcast.com> wrote:
>
> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
That's a great idea. I have visions of crashing the washington
correspondents dinner, but perhaps
there is some set of gatherings journalists regularly attend?
>
> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>
> I still find it remarkable that reporters are still missing the
> meaning of the huge latencies for starlink, under load.
>
>
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-21 18:35 Dave Taht
2022-09-22 11:41 ` Andrew Crane
@ 2022-09-26 15:20 ` Livingood, Jason
2022-09-26 15:29 ` Dave Taht
1 sibling, 1 reply; 56+ messages in thread
From: Livingood, Jason @ 2022-09-26 15:20 UTC (permalink / raw)
To: Dave Taht, Dave Taht via Starlink
The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
I still find it remarkable that reporters are still missing the
meaning of the huge latencies for starlink, under load.
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-22 15:07 ` Dave Taht
@ 2022-09-22 15:26 ` Dave Taht
0 siblings, 0 replies; 56+ messages in thread
From: Dave Taht @ 2022-09-22 15:26 UTC (permalink / raw)
To: warren ponder; +Cc: Mike Puchol, Dave Täht via Starlink
This MIT paper went by today. It's really good.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4178804
On Thu, Sep 22, 2022 at 8:07 AM Dave Taht <dave.taht@gmail.com> wrote:
>
> They need to match the bandwidth to the buffering. RFC7567 ain't rocket science.
>
>
> On Thu, Sep 22, 2022 at 6:46 AM warren ponder via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> >
> > Great description, Mike. Elon acknowledged this 12+ months ago: 'Need more ground stations and less foolish routing.'
> >
> > As Dave said, though, it's mind-blowing how tech writers come up with stuff. Granted, SL does not really engage in that area, but there are so many resources available and willing to validate their articles that you would think they would have the integrity to use them.
> >
> > WP
> >
> > On Thu, Sep 22, 2022, 5:01 AM Mike Puchol via Starlink <starlink@lists.bufferbloat.net> wrote:
> >>
> >> Satellites don’t get repositioned: they sit in assigned orbital planes and slots, and moving them is expensive in time and fuel, so once placed, they stay. You can see how the constellation operates at starlink.sx - you’ll quickly notice why coverage is a function of gateway availability; the constellation has enough density.
> >>
> >> The issues in the US are twofold: many customers have been added while not enough satellite capacity is available (the first constellation is only about 50% complete), and many gateways have only half their spectrum available, reducing available throughput.
> >>
> >> Best,
> >>
> >> Mike
> >> On Sep 22, 2022 at 13:42 +0200, Andrew Crane via Starlink <starlink@lists.bufferbloat.net>, wrote:
> >>
> >> Even senior reporters for tech publications have been conditioned to not look beyond "speed" numbers.
> >>
> >> OT I wonder if the woes in North America are caused by the unplanned repositioning of satellites for Ukraine coverage.
> >> ~ Andrew
> >>
> >>
> >> On Wed, Sep 21, 2022 at 2:35 PM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
> >>>
> >>> I still find it remarkable that reporters are still missing the
> >>> meaning of the huge latencies for starlink, under load. Just look at
> >>> the
> >>>
> >>> https://www.pcmag.com/news/starlink-speeds-drop-significantly-in-the-us-amid-congestion-woes
> >>>
> >>> --
> >>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> >>> Dave Täht CEO, TekLibre, LLC
> >>> _______________________________________________
> >>> Starlink mailing list
> >>> Starlink@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/starlink
> >>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-22 13:46 ` warren ponder
@ 2022-09-22 15:07 ` Dave Taht
2022-09-22 15:26 ` Dave Taht
0 siblings, 1 reply; 56+ messages in thread
From: Dave Taht @ 2022-09-22 15:07 UTC (permalink / raw)
To: warren ponder; +Cc: Mike Puchol, Dave Täht via Starlink
[-- Attachment #1: Type: text/plain, Size: 2956 bytes --]
They need to match the bandwidth to the buffering. RFC 7567 ain't rocket science.
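The arithmetic behind that point is simple: a buffer much larger than the link's bandwidth-delay product turns into standing queueing delay under load, which is exactly what RFC 7567's AQM recommendations exist to prevent. A back-of-the-envelope sketch (the link rate, RTT, and buffer size here are illustrative numbers, not measured Starlink values):

```python
def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the bytes a path keeps 'in flight'."""
    return rate_bps * rtt_s / 8

def drain_delay_ms(backlog_bytes: float, rate_bps: float) -> float:
    """Time for a standing queue to drain at the link rate."""
    return backlog_bytes * 8 / rate_bps * 1000

# A 100 Mbit/s link with a 40 ms base RTT holds ~500 kB in flight...
bdp = bdp_bytes(100e6, 0.040)          # 500000.0 bytes

# ...so an unmanaged 5 MB FIFO on that link, once full, adds ~400 ms
# of queueing delay on top of the base RTT -- latency under load.
extra = drain_delay_ms(5e6, 100e6)     # 400.0 ms
```

Ten times the BDP in dumb buffering means roughly ten extra RTTs of delay whenever the link saturates, which no speed-test headline number will ever show.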
On Thu, Sep 22, 2022 at 6:46 AM warren ponder via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> Great description, Mike. Elon acknowledged this 12+ months ago: 'Need more ground stations and less foolish routing.'
>
> As Dave said, though, it's mind-blowing how tech writers come up with stuff. Granted, SL does not really engage in that area, but there are so many resources available and willing to validate their articles that you would think they would have the integrity to use them.
>
> WP
>
> On Thu, Sep 22, 2022, 5:01 AM Mike Puchol via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> Satellites don’t get repositioned: they sit in assigned orbital planes and slots, and moving them is expensive in time and fuel, so once placed, they stay. You can see how the constellation operates at starlink.sx - you’ll quickly notice why coverage is a function of gateway availability; the constellation has enough density.
>>
>> The issues in the US are twofold: many customers have been added while not enough satellite capacity is available (the first constellation is only about 50% complete), and many gateways have only half their spectrum available, reducing available throughput.
>>
>> Best,
>>
>> Mike
>> On Sep 22, 2022 at 13:42 +0200, Andrew Crane via Starlink <starlink@lists.bufferbloat.net>, wrote:
>>
>> Even senior reporters for tech publications have been conditioned to not look beyond "speed" numbers.
>>
>> OT I wonder if the woes in North America are caused by the unplanned repositioning of satellites for Ukraine coverage.
>> ~ Andrew
>>
>>
>> On Wed, Sep 21, 2022 at 2:35 PM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>
>>> I still find it remarkable that reporters are still missing the
>>> meaning of the huge latencies for starlink, under load. Just look at
>>> the
>>>
>>> https://www.pcmag.com/news/starlink-speeds-drop-significantly-in-the-us-amid-congestion-woes
>>>
>>> --
>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>> Dave Täht CEO, TekLibre, LLC
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
[-- Attachment #2: tcp_nup_-_starlink-wifi3-long.png --]
[-- Type: image/png, Size: 159667 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-22 12:01 ` Mike Puchol
@ 2022-09-22 13:46 ` warren ponder
2022-09-22 15:07 ` Dave Taht
0 siblings, 1 reply; 56+ messages in thread
From: warren ponder @ 2022-09-22 13:46 UTC (permalink / raw)
To: Mike Puchol; +Cc: Dave Täht via Starlink
[-- Attachment #1: Type: text/plain, Size: 2480 bytes --]
Great description, Mike. Elon acknowledged this 12+ months ago: 'Need more
ground stations and less foolish routing.'
As Dave said, though, it's mind-blowing how tech writers come up with stuff.
Granted, SL does not really engage in that area, but there are so many
resources available and willing to validate their articles that you would
think they would have the integrity to use them.
WP
On Thu, Sep 22, 2022, 5:01 AM Mike Puchol via Starlink <
starlink@lists.bufferbloat.net> wrote:
> Satellites don’t get repositioned: they sit in assigned orbital planes and
> slots, and moving them is expensive in time and fuel, so once placed, they
> stay. You can see how the constellation operates at starlink.sx - you’ll
> quickly notice why coverage is a function of gateway availability; the
> constellation has enough density.
>
> The issues in the US are twofold: many customers have been added while not
> enough satellite capacity is available (the first constellation is only
> about 50% complete), and many gateways have only half their spectrum
> available, reducing available throughput.
>
> Best,
>
> Mike
> On Sep 22, 2022 at 13:42 +0200, Andrew Crane via Starlink <
> starlink@lists.bufferbloat.net>, wrote:
>
> Even senior reporters for tech publications have been conditioned to not
> look beyond "speed" numbers.
>
> OT I wonder if the woes in North America are caused by the unplanned
> repositioning of satellites for Ukraine coverage.
> ~ Andrew
>
>
> On Wed, Sep 21, 2022 at 2:35 PM Dave Taht via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> I still find it remarkable that reporters are still missing the
>> meaning of the huge latencies for starlink, under load. Just look at
>> the
>>
>>
>> https://www.pcmag.com/news/starlink-speeds-drop-significantly-in-the-us-amid-congestion-woes
>>
>> --
>> FQ World Domination pending:
>> https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 4564 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-22 11:41 ` Andrew Crane
@ 2022-09-22 12:01 ` Mike Puchol
2022-09-22 13:46 ` warren ponder
0 siblings, 1 reply; 56+ messages in thread
From: Mike Puchol @ 2022-09-22 12:01 UTC (permalink / raw)
To: Dave Täht via Starlink
[-- Attachment #1: Type: text/plain, Size: 1796 bytes --]
Satellites don’t get repositioned: they sit in assigned orbital planes and slots, and moving them is expensive in time and fuel, so once placed, they stay. You can see how the constellation operates at starlink.sx - you’ll quickly notice why coverage is a function of gateway availability; the constellation has enough density.
The issues in the US are twofold: many customers have been added while not enough satellite capacity is available (the first constellation is only about 50% complete), and many gateways have only half their spectrum available, reducing available throughput.
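The spectrum point can be sanity-checked with the Shannon limit: at a fixed SNR, channel capacity scales linearly with bandwidth, so a gateway running on half its spectrum tops out at roughly half the throughput. A minimal sketch (the 250 MHz channel width and 10 dB SNR are illustrative assumptions, not Starlink's actual link budget):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), with SNR given in dB."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# Hypothetical gateway channel: full vs. half spectrum at the same SNR.
full = shannon_capacity_bps(250e6, 10.0)
half = shannon_capacity_bps(125e6, 10.0)
# half / full == 0.5: capacity scales linearly with available spectrum.
```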
Best,
Mike
On Sep 22, 2022 at 13:42 +0200, Andrew Crane via Starlink <starlink@lists.bufferbloat.net>, wrote:
> Even senior reporters for tech publications have been conditioned to not look beyond "speed" numbers.
>
> OT I wonder if the woes in North America are caused by the unplanned repositioning of satellites for Ukraine coverage.
> ~ Andrew
>
>
> > On Wed, Sep 21, 2022 at 2:35 PM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
> > > I still find it remarkable that reporters are still missing the
> > > meaning of the huge latencies for starlink, under load. Just look at
> > > the
> > >
> > > https://www.pcmag.com/news/starlink-speeds-drop-significantly-in-the-us-amid-congestion-woes
> > >
> > > --
> > > FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> > > Dave Täht CEO, TekLibre, LLC
> > > _______________________________________________
> > > Starlink mailing list
> > > Starlink@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/starlink
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 3093 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Starlink] It's still the starlink latency...
2022-09-21 18:35 Dave Taht
@ 2022-09-22 11:41 ` Andrew Crane
2022-09-22 12:01 ` Mike Puchol
2022-09-26 15:20 ` Livingood, Jason
1 sibling, 1 reply; 56+ messages in thread
From: Andrew Crane @ 2022-09-22 11:41 UTC (permalink / raw)
To: Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 879 bytes --]
Even senior reporters for tech publications have been conditioned to not
look beyond "speed" numbers.
OT I wonder if the woes in North America are caused by the unplanned
repositioning of satellites for Ukraine coverage.
~ Andrew
On Wed, Sep 21, 2022 at 2:35 PM Dave Taht via Starlink <
starlink@lists.bufferbloat.net> wrote:
> I still find it remarkable that reporters are still missing the
> meaning of the huge latencies for starlink, under load. Just look at
> the
>
>
> https://www.pcmag.com/news/starlink-speeds-drop-significantly-in-the-us-amid-congestion-woes
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 1712 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* [Starlink] It's still the starlink latency...
@ 2022-09-21 18:35 Dave Taht
2022-09-22 11:41 ` Andrew Crane
2022-09-26 15:20 ` Livingood, Jason
0 siblings, 2 replies; 56+ messages in thread
From: Dave Taht @ 2022-09-21 18:35 UTC (permalink / raw)
To: Dave Taht via Starlink
I still find it remarkable that reporters are still missing the
meaning of the huge latencies for starlink, under load. Just look at
the
https://www.pcmag.com/news/starlink-speeds-drop-significantly-in-the-us-amid-congestion-woes
--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 56+ messages in thread
end of thread, other threads:[~2022-10-17 16:53 UTC | newest]
Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-29 9:10 [Starlink] It's still the starlink latency David Fernández
2022-09-29 19:34 ` Eugene Chang
-- strict thread matches above, loose matches on Subject: below --
2022-10-17 13:50 David Fernández
2022-10-17 16:53 ` David Fernández
2022-09-21 18:35 Dave Taht
2022-09-22 11:41 ` Andrew Crane
2022-09-22 12:01 ` Mike Puchol
2022-09-22 13:46 ` warren ponder
2022-09-22 15:07 ` Dave Taht
2022-09-22 15:26 ` Dave Taht
2022-09-26 15:20 ` Livingood, Jason
2022-09-26 15:29 ` Dave Taht
2022-09-26 15:57 ` Sebastian Moeller
2022-09-26 16:21 ` Dave Taht
2022-09-26 16:32 ` Bruce Perens
2022-09-26 19:59 ` Eugene Chang
2022-09-26 20:04 ` Bruce Perens
2022-09-26 20:14 ` Ben Greear
2022-09-26 20:24 ` Bruce Perens
2022-09-26 20:32 ` Dave Taht
2022-09-26 20:36 ` Bruce Perens
2022-09-26 20:47 ` Ben Greear
2022-09-26 20:19 ` Eugene Y Chang
2022-09-26 20:28 ` Dave Taht
2022-09-26 20:32 ` Bruce Perens
2022-09-26 20:35 ` Bruce Perens
2022-09-26 20:48 ` David Lang
2022-09-26 20:54 ` Eugene Y Chang
2022-09-26 21:01 ` Sebastian Moeller
2022-09-26 21:10 ` Eugene Y Chang
2022-09-26 21:20 ` Sebastian Moeller
2022-09-26 21:35 ` Eugene Y Chang
2022-09-26 21:44 ` David Lang
2022-09-26 21:44 ` Bruce Perens
2022-09-27 0:35 ` Dave Taht
2022-09-27 0:55 ` Bruce Perens
2022-09-27 1:12 ` Dave Taht
2022-09-27 4:06 ` Eugene Y Chang
2022-09-27 3:50 ` Eugene Y Chang
2022-09-27 7:09 ` Sebastian Moeller
2022-09-27 22:46 ` Eugene Y Chang
2022-09-28 9:54 ` Sebastian Moeller
2022-09-28 18:49 ` Eugene Y Chang
2022-09-26 21:22 ` David Lang
2022-09-26 21:29 ` Sebastian Moeller
2022-09-27 3:47 ` Eugene Y Chang
2022-09-27 6:36 ` Sebastian Moeller
2022-09-27 13:55 ` Dave Taht
2022-09-28 0:20 ` Eugene Y Chang
2022-09-26 21:02 ` Bruce Perens
2022-09-26 21:14 ` Dave Taht
2022-09-26 21:10 ` Dave Taht
2022-09-26 21:28 ` warren ponder
2022-09-26 21:34 ` Bruce Perens
2022-09-27 16:14 ` Dave Taht
2022-09-26 21:17 ` David Lang