From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Perens <bruce@perens.com>
Date: Mon, 26 Sep 2022 14:44:40 -0700
To: Eugene Y Chang <eugene.chang@ieee.org>
Cc: Sebastian Moeller <moeller0@gmx.de>, Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] It's still the starlink latency...
List-Id: "Starlink has bufferbloat. Bad."

That's a good maxim: Don't believe a speed test that is hosted by your own ISP.

On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink
<starlink@lists.bufferbloat.net> wrote:

> Thank you for the dialog.
> This discussion with regard to Starlink is interesting, as it confirms my
> guesses about the gap between Starlink's overly simplified, over-optimistic
> marketing and the reality as they acquire subscribers.
>
> I am actually interested in a more perverse issue. I am seeing latency and
> bufferbloat as a consequence of significant under-provisioning. It doesn't
> matter that the ISP is selling a fiber drop if parts of their network are
> under-provisioned. Two endpoints can be less than 5 miles apart and realize
> 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms.
> The pattern I see suggests digital redlining: the older communities appear
> to have much more severe under-provisioning.
>
> Another observation: running a speed test appears to go from the edge of
> the network over layer 2 to the speed-test host operated by the ISP. Yup,
> that bypasses the (suspected overloaded) routers.
>
> Anyway, just observing.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
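One way to see what Gene describes without trusting the ISP's own speed-test
server is to watch round-trip time to an independent host while the link is
loaded. A minimal sketch of that measurement, assuming a Linux/macOS ping
binary and placeholder hosts (tools like flent do this far more rigorously):

# Minimal sketch: compare idle RTT with RTT under load against a host that is
# NOT operated by your ISP. TARGET and LOAD_URL are placeholders; the RTT
# parsing assumes an iputils/BSD-style "min/avg/max" summary line from ping.
import re
import subprocess
import threading
import urllib.request

TARGET = "1.1.1.1"                       # any well-connected host outside the ISP
LOAD_URL = "http://example.com/big.bin"  # placeholder bulk download to load the link

def avg_rtt_ms(count: int = 10) -> float:
    """Ping TARGET and return the average RTT in milliseconds."""
    out = subprocess.run(["ping", "-c", str(count), TARGET],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max summary line
    return float(match.group(1)) if match else float("nan")

def generate_load() -> None:
    """Best-effort bulk download to saturate the downlink while we measure."""
    try:
        urllib.request.urlopen(LOAD_URL, timeout=30).read()
    except OSError:
        pass  # failures just mean less load; the comparison stays indicative

idle = avg_rtt_ms()
threading.Thread(target=generate_load, daemon=True).start()
loaded = avg_rtt_ms()
print(f"idle RTT ~{idle:.0f} ms, RTT under load ~{loaded:.0f} ms")
# A large gap between the two numbers is the signature of bufferbloat at the
# bottleneck, independent of what the ISP's own speed test reports.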
>
> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Gene,
>
> On Sep 26, 2022, at 23:10, Eugene Y Chang wrote:
>
> Comments inline below.
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller wrote:
>
> Hi Eugene,
>
> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink
> <starlink@lists.bufferbloat.net> wrote:
>
> Ok, we are getting into the details. I agree.
>
> Every node in the path has to implement this to be effective.
>
> Amazingly, the biggest bang for the buck is gotten by fixing those nodes
> that actually contain a network path's bottleneck. Often these are pretty
> stable. So yes, for fully guaranteed service quality all nodes would need
> to participate, but for improving things noticeably it is sufficient to
> improve the usual bottlenecks; e.g. for many internet access links the home
> gateway is a decent point to implement better buffer management. (In short,
> the problem is over-sized and under-managed buffers, and one of the best
> solutions is better/smarter buffer management.)
>
> This is not completely true.
>
> [SM] You are likely right; trying to summarize things leads to partially
> incorrect generalizations.
>
> Say the bottleneck is at node N. During the period of congestion, the
> upstream node N-1 will have to buffer. When node N recovers, the
> bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc.
> etc. Making node N better will reduce the extent of the backup at N-1, but
> N-1 should implement the better code.
>
> [SM] It is the node that builds up the queue that profits most from better
> queue management... (again I generalize; the node with the queue itself
> probably does not care all that much, but the endpoints will profit if the
> queue-experiencing node deals with that queue more gracefully).
>
> In fact, every node in the path has to have the same prioritization or the
> scheme becomes ineffective.
>
> Yes and no; one of the clearest winners has been flow queueing, IMHO not
> because it is the most optimal capacity-sharing scheme, but because it is
> the least pessimal scheme, allowing all (or no) flows forward progress.
> You can interpret that as a scheme in which flows below their capacity
> share are prioritized, but I am not sure that is the best way to look at
> these things.
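To make the "least pessimal" point concrete, here is a toy simulation with
invented numbers: a thin periodic flow sharing one bottleneck with a bulk
burst, under plain FIFO and under naive per-flow round robin (a stand-in for
flow queueing, not fq_codel's or cake's actual algorithm):

# A toy model (all numbers invented): a bulk flow dumps a 200-packet burst
# into the bottleneck queue at t=0 while a thin "video call" flow enqueues
# one packet every 20 ms. The link drains one packet per ms. We compare the
# queueing delay the thin flow sees under FIFO vs. per-flow round robin.
from collections import deque

SIM_MS = 400        # simulated time, in 1-ms service slots
BULK_BURST = 200    # packets the bulk flow dumps at t=0
THIN_PERIOD = 20    # the thin flow sends one packet every 20 ms

def simulate(per_flow: bool) -> list:
    queues = {"bulk": deque([0] * BULK_BURST), "thin": deque()}
    rr = deque(["thin", "bulk"])    # round-robin service order
    delays = []                     # queueing delay of each thin packet
    for t in range(SIM_MS):
        if t % THIN_PERIOD == 0:
            queues["thin"].append(t)
        if per_flow:
            # give the 1-packet slot to the next non-empty flow in rr order
            for _ in range(len(rr)):
                flow = rr[0]
                rr.rotate(-1)
                if queues[flow]:
                    break
            else:
                continue            # nothing queued this millisecond
        else:
            # FIFO: serve the oldest packet in the shared buffer
            backlog = [f for f in queues if queues[f]]
            if not backlog:
                continue
            flow = min(backlog, key=lambda f: queues[f][0])
        enqueued_at = queues[flow].popleft()
        if flow == "thin":
            delays.append(t - enqueued_at)
    return delays

for label, per_flow in (("FIFO", False), ("per-flow round robin", True)):
    d = simulate(per_flow)
    print(f"{label:>20}: thin-flow delay mean ~{sum(d)/len(d):.0f} ms,"
          f" worst ~{max(d):.0f} ms")

With these made-up numbers the thin flow's worst-case queueing delay drops
from roughly the full burst drain time under FIFO to a millisecond or two
under per-flow scheduling, which is the effect David Lang describes below.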
>
> The hardest part is getting competing ISPs to implement and coordinate.
>
> [SM] Yes, but it turned out that even with non-cooperating ISPs there is a
> lot end-users can do unilaterally on their side to improve both ingress and
> egress congestion. Admittedly, ingress congestion in particular would be
> even better handled with the cooperation of the ISP.
>
> Bufferbloat and handoff between ISPs will be hard. The only way to fix
> this is to get the unwashed public to care. Then they can say "we don't
> care about the technical issues, just fix it." Until then .....
>
> [SM] Well, we do this one home network at a time (not because that is
> efficient or ideal, but simply because it is possible). Maybe, if you have
> not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate
> in addition) on your home internet access link for, say, a week and let us
> know if/how your experience changed?
>
> Regards
> Sebastian
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
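What sqm-scripts essentially automates is installing cake on the access link
and shaping slightly below the contracted rate, so the standing queue forms
where it can be managed. A hedged sketch of that idea driven through tc
(interface names and rates are placeholders; assumes a Linux kernel with
sch_cake and ifb available, and the ingress-redirect details are approximate):

# Sketch of SQM-style shaping on a home gateway: put cake on the WAN
# interface and shape just below the nominal rate, so the queue builds where
# cake can manage it instead of in the ISP's (or modem's) buffer.
import subprocess

WAN_IF = "eth0"            # placeholder WAN interface
EGRESS_RATE = "18mbit"     # a bit below a nominal 20 Mbit/s uplink
INGRESS_RATE = "95mbit"    # a bit below a nominal 100 Mbit/s downlink

def tc(*args: str) -> None:
    subprocess.run(["tc", *args], check=True)

# Egress: replace the root qdisc with cake, shaped below the uplink rate.
tc("qdisc", "replace", "dev", WAN_IF, "root", "cake",
   "bandwidth", EGRESS_RATE, "nat")

# Ingress: redirect incoming traffic through an ifb device and shape it too
# (this is the part that would work even better with ISP cooperation).
tc("qdisc", "add", "dev", WAN_IF, "handle", "ffff:", "ingress")
subprocess.run(["ip", "link", "add", "ifb0", "type", "ifb"], check=False)
subprocess.run(["ip", "link", "set", "ifb0", "up"], check=True)
tc("filter", "add", "dev", WAN_IF, "parent", "ffff:", "protocol", "all",
   "matchall", "action", "mirred", "egress", "redirect", "dev", "ifb0")
tc("qdisc", "replace", "dev", "ifb0", "root", "cake",
   "bandwidth", INGRESS_RATE, "ingress", "nat")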
>
> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>
> software updates can do far more than just improve recovery.
>
> In practice, large data transfers are less sensitive to latency than
> smaller data transfers (i.e. downloading a CD image vs. a video conference),
> so software can ensure better fairness by preventing a bulk transfer from
> hurting the more latency-sensitive transfers.
>
> (the example below is not completely accurate, but I think it gets the
> point across)
>
> When buffers become excessively large, you have the situation where a video
> call is going to generate a small amount of data at a regular interval, but
> a bulk data transfer is able to dump a huge amount of data into the buffer
> instantly.
>
> If you just do FIFO, then you get a small chunk of video call, then
> several seconds' worth of CD transfer, followed by the next small chunk of
> the video call.
>
> But the software can prevent the one app from hogging so much of the
> connection and let the chunk of video call in sooner, avoiding the impact
> to the real-time traffic. Historically this has required the admin to
> classify all traffic and configure equipment to implement different
> treatment based on the classification (and this requires trust in the
> classification process); the bufferbloat team has developed options
> (fq_codel and cake) that can ensure fairness between applications/servers
> with little or no configuration, and no trust in other systems to properly
> classify their traffic.
>
> The one thing that cake needs to work really well is to be able to know
> what the available data rate is. With Starlink, this changes frequently,
> and cake integrated into the Starlink dish/router software would be far
> better than anything that can be done externally, as the rate changes can
> be fed directly into the settings (currently they are only indirectly
> detected).
>
> David Lang
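Done externally, the closest one can get to feeding rate changes in directly
is the cake-autorate approach: watch latency and nudge cake's bandwidth
parameter up or down. A much-simplified sketch of such a control loop
(constants, the interface name, and the RTT probe are illustrative, not
cake-autorate's actual code):

# Much-simplified sketch of the cake-autorate idea: periodically measure RTT
# and adjust cake's shaped bandwidth, backing off when latency rises (a sign
# the true link rate dropped) and probing upward when latency stays low.
import subprocess
import time

IFACE = "ifb0"            # placeholder: the shaped (ingress) interface
MIN_MBIT, MAX_MBIT = 5, 200
RTT_OK_MS = 50            # below this, try to claim more bandwidth
RTT_BAD_MS = 100          # above this, assume the link rate has dropped

def current_rtt_ms() -> float:
    """One-packet ping to a reference host; crude but enough for a sketch."""
    out = subprocess.run(["ping", "-c", "1", "1.1.1.1"],
                         capture_output=True, text=True).stdout
    try:
        return float(out.split("time=")[1].split()[0])
    except (IndexError, ValueError):
        return RTT_BAD_MS  # treat a lost probe as congestion

def set_cake_rate(mbit: float) -> None:
    subprocess.run(["tc", "qdisc", "change", "dev", IFACE, "root", "cake",
                    "bandwidth", f"{mbit:.0f}mbit"], check=True)

rate = 50.0               # starting guess for the available rate, in Mbit/s
while True:
    rtt = current_rtt_ms()
    if rtt > RTT_BAD_MS:
        rate = max(MIN_MBIT, rate * 0.7)   # back off hard on latency spikes
    elif rtt < RTT_OK_MS:
        rate = min(MAX_MBIT, rate * 1.05)  # probe upward gently when clean
    set_cake_rate(rate)
    time.sleep(1)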
>
> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>
> You already know this. Bufferbloat is a symptom and not the cause.
> Bufferbloat grows when there are (1) periods of low or no bandwidth or (2)
> periods of insufficient bandwidth (aka network congestion).
>
> If I understand this correctly, just a software update cannot make
> bufferbloat go away. It might improve the speed of recovery (e.g. throw
> away all time-sensitive UDP messages).
>
> Gene
> ----------------------------------------------
> Eugene Chang
> IEEE Senior Life Member
> eugene.chang@ieee.org
> 781-799-0233 (in Honolulu)
>
> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>
> Please help to explain. Here's a draft to start with:
>
> Starlink Performance Not Sufficient for Military Applications, Say
> Scientists
>
> The problem is not availability: Starlink works where nothing but another
> satellite network would. It's not bandwidth, although others have questions
> about sustaining bandwidth as the customer base grows. It's latency and
> jitter. As load increases, latency, the time it takes for a packet to get
> through, increases more than it should. The scientists who have fought
> bufferbloat, a major cause of latency on the internet, know why. SpaceX
> needs to upgrade their system to use the scientists' Open Source
> modifications to Linux to fight bufferbloat, and thus reduce latency. This
> is mostly just using a newer version, but there are some tunable
> parameters. Jitter is a change in the speed of getting a packet through the
> network during a connection, which is inevitable in satellite networks, but
> will be improved by making use of the bufferbloat-fighting software, and
> probably with the addition of more satellites.
>
> "We've done all of the work, SpaceX just needs to adopt it by upgrading
> their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator
> and creator of the X Window System, chimed in: <fill in here please>
> Open Source luminary Bruce Perens said: sometimes Starlink's latency and
> jitter make it inadequate to remote-control my ham radio station. But the
> military is experimenting with remote control of vehicles on the
> battlefield and other applications that can be demonstrated, but won't
> happen at scale without adoption of bufferbloat-fighting strategies.
>
> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu>
> wrote:
>
> The key issue is most people don't understand why latency matters. They
> don't see it or feel its impact.
>
> First, we have to help people see the symptoms of latency and how it
> impacts something they care about.
> - gamers care, but most people may think it is frivolous.
> - musicians care, but that is mostly for a hobby.
> - businesses should care because of productivity, but they don't know how
>   to "see" the impact.
>
> Second, there needs to be an "OMG, I have been seeing the action of latency
> all this time and never knew it! I was being shafted." moment. Once you
> have this awakening, you can get all the press you want for free.
>
> Most of the time when business apps are developed, "we" hide the impact of
> poor performance (aka latency), or they hide from the discussion because
> the developers don't have a way to fix the latency. Maybe businesses don't
> care because any employees affected are just considered poor performers.
> (In bad economic times, the poor performers are just laid off.) For
> employees, if they happen to be at a location with bad latency, they don't
> know that latency is hurting them. Unfair, but most people don't know the
> issue is latency.
>
> Talking and explaining why latency is bad is not as effective as showing
> why latency is bad. Showing has to be with something that has a personal
> impact.
>
> Gene
> -----------------------------------
> Eugene Chang
> eugene.chang@alum.mit.edu
> +1-781-799-0233 (in Honolulu)
>
> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink
> <starlink@lists.bufferbloat.net> wrote:
>
> If you want to get attention, you can get it for free. I can place
> articles with various press if there is something interesting to say. I did
> this all through the evangelism of Open Source. All we need to do is write,
> sign, and publish a statement. What they actually write is less relevant if
> they publish a link to our statement.
>
> Right now I am concerned that the Starlink latency and jitter are going to
> be a problem even for remote-controlling my ham station. The US Military is
> interested in doing much more, which they have demonstrated, but I don't
> see it happening at scale without some technical work on the network. Being
> able to say this isn't ready for the government's application would be an
> attention-getter.
>
> Thanks
>
> Bruce
>
> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink
> <starlink@lists.bufferbloat.net> wrote:
>
> These days, if you want attention, you gotta buy it. A 50k half-page
> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
> signed by the kinds of luminaries we got for the FCC wifi fight, would
> go a long way towards shifting the tide.
>
> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>
> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
> <Jason_Livingood@comcast.com> wrote:
>
> The awareness & understanding of latency & impact on QoE is nearly unknown
> among reporters. IMO maybe there should be some kind of background
> briefings for reporters - maybe like a simple YouTube video explainer that
> is short & high level & visual? Otherwise reporters will just continue to
> focus on what they know...
>
> That's a great idea. I have visions of crashing the Washington
> correspondents' dinner, but perhaps
> there is some set of gatherings journalists regularly attend?
>
> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink"
> <starlink-bounces@lists.bufferbloat.net on behalf of
> starlink@lists.bufferbloat.net> wrote:
>
> I still find it remarkable that reporters are still missing the
> meaning of the huge latencies for starlink, under load.
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
>
> --
> Bruce Perens K6BP
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink

--
Bruce Perens K6BP