From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mail-ej1-x635.google.com by lists.bufferbloat.net (Postfix) with ESMTPS; Mon, 26 Sep 2022 20:55:26 -0400 (EDT)
From: Bruce Perens <bruce@perens.com>
Date: Mon, 26 Sep 2022 17:55:13 -0700
To: Dave Taht <dave.taht@gmail.com>
Cc: Eugene Y Chang <eugene.chang@ieee.org>, Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] It's still the starlink latency...
List-Id: "Starlink has bufferbloat. Bad."

Why not write an RFC on internet metrics? Then evangelize customers to rely on metrics compliant with the RFC.

On Mon, Sep 26, 2022 at 5:36 PM Dave Taht <dave.taht@gmail.com> wrote:

> On Mon, Sep 26, 2022 at 2:45 PM Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
> >
> > That's a good maxim: don't believe a speed test that is hosted by your own ISP.
>
> A network designed for speedtest.net is a network... designed for speedtest. Starlink seemingly was designed for speedtest - the 15-second "cycle" it uses to sense and change its bandwidth setting fits just within the 20 s at which speedtest terminates, and speedtest reports the last number for the bandwidth. It is a brutal test - using 8 or more flows - much harder on the network than a typical web page load, which, while it often opens 15 or so flows, mostly never runs them long enough to get out of slow start. At least part of qualifying for the RDOF money was achieving 100 Mbit/s down on "speedtest".
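The slow-start point can be made concrete with back-of-envelope arithmetic. The numbers below are assumptions for illustration (a 10-segment initial window, 1460-byte MSS, window doubling each RTT, no loss), not measurements from any particular stack:

```python
# Rough slow-start arithmetic (illustrative assumptions: initial window of
# 10 segments, 1460-byte MSS, window doubling each RTT, no losses).
MSS = 1460   # bytes per segment
IW = 10      # initial congestion window, in segments

def bytes_sent(rtts):
    """Cumulative bytes a flow can send in the first `rtts` round trips
    of slow start: IW + 2*IW + 4*IW + ... segments."""
    return sum(IW * 2**k for k in range(rtts)) * MSS

# A typical web object of a few hundred KB finishes while still in slow
# start, so the flow never probes the link's full capacity:
for rtts in (1, 3, 5):
    print(rtts, "RTTs:", bytes_sent(rtts), "bytes")
```

On a 50 ms path this is roughly 450 KB in the first quarter second, which is why a flow carrying one web object says little about link capacity.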
> A knowledgeable user concerned about web PLT should be looking at the first 3 s of a given test, and even then, once the bandwidth cracks 20 Mbit, more is of no help for most web traffic (we've been citing Mike Belshe's original work here a lot, and more recent measurements still show that).
>
> Speedtest also does nothing to measure how well a given videoconference or voip session might go. There isn't a test (at least not when last I looked) in the FCC broadband measurements for just videoconferencing, and their latency under load test has, for many years now, been buried deep in the annual report.
>
> I hope that with both Ookla and SamKnows more publicly recording and displaying latency under load (still, sigh, I think only displaying the last number and only sampling every 250 ms) we can shift the needle on this, but I started off this thread complaining that nobody was picking up on those numbers... and neither service tests the worst-case scenario of a simultaneous up/download. That was the principal scenario we explored with the flent "rrul" series of tests, which were originally designed to emulate and deeply understand what bittorrent was doing to networks, and which became our principal tool in designing new fq, aqm, and transport CCs, along with the rtt_fair test for testing near and far destinations at the same time.
>
> My model has always been a family of four: one person uploading, another doing web, one doing videoconferencing, and another doing voip or gaming - and no test anyone has emulates that. With 16 wifi devices per household, the rrul scenario is actually not "worst case", but increasingly the state of things "normally".
>
> Another irony about speedtest is that users are inspired^Wtrained to use it when the "network feels slow", and so self-initiate something that makes it worse, for both them and their portion of the network.
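The "only displaying the last number" complaint is easy to illustrate. Here is a toy example with made-up latency samples from a hypothetical 20-second test sampled every 250 ms, where latency balloons while the buffer fills and recovers as the transfer winds down:

```python
# Toy illustration (made-up numbers): an 80-sample test, 20 s at 250 ms.
# Latency balloons mid-test while the buffer fills, then recovers as the
# transfer winds down. Reporting only the final sample hides the bloat.
samples_ms = [40] * 20 + [400] * 40 + [60] * 20   # 80 samples

last = samples_ms[-1]
worst = max(samples_ms)
# A p99-style view of what interactive traffic actually experienced:
p99 = sorted(samples_ms)[int(len(samples_ms) * 0.99) - 1]

print(f"last={last} ms, worst={worst} ms, p99={p99} ms")
```

A user shown only the last sample would conclude the link adds tens of milliseconds; anyone videoconferencing during the transfer lived through hundreds.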
> Since the internet architecture board met last year (https://www.iab.org/activities/workshops/network-quality/) there seems to be an increasing amount of work on better metrics and tests for QoE, with stuff like apple's responsiveness test, etc.
>
> I have a new one - prototyped in some starlink tests so far, and elsewhere - called "SPOM" - steady packets over milliseconds - which, when run simultaneously with capacity-seeking traffic, might be a better predictor of videoconferencing performance.
>
> There's also a really good "P99" conference coming up for those who, like me, are OCD about a few sigmas.
>
> >
> > On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
> >>
> >> Thank you for the dialog.
> >> This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
> >>
> >> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn't matter that the ISP is selling a fiber drop if (parts of) their network is under-provisioned. Two end points can be less than 5 miles apart and see 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining: the older communities appear to have much more severe under-provisioning.
> >>
> >> Another observation: running speedtest appears to go from the edge of the network via layer 2 to the speedtest host operated by the ISP. Yup, bypasses the (suspected overloaded) routers.
> >>
> >> Anyway, just observing.
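The thread only names the SPOM idea, so here is a hedged sketch of what the receive side of such a probe might compute. The framing, interval, and numbers are assumptions for illustration, not Taht's actual tool: packets are sent on a steady clock, and the distribution of their one-way delays is summarized rather than their average:

```python
# Sketch of receive-side arithmetic for a "steady packets over
# milliseconds" style probe (hypothetical framing, not the real SPOM
# implementation): probes go out every SEND_INTERVAL_MS; from their
# one-way delays we summarize how steadily they got through.
SEND_INTERVAL_MS = 10

def spom_stats(delays_ms):
    """Given per-packet one-way delays, return (median, p95, max) as a
    crude predictor of how a videoconference would fare on this path."""
    s = sorted(delays_ms)
    n = len(s)
    return s[n // 2], s[min(int(n * 0.95), n - 1)], s[-1]

# Synthetic example: a mostly steady path with a brief 200 ms bloat episode.
delays = [20] * 90 + [200] * 10
median, p95, worst = spom_stats(delays)
print(median, p95, worst)
```

The point of summarizing with tail percentiles rather than a mean is that a videoconference is ruined by its worst moments, which a capacity-seeking competitor creates and an average hides.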
> >>
> >> Gene
> >> ----------------------------------------------
> >> Eugene Chang
> >> IEEE Senior Life Member
> >> eugene.chang@ieee.org
> >> 781-799-0233 (in Honolulu)
> >>
> >> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> >>
> >> Hi Gene,
> >>
> >> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:
> >>
> >> Comments inline below.
> >>
> >> Gene
> >>
> >> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> >>
> >> Hi Eugene,
> >>
> >> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
> >>
> >> Ok, we are getting into the details. I agree.
> >>
> >> Every node in the path has to implement this to be effective.
> >>
> >> Amazingly, the biggest bang for the buck comes from fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g., for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
> >>
> >> This is not completely true.
> >>
> >> [SM] You are likely right; trying to summarize things leads to partially incorrect generalizations.
> >>
> >> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
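The backed-up queue at N-1 described above is easy to quantify: the standing delay a full buffer adds is its size divided by the rate at which it drains. The buffer size and link rate below are illustrative assumptions, not measurements:

```python
# How long a backed-up buffer at node N-1 takes to drain once the
# bottleneck at N recovers (illustrative numbers, not measured ones).
def drain_ms(queue_bytes, link_bits_per_s):
    """Milliseconds to drain `queue_bytes` over a link of the given rate;
    every packet arriving behind the queue waits this long."""
    return queue_bytes * 8 / link_bits_per_s * 1000

# A 256 KB FIFO ahead of a 20 Mbit/s link adds ~105 ms of standing delay:
print(round(drain_ms(256 * 1024, 20e6)), "ms")
```

This is why over-sized buffers hurt even after congestion ends: until the backlog drains, every latency-sensitive packet pays the full queueing delay.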
> >>
> >> [SM] It is the node that builds up the queue that profits most from better queue management... (Again I generalize: the node with the queue itself probably does not care all that much, but the endpoints will profit if the queue-experiencing node deals with that queue more gracefully.)
> >>
> >> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
> >>
> >> Yes and no. One of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity-sharing scheme, but because it is the least pessimal scheme, allowing all (or no) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
> >>
> >> The hardest part is getting competing ISPs to implement and coordinate.
> >>
> >> [SM] Yes, but it turned out that even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly, ingress congestion especially would be even better handled with cooperation of the ISP.
> >>
> >> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say "we don't care about the technical issues, just fix it." Until then ...
> >>
> >> [SM] Well, we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for, say, a week and let us know if/how your experience changed?
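The flow-queueing point can be seen in a toy dequeue-order comparison. This is a deliberately simplified sketch (not the actual fq_codel or cake algorithm): a bulk flow dumps ten packets into the queue just before a voice packet arrives, and we compare FIFO order against one-packet-per-flow round-robin:

```python
# Toy dequeue-order comparison (highly simplified, not the real fq_codel/
# cake logic): FIFO makes the lone voice packet wait behind the whole bulk
# burst; per-flow round-robin serves it on the first round.
from collections import deque

arrivals = [("bulk", i) for i in range(10)] + [("voip", 0)]

def fifo_position(arrivals, flow):
    """Dequeue position of `flow`'s first packet under strict FIFO."""
    return next(i for i, (f, _) in enumerate(arrivals) if f == flow)

def rr_position(arrivals, flow):
    """Dequeue position under round-robin across per-flow queues."""
    queues = {}
    for f, p in arrivals:
        queues.setdefault(f, deque()).append(p)
    order = []
    while any(queues.values()):
        for f in list(queues):          # one packet per flow per round
            if queues[f]:
                order.append((f, queues[f].popleft()))
    return next(i for i, (f, _) in enumerate(order) if f == flow)

print("FIFO:", fifo_position(arrivals, "voip"))   # behind all bulk packets
print("FQ:  ", rr_position(arrivals, "voip"))     # served on round one
```

Neither flow is "classified" or trusted here; the isolation falls out of the scheduling, which is the point made above about needing little or no configuration.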
> >> Regards
> >> Sebastian
> >>
> >> Gene
> >> ----------------------------------------------
> >> Eugene Chang
> >> IEEE Senior Life Member
> >> eugene.chang@ieee.org
> >> 781-799-0233 (in Honolulu)
> >>
> >> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
> >>
> >> Software updates can do far more than just improve recovery.
> >>
> >> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), so software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
> >>
> >> (The example below is not completely accurate, but I think it gets the point across.)
> >>
> >> When buffers become excessively large, you have the situation where a video call generates a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
> >>
> >> If you just do FIFO, then you get a small chunk of video call, then several seconds' worth of CD transfer, followed by the next small chunk of the video call.
> >>
> >> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process); the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
> >>
> >> The one thing that cake needs to work really well is to be able to know what the data rate available is.
> >> With Starlink, this changes frequently, and cake integrated into the starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
> >>
> >> David Lang
> >>
> >> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
> >>
> >> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
> >>
> >> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
> >>
> >> Gene
> >> ----------------------------------------------
> >> Eugene Chang
> >> IEEE Senior Life Member
> >> eugene.chang@ieee.org
> >> 781-799-0233 (in Honolulu)
> >>
> >> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
> >>
> >> Please help to explain. Here's a draft to start with:
> >>
> >> Starlink Performance Not Sufficient for Military Applications, Say Scientists
> >>
> >> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency - the time it takes for a packet to get through - increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters.
> >> Jitter is a change in the speed of getting a packet through the network during a connection; it is inevitable in satellite networks, but it will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
> >>
> >> "We've done all of the work, SpaceX just needs to adopt it by upgrading their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
> >> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
> >>
> >> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu> wrote:
> >> The key issue is most people don't understand why latency matters. They don't see it or feel its impact.
> >>
> >> First, we have to help people see the symptoms of latency and how it impacts something they care about.
> >> - Gamers care, but most people may think that is frivolous.
> >> - Musicians care, but that is mostly for a hobby.
> >> - Businesses should care because of productivity, but they don't know how to "see" the impact.
> >>
> >> Second, there needs to be an "OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted." moment. Once you have this awakening, you can get all the press you want for free.
> >>
> >> Most of the time when business apps are developed, "we" hide the impact of poor performance (aka latency), or the developers hide from the discussion because they don't have a way to fix the latency.
> >> Maybe businesses don't care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don't know that latency is hurting them. Unfair, but most people don't know the issue is latency.
> >>
> >> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
> >>
> >> Gene
> >> -----------------------------------
> >> Eugene Chang
> >> eugene.chang@alum.mit.edu
> >> +1-781-799-0233 (in Honolulu)
> >>
> >> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
> >>
> >> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. I did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
> >>
> >> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
> >>
> >> Thanks
> >>
> >> Bruce
> >>
> >> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
> >> These days, if you want attention, you gotta buy it. A 50k half-page ad in the WaPo or NYT riffing off of "It's the latency, Stupid!", signed by the kinds of luminaries we got for the FCC wifi fight, would go a long way towards shifting the tide.
> >> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
> >>
> >> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason wrote:
> >>
> >> The awareness & understanding of latency & its impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe a simple YouTube video explainer that is short & high-level & visual? Otherwise reporters will just continue to focus on what they know...
> >>
> >> That's a great idea. I have visions of crashing the Washington correspondents' dinner, but perhaps there is some set of gatherings journalists regularly attend?
> >>
> >> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
> >>
> >> I still find it remarkable that reporters are still missing the meaning of the huge latencies for starlink, under load.
> >> > >> > >> > >> > >> -- > >> FQ World Domination pending: > https://blog.cerowrt.org/post/state_of_fq_codel/< > https://blog.cerowrt.org/post/state_of_fq_codel/> > >> Dave T=C3=A4ht CEO, TekLibre, LLC > >> > >> > >> > >> > >> -- > >> FQ World Domination pending: > https://blog.cerowrt.org/post/state_of_fq_codel/< > https://blog.cerowrt.org/post/state_of_fq_codel/> > >> Dave T=C3=A4ht CEO, TekLibre, LLC > >> _______________________________________________ > >> Starlink mailing list > >> Starlink@lists.bufferbloat.net > >> https://lists.bufferbloat.net/listinfo/starlink < > https://lists.bufferbloat.net/listinfo/starlink> > >> > >> > >> -- > >> Bruce Perens K6BP > >> _______________________________________________ > >> Starlink mailing list > >> Starlink@lists.bufferbloat.net > >> https://lists.bufferbloat.net/listinfo/starlink < > https://lists.bufferbloat.net/listinfo/starlink> > >> > >> > >> > >> > >> -- > >> Bruce Perens K6BP > >> > >> > >> _______________________________________________ > >> Starlink mailing list > >> Starlink@lists.bufferbloat.net > >> https://lists.bufferbloat.net/listinfo/starlink > >> > >> > >> _______________________________________________ > >> Starlink mailing list > >> Starlink@lists.bufferbloat.net > >> https://lists.bufferbloat.net/listinfo/starlink > > > > > > > > -- > > Bruce Perens K6BP > > _______________________________________________ > > Starlink mailing list > > Starlink@lists.bufferbloat.net > > https://lists.bufferbloat.net/listinfo/starlink > > > > -- > FQ World Domination pending: > https://blog.cerowrt.org/post/state_of_fq_codel/ > Dave T=C3=A4ht CEO, TekLibre, LLC > --=20 Bruce Perens K6BP --000000000000754f8105e99e1907 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
Why not write an RFC on internet metrics? Then evangelize = customers to rely on metrics compliant with the RFC.

On Mon, Sep 26, 2022 at 5= :36 PM Dave Taht <dave.taht@gmail= .com> wrote:
On Mon, Sep 26, 2022 at 2:45 PM Bruce Perens via Starlink
<sta= rlink@lists.bufferbloat.net> wrote:
>
> That's a good maxim: Don't believe a speed test that is hosted= by your own ISP.

A network designed for speedtest.net, is a network... designed for
speedtest. Starlink seemingly was designed for speedtest - the 15
second "cycle" to sense/change their bandwidth setting is just wi= thin
the 20s cycle speedtest terminates at, and speedtest returns the last
number for the bandwidth. It is a brutal test - using 8 or more flows
- much harder on the network than your typical web page load which,
while that is often 15 or so, most never run long enough to get out of
slow start. At least some of qualifying for the RDOF money was
achieving 100mbits down on "speedtest".

A knowledgeable user concerned about web PLT should be looking a the
first 3 s of a given test, and even then once the bandwidth cracks
20Mbit, it's of no help for most web traffic ( we've been citing mi= ke
belshe's original work here a lot,
and more recent measurements still show that )

Speedtest also does nothing to measure how well a given
videoconference or voip session might go. There isn't a test (at least<= br> not when last I looked) in the FCC broadband measurements for just
videoconferencing, and their latency under load test for many years
now, is buried deep in the annual report.

I hope that with both ookla and samknows more publicly recording and
displaying latency under load (still, sigh, I think only displaying
the last number and only sampling every 250ms) that we can shift the
needle on this, but I started off this thread complaining nobody was
picking up on those numbers... and neither service tests the worst
case scenario of a simultaneous up/download, which was the principal
scenario we explored with the flent "rrul" series of tests, which= were
originally designed to emulate and deeply understand what bittorrent
was doing to networks, and our principal tool in designing new fq and
aqm and transport CCs, along with the rtt_fair test for testing near
and far destinations at the same time.

My model has always been a family of four, one person uploading,
another doing web, one doing videoconferencing,
and another doing voip or gaming, and no test anyone has emulates
that. With 16 wifi devices
per household, the rrul scenario is actually not "worst case", bu= t
increasingly the state of things "normally".

Another irony about speedtest is that users are inspired^Wtrained to
use it when the "network feels slow", and self-initiate something= that
makes it worse, for both them and their portion of the network.

Since the internet architecture board met last year, (
https://www.iab.org/activities/workshops/= network-quality/=C2=A0 ) there
seems to be an increasing amount of work on better metrics and tests
for QoE, with stuff like apple's responsiveness test, etc.

I have a new one - prototyped in some starlink tests so far, and
elsewhere - called "SPOM" - steady packets over milliseconds, whi= ch,
when run simultaneously with capacity seeking traffic, might be a
better predictor of videoconferencing performance.

There's also a really good "P99" conference coming up for tho= se, that
like me, are OCD about a few sigmas.

>
> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink <starlink@list= s.bufferbloat.net> wrote:
>>
>> Thank you for the dialog,.
>> This discussion with regards to Starlink is interesting as it conf= irms my guesses about the gap between Starlinks overly simplified, over opt= imistic marketing and the reality as they acquire subscribers.
>>
>> I am actually interested in a more perverse issue. I am seeing lat= ency and bufferbloat as a consequence from significant under provisioning. = It doesn=E2=80=99t matter that the ISP is selling a fiber drop, if (parts) = of their network is under provisioned. Two end points can be less than 5 mi= le apart and realize 120+ ms latency. Two Labor Days ago (a holiday) the ma= x latency was 230+ ms. The pattern I see suggest digital redlining. The old= er communities appear to have much more severe under provisioning.
>>
>> Another observation. Running speedtest appears to go from the edge= of the network by layer 2 to the speedtest host operated by the ISP. Yup, = bypasses the (suspected overloaded) routers.
>>
>> Anyway, just observing.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.= chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Gene,
>>
>>
>> On Sep 26, 2022, at 23:10, Eugene Y Chang <eugene.chang@ieee.org> wrote:=
>>
>> Comments inline below.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.= chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi Eugene,
>>
>>
>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists= .bufferbloat.net> wrote:
>>
>> Ok, we are getting into the details. I agree.
>>
>> Every node in the path has to implement this to be effective.
>>
>>
>> Amazingly the biggest bang for the buck is gotten by fixing those = nodes that actually contain a network path's bottleneck. Often these ar= e pretty stable. So yes for fully guaranteed service quality all nodes woul= d need to participate, but for improving things noticeably it is sufficient= to improve the usual bottlenecks, e.g. for many internet access links the = home gateway is a decent point to implement better buffer management. (In s= hort the problem are over-sized and under-managed buffers, and one of the b= est solution is better/smarter buffer management).
>>
>>
>> This is not completely true.
>>
>>
>> [SM] You are likely right, trying to summarize things leads to par= tially incorrect generalizations.
>>
>>
>> Say the bottleneck is at node N. During the period of congestion, = the upstream node N-1 will have to buffer. When node N recovers, the buffer= bloat at N-1 will be blocking until the bufferbloat drains. Etc. etc.=C2=A0= Making node N better will reduce the extent of the backup at N-1, but N-1 = should implement the better code.
>>
>>
>> [SM] It is the node that builds up the queue that profits most fro= m better queue management.... (again I generalize, the node with the queue = itself probably does not care all that much, but the endpoints will profit = if the queue experiencing node deals with that queue more gracefully).
>>
>>
>>
>>
>>
>> In fact, every node in the path has to have the same prioritizatio= n or the scheme becomes ineffective.
>>
>>
>> Yes and no, one of the clearest winners has been flow queueing, IM= HO not because it is the most optimal capacity sharing scheme, but because = it is the least pessimal scheme, allowing all (or none) flows forward progr= ess. You can interpret that as a scheme in which flows below their capacity= share are prioritized, but I am not sure that is the best way to look at t= hese things.
>>
>>
>> The hardest part is getting competing ISPs to implement and coordi= nate.
>>
>>
>> [SM] Yes, but it turned out even with non-cooperating ISPs there i= s a lot end-users can do unilaterally on their side to improve both ingress= and egress congestion. Admittedly especially ingress congestion would be e= ven better handled with cooperation of the ISP.
>>
>> Bufferbloat and handoff between ISPs will be hard. The only way to= fix this is to get the unwashed public to care. Then they can say =E2=80= =9Cwe don=E2=80=99t care about the technical issues, just fix it.=E2=80=9D = Until then =E2=80=A6..
>>
>>
>> [SM] Well we do this one home network at a time (not because that = is efficient or ideal, but simply because it is possible). Maybe, if you ha= ve not done so already try OpenWrt with sqm-scripts (and maybe cake-autorat= e in addition) on your home internet access link for say a week and let us = know ih/how your experience changed?
>>
>> Regards
>> Sebastian
>>
>>
>>
>>
>>
>>
>> Regards
>> Sebastian
>>
>>
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.= chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 10:48 AM, David Lang <david@lang.hm> wrote:
>>
>> software updates can do far more than just improve recovery.
>>
>> In practice, large data transfers are less sensitive to latency th= an smaller data transfers (i.e. downloading a CD image vs a video conferenc= e), software can ensure better fairness in preventing a bulk transfer from = hurting the more latency sensitive transfers.
>>
>> (the example below is not completely accurate, but I think it gets= the point across)
>>
>> When buffers become excessivly large, you have the situation where= a video call is going to generate a small amount of data at a regular inte= rval, but a bulk data transfer is able to dump a huge amount of data into t= he buffer instantly.
>>
>> If you just do FIFO, then you get a small chunk of video call, the= n several seconds worth of CD transfer, followed by the next small chunk of= the video call.
>>
>> But the software can prevent the one app from hogging so much of t= he connection and let the chunk of video call in sooner, avoiding the impac= t to the real time traffic. Historically this has required the admin classi= fy all traffic and configure equipment to implement different treatment bas= ed on the classification (and this requires trust in the classification pro= cess), the bufferbloat team has developed options (fq_codel and cake) that = can ensure fairness between applications/servers with little or no configur= ation, and no trust in other systems to properly classify their traffic. >>
>> The one thing that Cake needs to work really well is to be able to= know what the data rate available is. With Starlink, this changes frequent= ly and cake integrated into the starlink dish/router software would be far = better than anything that can be done externally as the rate changes can be= fed directly into the settings (currently they are only indirectly detecte= d)
>>
>> David Lang
>>
>>
>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>
>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>
>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
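[Editor's note: Gene's point that bufferbloat is a symptom of congestion can be seen with back-of-envelope arithmetic: whenever offered load exceeds link capacity, an unmanaged buffer grows every second, and the queueing delay it adds grows with it. The rates below are made-up examples.]

```python
# Toy arithmetic: queue backlog and delay under sustained overload.
LINK = 10e6 / 8       # 10 Mbit/s link capacity, in bytes/second
OFFERED = 12e6 / 8    # 12 Mbit/s of offered load

backlog = 0.0
for t in range(1, 6):
    backlog += OFFERED - LINK   # 0.25 MB of excess arrives each second
    delay = backlog / LINK      # seconds a newly arriving packet waits
    print(f"t={t}s  backlog={backlog/1e6:.2f} MB  delay={delay:.1f} s")
```

After five seconds of a 20% overload, a new packet already waits a full second in the queue; no software update changes that arithmetic, which is why queue management can only bound the delay, not create bandwidth.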
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>>
>>
>> On Sep 26, 2022, at 10:04 AM, Bruce Perens <bruce@perens.com> wrote:
>>
>> Please help to explain. Here's a draft to start with:
>>
>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>
>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>
>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote-control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>
>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu> wrote:
>> The key issue is most people don't understand why latency matters. They don't see it or feel its impact.
>>
>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>> - gamers care but most people may think it is frivolous.
>> - musicians care but that is mostly for a hobby.
>> - business should care because of productivity but they don't know how to "see" the impact.
>>
>> Second, there needs to be an "OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted" moment. Once you have this awakening, you can get all the press you want for free.
>>
>> Most of the time when business apps are developed, "we" hide the impact of poor performance (aka latency), or they hide from the discussion because the developers don't have a way to fix the latency. Maybe businesses don't care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don't know that latency is hurting them. Unfair, but most people don't know the issue is latency.
>>
>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>
>> Gene
>> -----------------------------------
>> Eugene Chang
>> eugene.chang@alum.mit.edu
>> +1-781-799-0233 (in Honolulu)
>>
>>
>>
>>
>>
>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>
>> Right now I am concerned that the Starlink latency and jitter is going to be a problem even for remote controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>
>> Thanks
>>
>> Bruce
>>
>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>> These days, if you want attention, you gotta buy it. A 50k half page
>> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
>> signed by the kinds of luminaries we got for the FCC wifi fight, would
>> go a long way towards shifting the tide.
>>
>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>>
>>
>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>> <Jason_Livingood@comcast.com> wrote:
>>
>>
>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...
>>
>>
>> That's a great idea. I have visions of crashing the Washington
>> correspondents' dinner, but perhaps
>> there is some set of gatherings journalists regularly attend?
>>
>>
>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>>
>> I still find it remarkable that reporters are still missing the
>> meaning of the huge latencies for starlink, under load.
>>
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>>
>>
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>> --
>> Bruce Perens K6BP
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>





--
Bruce Perens K6BP