From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dave Taht
Date: Mon, 26 Sep 2022 17:35:55 -0700
To: Bruce Perens
Cc: Eugene Y Chang, Dave Taht via Starlink
Subject: Re: [Starlink] It's still the starlink latency...
List-Id: "Starlink has bufferbloat. Bad."

On Mon, Sep 26, 2022 at 2:45 PM Bruce Perens via Starlink wrote:
>
> That's a good maxim: Don't believe a speed test that is hosted by your own ISP.

A network designed for speedtest.net is a network... designed for speedtest. Starlink seemingly was designed for speedtest - the 15-second "cycle" on which they sense and change their bandwidth setting fits just within the 20-second window at which speedtest terminates, and speedtest reports the last number it saw as the bandwidth.
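To make the "last number" problem concrete, here is a toy model (all numbers invented; this is not measured Starlink or Ookla behavior) comparing what last-sample reporting, a whole-test average, and the first 3 seconds - the window that matters most for web page loads, as discussed below - would each show for a link whose rate steps up late in a 20-second test:

# Toy model with made-up numbers: a link whose rate ramps and then steps
# up near the end of a 20 s test. Compare "last sample" reporting against
# the whole-test average and the first 3 s that a web page load cares about.
samples_per_s = 4                 # one throughput sample every 250 ms
duration_s = 20

def rate_at(t):
    """Hypothetical Mbit/s profile: ramp, plateau, late step-up."""
    if t < 3:
        return 20 * t / 3         # still ramping out of slow start
    if t < 15:
        return 60                 # steady allocation
    return 150                    # new allocation kicks in near the end

samples = [rate_at(i / samples_per_s) for i in range(duration_s * samples_per_s)]
first_3s = samples[:3 * samples_per_s]

print(f"reported (last sample): {samples[-1]:6.1f} Mbit/s")
print(f"average over 20 s:      {sum(samples) / len(samples):6.1f} Mbit/s")
print(f"average over first 3 s: {sum(first_3s) / len(first_3s):6.1f} Mbit/s")

The gap between those three numbers is the point: the headline figure is the one that almost no short-lived real-world flow ever sees.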
It is a brutal test - using 8 or more flows - much harder on the network than your typical web page load, which, while it often opens 15 or so flows, mostly has flows that never run long enough to get out of slow start. At least part of qualifying for the RDOF money was achieving 100 Mbit/s down on "speedtest".

A knowledgeable user concerned about web PLT should be looking at the first 3 s of a given test, and even then, once the bandwidth cracks 20 Mbit/s, more of it is of little help for most web traffic (we've been citing Mike Belshe's original work here a lot, and more recent measurements still show that).

Speedtest also does nothing to measure how well a given videoconference or voip session might go. There isn't a test (at least not when I last looked) in the FCC broadband measurements for just videoconferencing, and their latency-under-load test has for many years now been buried deep in the annual report.

I hope that with both Ookla and SamKnows now more publicly recording and displaying latency under load (still, sigh, I think only displaying the last number and only sampling every 250 ms) we can shift the needle on this, but I started off this thread complaining that nobody was picking up on those numbers... And neither service tests the worst-case scenario of a simultaneous up/download, which was the principal scenario we explored with the flent "rrul" series of tests. Those were originally designed to emulate and deeply understand what bittorrent was doing to networks, and became our principal tool in designing new fq, aqm, and transport CCs, along with the rtt_fair test for hitting near and far destinations at the same time.

My model has always been a family of four - one person uploading, another doing web, one doing videoconferencing, and another doing voip or gaming - and no test anyone offers emulates that. With 16 wifi devices per household, the rrul scenario is actually not the "worst case", but increasingly the normal state of things.

Another irony about speedtest is that users are inspired^Wtrained to use it when the "network feels slow", and so they self-initiate something that makes things worse, both for themselves and for their portion of the network.

Since the Internet Architecture Board met on this last year (https://www.iab.org/activities/workshops/network-quality/) there seems to be an increasing amount of work on better metrics and tests for QoE, with stuff like Apple's responsiveness test, etc. I have a new one - prototyped in some starlink tests so far, and elsewhere - called "SPOM", steady packets over milliseconds, which, when run simultaneously with capacity-seeking traffic, might be a better predictor of videoconferencing performance.

There's also a really good "P99" conference coming up for those who, like me, are OCD about a few sigmas.
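Since SPOM is only prototyped so far, here is no more than a minimal sketch of the general idea described above, with every specific (echo target, port, probe size, cadence) a made-up placeholder rather than the actual SPOM tooling: pace small UDP probes on a steady millisecond cadence against a UDP echo service and look at the delay tail while capacity-seeking traffic runs alongside.

# Minimal sketch only: steady small UDP probes against a UDP echo service,
# reporting the RTT distribution. Placeholders throughout; a real probe
# would decouple sending from receiving so a late reply cannot stall the
# sending cadence, and would measure one-way delay where clocks allow.
import socket
import statistics
import time

HOST, PORT = "127.0.0.1", 9000    # hypothetical UDP echo target
INTERVAL_S = 0.005                # one 64-byte probe every 5 ms
COUNT = 2000                      # ~10 s of probing

def run_probe():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for seq in range(COUNT):
        payload = seq.to_bytes(8, "big").ljust(64, b"\x00")
        t_send = time.monotonic()
        sock.sendto(payload, (HOST, PORT))
        try:
            data, _ = sock.recvfrom(2048)
            if data[:8] == payload[:8]:
                rtts.append((time.monotonic() - t_send) * 1000.0)
        except socket.timeout:
            pass                  # treat it as lost
        # keep the sending cadence as steady as this simple loop allows
        time.sleep(max(0.0, INTERVAL_S - (time.monotonic() - t_send)))
    print(f"sent {COUNT}, lost {COUNT - len(rtts)}")
    if rtts:
        rtts.sort()
        print(f"median {statistics.median(rtts):.1f} ms, "
              f"p99 {rtts[int(0.99 * len(rtts)) - 1]:.1f} ms")

if __name__ == "__main__":
    run_probe()

Run it once on an idle link and once during a simultaneous up/download and compare the tails; that difference, rather than any single throughput number, is what a videoconference actually feels.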
>
> On Mon, Sep 26, 2022 at 2:36 PM Eugene Y Chang via Starlink wrote:
>>
>> Thank you for the dialog.
>> This discussion with regard to Starlink is interesting, as it confirms my guesses about the gap between Starlink's overly simplified, over-optimistic marketing and the reality as they acquire subscribers.
>>
>> I am actually interested in a more perverse issue. I am seeing latency and bufferbloat as a consequence of significant under-provisioning. It doesn't matter that the ISP is selling a fiber drop if (parts of) their network are under-provisioned. Two end points can be less than 5 miles apart and realize 120+ ms latency. Two Labor Days ago (a holiday) the max latency was 230+ ms. The pattern I see suggests digital redlining. The older communities appear to have much more severe under-provisioning.
>>
>> Another observation: running speedtest appears to go from the edge of the network over layer 2 to the speedtest host operated by the ISP. Yup, it bypasses the (suspected overloaded) routers.
>>
>> Anyway, just observing.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>> On Sep 26, 2022, at 11:20 AM, Sebastian Moeller wrote:
>>
>> Hi Gene,
>>
>> On Sep 26, 2022, at 23:10, Eugene Y Chang wrote:
>>
>> Comments inline below.
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller wrote:
>>
>> Hi Eugene,
>>
>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink wrote:
>>
>> Ok, we are getting into the details. I agree.
>>
>> Every node in the path has to implement this to be effective.
>>
>> Amazingly, the biggest bang for the buck comes from fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g. for many internet access links the home gateway is a decent place to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>
>> This is not completely true.
>>
>> [SM] You are likely right; trying to summarize things leads to partially incorrect generalizations.
>>
>> Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>
>> [SM] It is the node that builds up the queue that profits most from better queue management... (again I generalize: the node holding the queue itself probably does not care all that much, but the endpoints profit if the queue-experiencing node deals with that queue more gracefully).
>>
>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>
>> Yes and no. One of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity-sharing scheme, but because it is the least pessimal scheme, allowing all flows (or none) forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>
>> The hardest part is getting competing ISPs to implement and coordinate.
>>
>> [SM] Yes, but it turned out that even with non-cooperating ISPs there is a lot end-users can do unilaterally on their side to improve both ingress and egress congestion. Admittedly, ingress congestion especially would be even better handled with cooperation from the ISP.
>>
>> Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say "we don't care about the technical issues, just fix it." Until then .....
>>
>> [SM] Well, we do this one home network at a time (not because that is efficient or ideal, but simply because it is possible). Maybe, if you have not done so already, try OpenWrt with sqm-scripts (and maybe cake-autorate in addition) on your home internet access link for say a week and let us know if/how your experience changed?
>>
>> Regards
>> Sebastian
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
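For anyone who hasn't seen the tools Sebastian mentions above: sqm-scripts sets up fq_codel or cake on an OpenWrt gateway, and cake-autorate adjusts the shaper as the link rate moves. What follows is only a deliberately crude sketch of that adjust-on-latency idea, not the real implementations. It assumes a hypothetical Linux router that already has cake installed as the root qdisc on its WAN interface, root privileges, and the stock tc and ping binaries; every name and threshold in it is a placeholder.

# Crude illustration only: back cake's shaper rate off when ping RTT rises,
# and probe back up when it is low. cake-autorate itself is far more careful;
# interface name, reflector, rates and thresholds here are placeholders.
import re
import subprocess
import time

IFACE = "eth1"                    # hypothetical WAN interface running cake
FLOOR_KBIT, CEIL_KBIT = 20_000, 100_000
RTT_OK_MS, RTT_BAD_MS = 60.0, 100.0
REFLECTOR = "1.1.1.1"             # any reasonably close ping target

def measure_rtt_ms():
    """Median of a short ping burst, in ms (None if every probe was lost)."""
    out = subprocess.run(["ping", "-c", "5", "-i", "0.2", REFLECTOR],
                         capture_output=True, text=True).stdout
    times = sorted(float(m) for m in re.findall(r"time=([\d.]+)", out))
    return times[len(times) // 2] if times else None

def set_cake_rate(kbit):
    subprocess.run(["tc", "qdisc", "change", "dev", IFACE, "root",
                    "cake", "bandwidth", f"{kbit}kbit"], check=True)

def control_loop():
    rate = CEIL_KBIT
    while True:
        rtt = measure_rtt_ms()
        if rtt is None or rtt > RTT_BAD_MS:
            rate = max(FLOOR_KBIT, int(rate * 0.9))    # back off hard
        elif rtt < RTT_OK_MS:
            rate = min(CEIL_KBIT, int(rate * 1.05))    # probe upward gently
        set_cake_rate(rate)
        time.sleep(1)

if __name__ == "__main__":
    control_loop()

The real tools do considerably more: sqm shapes ingress through an IFB device as well as egress, and cake-autorate tracks load and delay in both directions against multiple reflectors rather than a single bare ping.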
>>
>> On Sep 26, 2022, at 10:48 AM, David Lang wrote:
>>
>> software updates can do far more than just improve recovery.
>>
>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), so software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>
>> (the example below is not completely accurate, but I think it gets the point across)
>>
>> When buffers become excessively large, you have the situation where a video call is going to generate a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>
>> If you just do FIFO, then you get a small chunk of video call, then several seconds' worth of CD transfer, followed by the next small chunk of the video call.
>>
>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to apply different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and with no trust in other systems to properly classify their traffic.
>>
>> The one thing that cake needs to work really well is to know what data rate is available. With Starlink this changes frequently, and cake integrated into the starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
>>
>> David Lang
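To complement David's FIFO illustration above, here is a self-contained toy simulation (invented packet sizes and rates, not a model of fq_codel or cake internals, which also do AQM and smarter scheduling) of a slow bottleneck shared by a bulk burst and a steady stream of small video-call packets, served either strictly FIFO or per-flow round-robin:

# Toy bottleneck: 1 Mbit/s link, one bulk flow that dumps a large burst at
# t=0, and a "video call" flow sending one small packet every 20 ms.
# Compare how long the video packets wait under FIFO vs. per-flow
# round-robin service. All numbers are illustrative only.
from collections import deque

LINK_BPS = 1_000_000
BULK_PKT, VIDEO_PKT = 1500, 300           # bytes
BULK_BURST = 100                          # packets dumped at t=0
VIDEO_COUNT, VIDEO_INTERVAL = 20, 0.020   # one packet every 20 ms

def tx_time(size_bytes):
    return size_bytes * 8 / LINK_BPS

def arrivals():
    """(arrival_time, flow, size) tuples sorted by arrival time."""
    pkts = [(0.0, "bulk", BULK_PKT)] * BULK_BURST
    pkts += [(i * VIDEO_INTERVAL, "video", VIDEO_PKT) for i in range(VIDEO_COUNT)]
    return sorted(pkts, key=lambda p: p[0])

def simulate(round_robin):
    queues = {"bulk": deque(), "video": deque()}
    pending = deque(arrivals())
    now, last_flow, delays = 0.0, "video", []
    while pending or any(queues.values()):
        # admit everything that has arrived by 'now'
        while pending and pending[0][0] <= now:
            t, flow, size = pending.popleft()
            queues[flow].append((t, size))
        if not any(queues.values()):
            now = pending[0][0]               # idle until the next arrival
            continue
        if round_robin:
            # alternate flows whenever both have packets queued
            order = ["bulk", "video"] if last_flow == "video" else ["video", "bulk"]
            flow = next(f for f in order if queues[f])
        else:
            # FIFO: serve the oldest packet regardless of flow
            flow = min((f for f in queues if queues[f]), key=lambda f: queues[f][0][0])
        t_arr, size = queues[flow].popleft()
        last_flow = flow
        now += tx_time(size)
        if flow == "video":
            delays.append((now - t_arr) * 1000)   # queueing delay in ms
    return delays

for mode, rr in (("FIFO", False), ("round-robin", True)):
    d = simulate(rr)
    print(f"{mode:12s} video packet delay: max {max(d):7.1f} ms, avg {sum(d)/len(d):7.1f} ms")

At this toy rate the first video packet sits behind the entire bulk burst for over a second under FIFO, while per-flow service keeps every video packet's delay down in the millisecond range - the effect fq_codel and cake produce without any manual classification.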
>>
>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>
>> You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>
>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>>
>> Gene
>> ----------------------------------------------
>> Eugene Chang
>> IEEE Senior Life Member
>> eugene.chang@ieee.org
>> 781-799-0233 (in Honolulu)
>>
>> On Sep 26, 2022, at 10:04 AM, Bruce Perens wrote:
>>
>> Please help to explain. Here's a draft to start with:
>>
>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>
>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency - the time it takes for a packet to get through - increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection; it is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.
>>
>> "We've done all of the work; SpaceX just needs to adopt it by upgrading their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in:
>> Open Source luminary Bruce Perens said: "Sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies."
>>
>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang wrote:
>> The key issue is that most people don't understand why latency matters. They don't see it or feel its impact.
>>
>> First, we have to help people see the symptoms of latency and how they impact something they care about:
>> - gamers care, but most people may think it is frivolous.
>> - musicians care, but that is mostly for a hobby.
>> - business should care because of productivity, but they don't know how to "see" the impact.
>>
>> Second, there needs to be an "OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted." moment. Once you have this awakening, you can get all the press you want for free.
>>
>> Most of the time when business apps are developed, "we" hide the impact of poor performance (aka latency), or they hide from the discussion because the developers don't have a way to fix the latency. Maybe businesses don't care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) Employees who happen to be at a location with bad latency don't know that latency is hurting them. Unfair, but most people don't know the issue is latency.
>>
>> Talking about and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.
>>
>> Gene
>> -----------------------------------
>> Eugene Chang
>> eugene.chang@alum.mit.edu
>> +1-781-799-0233 (in Honolulu)
>>
>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink wrote:
>>
>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say.
>> Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>
>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but which I don't see happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>
>> Thanks
>>
>> Bruce
>>
>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink wrote:
>> These days, if you want attention, you gotta buy it. A 50k half-page
>> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
>> signed by the kinds of luminaries we got for the fcc wifi fight, would
>> go a long way towards shifting the tide.
>>
>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht wrote:
>>
>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason wrote:
>>
>> The awareness & understanding of latency & its impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high-level & visual? Otherwise reporters will just continue to focus on what they know...
>>
>> That's a great idea. I have visions of crashing the washington
>> correspondents dinner, but perhaps
>> there is some set of gatherings journalists regularly attend?
>>
>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" wrote:
>>
>> I still find it remarkable that reporters are still missing the
>> meaning of the huge latencies for starlink, under load.
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>>
>> --
>> Bruce Perens K6BP
>
> --
> Bruce Perens K6BP

--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC