From: Eugene Y Chang
Message-Id: <8E6A87E1-8424-44BE-B582-DF3C8424F584@ieee.org>
Date: Mon, 26 Sep 2022 17:47:48 -1000
Subject: Re: [Starlink] It's still the starlink latency...
In-Reply-To: <2086C010-B91E-450A-A77F-D0840BC5FCC1@gmx.de>
To: Sebastian Moeller
Cc: Eugene Chang, David Lang, Dave Taht via Starlink
List-Id: "Starlink has bufferbloat. Bad."

> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below the true bottleneck (thereby creating a new, true but artificial, bottleneck so the queue develops at a point where we can control it).
> Also, if the difference between the "true structural" and the artificial bottleneck is small in comparison to the traffic inrush, we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved traffic that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.

Perhaps I am overthinking this. In general, I don't think this really works. Consider a router with M ports from the network edge and N ports toward the network core. You are only trying to influence one of the M ports. Even if N=1, what actually happens with the buffering at N depends on all the traffic, not just the traffic you are shaping.

> [SM] Well, sometimes such links are congested too for economic reasons…

It is always economic. The network is always undersized because of some economic (or management) policy.

This is increasingly true as fiber is taken to the subscriber: it is physically possible to send more traffic to the network than the router can handle. My ISP happily markets 1 Gbps service, but the fine print of the contract says they don't promise more than 700 Mbps. Worse, I waited nine months for them to resolve why I was only getting 300 Mbps on my 1 Gbps service (oops, sorry, my 700 Mbps service).

Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
eugene.chang@ieee.org
781-799-0233 (in Honolulu)

> On Sep 26, 2022, at 11:29 AM, Sebastian Moeller wrote:
>
> Hi David,
>
>> On Sep 26, 2022, at 23:22, David Lang wrote:
>>
>> On Mon, 26 Sep 2022, Eugene Y Chang wrote:
>>
>>>> On Sep 26, 2022, at 11:01 AM, Sebastian Moeller wrote:
>>>>
>>>> Hi Eugene,
>>>>
>>>>> On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>> Ok, we are getting into the details. I agree.
>>>>>
>>>>> Every node in the path has to implement this to be effective.
>>>>
>>>> Amazingly, the biggest bang for the buck is gotten by fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g., for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
>>>>
>>>
>>> This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the bufferbloat at N-1 will be blocking until the bufferbloat drains. Etc. etc. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code.
>>
>> only if node N and node N-1 handle the same traffic with the same link speeds. In practice this is almost never the case.
>
> [SM] I note that typically for ingress shaping a post-true-bottleneck shaper will not work unless we create an artificial bottleneck by shaping the traffic to below the true bottleneck (thereby creating a new, true but artificial, bottleneck so the queue develops at a point where we can control it).
> Also, if the difference between the "true structural" and the artificial bottleneck is small in comparison to the traffic inrush, we can see "traffic back-spill" into the typically oversized and under-managed upstream buffers, but for reasonably well-behaved traffic that happens relatively rarely. Rarely enough that ingress traffic shaping noticeably improves latency-under-load in spite of not being a guaranteed solution.
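[Editor's note: the artificial-bottleneck idea quoted above can be sketched with a toy queue model. This is illustrative Python only; the rates are made-up numbers, and real traffic is governed by TCP feedback, which the model omits.]

```python
# Toy fluid model of post-bottleneck ingress shaping (illustrative only).
# Traffic pours in toward a "true" 100 Mbit/s bottleneck whose buffer we
# cannot manage, then through our own shaper set slightly below it.

def queues(inrush_mbps, bottleneck_mbps=100.0, shaper_mbps=95.0,
           seconds=2.0, dt=0.001):
    """Return (upstream_queue, shaper_queue) in Mbit after `seconds`."""
    upstream_q = 0.0  # unmanaged buffer at the true bottleneck
    shaper_q = 0.0    # our managed queue (where fq_codel/cake could act)
    for _ in range(int(seconds / dt)):
        upstream_q += inrush_mbps * dt                   # inrush arrives
        drained = min(upstream_q, bottleneck_mbps * dt)  # bottleneck drains
        upstream_q -= drained
        shaper_q += drained                              # ...into our shaper
        shaper_q -= min(shaper_q, shaper_mbps * dt)      # we release slower
    return round(upstream_q, 1), round(shaper_q, 1)

# Mild inrush (below the true bottleneck rate): the standing queue sits
# entirely at the shaper, where smart buffer management can act on it.
print(queues(98))    # -> (0.0, 6.0)

# Heavy inrush (well above both rates): back-spill, as described above; the
# upstream buffer bloats far faster than our shaper's queue.
print(queues(150))   # -> (100.0, 10.0)
```

The small 5 Mbit/s margin between shaper and bottleneck is what keeps the queue in the managed box in the common case; the second run shows why a burst large relative to that margin defeats it.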
>
>
>> Until you get to gigabit last-mile links, the last mile is almost always the bottleneck from both sides, so implementing cake on the home router makes a huge improvement (and if you can get it on the last-mile ISP router, even better). Once you get into the Internet fabric, bottlenecks are fairly rare; they do happen, but ISPs carefully watch for those and add additional paths and/or increase bandwidth to avoid them.
>
> [SM] Well, sometimes such links are congested too for economic reasons...
>
> Regards
> Sebastian
>
>
>>
>> David Lang
>>
>>>>
>>>>> In fact, every node in the path has to have the same prioritization or the scheme becomes ineffective.
>>>>
>>>> Yes and no; one of the clearest winners has been flow queueing, IMHO not because it is the most optimal capacity-sharing scheme, but because it is the least pessimal scheme, allowing all (or no) flows forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
>>>
>>> The hardest part is getting competing ISPs to implement and coordinate. Bufferbloat and handoff between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say "we don't care about the technical issues, just fix it." Until then…
>>>
>>>>
>>>> Regards
>>>> Sebastian
>>>>
>>>>>
>>>>> Gene
>>>>> ----------------------------------------------
>>>>> Eugene Chang
>>>>> IEEE Senior Life Member
>>>>> eugene.chang@ieee.org
>>>>> 781-799-0233 (in Honolulu)
>>>>>
>>>>>> On Sep 26, 2022, at 10:48 AM, David Lang wrote:
>>>>>>
>>>>>> software updates can do far more than just improve recovery.
>>>>>>
>>>>>> In practice, large data transfers are less sensitive to latency than smaller data transfers (i.e. downloading a CD image vs a video conference), so software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.
>>>>>>
>>>>>> (the example below is not completely accurate, but I think it gets the point across)
>>>>>>
>>>>>> When buffers become excessively large, you have the situation where a video call generates a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.
>>>>>>
>>>>>> If you just do FIFO, then you get a small chunk of video call, then several seconds' worth of CD transfer, followed by the next small chunk of the video call.
>>>>>>
>>>>>> But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process); the bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
>>>>>>
>>>>>> The one thing that cake needs to work really well is to know what the available data rate is. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they are only indirectly detected).
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>> On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:
>>>>>>
>>>>>>> You already know this. Bufferbloat is a symptom and not the cause.
Bufferbloat grows when there are (1) periods of low or no bandwidth or (2) periods of insufficient bandwidth (aka network congestion).
>>>>>>>
>>>>>>> If I understand this correctly, just a software update cannot make bufferbloat go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).
>>>>>>>
>>>>>>> Gene
>>>>>>> ----------------------------------------------
>>>>>>> Eugene Chang
>>>>>>> IEEE Senior Life Member
>>>>>>> eugene.chang@ieee.org
>>>>>>> 781-799-0233 (in Honolulu)
>>>>>>>
>>>>>>>> On Sep 26, 2022, at 10:04 AM, Bruce Perens wrote:
>>>>>>>>
>>>>>>>> Please help to explain. Here's a draft to start with:
>>>>>>>>
>>>>>>>> Starlink Performance Not Sufficient for Military Applications, Say Scientists
>>>>>>>>
>>>>>>>> The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter is a change in the speed of getting a packet through the network during a connection, which is inevitable in satellite networks, but will be improved by making use of the bufferbloat-fighting software, and probably with the addition of more satellites.
>>>>>>>>
>>>>>>>> We've done all of the work, SpaceX just needs to adopt it by upgrading their software, said scientist Dave Taht.
Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
>>>>>>>> Open Source luminary Bruce Perens said: sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies.
>>>>>>>>
>>>>>>>> On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <eugene.chang@alum.mit.edu> wrote:
>>>>>>>> The key issue is most people don't understand why latency matters. They don't see it or feel its impact.
>>>>>>>>
>>>>>>>> First, we have to help people see the symptoms of latency and how it impacts something they care about.
>>>>>>>> - gamers care, but most people may think it is frivolous.
>>>>>>>> - musicians care, but that is mostly for a hobby.
>>>>>>>> - business should care because of productivity, but they don't know how to "see" the impact.
>>>>>>>>
>>>>>>>> Second, there needs to be an "OMG, I have been seeing the action of latency all this time and never knew it! I was being shafted." Once you have this awakening, you can get all the press you want for free.
>>>>>>>>
>>>>>>>> Most of the time when business apps are developed, "we" hide the impact of poor performance (aka latency), or they hide from the discussion because the developers don't have a way to fix the latency. Maybe businesses don't care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don't know that latency is hurting them. Unfair, but most people don't know the issue is latency.
>>>>>>>>
>>>>>>>> Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be with something that has a personal impact.
>>>>>>>>
>>>>>>>> Gene
>>>>>>>> -----------------------------------
>>>>>>>> Eugene Chang
>>>>>>>> eugene.chang@alum.mit.edu
>>>>>>>> +1-781-799-0233 (in Honolulu)
>>>>>>>>
>>>>>>>>> On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>
>>>>>>>>> If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. Did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.
>>>>>>>>>
>>>>>>>>> Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US military is interested in doing much more, which they have demonstrated, but I don't see it happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Bruce
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>> These days, if you want attention, you gotta buy it. A 50k half-page
>>>>>>>>> ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
>>>>>>>>> signed by the kinds of luminaries we got for the FCC wifi fight, would
>>>>>>>>> go a long way towards shifting the tide.
>>>>>>>>>
>>>>>>>>> On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <dave.taht@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason
>>>>>>>>>> <Jason_Livingood@comcast.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> The awareness & understanding of latency & impact on QoE is nearly unknown among reporters. IMO maybe there should be some kind of background briefings for reporters - maybe like a simple YouTube video explainer that is short & high-level & visual? Otherwise reporters will just continue to focus on what they know...
>>>>>>>>>>
>>>>>>>>>> That's a great idea. I have visions of crashing the Washington
>>>>>>>>>> correspondents' dinner, but perhaps
>>>>>>>>>> there is some set of gatherings journalists regularly attend?
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <starlink-bounces@lists.bufferbloat.net on behalf of starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>>>
>>>>>>>>>>> I still find it remarkable that reporters are still missing the
>>>>>>>>>>> meaning of the huge latencies for starlink, under load.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Bruce Perens K6BP
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>
>>>>>>>> --
>>>>>>>> Bruce Perens K6BP
>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
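[Editor's note: David Lang's FIFO-vs-flow-queueing example earlier in the thread can be sketched numerically. This is a toy round-robin scheduler, illustrative only; fq_codel and cake layer flow hashing, deficit accounting, and CoDel dropping on top of this core idea.]

```python
from collections import deque

def round_robin_order(flows):
    """Serve one packet per flow per turn: the core of flow queueing."""
    queues = {name: deque(pkts) for name, pkts in flows.items()}
    order = []
    while any(queues.values()):
        for q in queues.values():
            if q:
                order.append(q.popleft())
    return order

# A bulk transfer dumps 100 packets into the buffer at once; a single
# video-call packet arrives right behind them. Assume each packet takes
# about 1 ms to transmit on the bottleneck link.
flows = {"bulk": [f"bulk-{i}" for i in range(100)], "video": ["video-0"]}

fifo_order = flows["bulk"] + flows["video"]  # plain FIFO: arrival order
fq_order = round_robin_order(flows)          # per-flow round robin

print(fifo_order.index("video-0") + 1)  # FIFO: video packet leaves 101st (~101 ms)
print(fq_order.index("video-0") + 1)    # FQ:   video packet leaves 2nd  (~2 ms)
```

The bulk flow still gets nearly all of the capacity; the latency-sensitive flow simply no longer waits behind the bulk backlog, which is the fairness David describes, achieved without any traffic classification.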