Date: Fri, 29 Jul 2022 17:29:43 +0200
From: Sebastian Moeller <moeller0@gmx.de>
To: Dave Taht, Bjørn Ivar Teigen
Cc: Dave Taht via Starlink <starlink@lists.bufferbloat.net>, codel@lists.bufferbloat.net
Subject: Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities

Hi Dave,

I thought it was accepted knowledge that inter-packet delays across the internet are not reliable? Why are paced chirps immune from that problem? Or, asked differently: after all the noise filtering required for robust and reliable operation over the existing internet, is this still going to be noticeably faster than current slow start? In spite of the "slow" in the name, doubling every RTT is IMHO already a pretty aggressive growth function....

Sidenote: do you really think the second paper nailed paced chirping's coffin shut?
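Sebastian's point that exponential growth is already aggressive can be made concrete: a sender that doubles its window every RTT fills any bandwidth-delay product in logarithmically few round trips. A minimal sketch (the numbers are illustrative, not taken from any cited measurement):

```python
# Illustrative sketch: classic slow start doubles the congestion window
# every RTT, so it reaches a target window in about log2(target/IW) RTTs.
def rtts_to_fill(target_segments: int, iw: int = 10) -> int:
    """Count RTTs until a slow-start sender's window reaches target_segments."""
    cwnd, rtts = iw, 0
    while cwnd < target_segments:
        cwnd *= 2  # one doubling per RTT
        rtts += 1
    return rtts

# Even a 1 Gbit/s, 50 ms path (~4200 1500-byte segments in flight)
# is filled in under ten RTTs starting from IW10.
print(rtts_to_fill(4200))  # -> 9
```

So any scheme hoping to beat slow start has only a handful of RTTs of headroom to win back.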
On 29 July 2022 16:55:42 CEST, Dave Taht wrote:
>To give credit where credit is due, "packet chirping" had been
>explored before, in the context of the L4S early-marking ECN effort:
>
>https://www.bobbriscoe.net/presents/1802paced-chirps/1802paced-chirps.pdf
>
>It died here: https://bobbriscoe.net/projects/netsvc_i-f/chirp_pfldnet10.pdf
>For now.
>
>On Thu, Jul 28, 2022 at 6:50 AM Dave Taht wrote:
>>
>> Thanks for the comments, everyone!
>>
>> On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
>> <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering, for instance, or triggering an algorithm to smooth over missing data) is functionally equivalent to loss, and therefore buffering beyond that point makes things worse by delaying more traffic. The point at which this kicks in varies a lot between applications, though, so some kind of classification might still make sense.
>> >
>> > In a way, I think fq_codel does this classification implicitly by treating sparse and non-sparse flows differently.
>>
>> Toke's paper with the implicit flow analysis of fq_codel is here:
>> http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
>> It's a really nice feature, and helps a lot when also applied to wifi
>> station scheduling.
>>
>> I have sometimes thought that increasing the quantum to account for two
>> paced packets in a row (at high rates) was a good idea;
>> other times, having paced transports analyze the "beat frequency" of
>> sending packets through fq_codel vis-a-vis the ack flow characteristics
>> (for example, by filtering) might be useful.
>>
>> Imagine that instead of sending packets on a fixed but increasing
>> pacing schedule within an RTT, thus:
>>
>> PPPPPPPPPP                 # IW10 burst
>> PP PP PP PP PP             # often about 24 packets in what we
>>                            # think the RTT is
>>
>> PP PP PP PP PP PP PP
>>
>> PPPPPPPPPPPPPPPPPP
>>
>> PPPPPPPPPPPPPPPPPPPPPPP    # steady buffering and ultimately a drop (and
>> yes, this model is inaccurate in a zillion ways; forgive me, for
>> purposes of extrapolation in ASCII text)
>>
>> If instead...
>>
>> you broke up the pacing within an RTT on an actual curve, selecting
>> some random segment out of pi as your actual starting point, say at
>> 3.14159 here:
>>
>> PPPPPP PPPPP   PPP
>> PPPPP PPPPPPPB
>> PPPPPPPPP PPP PP
>>
>> 3.14159265358979323846264338327950288419716939937510
>>   58209749445923078164062862089986280348253421170679
>>   82148086513282306647093844609550582231725359408128
>>   48111745028410270193852110555964462294895493038196
>>   44288109756659334461284756482337867831652712019091
>>   45648566923460348610454326648213393607260249141273
>>   72458700660631558817488152092096282925409171536436
>>   78925903600113305305488204665213841469519415116094
>>   33057270365759591953092186117381932611793105118548
>>   07446237996274956735188575272489122793818301194912
>>   98336733624406566430860213949463952247371907021798
>>   60943702770539217176293176752384674818467669405132
>>   00056812714526356082778577134275778960917363717872
>>   14684409012249534301465495853710507922796892589235
>>   42019956112129021960864034418159813629774771309960
>>   51870721134999999837297804995105973173281609631859
>>   50244594553469083026425223082533446850352619311881
>>   71010003137838752886587533208381420617177669147303
>>   59825349042875546873115956286388235378759375195778
>>   18577805321712268066130019278766111959092164201989
>>
>> what could you learn?
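Dave's pi-digit idea might be sketched as follows: treat successive digits of pi as a reproducible pseudo-random gap pattern for the packets sent within one RTT. The gap scaling below is purely hypothetical, my own construction to illustrate the shape of the idea:

```python
# Toy sketch (not a real transport): derive intra-RTT send offsets from
# the digits of pi instead of a fixed pacing interval, so the probe
# pattern is irregular yet reproducible by both endpoints.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def chirp_offsets(n_packets: int, rtt_ms: float, start: int = 0) -> list[float]:
    """Send offsets (ms) within one RTT, spaced by successive pi digits."""
    # Map each digit 0..9 to a gap weight 1..10 so no gap is zero.
    gaps = [int(d) + 1 for d in PI_DIGITS[start:start + n_packets]]
    total = sum(gaps)
    offsets, t = [], 0.0
    for g in gaps:
        offsets.append(t)
        t += rtt_ms * g / total  # normalize so the pattern spans one RTT
    return offsets

print(chirp_offsets(5, 100.0))  # -> [0.0, 8.0, 28.0, 36.0, 60.0]
```

The receiver (or the sender, watching acks) could then compare the arrival pattern against the known transmitted pattern to estimate how much the path reshuffled the gaps.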
>>
>> > - Bjørn Ivar
>> >
>> > On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0@gmx.de> wrote:
>> >>
>> >> Hi all,
>> >>
>> >> > On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >> >
>> >> > Hi everyone,
>> >> >
>> >> > Interesting paper, Dave. I've got a few thoughts:
>> >> >
>> >> > I like the split into delay-sensitive and loss-sensitive data.
>> >>
>> >> However, real data is often slightly different (i.e., not neatly either delay- or loss-sensitive)... E.g., "real-time" games have both delay and loss sensitivity (similarly for VoIP), yet both can deal with occasional lost or delayed packets (if the delay is large enough for a packet to be re-ordered past, say, the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large-scale bulk downloads are tolerant of both delay and occasional loss. So if we think about a state space spanned by a delay-sensitivity and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
>> >>
>> >> About the rest of the paper I have nothing to contribute, since I did not spend the time to work through it.
>> >>
>> >> Regards
>> >> Sebastian
>> >>
>> >> > Different applications can have different needs, and this split allows a queuing algorithm to take those differences into account. It's not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
>> >> >
>> >> > That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore, the conclusions of the paper should be taken with a grain of salt, in my opinion.
>> >> >
>> >> > On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >> > Hi Dave,
>> >> >
>> >> > IMHO the problem w.r.t. the applicability of most models from
>> >> > queueing theory is that they only work for load < 1, whereas
>> >> > we are using the network with load values ~1 (i.e., around one) due to
>> >> > congestion-control feedback loops that drive the bottleneck link
>> >> > to saturation (unless you consider application-limited traffic sources).
>> >> >
>> >> > To be fair, there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results, though. Even if the distributions in the paper are general, they still assume samples are IID, which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse-than-average performance worse and more frequent than IID models might suggest.
>> >> >
>> >> > Regards,
>> >> > Bjørn Ivar
>> >> >
>> >> >
>> >> > Regards,
>> >> > Roland
>> >> >
>> >> > On 27.07.22 at 17:34, Dave Taht via Starlink wrote:
>> >> > > Occasionally I pass along a recent paper that I don't understand in
>> >> > > the hope that someone can enlighten me.
>> >> > > This is one of those occasions, where I am trying to leverage what I
>> >> > > understand of existing fq_codel behaviors against real traffic.
>> >> > >
>> >> > > https://www.hindawi.com/journals/mpe/2022/4539940/
>> >> > >
>> >> > > "Compared to the previous study on finite-buffer M/M/1 priority queues
>> >> > > with time and space priority, where service times are identical and
>> >> > > exponentially distributed for both types of traffic, in our model we
>> >> > > assume that service times are different and are generally distributed
>> >> > > for different types of traffic. As a result, our model is more
>> >> > > suitable for the performance analysis of communication systems
>> >> > > accommodating multiple types of traffic with different service-time
>> >> > > distributions. For the proposed queueing model, we derive the
>> >> > > queue-length distributions, loss probabilities, and mean waiting times
>> >> > > of both types of traffic, as well as the push-out probability of
>> >> > > delay-sensitive traffic."
>> >> > _______________________________________________
>> >> > Starlink mailing list
>> >> > Starlink@lists.bufferbloat.net
>> >> > https://lists.bufferbloat.net/listinfo/starlink
>> >> >
>> >> > --
>> >> > Bjørn Ivar Teigen
>> >> > Head of Research
>> >> > +47 47335952 | bjorn@domos.no | www.domos.no
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

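Bjørn's caveat about IID assumptions in the thread above can be illustrated with a toy discrete-time simulation (my own construction, not from the paper under discussion): two arrival processes with identical mean load, one memoryless and one positively correlated, see very different loss at the same finite buffer.

```python
import random

def sim_loss(buffer_size: int, rho: float, corr: float,
             n: int = 200_000, seed: int = 1) -> float:
    """Loss fraction at a finite FIFO fed at mean load `rho`.

    Arrivals come in bursts of 2 packets per "on" slot; the server
    drains 1 packet per slot. With probability `corr` a slot repeats
    the previous on/off state, adding positive correlation while
    keeping the same long-run mean load.
    """
    rng = random.Random(seed)
    p_on = rho / 2            # bursts of 2 -> on-probability for mean load rho
    on = False
    q = losses = arrivals = 0
    for _ in range(n):
        if rng.random() >= corr:   # otherwise: repeat the previous state
            on = rng.random() < p_on
        if on:
            for _ in range(2):
                arrivals += 1
                if q < buffer_size:
                    q += 1
                else:
                    losses += 1
        if q:
            q -= 1                 # one departure per slot
    return losses / max(arrivals, 1)

# Same mean load (0.9), very different loss once arrivals are correlated:
print("uncorrelated:", sim_loss(10, 0.9, 0.0))
print("correlated  :", sim_loss(10, 0.9, 0.9))
```

The correlated case produces long on-runs that overflow the buffer far more often, which is the intuition behind "IID models understate the bad periods" when congestion-control feedback makes real traffic self-correlated.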