From: Herbert Wolverson
Date: Tue, 1 Nov 2022 08:38:51 -0500
To: Dave Taht
Cc: libreqos
Subject: Re: [LibreQoS] Integration system, aka fun with graph theory

Dave: in this case, I'm running inside the eBPF VM - so I'm already in kernel space, but have a very limited set of functions available. bpf_ktime_get_ns() seems to be the approved way to get the clock. There was a big debate that it uses the kernel's monotonic clock, which takes longer to sample. I'm guessing they improved that, because I'm not seeing the delay that some people were complaining about (it's not free, but it's also a *lot* faster than the estimates I was finding).

> > Preseem's numbers are 0-74 green, 75-124 yellow, 125-200 red, and they just consolidate everything >200 to 200, basically so there's no 'terrible' color lol.

I am sorry to hear those numbers are considered to be good.
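(For anyone following along on the clock question: the approach is just bracketing the hot path with two reads of bpf_ktime_get_ns() and logging the difference - the same pattern behind the profiling numbers quoted further down. A minimal, purely illustrative sketch, not the actual cpumap-pping code, with placeholder names:)

```c
/* Illustrative only - not the cpumap-pping source. A bare-bones XDP program
 * that brackets its work with bpf_ktime_get_ns() (the kernel's monotonic
 * clock, in nanoseconds) and logs the difference. The debug print itself
 * adds overhead, as noted in the profiling discussion below. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_timed(struct xdp_md *ctx)
{
    __u64 start = bpf_ktime_get_ns();

    /* ... real packet mapping / RTT-tracking work would happen here ... */

    __u64 elapsed_ns = bpf_ktime_get_ns() - start;
    bpf_printk("xdp path: %llu ns", elapsed_ns);

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```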
It's interesting that you see adverts on Wisp Talk (the FB group) showing "wow, half my APs are now green!" (and showing about 50% green, 25% yellow, 25% red). When we had Preseem, we always took "red" to mean "oh no, something's really wrong" - and got to work fixing it. There were a couple of distant (many hops down the chain) APs that struggled to stay yellow, but red was always a sign for battle stations. I think that's part of why WISPs suffer from "jump ship as soon as something better comes along" - I'd be jumping ship too, if my ISP expected me to "enjoy" 125-200 ms RTT latency for any extended period of time (I'm pretty understanding about "something went wrong, we're working on it").

Geography does play a large part. I'll see if I can resurrect a tool I had that turned RTT latency measurements into a Google Maps heatmap overlay (updating, so you could see the orange/red areas moving when the network suffered). It can be pretty tough to find a good upstream far from towns, which affects everything. But more than that, deep chains of backhauls add up - and add up fast if you have any sort of congestion issue along the way. For example:

- We have a pretty decently connected upstream, averaging 8ms ping round-trip time to Cloudflare's DNS.
- Going down our "hottest" path (60 GHz AF60 LR to a tower, and then another one to a 3,000-bed apartment complex - peaks at 900 mbit/s every night; will peak at a lot more than that as soon as their check clears for some Siklu gear), we worked *stupidly hard* to keep the average ping time there at 9ms to Cloudflare's DNS. Even then, it's closer to 16ms when fully loaded. They are a topic for a future Cake discussion. :-)
- We have a few clients connected directly off of the facility with the upstream - and they all get great RTT times (a mix of 5.8 and 3.6 CBRS; Wave coming as soon as it's in stock at the same time as the guy with the money being at a keyboard!).
- Our largest (by # of customers) tower is 11 miles away, currently fed by 2 AirFiber 5XHD (ECMP balanced). We've worked really hard to keep that tower's average ping time to Cloudflare at 18ms. We have some nicer radios (the Cambium 400C is a beast) going in soon, which should help.
  - That tower feeds 4 micro-pops. The worst is near line-of-sight (trees) on a 3.6 GHz Medusa. It suffers a bit at 33ms round-trip ping times to Cloudflare. The best averages 22ms ping times to Cloudflare.
- We have a bunch more sites behind a 13 mile backhaul hop (followed by a 3 mile backhaul hop; geography meant going around a tree-covered ridge). We've had a heck of a time getting that up to scratch; the AF5XHD kinda worked, but the experience was pretty wretched. They were the testbed for the Cambium 400C, and now average 22ms to Cloudflare.
  - There are 15 (!) small towers behind that one! We eventually got the most distant one to 35ms to Cloudflare pings - but ripped/replaced SO much hardware to get there. (Even then, customer experience at some of those sites isn't what I'd like; I just tried a ping test from a customer running a 2.4 GHz "elevated" Ubiquiti dish to an old ePMP 1000 - at a tower 5 hops in. 45-50ms to Cloudflare. Not great.)

Physics dictates that the tiny towers, separated from the core by miles of backhaul and hops between them, aren't going to perform as well as the nearby ones. You *can* get them going well, but it's expensive and time-consuming.

One thing Preseem does pretty well is show daily reports in brightly colored bars, which "gamifies" fixing the issue.
If you have any gamers on staff, they start to obsess over turning everything green. It's great. :-)

The other thing I keep running into is network management. A few years ago, we bought a WISP with 20 towers and a few hundred customers (it was a friendly "I'm getting too unwell to keep doing this" purchase). The guy who set it up was pretty amazing; he had no networking experience whatsoever, but was pretty good at building things. So he'd built most of the towers himself, purely because he wanted to get better service out to some *very* rural parts of Missouri (including a whole bunch of non-profits and churches, which is our largest market). While it's impressive what he pulled off, he'd still just lost 200 customers to an electric coop's fiber build-out. His construction skills were awesome; his network skills - not so much. He had 1 public IP, connected to a 100mbit/s connection at his house. Every single tower (over a 60 mile spread) was connected to exactly one other tower. Every tower had backhauls in bridge mode, connected to a (Netgear consumer) switch at the tower. Every AP (all of them 2.4 GHz Bullet M2) was in bridge mode with client isolation turned off, connected to an assortment of CPEs (mostly AirGrid M2) - also in bridge mode. No DHCP; he had every customer type in their 192.168.x.y address (he had the whole /16 set up on the one link; no VLANs). Speed limits were set by turning on traffic shaping on the M2 CPEs... and he wondered why latency sometimes resembled remote control of a Mars rover, or why parts of the network would randomly die when somebody accidentally plugged their net connection into their router's LAN port. A couple of customers had foregone routers altogether, and you could see their Windows networking broadcasts traversing the network! I wish I could say that was unusual, but I've helped a handful of WISPs in similar situations.

One of the first things we did was get Preseem running (after adding every client into UNMS, as it was called then). That made a big difference, and gave good visibility into how bad it was. Then it was a long process of breaking the network down into routed chunks, enabling DHCP, replacing backhauls (there were a bunch of times when towers were connected in the order they were constructed, and never re-linked to a new tower a mile away - instead staying on one 20 miles down the chain), switching out Bullets, etc. Eventually, it's a great network - and growing again. I'm not sure we could've done that without a) great visibility from monitoring platforms, and b) decades of experience between us.

Longer-term, I'm hoping that we can help networks like that one. Great shaping and visibility go a *long* way. Building up some "best practices" and offering advice can go a *really long* way. (And good mapping makes a big difference; I'm not all that far from releasing a generally usable version of my LiDAR mapping suite; an ancient version is here - https://github.com/thebracket/rf-signals . You can get LiDAR data for about 2/3 of the US for free, now.)

On Mon, Oct 31, 2022 at 10:32 PM Dave Taht <dave.taht@gmail.com> wrote:

> Calling rdtsc directly used to be even faster than gettimeofday
>
> https://github.com/dtaht/libv6/blob/master/erm/includes/get_cycles.h
>
> On Mon, Oct 31, 2022 at 2:20 PM Herbert Wolverson via LibreQoS
> <libreqos@lists.bufferbloat.net> wrote:
> >
> > I'd agree with color coding (when it exists - no rush, IMO) being configurable.
> >
> > From the "how much delay are we adding" discussion earlier, I thought I'd do a little bit of profiling of the BPF programs themselves.
> > This is with the latest round of performance updates (https://github.com/thebracket/cpumap-pping/issues/2), so it's not measuring anything in production. I simply added a call to get the clock at the start, and again at the end - and log the difference. Measuring both XDP and TC BPF programs. (Execution goes: packet arrives -> XDP cpumap sends it to the right CPU -> egress -> TC sends it to the right classifier on the correct CPU and measures RTT latency.) This is adding about two clock checks and a debug log entry to execution time, so measuring it is slowing it down.
> >
> > The results are interesting, and mostly tell me to try a different measurement system. I'm seeing a pretty wide variance. Hammering it with an iperf session and a queue capped at 5 gbit/s: most of the TC timings were 40 nanoseconds - not a packet that requires extra tracking, already in cache, so proceed. When the TCP RTT tracker fired and recorded a performance event, it peaked at 5,900 nanoseconds. So the tc xdp program seems to be adding a worst-case of 0.0059 ms to packet times. The XDP side of things is typically in the 300-400 nanosecond range; I saw a handful of worst-case numbers in the 3400 nanosecond range. So the XDP side is adding 0.00349 ms. So - assuming worst case (and keeping the overhead added by the not-so-great monitoring), we're adding 0.0093 ms to packet transit time with the BPF programs.
> >
> > With a much more sedate queue (ceiling 500 mbit/s), I saw much more consistent numbers. The vast majority of XDP timings were in the 75-150 nanosecond range, and TC was a consistent 50-55 nanoseconds when it didn't have an update to perform - peaking very occasionally at 1500 nanoseconds. Only adding 0.00155 ms to packet times is pretty good.
> >
> > It definitely performs best on long streams, probably because the previous lookups are all in cache. This is also making me question the answer I found to "how long does it take to read the clock?" I'd seen ballpark estimates of 53 nanoseconds. Given that this reads the clock twice, that can't be right. (I'm *really* not sure how to measure that one.)
> >
> > Again - not a great test (I'll have to learn the perf system to do this properly - which in turn opens up the potential for flame graphs and some proper tracing). Interesting ballpark, though.
> >
> > On Mon, Oct 31, 2022 at 10:56 AM dan <dandenson@gmail.com> wrote:
> >>
> >> On Sun, Oct 30, 2022 at 8:21 PM Dave Taht via LibreQoS <libreqos@lists.bufferbloat.net> wrote:
> >>>
> >>> How about the idea of "metaverse-ready" metrics, with one table that is preseem-like and another that's
> >>>
> >>> blue = < 8ms
> >>> green = < 20ms
> >>> yellow = < 50ms
> >>> orange = < 70ms
> >>> red = > 70ms
> >>
> >> These need to be configurable. There are a lot of wisps that would have everything orange/red. We're considering anything under 100ms good on the rural plans. Also keep in mind that if you're tracking latency via pping etc, then you need some buffer in there for the internet at large. <70ms to Amazon is one thing - they're very well connected - but <70ms to most of the internet probably isn't very realistic and would make most charts look like poop.
> >
> > _______________________________________________
> > LibreQoS mailing list
> > LibreQoS@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/libreqos
>
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
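On dan's point about configurable thresholds, a small sketch of what a per-deployment color table could look like. The struct, defaults, and function names are made up for illustration - this isn't anything LibreQoS or Preseem actually ships - but it shows how the same RTT data can be bucketed differently for a well-connected metro plan versus a "100ms is fine" rural plan:

```c
/* Illustrative sketch only - not LibreQoS/Preseem code. Maps a measured RTT
 * to a color bucket using per-deployment thresholds, so "red" can mean
 * something different on a rural plan than in a metro. */
#include <stdio.h>

struct rtt_thresholds_ms {
    double blue, green, yellow, orange;   /* anything at or above orange is red */
};

static const char *rtt_color(double rtt_ms, const struct rtt_thresholds_ms *t)
{
    if (rtt_ms < t->blue)   return "blue";
    if (rtt_ms < t->green)  return "green";
    if (rtt_ms < t->yellow) return "yellow";
    if (rtt_ms < t->orange) return "orange";
    return "red";
}

int main(void)
{
    struct rtt_thresholds_ms metaverse = { 8, 20, 50, 70 };     /* Dave's suggested table */
    struct rtt_thresholds_ms rural     = { 20, 50, 100, 150 };  /* hypothetical rural plan */

    printf("45 ms:  %s (metaverse) / %s (rural)\n",
           rtt_color(45, &metaverse), rtt_color(45, &rural));
    printf("120 ms: %s (metaverse) / %s (rural)\n",
           rtt_color(120, &metaverse), rtt_color(120, &rural));
    return 0;
}
```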