Many ISPs need the kinds of quality shaping cake can do
From: dan <dandenson@gmail.com>
To: "Robert Chacón" <robert.chacon@jackrabbitwireless.com>
Cc: Herbert Wolverson <herberticus@gmail.com>,
	libreqos <libreqos@lists.bufferbloat.net>
Subject: Re: [LibreQoS] Integration system, aka fun with graph theory
Date: Mon, 31 Oct 2022 17:31:59 -0600	[thread overview]
Message-ID: <CAA_JP8Xtt2900u5g0h0ioXRin-XU-F2mjWVTTrTX2xVjAwXJ2g@mail.gmail.com> (raw)
In-Reply-To: <CAOZyJosyFYVY3eL_i7Jhekv83B8Y5erQ9Xfb7ONwEXP8bVsLxA@mail.gmail.com>


Preseem's numbers are 0-74 green, 75-124 yellow, 125-200 red, and they just
consolidate everything >200 down to 200, basically so there's no 'terrible'
color lol.  I think these numbers are reasonable for standard internet
service these days, for a 'default' value anyway.  >100ms isn't bad
service for most people, and most WISPs will have a LOT of traffic coming
through with >100ms from the far reaches of the internet.

Maybe just use reasonable defaults like Preseem's for integrated 'generic'
tracking, but then have a separate graph hitting some target services, i.e.
try to get game servers on there, plus AWS, Cloudflare, Azure, and Google
Cloud.  Show a radar graphic or similar.
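As a rough illustration of that kind of default banding (a minimal sketch; the function name and exact cap behavior are mine, not Preseem's or LibreQoS's actual code):

```python
# Hypothetical sketch: classify a measured RTT into Preseem-like color
# bands (0-74 ms green, 75-124 ms yellow, 125+ ms red), consolidating
# everything above 200 ms down to 200 so there is no 'terrible' band.
def classify_rtt(rtt_ms: float) -> tuple[str, float]:
    capped = min(rtt_ms, 200.0)  # cap the reported value at 200 ms
    if capped < 75.0:
        color = "green"
    elif capped < 125.0:
        color = "yellow"
    else:
        color = "red"
    return color, capped

print(classify_rtt(42))   # well within the green band
print(classify_rtt(250))  # red band, value capped to 200
```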

On Mon, Oct 31, 2022 at 3:57 PM Robert Chacón via LibreQoS <
libreqos@lists.bufferbloat.net> wrote:

> > I'd agree with color coding (when it exists - no rush, IMO) being
> configurable.
>
> Thankfully it will be configurable, and easily, through the InfluxDB
> interface.
> Any operator will be able to click the Gear icon above the tables and set
> the thresholds to whatever is desired.
> I've set it to include both a standard table and "metaverse-ready" table
> based on Dave's threshold recommendations.
>
>    - Standard (Preseem-like)
>       - green = < 75 ms
>       - yellow = < 100 ms
>       - red = > 100 ms
>    - Metaverse-Ready
>       - blue = < 8 ms
>       - green = < 20 ms
>       - yellow = < 50 ms
>       - orange = < 70 ms
>       - red = > 70 ms
>
> Are the defaults here reasonable at least? Should we change the Standard
> table thresholds a bit?
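Both tables above boil down to ordered (upper bound, color) lists, so a configurable version could be sketched like this (illustrative only; not the actual InfluxDB gear-icon configuration):

```python
# Illustrative threshold tables; upper bounds in ms, scanned in order.
# An operator would edit these pairs to move the thresholds.
STANDARD = [(75, "green"), (100, "yellow"), (float("inf"), "red")]
METAVERSE_READY = [
    (8, "blue"), (20, "green"), (50, "yellow"),
    (70, "orange"), (float("inf"), "red"),
]

def color_for(rtt_ms: float, table: list[tuple[float, str]]) -> str:
    # Return the color of the first band whose upper bound exceeds the RTT.
    for upper_ms, color in table:
        if rtt_ms < upper_ms:
            return color
    return table[-1][1]

print(color_for(90, STANDARD))         # yellow on the standard table
print(color_for(90, METAVERSE_READY))  # red on the metaverse-ready table
```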
>
> > Only adding 0.00155 ms to packet times is pretty good.
>
> Agreed! That's excellent. Great work on this so far; it looks like you're
> making tremendous progress.
>
> On Mon, Oct 31, 2022 at 3:20 PM Herbert Wolverson via LibreQoS <
> libreqos@lists.bufferbloat.net> wrote:
>
>> I'd agree with color coding (when it exists - no rush, IMO) being
>> configurable.
>>
>> From the "how much delay are we adding" discussion earlier, I thought I'd
>> do a little bit of profiling of the BPF programs themselves. This is with
>> the latest round of performance updates (
>> https://github.com/thebracket/cpumap-pping/issues/2), so it's not
>> measuring anything in production. I simply added a call to get the clock at
>> the start, and again at the end, and logged the difference, measuring both
>> the XDP and TC BPF programs. (Execution goes: (packet arrives) -> (XDP
>> cpumap sends it to the right CPU) -> (egress) -> (TC sends it to the right
>> classifier, on the correct CPU, and measures RTT latency).) This adds two
>> clock checks and a debug log entry to execution time, so the measurement
>> itself slows things down.
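The start/end clock-read pattern described above looks roughly like the following (a userspace Python analogue purely for illustration; the real programs read the clock in kernel BPF, e.g. via `bpf_ktime_get_ns()`, and the timed function here is a stand-in):

```python
import time

def handle_packet() -> None:
    """Stand-in for the per-packet BPF work being profiled."""
    sum(range(100))  # arbitrary small amount of work

# Read the clock at the start, again at the end, and log the difference.
# The two clock reads themselves add overhead, so this measures slightly
# more than the work alone -- the same caveat as in the BPF measurement.
start_ns = time.monotonic_ns()
handle_packet()
elapsed_ns = time.monotonic_ns() - start_ns
print(f"execution took {elapsed_ns} ns ({elapsed_ns / 1_000_000:.5f} ms)")
```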
>>
>> The results are interesting, and mostly tell me to try a different
>> measurement system. I'm seeing a pretty wide variance. Hammering it with an
>> iperf session and a queue capped at 5 gbit/s: most of the TC timings were
>> 40 nanoseconds - not a packet that requires extra tracking, already in
>> cache, so proceed. When the TCP RTT tracker fired and recorded a
>> performance event, it peaked at 5,900 nanoseconds. So the TC BPF program
>> seems to be adding a worst case of 0.0059 ms to packet times. The XDP side
>> of things is typically in the 300-400 nanosecond range; I saw a handful of
>> worst-case numbers in the 3,400 nanosecond range, so the XDP side is adding
>> at most 0.0034 ms. So, assuming worst case (and keeping the overhead added
>> by the not-so-great monitoring), we're adding *0.0093 ms* to packet transit
>> time with the BPF programs.
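Sanity-checking the worst-case arithmetic above (just the quoted peak numbers added together):

```python
NS_PER_MS = 1_000_000

tc_peak_ns = 5_900   # TC peak when the RTT tracker records an event
xdp_peak_ns = 3_400  # XDP worst-case ballpark
worst_case_ms = (tc_peak_ns + xdp_peak_ns) / NS_PER_MS
print(f"worst-case added transit time: {worst_case_ms} ms")  # 0.0093
```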
>>
>> With a much more sedate queue (ceiling 500 mbit/s), I saw much more
>> consistent numbers. The vast majority of XDP timings were in the 75-150
>> nanosecond range, and TC was a consistent 50-55 nanoseconds when it didn't
>> have an update to perform - peaking very occasionally at 1500 nanoseconds.
>> Only adding 0.00155 ms to packet times is pretty good.
>>
>> It definitely performs best on long streams, probably because the
>> previous lookups are all in cache. This is also making me question the
>> answer I found to "how long does it take to read the clock?" I'd seen
>> ballpark estimates of 53 nanoseconds. Given that this reads the clock
>> twice, that can't be right. (I'm *really* not sure how to measure that one)
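One crude way to put an upper bound on the clock-read cost itself is to time a tight loop of clock reads (a userspace analogue; the cost of `bpf_ktime_get_ns()` inside BPF will differ, and loop overhead inflates the estimate):

```python
import time

N = 100_000
start = time.monotonic_ns()
for _ in range(N):
    time.monotonic_ns()  # the call under test
elapsed = time.monotonic_ns() - start
per_read_ns = elapsed / N  # includes loop overhead, so an upper bound
print(f"~{per_read_ns:.0f} ns per clock read (upper bound)")
```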
>>
>> Again - not a great test (I'll have to learn the perf system to do this
>> properly - which in turn opens up the potential for flame graphs and some
>> proper tracing). Interesting ballpark, though.
>>
>> On Mon, Oct 31, 2022 at 10:56 AM dan <dandenson@gmail.com> wrote:
>>
>>>
>>>
>>> On Sun, Oct 30, 2022 at 8:21 PM Dave Taht via LibreQoS <
>>> libreqos@lists.bufferbloat.net> wrote:
>>>
>>>> How about the idea of "metaverse-ready" metrics, with one table that is
>>>> preseem-like and another that's
>>>>
>>>> blue =  < 8ms
>>>> green = < 20ms
>>>> yellow = < 50ms
>>>> orange  = < 70ms
>>>> red = > 70ms
>>>>
>>>
>>> These need to be configurable.  There are a lot of WISPs that would have
>>> everything orange/red.  We're considering anything under 100ms good on the
>>> rural plans.  Also keep in mind that if you're tracking latency via pping
>>> etc., then you need some buffer in there for the internet at large.  <70ms
>>> to Amazon is one thing, they're very well connected, but <70ms to most of
>>> the internet isn't very realistic and would make most charts look
>>> like poop.
>>>
>> _______________________________________________
>> LibreQoS mailing list
>> LibreQoS@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/libreqos
>>
>
>
> --
> Robert Chacón
> CEO | JackRabbit Wireless LLC <http://jackrabbitwireless.com>
> Dev | LibreQoS.io
>
>

