From: Frantisek Borsik <frantisek.borsik@gmail.com>
To: libreqos <libreqos@lists.bufferbloat.net>
Subject: [LibreQoS] Progress Report: LibreQoS Version 1.5
Date: Wed, 28 Feb 2024 20:47:40 +0100
Message-ID: <CAJUtOOimrokpuXPZevEAoZHej8jmVDtzvOGGUO-3ybNiwbB5nQ@mail.gmail.com>


Hello to all,

Our very own Herbert just put together a *progress report on LibreQoS
v1.5* - join our chat to follow the development, discuss anything
(W)ISP/latency/WiFi related, and even unrelated :-)
https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/

> It's been a while since I posted out in the open about what's going on
> behind-the-scenes with LibreQoS development. So here's a "State of 1.5"
> summary, as of a few minutes ago.
>
> *Unified Configuration*
> Instead of having configuration split between /etc/lqos.conf and
> ispConfig.py, we have all of the configuration in one place -
> /etc/lqos.conf. It makes support a lot easier to have a single place to
> send people, and there's a LOT more validation and sanity checking now.
>
>    - New Configuration Format. *DONE*
>
>
>    - Automatic conversion from 1.4 configuration, including migrations.
>    *DONE*
>
>
>    - Merged into the develop branch. *DONE*
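
To give a feel for the unified file Herbert describes above, here's a rough
TOML sketch from me - the key names below are placeholders for illustration,
not the actual 1.5 schema (that lives in the develop branch):

# /etc/lqos.conf - illustrative sketch only
version = "1.5"
lqos_directory = "/opt/libreqos/src"

[bridge]                          # interface settings
to_internet = "eth0"
to_network  = "eth1"

[queues]                          # settings that used to live in ispConfig.py
downlink_bandwidth_mbps = 10000
uplink_bandwidth_mbps = 10000
sqm = "cake diffserv4"

[long_term_stats]                 # optional LTS reporting
enable = false
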
>
> *Performance*
>
>    - The old RTT system made up to 4 map lookups per packet (!). The new
>    one makes do with a single lookup, at the cost of agreeing with the
>    old system's measurements only to within +/- 5%. That's a huge
>    reduction in per-packet workload, so I'm happy with it. Status:
>    *Working on my machine, needs cleaning before push*
>
>
>    - The new RTT system runs on the input side, so on NICs that do
>    receive-side steering the work is now spread between CPUs rather than
>    pinned to a single CPU. *Working on my machine*
>
>
>    - Enabled eBPF SKB metadata and bpf_xdp_adjust_meta (which requires a
>    5.5 kernel, though NIC driver support mostly arrived around 5.18+).
>    This allows the XDP side to store the TC and CPU map data in a blob of
>    metadata accompanying the packet data itself in kernel memory. If
>    support for this is detected (not every NIC does it), the data is
>    automatically passed between the XDP and TC programs - which lets us
>    skip an entire LPM lookup on the TC side. I've wanted this for over a
>    year. *Works on my machine, improves throughput by 0.5 Gbps single
>    stream on my really crappy testbed setup*
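
To make that metadata hand-off concrete, here's a rough eBPF C sketch of the
technique Herbert mentions (bpf_xdp_adjust_meta) - my own illustration, not
the actual LibreQoS code; the struct layout and classid value are made up:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

struct lqos_meta {
    __u32 tc_handle;   /* classid chosen by the XDP-side LPM lookup */
    __u32 cpu;         /* CPU the packet was steered to */
};

SEC("xdp")
int xdp_stage(struct xdp_md *ctx)
{
    /* Reserve room in front of the packet for a metadata blob.
     * Fails (non-zero) on drivers that don't support metadata. */
    if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct lqos_meta)))
        return XDP_PASS;               /* fall back: TC does its own lookup */

    void *data      = (void *)(long)ctx->data;
    void *data_meta = (void *)(long)ctx->data_meta;
    struct lqos_meta *meta = data_meta;
    if ((void *)(meta + 1) > data)     /* verifier-required bounds check */
        return XDP_PASS;

    meta->tc_handle = 0x00010001;      /* placeholder for the LPM result */
    meta->cpu       = bpf_get_smp_processor_id();
    return XDP_PASS;
}

SEC("tc")
int tc_stage(struct __sk_buff *skb)
{
    void *data      = (void *)(long)skb->data;
    void *data_meta = (void *)(long)skb->data_meta;
    struct lqos_meta *meta = data_meta;

    if ((void *)(meta + 1) > data)     /* no metadata: unsupported NIC */
        return TC_ACT_OK;              /* would fall back to the LPM lookup */

    skb->tc_classid = meta->tc_handle; /* skip the second LPM lookup */
    return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
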
>
>
> *Bin-Packing*
> We're hoping to extend the bin-packing system to be both smarter and to
> include top-level trees (to avoid "oops, two important things are on one
> CPU" incidents).
> *Smart Weight Calculation*: partly done. We have a call that builds
> weights per-customer now. Weights are a combination of:
>
>    - (if you have LTS) what did the customer do in this period last
>    week? This is *remarkably* predictable - people are really consistent
>    in aggregate.
>
>
>    - What did the customer do in the last 5 minutes? (Doesn't require
>    LTS, reasonably accurate)
>
>
>    - A fraction of their defined plan.
>
> The actual bin-packing part isn't done yet, but it doesn't look
> excessively tough.
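
Since the weighting is described but not shown, here's a tiny compilable C
sketch of the idea from me - the 0.5/0.3/0.2 blend and the greedy "put the
next customer on the lightest CPU" loop are illustrative guesses, not the
code that will ship:

#include <stdio.h>

struct customer {
    const char *name;
    double lts_last_week_mbps;   /* same period last week (0 = no LTS) */
    double last_5min_mbps;       /* recent live throughput */
    double plan_mbps;            /* provisioned plan rate */
};

static double weight(const struct customer *c)
{
    double w = 0.3 * c->last_5min_mbps + 0.2 * c->plan_mbps;
    if (c->lts_last_week_mbps > 0.0)          /* only when LTS data exists */
        w += 0.5 * c->lts_last_week_mbps;
    return w;
}

int main(void)
{
    struct customer customers[] = {
        { "tower-a", 80.0, 60.0, 100.0 },
        { "tower-b",  0.0,  5.0,  50.0 },
        { "tower-c", 40.0, 45.0, 100.0 },
    };
    double cpu_load[2] = { 0.0, 0.0 };        /* pretend two-CPU shaper */

    /* Real bin-packing would sort by descending weight first; this just
     * drops each entry onto whichever CPU is currently lightest. */
    for (int i = 0; i < 3; i++) {
        int lightest = (cpu_load[0] <= cpu_load[1]) ? 0 : 1;
        cpu_load[lightest] += weight(&customers[i]);
        printf("%s -> CPU %d (weight %.1f)\n",
               customers[i].name, lightest, weight(&customers[i]));
    }
    return 0;
}
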
>
> *Per-Flow Analysis*
> We've had long-running task items to: track RTT per flow, balance the
> reported host RTT between flows, make it possible to exclude endpoints
> from reporting (e.g. a UISP server hosted somewhere else), and begin
> per-ASN and per-target analysis. We've also wanted to have flow
> information accessible, with a view to future enhancements - and to
> allow a LOT more querying.
>
>    - Track TCP flows in real-time. We count bytes/packets, estimate a
>    rate per flow, track RTT in both directions. This is working super-nicely
>    on my test system.
>
>
>    - Track UDP/ICMP in real-time. We're aggregating bytes/packets and
>    estimating a rate per flow.
>
>
>    - Web UI - display RTTs. RTTs are now combined per-host with a much
>    smarter algorithm that can optionally exclude data from any flow
>    running beneath a threshold (in bits per second). The actual threshold
>    is still being figured out.
>
>
>    - Web UI API - you can view the current state of all flows.
>
> There's a lot more to do here, mostly the analytics and display side. But
> it is coming along hot and heavy, and looking pretty good.
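
For the "combine per-flow RTTs into one host RTT, ignoring slow flows" part,
here's a small C sketch of one way to do it - my illustration, not the
algorithm that ships; the threshold is a placeholder since, as noted above,
the real value is still being tuned:

#include <stdio.h>

struct flow {
    double rtt_ms;     /* smoothed RTT estimate for this flow */
    double rate_bps;   /* current estimated throughput */
};

/* Rate-weighted mean of flows above the threshold, so idle keep-alives
 * with huge RTTs stop skewing the per-host number. */
static double host_rtt(const struct flow *flows, int n, double min_rate_bps)
{
    double weighted_sum = 0.0, total_rate = 0.0;

    for (int i = 0; i < n; i++) {
        if (flows[i].rate_bps < min_rate_bps)
            continue;                         /* too slow to trust */
        weighted_sum += flows[i].rtt_ms * flows[i].rate_bps;
        total_rate   += flows[i].rate_bps;
    }
    if (total_rate == 0.0) {                  /* nothing cleared the bar */
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += flows[i].rtt_ms;
        return n ? sum / n : 0.0;
    }
    return weighted_sum / total_rate;
}

int main(void)
{
    struct flow flows[] = {
        { 22.0,  4e6 },    /* busy download */
        { 19.0,  1e6 },    /* video call */
        { 180.0, 2e3 },    /* idle keep-alive - excluded by the threshold */
    };
    printf("host RTT: %.1f ms\n", host_rtt(flows, 3, 100e3));
    return 0;
}
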
>
> *Webserver Version*
> Rocket has been upgraded to the latest and greatest 0.5. A new UI is
> still coming; it may be a 1.6 item since the scope keeps looking bigger
> every time it stares at me.


All the best,

Frank

Frantisek (Frank) Borsik



https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714

iMessage, mobile: +420775230885

Skype: casioa5302ca

frantisek.borsik@gmail.com

