From: Frantisek Borsik <frantisek.borsik@gmail.com>
Date: Wed, 28 Feb 2024 20:47:40 +0100
To: libreqos@lists.bufferbloat.net
Subject: [LibreQoS] Progress Report: LibreQoS Version 1.5

Hello to all,

Our very own Herbert just put together a *progress report on LibreQoS
v1.5*. Join our chat to follow the development, discuss anything
(W)ISP/latency/WiFi related, and even unrelated :-)

https://chat.libreqos.io/join/fvu3cerayyaumo377xwvpev6/

> It's been a while since I posted out in the open about what's going on
> behind the scenes with LibreQoS development. So here's a "State of 1.5"
> summary, as of a few minutes ago.
>
> *Unified Configuration*
> Instead of having the configuration split between /etc/lqos.conf and
> ispConfig.py, we now have all of it in one place - /etc/lqos.conf.
> Having a single place to send people makes support a lot easier, and
> there's a LOT more validation and sanity checking now.
>
> - New configuration format. *DONE*
> - Automatic conversion from the 1.4 configuration, including
>   migrations. *DONE*
> - Merged into the develop branch. *DONE*
>
> *Performance*
>
> - The old RTT system made up to 4 map lookups per packet (!). The new
>   one makes do with 1 lookup, at the expense of matching the old
>   system's readings only to within +/- 5%.
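As a rough illustration of the single-lookup idea - plain Python standing in for the actual in-kernel eBPF maps, with made-up names and a made-up flow key - a single flow-keyed map can hold all of the RTT state, so each packet costs one lookup:

```python
# Illustrative single-lookup RTT tracker (NOT the actual LibreQoS eBPF code).
# One flow-keyed map holds everything needed, so each packet = one lookup.
from dataclasses import dataclass

@dataclass
class FlowRtt:
    ewma_ns: float = 0.0   # smoothed RTT estimate, nanoseconds
    samples: int = 0

class RttTracker:
    def __init__(self, alpha: float = 0.25):
        self.alpha = alpha                      # EWMA smoothing factor
        self.flows: dict[tuple, FlowRtt] = {}   # the single per-flow map

    def record_sample(self, flow_key: tuple, rtt_ns: float) -> float:
        # One map access per packet: setdefault finds or creates the entry.
        entry = self.flows.setdefault(flow_key, FlowRtt())
        if entry.samples == 0:
            entry.ewma_ns = rtt_ns
        else:
            entry.ewma_ns += self.alpha * (rtt_ns - entry.ewma_ns)
        entry.samples += 1
        return entry.ewma_ns

tracker = RttTracker()
key = ("10.0.0.2", "93.184.216.34", 443)   # hypothetical flow key
for sample in (20e6, 24e6, 22e6):          # three RTT samples, in ns
    est = tracker.record_sample(key, sample)
```

The EWMA keeps per-packet work constant regardless of flow age; treat this purely as a model of the data layout, since the real implementation lives in kernel eBPF.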
>   That's a huge reduction in per-packet workload, so I'm happy with
>   it. Status: *Working on my machine, needs cleaning before push*
> - The new RTT system runs on the input side, so on NICs that do
>   receive-side steering the work is now spread across CPUs rather
>   than concentrated on a single CPU. *Working on my machine*
> - Enabled eBPF SKB metadata and bpf_xdp_adjust_meta (which requires a
>   5.5 kernel, but is only actually supported by NIC drivers from
>   around 5.18+). This allows the XDP side to store the TC and CPU map
>   data in a blob of metadata accompanying the packet data itself in
>   kernel memory. If support is detected (not every NIC does it), the
>   data is automatically passed between the XDP and TC flows - which
>   allows skipping an entire LPM lookup on the TC side. I've wanted
>   this for over a year. *Works on my machine; improves throughput by
>   0.5 Gbps single-stream on my really crappy testbed setup*
>
> *Bin-Packing*
> We're hoping to extend the bin-packing system to be both smarter and
> to include top-level trees (to avoid "oops, two important things are
> on one CPU" incidents).
>
> *Smart Weight Calculation*: partly done. We have a call that builds
> weights per-customer now. Weights are a combination of:
>
> - (If you have LTS) what did the customer do in this period last
>   week? This is *remarkably* predictable; people are really
>   consistent in aggregate.
> - What did the customer do in the last 5 minutes? (Doesn't require
>   LTS; reasonably accurate.)
> - A fraction of their defined plan.
>
> The actual bin-packing part isn't done yet, but doesn't look
> excessively tough.
>
> *Per-Flow Analysis*
> We've had long-running task items to: track RTT per flow, balance the
> reported host RTT between flows, make it possible to exclude
> endpoints from reporting (e.g. a UISP server hosted somewhere else),
> and begin per-ASN and per-target analysis.
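To make the bin-packing weight blend above concrete, here is a toy Python sketch - the coefficients, signal names, and the idea of simply summing the three terms are my own illustration, not the LibreQoS implementation:

```python
# Toy per-customer weight blend (illustrative only - not LibreQoS code).
# Combines: LTS history (same period last week), the last 5 minutes of
# observed usage, and a fraction of the customer's defined plan.
from typing import Optional

def customer_weight(lts_last_week_mbps: Optional[float],
                    last_5_min_mbps: float,
                    plan_mbps: float,
                    plan_fraction: float = 0.25) -> float:
    weight = plan_fraction * plan_mbps      # baseline: fraction of the plan
    weight += last_5_min_mbps               # recent behaviour (no LTS needed)
    if lts_last_week_mbps is not None:
        # LTS signal: usage in this period last week is remarkably predictive.
        weight += lts_last_week_mbps
    return weight

# Customer on a 200 Mbps plan, pushing 40 Mbps now, 80 Mbps this time last week.
w = customer_weight(lts_last_week_mbps=80.0, last_5_min_mbps=40.0,
                    plan_mbps=200.0)
```

A bin-packer would then place customers onto CPUs by descending weight; a greedy "heaviest item into the lightest bin" pass is the usual first cut.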
> We've also wanted to have flow information accessible, with a view to
> future enhancements - and to allow a LOT more querying.
>
> - Track TCP flows in real time. We count bytes/packets, estimate a
>   rate per flow, and track RTT in both directions. This is working
>   super-nicely on my test system.
> - Track UDP/ICMP in real time. We're aggregating bytes/packets and
>   estimating a rate per flow.
> - Web UI - display RTTs. RTTs are now combined per-host with a much
>   smarter algorithm that can optionally exclude data from any flow
>   below a threshold (in bits per second). The actual threshold is
>   still being figured out.
> - Web UI API - you can view the current state of all flows.
>
> There's a lot more to do here, mostly on the analytics and display
> side. But it is coming along hot and heavy, and looking pretty good.
>
> *Webserver Version*
> Rocket has been upgraded to the latest and greatest 1.5. A new UI is
> still coming; it may be a 1.6 item, since the scope keeps looking
> bigger every time it stares at me.

All the best,

Frank

Frantisek (Frank) Borsik
https://www.linkedin.com/in/frantisekborsik
Signal, Telegram, WhatsApp: +421919416714
iMessage, mobile: +420775230885
Skype: casioa5302ca
frantisek.borsik@gmail.com