From: Dave Taht
Date: Wed, 19 Oct 2022 06:58:04 -0700
To: Herbert Wolverson
Cc: libreqos@lists.bufferbloat.net
Subject: Re: [LibreQoS] In BPF pping - so far

could I coax you to adopt flent?

apt-get install flent netperf irtt fping

You sometimes have to compile netperf yourself with --enable-demo on some systems. There are a bunch of python libs needed for the gui, but only on the client.

Then you can run a really gnarly test series and plot the results over time.

flent --socket-stats --step-size=.05 -t 'the-test-conditions' -H the_server_name rrul # 110 other tests

On Wed, Oct 19, 2022 at 6:44 AM Herbert Wolverson via LibreQoS wrote:
>
> Hey,
>
> Testing the current version ( https://github.com/thebracket/cpumap-pping-hackjob ), it's doing better than I hoped. This build has shared (not per-cpu) maps, and a userspace daemon (xdp_pping) to extract and reset stats.
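>
> (Conceptually, the shared map looks something like the sketch below - illustrative only, not the exact code from the repo; the field and map names are my shorthand:)
>
> #include <linux/bpf.h>
> #include <bpf/bpf_helpers.h>
>
> // One shared (not per-CPU) hash map, pinned so that both tc
> // instances and the xdp_pping userspace tool see the same data.
> struct rtt_stats {
>     __u32 min_ms;
>     __u32 max_ms;
>     __u64 sum_ms;     // avg = sum_ms / samples
>     __u32 samples;
> };
>
> struct {
>     __uint(type, BPF_MAP_TYPE_HASH);      // shared across CPUs
>     __uint(max_entries, 65536);
>     __type(key, __u32);                   // tc handle (major:minor)
>     __type(value, struct rtt_stats);
>     __uint(pinning, LIBBPF_PIN_BY_NAME);  // visible to both tc instances and userspace
> } rtt_map SEC(".maps");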
>
> My testing environment has grown a bit:
> * ShaperVM - running Ubuntu Server and LibreQoS, with the new cpumap-pping-hackjob version of xdp-cpumap.
> * ExtTest - running Ubuntu Server, set as 100.64.1.1. Hosts an iperf server.
> * ClientInt1 - running Ubuntu Server (minimal), set as 100.64.1.2. Hosts an iperf client.
> * ClientInt2 - running Ubuntu Server (minimal), set as 100.64.1.3. Hosts an iperf client.
>
> ClientInt1, ClientInt2 and one interface (LAN facing) of ShaperVM are on a virtual switch.
> ExtTest and the other interface (WAN facing) of ShaperVM are on a different virtual switch.
>
> These are all on a host machine running Windows 11, a core i7 12th gen, 32 GB RAM and a fast SSD setup.
>
> TEST 1: DUAL STREAMS, LOW THROUGHPUT
>
> For this test, LibreQoS is configured:
> * Two APs, each with 5 Gbit/s max.
> * 100.64.1.2 and 100.64.1.3 set up as CPEs, each limited to about 100 Mbit/s. They map to 1:5 and 2:5 respectively (separate CPUs).
> * Set to use Cake.
>
> On each client, roughly simultaneously run: iperf -c 100.64.1.1 -t 500 (for a long run). Running xdp_pping yields correct results:
>
> [
> {"tc":"1:5", "avg" : 4, "min" : 3, "max" : 5, "samples" : 11},
> {"tc":"2:5", "avg" : 4, "min" : 3, "max" : 5, "samples" : 11},
> {}]
>
> Or when I waited a while to gather/reset:
>
> [
> {"tc":"1:5", "avg" : 4, "min" : 3, "max" : 6, "samples" : 60},
> {"tc":"2:5", "avg" : 4, "min" : 3, "max" : 5, "samples" : 60},
> {}]
>
> The ShaperVM shows no errors, just periodic logging that it is recording data. CPU is about 2-3% on two CPUs, zero on the others (as expected).
>
> After 500 seconds of continual iperfing, each client reported a throughput of 104 Mbit/s and 6.06 GBytes of data transmitted.
>
> So for smaller streams, I'd call this a success.
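>
> (For reference, the gather/reset pass is conceptually something like the sketch below - not the actual xdp_pping source. It assumes the rtt_stats struct sketched earlier: walk the shared map, print each record as JSON, and zero it as the "reset" point:)
>
> #include <bpf/bpf.h>
> #include <stdio.h>
>
> void dump_and_reset(int map_fd)
> {
>     __u32 key, next, *cur = NULL;
>     struct rtt_stats val, zero = {0};
>
>     printf("[\n");
>     while (bpf_map_get_next_key(map_fd, cur, &next) == 0) {
>         if (bpf_map_lookup_elem(map_fd, &next, &val) == 0 && val.samples)
>             printf("{\"tc\":\"%u:%u\", \"avg\" : %llu, \"min\" : %u, \"max\" : %u, \"samples\" : %u},\n",
>                    next >> 16, next & 0xFFFF,  // tc handle major:minor
>                    val.sum_ms / val.samples, val.min_ms, val.max_ms, val.samples);
>         // Zero in place rather than delete, to avoid the classic
>         // delete-while-iterating pitfall with get_next_key. The BPF
>         // side would re-initialize min on the first new sample.
>         bpf_map_update_elem(map_fd, &next, &zero, BPF_EXIST);
>         key = next;
>         cur = &key;
>     }
>     printf("{}]\n");
> }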
>
> TEST 2: DUAL STREAMS, HIGH THROUGHPUT
>
> For this test, LibreQoS is configured:
> * Two APs, each with 5 Gbit/s max.
> * 100.64.1.2 and 100.64.1.3 set up as CPEs, each limited to 5 Gbit/s! Mapped to 1:5 and 2:5 respectively (separate CPUs).
>
> Run iperf -c 100.64.1.1 -t 500 on each client at the same time.
>
> xdp_pping shows results, too:
>
> [
> {"tc":"1:5", "avg" : 4, "min" : 1, "max" : 7, "samples" : 58},
> {"tc":"2:5", "avg" : 7, "min" : 3, "max" : 11, "samples" : 58},
> {}]
>
> [
> {"tc":"1:5", "avg" : 5, "min" : 4, "max" : 8, "samples" : 13},
> {"tc":"2:5", "avg" : 8, "min" : 7, "max" : 10, "samples" : 13},
> {}]
>
> The ShaperVM shows two CPUs pegging between 70 and 90 percent.
>
> After 500 seconds of continual iperfing, the two clients reported throughputs of 2.72 Gbit/s (158 GBytes) and 3.89 Gbit/s (226 GBytes).
>
> Maxing out Hyper-V like this is inducing a bit of latency (which is to be expected), but it's not bad. I also forgot to disable hyperthreading, and looking at the host performance it is sometimes running the second virtual CPU on an underpowered "fake" CPU.
>
> So for two large streams, I think we're doing pretty well also!
>
> TEST 3: DUAL STREAMS, SINGLE CPU
>
> This test is designed to try and blow things up. It's the same as test 2, but both CPEs are set to the same CPU (1), using TC handles 1:5 and 1:6.
>
> ShaperVM CPU1 maxed out in the high 90s; the other CPUs were idle. The pping stats start to show a bit of degradation in performance for pounding it so hard:
>
> [
> {"tc":"1:6", "avg" : 10, "min" : 9, "max" : 19, "samples" : 24},
> {"tc":"1:5", "avg" : 10, "min" : 8, "max" : 18, "samples" : 24},
> {}]
>
> For whatever reason, it smoothed out over time:
>
> [
> {"tc":"1:6", "avg" : 10, "min" : 9, "max" : 12, "samples" : 50},
> {"tc":"1:5", "avg" : 10, "min" : 8, "max" : 13, "samples" : 50},
> {}]
>
> Surprisingly (to me), I didn't encounter errors. Each client received 2.22 Gbit/s of throughput, over 129 GBytes of data.
>
> TEST 4: DUAL STREAMS, 50 SUB-STREAMS
>
> This test is also designed to break things. Same as test 3, but using iperf -c 100.64.1.1 -P 50 -t 120 - 50 parallel sub-streams, to try and really tax the flow tracking. (Shorter time window because I really wanted to go and find coffee.)
>
> ShaperVM CPU sat at around 80-97%, tending towards 97%. pping results show that this torture test is worsening performance, and there's always lots of samples in the buffer:
>
> [
> {"tc":"1:6", "avg" : 23, "min" : 19, "max" : 27, "samples" : 49},
> {"tc":"1:5", "avg" : 24, "min" : 19, "max" : 27, "samples" : 49},
> {}]
>
> This test also ran better than I expected. Each VM showed around 2.4 Gbit/s in total throughput at the end of the iperf session. You can definitely see some latency creeping in as I make the system work hard - which is expected, though I'm not sure I expected quite that much.
>
> WHAT'S NEXT & CONCLUSION
>
> I noticed that I forgot to turn off efficient power management on my VMs and host, and left hyperthreading on by mistake. So that hurts overall performance.
>
> The base system seems to be working pretty solidly, at least for small tests. Next up, I'll be removing extraneous debug reporting code, removing some code paths that don't do anything but report, and looking for any small optimization opportunities. I'll then re-run these tests. Once that's done, I hope to find a maintenance window on my WISP and try it with actual traffic.
>
> I also need to re-run these tests without the pping system to provide some before/after analysis.
>
> On Tue, Oct 18, 2022 at 1:01 PM Herbert Wolverson wrote:
>>
>> It's probably not entirely thread-safe right now (ran into some issues reading per_cpu maps back from userspace; hopefully, I'll get that figured out) - but the commits I just pushed have it basically working on single-stream testing. :-)
>>
>> Set up cpumap as usual, and periodically run xdp-pping. This gives you per-connection RTT information in JSON:
>>
>> [
>> {"tc":"1:5", "avg" : 5, "min" : 5, "max" : 5, "samples" : 1},
>> {}]
>>
>> (With the extra {} because I'm not tracking the tail and haven't done comma removal). The tool also empties the various maps used to gather data, acting as a "reset" point. There's a max of 60 samples per queue, in a ring-buffer setup (so the newest will start to overwrite the oldest).
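>>
>> (The per-queue ring is roughly the sketch below - illustrative, not the exact code; real BPF needs the explicit bounds check to satisfy the verifier:)
>>
>> #define MAX_SAMPLES 60
>>
>> struct rtt_ring {
>>     __u32 samples[MAX_SAMPLES];  // RTT samples for one queue
>>     __u32 next;                  // next slot to write
>> };
>>
>> static __always_inline void record_rtt(struct rtt_ring *r, __u32 rtt)
>> {
>>     __u32 slot = r->next % MAX_SAMPLES;  // bounded index
>>     if (slot < MAX_SAMPLES)              // explicit check the verifier can see
>>         r->samples[slot] = rtt;          // newest overwrites oldest
>>     r->next = slot + 1;
>> }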
>>
>> I'll start trying to test on a larger scale now.
>>
>> On Mon, Oct 17, 2022 at 3:34 PM Robert Chacón wrote:
>>>
>>> Hey Herbert,
>>>
>>> Fantastic work! Super exciting to see this coming together, especially so quickly.
>>> I'll test it soon.
>>> I understand and agree with your decision to omit certain features (ICMP tracking, DNS tracking, etc.) to optimize performance for our use case. Like you said, in order to merge the functionality without a performance hit, merging them is sort of the only way right now. Otherwise there would be a lot of redundancy and lost throughput for an ISP's use. Though hopefully long term there will be a way to keep all projects working independently but interoperably with a plugin system of some kind.
>>>
>>> By the way, I'm making some headway on LibreQoS v1.3. Focusing on optimizations for high sub counts (8000+ subs) as well as stateful changes to the queue structure.
>>> I'm working to set up a physical lab to test high-throughput and high-client-count scenarios.
>>> When testing beyond ~32,000 filters we get "no space left on device" from xdp-cpumap-tc, which I think relates to the BPF map size limitation you mentioned. Maybe in the coming months we can take a look at that.
>>>
>>> Anyway, great work on the cpumap-pping program! Excited to see more on this.
>>>
>>> Thanks,
>>> Robert
>>>
>>> On Mon, Oct 17, 2022 at 12:45 PM Herbert Wolverson via LibreQoS wrote:
>>>>
>>>> Hey,
>>>>
>>>> My current (unfinished) progress on this is now available here: https://github.com/thebracket/cpumap-pping-hackjob
>>>>
>>>> I mean it about the warnings; this isn't at all stable or debugged - and I can't promise that it won't unleash the nasal demons (to use a popular C++ phrase). The name is descriptive! ;-)
>>>>
>>>> With that said, I'm pretty happy so far:
>>>>
>>>> * It runs only on the classifier - which xdp-cpumap-tc has nicely shunted onto a dedicated CPU. It has to run on both the inbound and outbound classifiers, since otherwise it would only see half the conversation.
>>>> * It does assume that your ingress and egress CPUs are mapped to the same interface; I do that anyway in BracketQoS. Not doing that opens up a potential world of pain, since writes to the shared maps would require a locking scheme (see the sketch after this list). Too much locking, and you lose all of the benefit of using multiple CPUs to begin with.
>>>> * It is pretty wasteful of RAM, but most of the shaper systems I've worked with have lots of it.
>>>> * I've been gradually removing features that I don't want for BracketQoS. A hypothetical future "useful to everyone" version wouldn't do that.
>>>> * Rate limiting is working, but I removed the requirement for a shared configuration provided from userland - so right now it's always set to report at 1 second intervals per stream.
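>>>>
>>>> (For illustration, roughly what that locking scheme would look like if the two directions could land on different CPUs - a sketch, not code from the repo; cpumap-pping avoids this by keeping both directions on one CPU:)
>>>>
>>>> #include <linux/bpf.h>
>>>> #include <bpf/bpf_helpers.h>
>>>>
>>>> struct flow_state {
>>>>     struct bpf_spin_lock lock;  // serializes cross-CPU writers
>>>>     __u64 last_tsval;
>>>>     __u64 last_seen_ns;
>>>> };
>>>>
>>>> static __always_inline void update_flow(struct flow_state *fs,
>>>>                                         __u64 tsval, __u64 now)
>>>> {
>>>>     bpf_spin_lock(&fs->lock);   // every update pays this cost
>>>>     fs->last_tsval = tsval;
>>>>     fs->last_seen_ns = now;
>>>>     bpf_spin_unlock(&fs->lock);
>>>> }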
>>>>
>>>> My testbed is currently 3 Hyper-V VMs - a simple "client" and "world", and a "shaper" VM in between running a slightly hacked-up LibreQoS. iperf from "client" to "world" (with Libre set to allow 10 Gbit/s max, via a cake/HTB queue setup) is around 5 Gbit/s at present, on my test PC (the host is a core i7, 12th gen, 12 cores - 64 GB RAM and fast SSDs).
>>>>
>>>> Output currently consists of debug messages reading:
>>>> cpumap/0/map:4-1371 [000] D..2. 515.399222: bpf_trace_printk: (tc) Flow open event
>>>> cpumap/0/map:4-1371 [000] D..2. 515.399239: bpf_trace_printk: (tc) Send performance event (5,1), 374696
>>>> cpumap/0/map:4-1371 [000] D..2. 515.399466: bpf_trace_printk: (tc) Flow open event
>>>> cpumap/0/map:4-1371 [000] D..2. 515.399475: bpf_trace_printk: (tc) Send performance event (5,1), 247069
>>>> cpumap/0/map:4-1371 [000] D..2. 516.405151: bpf_trace_printk: (tc) Send performance event (5,1), 5217155
>>>> cpumap/0/map:4-1371 [000] D..2. 517.405248: bpf_trace_printk: (tc) Send performance event (5,1), 4515394
>>>> cpumap/0/map:4-1371 [000] D..2. 518.406117: bpf_trace_printk: (tc) Send performance event (5,1), 4481289
>>>> cpumap/0/map:4-1371 [000] D..2. 519.406255: bpf_trace_printk: (tc) Send performance event (5,1), 4255268
>>>> cpumap/0/map:4-1371 [000] D..2. 520.407864: bpf_trace_printk: (tc) Send performance event (5,1), 5249493
>>>> cpumap/0/map:4-1371 [000] D..2. 521.406664: bpf_trace_printk: (tc) Send performance event (5,1), 3795993
>>>> cpumap/0/map:4-1371 [000] D..2. 522.407469: bpf_trace_printk: (tc) Send performance event (5,1), 3949519
>>>> cpumap/0/map:4-1371 [000] D..2. 523.408126: bpf_trace_printk: (tc) Send performance event (5,1), 4365335
>>>> cpumap/0/map:4-1371 [000] D..2. 524.408929: bpf_trace_printk: (tc) Send performance event (5,1), 4154910
>>>> cpumap/0/map:4-1371 [000] D..2. 525.410048: bpf_trace_printk: (tc) Send performance event (5,1), 4405582
>>>> cpumap/0/map:4-1371 [000] D..2. 525.434080: bpf_trace_printk: (tc) Send flow event
>>>> cpumap/0/map:4-1371 [000] D..2. 525.482714: bpf_trace_printk: (tc) Send flow event
>>>>
>>>> The times haven't been tweaked yet. The (5,1) is the tc handle major/minor, allocated by the xdp-cpumap parent.
>>>> I get pretty low latency between VMs; I'll set up a test with some real-world data very soon.
>>>>
>>>> I plan to keep hacking away, but feel free to take a peek.
>>>>
>>>> Thanks,
>>>> Herbert
>>>>
>>>> On Mon, Oct 17, 2022 at 10:14 AM Simon Sundberg wrote:
>>>>>
>>>>> Hi, thanks for adding me to the conversation. Just a couple of quick
>>>>> notes.
>>>>>
>>>>> On Mon, 2022-10-17 at 16:13 +0200, Toke Høiland-Jørgensen wrote:
>>>>> > [ Adding Simon to Cc ]
>>>>> >
>>>>> > Herbert Wolverson via LibreQoS writes:
>>>>> >
>>>>> > > Hey,
>>>>> > >
>>>>> > > I've had some pretty good success with merging xdp-pping (
>>>>> > > https://github.com/xdp-project/bpf-examples/blob/master/pping/pping.h )
>>>>> > > into xdp-cpumap-tc ( https://github.com/xdp-project/xdp-cpumap-tc ).
>>>>> > >
>>>>> > > I ported over most of the xdp-pping code, and then changed the entry point
>>>>> > > and packet parsing code to make use of the work already done in
>>>>> > > xdp-cpumap-tc (it's already parsed a big chunk of the packet, no need to do
>>>>> > > it twice). Then I switched the maps to per-cpu maps, and had to pin them -
>>>>> > > otherwise the two tc instances don't properly share data.
>>>>> > >
>>>>>
>>>>> I guess the xdp-cpumap-tc ensures that the same flow is processed on
>>>>> the same CPU core at both ingress and egress. Otherwise, if a flow may
>>>>> be processed by different cores on ingress and egress, the per-CPU maps
>>>>> will not really work reliably, as each core will have a different view
>>>>> on the state of the flow, whether there's been a previous packet with a
>>>>> certain TSval from that flow, etc.
>>>>>
>>>>> Furthermore, if a flow is always processed on the same core (on both
>>>>> ingress and egress) I think per-CPU maps may be a bit wasteful of
>>>>> memory. From my understanding, the keys for per-CPU maps are still
>>>>> shared across all CPUs; it's just that each CPU gets its own value. So
>>>>> all CPUs will then have their own data for each flow, but it's only the
>>>>> CPU processing the flow that will have any relevant data for the flow,
>>>>> while the remaining CPUs will just have an empty state for that flow.
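>>>>>
>>>>> (To illustrate the difference - a sketch, with flow_key/flow_state as
>>>>> assumed placeholder types:)
>>>>>
>>>>> struct {
>>>>>     __uint(type, BPF_MAP_TYPE_PERCPU_HASH); // one value copy per CPU:
>>>>>     __uint(max_entries, 16384);             // a flow pinned to one core
>>>>>     __type(key, struct flow_key);           // leaves N-1 empty copies
>>>>>     __type(value, struct flow_state);
>>>>> } flow_map_percpu SEC(".maps");
>>>>>
>>>>> struct {
>>>>>     __uint(type, BPF_MAP_TYPE_HASH);        // one shared copy; safe if a
>>>>>     __uint(max_entries, 16384);             // flow's packets never run
>>>>>     __type(key, struct flow_key);           // concurrently on two cores
>>>>>     __type(value, struct flow_state);
>>>>> } flow_map_global SEC(".maps");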
>>>>>
>>>>> Under the same assumption that packets within the same flow are always
>>>>> processed on the same core, there should generally not be any
>>>>> concurrency issues with having a global (non-per-CPU) map either, as
>>>>> packets from the same flow cannot be processed concurrently then (and
>>>>> thus no concurrent access to the same value in the map). I am, however,
>>>>> still very unclear on whether there's any considerable performance
>>>>> difference between the global and per-CPU map versions if the same key
>>>>> is not accessed concurrently.
>>>>>
>>>>> > > Right now, output
>>>>> > > is just stubbed - I've still got to port the perfmap output code. Instead,
>>>>> > > I'm dumping a bunch of extra data to the kernel debug pipe, so I can see
>>>>> > > roughly what the output would look like.
>>>>> > >
>>>>> > > With debug enabled and just logging I'm now getting about 4.9 Gbits/sec on
>>>>> > > single-stream iperf between two VMs (with a shaper VM in the middle). :-)
>>>>> >
>>>>> > Just FYI, that "just logging" is probably the biggest source of
>>>>> > overhead, then. What Simon found was that sending the data from kernel
>>>>> > to userspace is one of the most expensive bits of epping, at least when
>>>>> > the number of data points goes up (which it does as additional flows are
>>>>> > added).
>>>>>
>>>>> Yeah, reporting individual RTTs when there's lots of them (you may get
>>>>> upwards of 1000 RTTs/s per flow) is not only problematic in terms of
>>>>> direct overhead from the tool itself, but also becomes demanding for
>>>>> whatever you use all those RTT samples for (i.e. the need to log, parse,
>>>>> analyze etc. a very large amount of RTTs). One way to deal with that is
>>>>> of course to just apply some sort of sampling (the -r/--rate-limit and
>>>>> -R/--rtt-rate options).
>>>>>
>>>>> > > So my question: how would you prefer to receive this data? I'll have to
>>>>> > > write a daemon that provides userspace control (periodic cleanup as well as
>>>>> > > reading the performance stream), so the world's kinda our oyster. I can
>>>>> > > stick to Kathie's original format (and dump it to a named pipe, perhaps?),
>>>>> > > a condensed format that only shows what you want to use, an efficient
>>>>> > > binary format if you feel like parsing that...
>>>>> >
>>>>> > It would be great if we could combine efforts a bit here so we don't
>>>>> > fork the codebase more than we have to. I.e., if "upstream" epping and
>>>>> > whatever daemon you end up writing can agree on data format etc. that
>>>>> > would be fantastic! Added Simon to Cc to facilitate this :)
>>>>> >
>>>>> > Briefly, what I've discussed before with Simon was to have the ability to
>>>>> > aggregate the metrics in the kernel (WiP PR [0]) and have a userspace
>>>>> > utility periodically pull them out. What we discussed was doing this
>>>>> > using an LPM map (which is not in that PR yet). The idea would be that
>>>>> > userspace would populate the LPM map with the keys (prefixes) they
>>>>> > wanted statistics for (in the LibreQOS context that could be one key per
>>>>> > customer, for instance). Epping would then do a map lookup into the LPM,
>>>>> > and if it gets a match it would update the statistics in that map entry
>>>>> > (keeping a histogram of latency values seen, basically). Simon's PR
>>>>> > below uses this technique, where userspace will "reset" the histogram
>>>>> > every time it loads it by swapping out two different map entries when it
>>>>> > does a read; this allows you to control the sampling rate from
>>>>> > userspace, and you'll just get the data since the last time you polled.
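>>>>> >
>>>>> > (Conceptually, something like this sketch - illustrative only, not
>>>>> > what's in Simon's PR; names and bucket layout are made up:)
>>>>> >
>>>>> > struct lpm_key {
>>>>> >     __u32 prefixlen;    // LPM_TRIE keys must start with the prefix length
>>>>> >     __u32 addr;         // IPv4 address, network byte order
>>>>> > };
>>>>> >
>>>>> > struct rtt_hist {
>>>>> >     __u64 buckets[32];  // e.g. log2-spaced latency buckets
>>>>> > };
>>>>> >
>>>>> > struct {
>>>>> >     __uint(type, BPF_MAP_TYPE_LPM_TRIE);
>>>>> >     __uint(max_entries, 10000);            // one per customer prefix
>>>>> >     __uint(map_flags, BPF_F_NO_PREALLOC);  // required for LPM tries
>>>>> >     __type(key, struct lpm_key);
>>>>> >     __type(value, struct rtt_hist);
>>>>> > } agg_map SEC(".maps");
>>>>> >
>>>>> > // On each RTT sample, in the BPF program:
>>>>> > //   struct lpm_key k = { .prefixlen = 32, .addr = ip->daddr };
>>>>> > //   struct rtt_hist *h = bpf_map_lookup_elem(&agg_map, &k);
>>>>> > //   if (h) __sync_fetch_and_add(&h->buckets[bucket_of(rtt)], 1);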
>>>>>
>>>>> Thanks, Toke, for summarizing both the current state and the plan going
>>>>> forward. I will just note that this PR (and all my other work with
>>>>> ePPing/BPF-PPing/XDP-PPing/I-suck-at-names-PPing) will be more or less
>>>>> on hold for a couple of weeks right now as I'm trying to finish up a
>>>>> paper.
>>>>>
>>>>> > I was thinking that if we all can agree on the map format, then your
>>>>> > polling daemon could be one userspace "client" for that, and the epping
>>>>> > binary itself could be another; and we could keep compatibility between
>>>>> > the two, so we don't duplicate effort.
>>>>> >
>>>>> > Similarly, refactoring of the epping code itself so it can be plugged
>>>>> > into the cpumap-tc code would be a good goal...
>>>>>
>>>>> Should probably do that... at some point. In general I think it's a bit
>>>>> of an interesting problem to think about how to chain multiple XDP/tc
>>>>> programs together in an efficient way. Most XDP and tc programs will do
>>>>> some amount of packet parsing, and when you have many chained programs
>>>>> parsing the same packets this obviously becomes a bit wasteful. At the
>>>>> same time, it would be nice if one didn't need to manually merge
>>>>> multiple programs together into a single one like this to get rid of
>>>>> the duplicated parsing, or at least to make the process of merging those
>>>>> programs as simple as possible.
>>>>>
>>>>> > -Toke
>>>>> >
>>>>> > [0] https://github.com/xdp-project/bpf-examples/pull/59
>>>>>
>>>>> When you send an e-mail to Karlstad University, we will process your personal data.
>>>
>>> --
>>> Robert Chacón
>>> CEO | JackRabbit Wireless LLC

--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz

Dave Täht
CEO, TekLibre, LLC