* [Bloat] Thanks to developers / htb+fq_codel ISP shaper
@ 2021-01-14 17:59 Robert Chacon
2021-01-14 19:46 ` Toke Høiland-Jørgensen
2021-01-21 4:25 ` Dave Taht
0 siblings, 2 replies; 9+ messages in thread
From: Robert Chacon @ 2021-01-14 17:59 UTC (permalink / raw)
To: bloat
Hello everyone,
I am new here, my name is Robert. I operate a small ISP in the US. I wanted
to post here to thank Dave Täht, as well as the dozens of contributors to
the fq_codel and cake projects.
I created a simple python application that uses htb+fq_codel to shape my
customers' traffic, and have seen great performance improvements. I am
maintaining it as an open source project for other ISPs to use at
https://github.com/rchac/LibreQoS
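For readers curious what an htb+fq_codel per-customer hierarchy looks like, here is a small Python sketch in the spirit of what such a shaper generates. It is illustrative only (not LibreQoS's actual code); the interface name, class numbering, and rates are invented:

```python
# Hypothetical sketch of a per-customer htb+fq_codel tc hierarchy.
# Not LibreQoS's real code; names and numbers are made up.

def shaper_commands(interface, customers):
    """Emit tc commands: a root HTB, one class + fq_codel leaf per customer.

    customers: list of (customer_id, rate_mbit, ceil_mbit) tuples.
    """
    cmds = [f"tc qdisc add dev {interface} root handle 1: htb default 15"]
    for i, (cid, rate, ceil) in enumerate(customers, start=2):
        cmds.append(
            f"tc class add dev {interface} parent 1: classid 1:{i} "
            f"htb rate {rate}mbit ceil {ceil}mbit  # {cid}"
        )
        # fq_codel on each leaf keeps latency low when the HTB class saturates.
        cmds.append(f"tc qdisc add dev {interface} parent 1:{i} fq_codel")
    return cmds

cmds = shaper_commands("eth0", [("cust-a", 50, 55), ("cust-b", 100, 105)])
for c in cmds:
    print(c)
```

A real deployment would also need tc filters (or an eBPF classifier) to map each customer's IPs into their class.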
Mostly I just wanted to thank Dave and everyone else here for working to
make fq_codel and cake possible. These are hugely helpful projects that
have helped improve our network, and thousands of other networks around the
world. Looking at discussions from fellow ISPs who use Preseem and Sensei,
which use fq_codel, small ISP networks across the world are hugely
benefiting from fq_codel. They are now able to retain customers who would
have otherwise been lost - thanks to fq_codel and the many optimizations
you all made possible. I hope more ISPs are able to deploy fq_codel and/or
cake using our tool or commercial applications like Preseem and Sensei.
Amid COVID, fq_codel is really important for keeping work-from-home and
remote learning connectivity stable. Thank you all!
Thanks,
Robert Chacon
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-14 17:59 [Bloat] Thanks to developers / htb+fq_codel ISP shaper Robert Chacon
@ 2021-01-14 19:46 ` Toke Høiland-Jørgensen
2021-01-14 22:07 ` Robert Chacon
2021-01-21 4:25 ` Dave Taht
1 sibling, 1 reply; 9+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-14 19:46 UTC (permalink / raw)
To: Robert Chacon, bloat
Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:
> Hello everyone,
>
> I am new here, my name is Robert. I operate a small ISP in the US. I wanted
> to post here to thank Dave Täht, as well as the dozens of contributors to
> the fq_codel and cake projects.
Thank you for reaching out! It's always fun to hear about real-world
deployments of this technology, and it's great to hear that it's working
well for you! :)
> I created a simple python application that uses htb+fq_codel to shape my
> customers' traffic, and have seen great performance improvements. I am
> maintaining it as an open source project for other ISPs to use at
> https://github.com/rchac/LibreQoS
Cool! What kind of performance are you seeing? The README mentions being
limited by the BPF hash table size, but can you actually shape 2000
customers on one machine? On what kind of hardware and at what rate(s)?
-Toke
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-14 19:46 ` Toke Høiland-Jørgensen
@ 2021-01-14 22:07 ` Robert Chacon
2021-01-15 12:30 ` Toke Høiland-Jørgensen
2021-01-21 4:38 ` Dave Taht
0 siblings, 2 replies; 9+ messages in thread
From: Robert Chacon @ 2021-01-14 22:07 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
> Cool! What kind of performance are you seeing? The README mentions being
> limited by the BPF hash table size, but can you actually shape 2000
> customers on one machine? On what kind of hardware and at what rate(s)?
On our production network our peak throughput is 1.5Gbps from 200 clients,
and it works very well.
We use a simple consumer-class AMD 2700X CPU in production because
utilization of the shaper VM is ~15% at 1.5Gbps load.
Customers get reliably capped within ±2Mbps of their allocated htb/fq_codel
bandwidth, which is very helpful to control network congestion.
Here are some graphs from RRUL performed on our test bench hypervisor:
https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/fq_codel_1000_subs_4G.png
In that example, bandwidth for the "subscriber" client VM was set to 4Gbps.
1000 IPv4 IPs and 1000 IPv6 IPs were in the filter hash table of LibreQoS.
The test bench server has an AMD 3900X running Ubuntu in Proxmox. 4Gbps
utilizes 10% of the VM's 12 cores. Paravirtualized VirtIO network drivers
are used and most offloading types are enabled.
In our setup, VM networking multiqueue isn't enabled (it kept disrupting
traffic flow), so 6Gbps is probably the most it can achieve like this. Our
qdiscs in this VM may be limited to one core because of that.
I suspect in a non-virtualized setup, or one with multiqueue, it can handle
much more throughput.
Either way, for now it's surprising to me how well it works, and I'm
just grateful for it haha.
Kudos to you and your peers for making fq_codel so efficient!
- Robert
On Thu, Jan 14, 2021 at 12:46 PM Toke Høiland-Jørgensen <toke@toke.dk>
wrote:
> Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:
>
> > Hello everyone,
> >
> > I am new here, my name is Robert. I operate a small ISP in the US. I
> wanted
> > to post here to thank Dave Täht, as well as the dozens of contributors to
> > the fq_codel and cake projects.
>
> Thank you for reaching out! It's always fun to hear about real-world
> deployments of this technology, and it's great to hear that it's working
> well for you! :)
>
> > I created a simple python application that uses htb+fq_codel to shape my
> > customers' traffic, and have seen great performance improvements. I am
> > maintaining it as an open source project for other ISPs to use at
> > https://github.com/rchac/LibreQoS
>
> Cool! What kind of performance are you seeing? The README mentions being
> limited by the BPF hash table size, but can you actually shape 2000
> customers on one machine? On what kind of hardware and at what rate(s)?
>
> -Toke
>
--
*Robert Chacón* Owner
*M* (915) 730-1472
*E* robert.chacon@jackrabbitwireless.com
*JackRabbit Wireless LLC*
P.O. Box 222111
El Paso, TX 79913
*jackrabbitwireless.com* <http://jackrabbitwireless.com>
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-14 22:07 ` Robert Chacon
@ 2021-01-15 12:30 ` Toke Høiland-Jørgensen
2021-01-21 5:50 ` Robert Chacon
2021-01-21 4:38 ` Dave Taht
1 sibling, 1 reply; 9+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-15 12:30 UTC (permalink / raw)
To: Robert Chacon; +Cc: bloat
Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:
>> Cool! What kind of performance are you seeing? The README mentions being
>> limited by the BPF hash table size, but can you actually shape 2000
>> customers on one machine? On what kind of hardware and at what rate(s)?
>
> On our production network our peak throughput is 1.5Gbps from 200 clients,
> and it works very well.
> We use a simple consumer-class AMD 2700X CPU in production because
> utilization of the shaper VM is ~15% at 1.5Gbps load.
> Customers get reliably capped within ±2Mbps of their allocated htb/fq_codel
> bandwidth, which is very helpful to control network congestion.
>
> Here are some graphs from RRUL performed on our test bench hypervisor:
> https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/fq_codel_1000_subs_4G.png
> In that example, bandwidth for the "subscriber" client VM was set to 4Gbps.
> 1000 IPv4 IPs and 1000 IPv6 IPs were in the filter hash table of LibreQoS.
> The test bench server has an AMD 3900X running Ubuntu in Proxmox. 4Gbps
> utilizes 10% of the VM's 12 cores. Paravirtualized VirtIO network drivers
> are used and most offloading types are enabled.
> In our setup, VM networking multiqueue isn't enabled (it kept disrupting
> traffic flow), so 6Gbps is probably the most it can achieve like this. Our
> qdiscs in this VM may be limited to one core because of that.
I suspect the issue you had with multiqueue is that it requires per-CPU
partitioning on a per-customer basis to work well. This is possible to
do with XDP, as Jesper demonstrates here:
https://github.com/netoptimizer/xdp-cpumap-tc
With this it should be possible to scale the hardware queues across
multiple CPUs properly, and you should be able to go to much higher
rates by just throwing more CPU cores at it. At least on bare metal; not
sure if the VM virt-drivers have the needed support yet...
-Toke
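The per-CPU layout described above can be sketched as follows: an mq root exposes one child per hardware queue, and each child gets its own HTB tree, so a single core shapes only its slice of customers. This is an illustrative Python command generator, not part of xdp-cpumap-tc; the handle numbering and the round-robin customer pinning are assumptions:

```python
# Sketch of per-CPU shaping under an mq root (assumed layout, not
# xdp-cpumap-tc's actual tooling): one HTB instance per hardware queue.

def per_cpu_commands(interface, n_queues, customers):
    """customers: list of (customer_id, rate_mbit); spread round-robin over queues."""
    cmds = [f"tc qdisc add dev {interface} root handle 7FFF: mq"]
    for q in range(n_queues):
        # One independent HTB tree per hardware queue / CPU core.
        cmds.append(
            f"tc qdisc add dev {interface} parent 7FFF:{q + 1:x} handle {q + 1:x}: htb"
        )
    for i, (cid, rate) in enumerate(customers):
        q = i % n_queues + 1
        cmds.append(
            f"tc class add dev {interface} parent {q:x}: classid {q:x}:{i + 10:x} "
            f"htb rate {rate}mbit  # {cid} pinned to queue {q}"
        )
        cmds.append(f"tc qdisc add dev {interface} parent {q:x}:{i + 10:x} fq_codel")
    return cmds

cmds = per_cpu_commands("eth0", 2, [("a", 50), ("b", 100)])
```

The missing piece, which xdp-cpumap-tc supplies, is steering each customer's packets to the queue (and thus the CPU) that owns their HTB class.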
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-14 17:59 [Bloat] Thanks to developers / htb+fq_codel ISP shaper Robert Chacon
2021-01-14 19:46 ` Toke Høiland-Jørgensen
@ 2021-01-21 4:25 ` Dave Taht
2021-01-21 5:44 ` Robert Chacon
1 sibling, 1 reply; 9+ messages in thread
From: Dave Taht @ 2021-01-21 4:25 UTC (permalink / raw)
To: Robert Chacon; +Cc: bloat
Thank you, Robert, but far more than dozens of folk were, and remain,
involved in the effort. For starters, Jim Gettys fired it all up, and
like him, I've mostly retreated to the sidelines, working on other
things. Perhaps 2021, with a new administration and FCC chair, and so
many families stuck at home, will be the year the average user finally
gets gear that does more of the right things by default, especially
for videoconferencing, led by the smaller ISPs in competitive markets.
I've never figured out how to get the message "more out"; we ran a funding drive
once that got more PR. Half the donations I take from
https://www.patreon.com/dtaht go to keeping the flent servers alive
and the other half buys top ramen. I just lost my
dinghy in a windstorm and can't even get "home" at the moment, and one of
my big frustrations from the harbor vantage point whenever I manage to
get back...
is having to run cake in front of my cell phone in order to make it
behave... and another is to see all the very poor offloads from major
manufacturers that claim an SQM implementation that doesn't actually
work....
But all grousing aside:
THANK YOU VERY MUCH for open sourcing a set of tools that help out a
smaller ISP. We're all in this bloat together, and by sharing code and
ideas we can make for a faster, more reliable, better internet, for
everyone.
I am quite behind on reading the bloat list, and this thread made my day.
thx
On Thu, Jan 14, 2021 at 9:59 AM Robert Chacon
<robert.chacon@jackrabbitwireless.com> wrote:
>
> Hello everyone,
>
> I am new here, my name is Robert. I operate a small ISP in the US. I wanted to post here to thank Dave Täht, as well as the dozens of contributors to the fq_codel and cake projects.
>
> I created a simple python application that uses htb+fq_codel to shape my customers' traffic, and have seen great performance improvements. I am maintaining it as an open source project for other ISPs to use at https://github.com/rchac/LibreQoS
>
> Mostly I just wanted to thank Dave and everyone else here for working to make fq_codel and cake possible. These are hugely helpful projects that have helped improve our network, and thousands of other networks around the world. Looking at discussions from fellow ISPs who use Preseem and Sensei, which use fq_codel, small ISP networks across the world are hugely benefiting from fq_codel. They are now able to retain customers who would have otherwise been lost - thanks to fq_codel and the many optimizations you all made possible. I hope more ISPs are able to deploy fq_codel and/or cake using our tool or commercial applications like Preseem and Sensei. Amid COVID, fq_codel is really important for keeping work-from-home and remote learning connectivity stable. Thank you all!
>
> Thanks,
> Robert Chacon
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman
dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-14 22:07 ` Robert Chacon
2021-01-15 12:30 ` Toke Høiland-Jørgensen
@ 2021-01-21 4:38 ` Dave Taht
1 sibling, 0 replies; 9+ messages in thread
From: Dave Taht @ 2021-01-21 4:38 UTC (permalink / raw)
To: Robert Chacon; +Cc: Toke Høiland-Jørgensen, bloat
On Thu, Jan 14, 2021 at 2:07 PM Robert Chacon <
robert.chacon@jackrabbitwireless.com> wrote:
> > Cool! What kind of performance are you seeing? The README mentions being
> > limited by the BPF hash table size, but can you actually shape 2000
> > customers on one machine? On what kind of hardware and at what rate(s)?
>
> On our production network our peak throughput is 1.5Gbps from 200 clients,
> and it works very well.
> We use a simple consumer-class AMD 2700X CPU in production because
> utilization of the shaper VM is ~15% at 1.5Gbps load.
> Customers get reliably capped within ±2Mbps of their allocated
> htb/fq_codel bandwidth, which is very helpful to control network congestion.
>
> Here are some graphs from RRUL performed on our test bench hypervisor:
> https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/fq_codel_1000_subs_4G.png
> In that example, bandwidth for the "subscriber" client VM was set to
> 4Gbps. 1000 IPv4 IPs and 1000 IPv6 IPs were in the filter hash table of
> LibreQoS.
>
What I really love about this plot is that it is now very possible for
your customers to play live music together with reasonable latencies
and jitter, under load. Existing tools like "jacktrip" should "just
work". For a cool talk about the jacktrip revolution:
https://www.npr.org/2020/11/21/937043051/musicians-turn-to-new-software-to-play-together-online
I'm using cake to keep things under control on my testbed network,
using ardour as the mixing tool, and achieving about 6ms of inherent
latency.
I am hoping to sink a bit of time into galene.org and various web
browsers this year to finally get closer to what the lola project has
been doing for a while on the video front.
> The test bench server has an AMD 3900X running Ubuntu in Proxmox. 4Gbps
> utilizes 10% of the VM's 12 cores. Paravirtualized VirtIO network drivers
> are used and most offloading types are enabled.
> In our setup, VM networking multiqueue isn't enabled (it kept disrupting
> traffic flow), so 6Gbps is probably the most it can achieve like this. Our
> qdiscs in this VM may be limited to one core because of that.
> I suspect in a non-virtualized setup, or one with multiqueue, it can
> handle much more throughput.
> Either way for now it's surprising to me how well it works and I'm just
> grateful for it haha.
> Kudos to you and your peers for making fq_codel so efficient!
>
> - Robert
>
> On Thu, Jan 14, 2021 at 12:46 PM Toke Høiland-Jørgensen <toke@toke.dk>
> wrote:
>
>> Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:
>>
>> > Hello everyone,
>> >
>> > I am new here, my name is Robert. I operate a small ISP in the US. I
>> wanted
>> > to post here to thank Dave Täht, as well as the dozens of contributors
>> to
>> > the fq_codel and cake projects.
>>
>> Thank you for reaching out! It's always fun to hear about real-world
>> deployments of this technology, and it's great to hear that it's working
>> well for you! :)
>>
>> > I created a simple python application that uses htb+fq_codel to shape my
>> > customers' traffic, and have seen great performance improvements. I am
>> > maintaining it as an open source project for other ISPs to use at
>> > https://github.com/rchac/LibreQoS
>>
>> Cool! What kind of performance are you seeing? The README mentions being
>> limited by the BPF hash table size, but can you actually shape 2000
>> customers on one machine? On what kind of hardware and at what rate(s)?
>>
>> -Toke
>>
>
>
> --
> [image: photograph]
>
>
> *Robert Chacón* Owner
> *M* (915) 730-1472
> *E* robert.chacon@jackrabbitwireless.com
> *JackRabbit Wireless LLC*
> P.O. Box 222111
> El Paso, TX 79913
> *jackrabbitwireless.com* <http://jackrabbitwireless.com>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
--
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman
dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-21 4:25 ` Dave Taht
@ 2021-01-21 5:44 ` Robert Chacon
0 siblings, 0 replies; 9+ messages in thread
From: Robert Chacon @ 2021-01-21 5:44 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
I can assure you, it's making a real difference for our customers. On
the "WISP Talk" Facebook group, dozens of small ISPs like ours regularly
report the improvements fq_codel (via Preseem and Sensei) has had on
their networks and customer retention.
In our case, most of our customers recently switched to us from the
incumbent telco - a major company with higher bandwidth offerings but also
high pricing.
We mostly use 802.11ac PtMP radios, which *should* mean lots of latency
spikes and odd performance in such a noisy 5GHz environment.
Fq_codel has really stabilized performance of our links, and qdisc stats
have been able to help us to track down problematic connections before the
customer knows anything is wrong.
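The kind of qdisc-stats monitoring described here can be as simple as watching per-class drop counters. A hedged sketch of such a check, where the sample text is hand-written in the usual `tc -s qdisc show` output shape rather than captured from a real box:

```python
import re

# Illustrative parser for the "dropped" counters in `tc -s qdisc show`
# output. SAMPLE mimics the usual output shape; it is invented data.

SAMPLE = """\
qdisc fq_codel 110: parent 1:10 limit 10240p flows 1024 quantum 1514
 Sent 123456789 bytes 98765 pkt (dropped 42, overlimits 0 requeues 0)
qdisc fq_codel 111: parent 1:11 limit 10240p flows 1024 quantum 1514
 Sent 55555 bytes 444 pkt (dropped 9001, overlimits 0 requeues 0)
"""

def drops_per_class(text):
    """Return {parent classid: dropped packet count} from tc -s qdisc output."""
    result = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"qdisc \S+ \S+ parent (\S+)", line)
        if m:
            current = m.group(1)  # remember which qdisc the stats belong to
        m = re.search(r"dropped (\d+)", line)
        if m and current:
            result[current] = int(m.group(1))
    return result

stats = drops_per_class(SAMPLE)
# A sudden spike in one class's drops often flags a struggling customer link.
```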
Amazingly, nearly every one of our subscribers describes having a more
stable video conferencing experience than with the incumbent - since we
added fq_codel.
It offers small ISPs like ours a unique competitive advantage, while also
alleviating headaches for many families who were having performance issues
on other providers at the start of the pandemic when everything switched to
video conferencing.
I'm very sorry about the dinghy. I can imagine that's stressful so I hope
you're able to find a good replacement ASAP.
I'm going to post about LibreQoS on some ISP forums I frequent and see if
we can't get some modest contributions to the patreon.
Many of us small ISP operators owe a great deal to Linux traffic
control and all the related open source projects for keeping us going
against the big guys.
Your work has put bloat and network performance in the spotlight and
helped bring many disparate devs and stakeholders together to make
Linux tc work well.
Given how many ISPs pay $5000+/yr for proprietary wrappers around
htb+fq_codel, I am hopeful they'll see the value in supporting
long-term advances in Linux networking, and contribute back!
As someone who makes music on linux myself (bitwig mostly, which connects
to JACK) I am very excited to mess with JackTrip. How neat! Also thank you
for familiarizing me with Galene - I had always hoped more webrtc related
p2p video platforms would take off and I'm going to keep an eye on it.
Everyone could benefit from p2p and decentralization there, plus us ISPs
would stop getting yelled at when Zoom servers go down haha.
I will check the ecn mark vs drop stats tomorrow and see what they are at.
On Wed, Jan 20, 2021 at 9:26 PM Dave Taht <dave.taht@gmail.com> wrote:
> Thank you robert, but way more than dozens of folk were and remain more
> involved
> in the effort. For starters Jim Gettys fired it all up, and like him,
> I'm mostly retreated to
> the sidelines, working on other things. Perhaps in 2021, with a new
> administration
> and FCC chair, and so many families stuck at home... will be the year
> the average user will finally get gear that does more of the right
> things, especially for videoconferencing, by default, led by the
> smaller ISPs in competitive markets.
>
> I've never figured out how to get the message "more out"; we ran a funding
> drive
> once that got more PR. Half the donations I take from
> https://www.patreon.com/dtaht go to keeping the flent servers alive
> and the other half buys top ramen. I just lost my
> dinghy in a windstorm and can't even get "home" at the moment, and one of
> my big frustrations from the harbor vantage point whenever I manage to
> get back...
>
> is having to run cake in front of my cell phone in order to make it
> behave... and another is to see all the very poor offloads from major
> manufacturers that claim an SQM implementation that doesn't actually
> work....
>
> But all grousing aside:
>
> THANK YOU VERY MUCH for open sourcing a set of tools that help
> out a smaller ISP. We're all in this bloat together, and by sharing
> code and ideas
> we can make for a faster, more reliable, better internet, for everyone.
>
> I am quite behind on reading the bloat list, and this thread made my day.
>
> thx
>
> On Thu, Jan 14, 2021 at 9:59 AM Robert Chacon
> <robert.chacon@jackrabbitwireless.com> wrote:
> >
> > Hello everyone,
> >
> > I am new here, my name is Robert. I operate a small ISP in the US. I
> wanted to post here to thank Dave Täht, as well as the dozens of
> contributors to the fq_codel and cake projects.
> >
> > I created a simple python application that uses htb+fq_codel to shape my
> customers' traffic, and have seen great performance improvements. I am
> maintaining it as an open source project for other ISPs to use at
> https://github.com/rchac/LibreQoS
> >
> > Mostly I just wanted to thank Dave and everyone else here for working to
> make fq_codel and cake possible. These are hugely helpful projects that
> have helped improve our network, and thousands of other networks around the
> world. Looking at discussions from fellow ISPs who use Preseem and Sensei,
> which use fq_codel, small ISP networks across the world are hugely
> benefiting from fq_codel. They are now able to retain customers who would
> have otherwise been lost - thanks to fq_codel and the many optimizations
> you all made possible. I hope more ISPs are able to deploy fq_codel and/or
> cake using our tool or commercial applications like Preseem and Sensei.
> Amid COVID, fq_codel is really important for keeping work-from-home and
> remote learning connectivity stable. Thank you all!
> >
> > Thanks,
> > Robert Chacon
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
>
> dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
>
--
*Robert Chacón* Owner
*M* (915) 730-1472
*E* robert.chacon@jackrabbitwireless.com
*JackRabbit Wireless LLC*
P.O. Box 222111
El Paso, TX 79913
*jackrabbitwireless.com* <http://jackrabbitwireless.com>
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-15 12:30 ` Toke Høiland-Jørgensen
@ 2021-01-21 5:50 ` Robert Chacon
2021-01-21 11:14 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 9+ messages in thread
From: Robert Chacon @ 2021-01-21 5:50 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
Toke,
Thank you very much for pointing me in the right direction.
I am having some fun in the lab tinkering with the 'mq' qdisc and Jesper's
xdp-cpumap-tc.
It seems I will need to use iptables or nftables to filter packets to
corresponding queues, since mq apparently cannot have u32 filters on its
root.
I will try to familiarize myself with iptables and nftables, and hopefully
get it working soon and report back. Thank you!
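One way to do the filtering described above is iptables' CLASSIFY target, which stamps a packet with an HTB class id in the mangle table before egress queueing. A hypothetical sketch (the IPs and classids are invented, and this is not necessarily how LibreQoS ended up doing it):

```python
# Hedged sketch: generate iptables CLASSIFY rules that steer each
# customer's downstream traffic into an HTB class. Invented addresses.

def classify_rules(customers):
    """customers: list of (ip, classid) pairs -> iptables rule strings."""
    rules = []
    for ip, classid in customers:
        # mangle/POSTROUTING sets the class right before egress queueing.
        rules.append(
            f"iptables -t mangle -A POSTROUTING -d {ip} "
            f"-j CLASSIFY --set-class {classid}"
        )
    return rules

rules = classify_rules([("100.64.0.10", "1:10"), ("100.64.0.11", "2:11")])
```

nftables can express the same thing with its `meta priority set` statement; either way the kernel consults the stamped class id instead of a tc filter on the mq root.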
On Fri, Jan 15, 2021 at 5:30 AM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:
>
> >> Cool! What kind of performance are you seeing? The README mentions being
> >> limited by the BPF hash table size, but can you actually shape 2000
> >> customers on one machine? On what kind of hardware and at what rate(s)?
> >
> > On our production network our peak throughput is 1.5Gbps from 200
> clients,
> > and it works very well.
> > We use a simple consumer-class AMD 2700X CPU in production because
> > utilization of the shaper VM is ~15% at 1.5Gbps load.
> > Customers get reliably capped within ±2Mbps of their allocated
> htb/fq_codel
> > bandwidth, which is very helpful to control network congestion.
> >
> > Here are some graphs from RRUL performed on our test bench hypervisor:
> >
> https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/fq_codel_1000_subs_4G.png
> > In that example, bandwidth for the "subscriber" client VM was set to
> 4Gbps.
> > 1000 IPv4 IPs and 1000 IPv6 IPs were in the filter hash table of
> LibreQoS.
> > The test bench server has an AMD 3900X running Ubuntu in Proxmox. 4Gbps
> > utilizes 10% of the VM's 12 cores. Paravirtualized VirtIO network drivers
> > are used and most offloading types are enabled.
> > In our setup, VM networking multiqueue isn't enabled (it kept disrupting
> > traffic flow), so 6Gbps is probably the most it can achieve like this.
> Our
> > qdiscs in this VM may be limited to one core because of that.
>
> I suspect the issue you had with multiqueue is that it requires per-CPU
> partitioning on a per-customer base to work well. This is possible to do
> with XDP, as Jesper demonstrates here:
>
> https://github.com/netoptimizer/xdp-cpumap-tc
>
> With this it should be possible to scale the hardware queues across
> multiple CPUs properly, and you should be able to go to much higher
> rates by just throwing more CPU cores at it. At least on bare metal; not
> sure if the VM virt-drivers have the needed support yet...
>
> -Toke
>
--
*Robert Chacón* Owner
*M* (915) 730-1472
*E* robert.chacon@jackrabbitwireless.com
*JackRabbit Wireless LLC*
P.O. Box 222111
El Paso, TX 79913
*jackrabbitwireless.com* <http://jackrabbitwireless.com>
* Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
2021-01-21 5:50 ` Robert Chacon
@ 2021-01-21 11:14 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 9+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-01-21 11:14 UTC (permalink / raw)
To: Robert Chacon, Jesper Dangaard Brouer; +Cc: bloat
Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:
> Toke,
>
> Thank you very much for pointing me in the right direction.
> I am having some fun in the lab tinkering with the 'mq' qdisc and Jesper's
> xdp-cpumap-tc.
> It seems I will need to use iptables or nftables to filter packets to
> corresponding queues, since mq apparently cannot have u32 filters on its
> root.
> I will try to familiarize myself with iptables and nftables, and hopefully
> get it working soon and report back. Thank you!
Cool - adding in Jesper, maybe he has some input on this :)
-Toke
> On Fri, Jan 15, 2021 at 5:30 AM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>
>> Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:
>>
>> >> Cool! What kind of performance are you seeing? The README mentions being
>> >> limited by the BPF hash table size, but can you actually shape 2000
>> >> customers on one machine? On what kind of hardware and at what rate(s)?
>> >
>> > On our production network our peak throughput is 1.5Gbps from 200
>> clients,
>> > and it works very well.
>> > We use a simple consumer-class AMD 2700X CPU in production because
>> > utilization of the shaper VM is ~15% at 1.5Gbps load.
>> > Customers get reliably capped within ±2Mbps of their allocated
>> htb/fq_codel
>> > bandwidth, which is very helpful to control network congestion.
>> >
>> > Here are some graphs from RRUL performed on our test bench hypervisor:
>> >
>> https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/fq_codel_1000_subs_4G.png
>> > In that example, bandwidth for the "subscriber" client VM was set to
>> 4Gbps.
>> > 1000 IPv4 IPs and 1000 IPv6 IPs were in the filter hash table of
>> LibreQoS.
>> > The test bench server has an AMD 3900X running Ubuntu in Proxmox. 4Gbps
>> > utilizes 10% of the VM's 12 cores. Paravirtualized VirtIO network drivers
>> > are used and most offloading types are enabled.
>> > In our setup, VM networking multiqueue isn't enabled (it kept disrupting
>> > traffic flow), so 6Gbps is probably the most it can achieve like this.
>> Our
>> > qdiscs in this VM may be limited to one core because of that.
>>
>> I suspect the issue you had with multiqueue is that it requires per-CPU
>> partitioning on a per-customer base to work well. This is possible to do
>> with XDP, as Jesper demonstrates here:
>>
>> https://github.com/netoptimizer/xdp-cpumap-tc
>>
>> With this it should be possible to scale the hardware queues across
>> multiple CPUs properly, and you should be able to go to much higher
>> rates by just throwing more CPU cores at it. At least on bare metal; not
>> sure if the VM virt-drivers have the needed support yet...
>>
>> -Toke
>>
>
>
> --
>
>
> *Robert Chacón* Owner
> *M* (915) 730-1472
> *E* robert.chacon@jackrabbitwireless.com
> *JackRabbit Wireless LLC*
> P.O. Box 222111
> El Paso, TX 79913
> *jackrabbitwireless.com* <http://jackrabbitwireless.com>
Thread overview: 9+ messages
2021-01-14 17:59 [Bloat] Thanks to developers / htb+fq_codel ISP shaper Robert Chacon
2021-01-14 19:46 ` Toke Høiland-Jørgensen
2021-01-14 22:07 ` Robert Chacon
2021-01-15 12:30 ` Toke Høiland-Jørgensen
2021-01-21 5:50 ` Robert Chacon
2021-01-21 11:14 ` Toke Høiland-Jørgensen
2021-01-21 4:38 ` Dave Taht
2021-01-21 4:25 ` Dave Taht
2021-01-21 5:44 ` Robert Chacon