General list for discussing Bufferbloat
From: "Toke Høiland-Jørgensen" <toke@toke.dk>
To: Robert Chacon <robert.chacon@jackrabbitwireless.com>
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Thanks to developers / htb+fq_codel ISP shaper
Date: Fri, 15 Jan 2021 13:30:28 +0100
Message-ID: <87o8hqs7q3.fsf@toke.dk> (raw)
In-Reply-To: <CAOZyJouzt8v5kjdXYCzV7xT6eKvkYGsAs2c1UD4zytpDUpt6vA@mail.gmail.com>

Robert Chacon <robert.chacon@jackrabbitwireless.com> writes:

>> Cool! What kind of performance are you seeing? The README mentions being
>> limited by the BPF hash table size, but can you actually shape 2000
>> customers on one machine? On what kind of hardware and at what rate(s)?
>
> On our production network our peak throughput is 1.5Gbps from 200 clients,
> and it works very well.
> We use a simple consumer-class AMD 2700X CPU in production because
> utilization of the shaper VM is ~15% at 1.5Gbps load.
> Customers get reliably capped within ±2Mbps of their allocated htb/fq_codel
> bandwidth, which is very helpful to control network congestion.
>
> Here are some graphs from RRUL performed on our test bench hypervisor:
> https://raw.githubusercontent.com/rchac/LibreQoS/main/docs/fq_codel_1000_subs_4G.png
> In that example, bandwidth for the "subscriber" client VM was set to 4Gbps.
> 1000 IPv4 IPs and 1000 IPv6 IPs were in the filter hash table of LibreQoS.
> The test bench server has an AMD 3900X running Ubuntu in Proxmox. 4Gbps
> utilizes 10% of the VM's 12 cores. Paravirtualized VirtIO network drivers
> are used and most offloading types are enabled.
> In our setup, VM networking multiqueue isn't enabled (it kept disrupting
> traffic flow), so 6Gbps is probably the most it can achieve like this. Our
> qdiscs in this VM may be limited to one core because of that.
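
(For context, the per-subscriber shaping described above boils down to one
HTB class with an fq_codel leaf per customer, plus a filter steering that
customer's address into the class. A minimal sketch; the interface name,
class IDs, IPs and rates below are made up for illustration:

  # root HTB with a parent class at the site's total capacity
  tc qdisc add dev eth0 root handle 1: htb default 999
  tc class add dev eth0 parent 1: classid 1:1 htb rate 1500mbit

  # one class + fq_codel leaf per customer plan rate
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 25mbit ceil 25mbit
  tc qdisc add dev eth0 parent 1:10 fq_codel

  # steer the customer's address into their class
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip dst 100.64.0.10/32 flowid 1:10
)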

I suspect the issue you had with multiqueue is that it requires per-CPU
partitioning on a per-customer basis to work well. This is possible to do
with XDP, as Jesper demonstrates here:

https://github.com/netoptimizer/xdp-cpumap-tc

With this it should be possible to scale the hardware queues across
multiple CPUs properly, and you should be able to go to much higher
rates by just throwing more CPU cores at it. At least on bare metal; not
sure if the VM virt-drivers have the needed support yet...
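
Roughly, the trick is that the XDP program keeps an IP-to-CPU map and uses
a cpumap redirect so each customer's packets always land on "their" CPU,
while on the tc side you hang a separate HTB tree off each hardware queue
so the per-CPU qdisc trees never contend on a single root lock. The tc side
would look something like this (interface name and handles made up; see the
repo above for the actual setup scripts):

  # mq root: one child per hardware queue / CPU
  tc qdisc replace dev eth0 root handle 7FFF: mq

  # a separate HTB tree (with its own per-customer classes and
  # fq_codel leaves) attached to each queue
  tc qdisc add dev eth0 parent 7FFF:1 handle 1: htb default 2
  tc qdisc add dev eth0 parent 7FFF:2 handle 2: htb default 2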

-Toke

Thread overview: 9+ messages
2021-01-14 17:59 Robert Chacon
2021-01-14 19:46 ` Toke Høiland-Jørgensen
2021-01-14 22:07   ` Robert Chacon
2021-01-15 12:30     ` Toke Høiland-Jørgensen [this message]
2021-01-21  5:50       ` Robert Chacon
2021-01-21 11:14         ` Toke Høiland-Jørgensen
2021-01-21  4:38     ` Dave Taht
2021-01-21  4:25 ` Dave Taht
2021-01-21  5:44   ` Robert Chacon

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

  List information: https://lists.bufferbloat.net/postorius/lists/bloat.lists.bufferbloat.net/

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=87o8hqs7q3.fsf@toke.dk \
    --to=toke@toke.dk \
    --cc=bloat@lists.bufferbloat.net \
    --cc=robert.chacon@jackrabbitwireless.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link.

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.