[Bloat] Fwd: Traffic shaping at 10~300mbps at a 10Gbps link

Toke Høiland-Jørgensen toke at toke.dk
Mon Jun 7 18:20:00 EDT 2021

Jonathan Morton <chromatix99 at gmail.com> writes:

>> On 7 Jun, 2021, at 8:28 pm, Rich Brown <richb.hanover at gmail.com> wrote:
>> Saw this on the lartc mailing list... For my own information, does anyone have thoughts, esp. for this quote:
>> "... when the speed comes to about 4.5Gbps download (upload is about 500mbps), chaos kicks in. CPU load goes sky high (all 24x2.4Ghz physical cores above 90% - 48x2.4Ghz if count that virtualization is on)..."
> This is probably the same phenomenon that limits most cheap CPE
> devices to about 100Mbps or 300Mbps with software shaping, just on a
> bigger scale due to running on fundamentally better hardware.
> My best theory to date on the root cause of this phenomenon is a
> throughput bottleneck between the NIC and the system RAM via DMA,
> which happens to be bypassed by a hardware forwarding engine within
> the NIC (or in an external switch chip) when software shaping is
> disabled. I note that 4.5Gbps is close to the capacity of a single
> PCIe v2 lane, so checking the topology of the NIC's attachment to the
> machine might help to confirm my theory.
> To avoid the problem, you'll either need to shape to a rate lower than
> the bottleneck capacity, or eliminate the unexpected bottleneck by
> implementing a faster connection to the NIC that can support
> wire-speed transfers.

I very much doubt this has anything to do with system bottlenecks. We
only hit the PCIe bottleneck when trying to push 100Gbit through a
server; 5 Gbps is trivial for a modern device.

Rather, as Jesper pointed out, this sounds like root qdisc lock
contention.
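For illustration, the usual mitigation for root qdisc lock contention is
to avoid funnelling all CPUs through a single root qdisc. A sketch
(interface name, rates, and queue count below are assumptions, not from
the original report):

```shell
# A single software shaper at the root serializes every core on one
# qdisc lock, e.g.:
#   tc qdisc add dev eth0 root handle 1: htb default 10
#   tc class add dev eth0 parent 1: classid 1:10 htb rate 4500mbit
#
# Replacing the root with the classless 'mq' qdisc instead gives each
# hardware TX queue its own child qdisc -- and its own lock:
tc qdisc replace dev eth0 root handle 1: mq

# Attach fq_codel to each TX queue (mq child classes are 1:1, 1:2, ...
# in hex; 8 queues here is an assumption, the real count is per-NIC):
for i in $(seq 1 8); do
    tc qdisc replace dev eth0 parent 1:$(printf '%x' "$i") fq_codel
done
```

Note that mq by itself cannot enforce a single aggregate rate limit;
shaping to one global rate still needs a shared shaper, or hardware
offload of the shaper on NICs that support it.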

