[Cake] [Bloat] Really getting 1G out of ISP?
David P. Reed
dpreed at deepplum.com
Thu Jul 8 15:56:25 EDT 2021
As a data point, I run Cake on an "Intel(R) Celeron(R) CPU N2930 @ 1.83GHz" with 2 cores and a 1 Gb/s cable modem connection. My "router board" has two GigE ports and no WiFi. It uses Fedora 34 Server as its basis, runs dnsmasq to serve DNS and DHCP for the main LAN, and runs a Hurricane Electric /56 tunnel for IPv6.
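For readers setting up something similar, a minimal dnsmasq configuration for that kind of LAN-serving role might look like the following sketch. The interface name and address range here are placeholders, not the author's actual configuration:

```
# /etc/dnsmasq.conf -- minimal DNS + DHCP for a single LAN
# (illustrative only; "lan0" and the 192.168.1.x range are placeholders)
interface=lan0                 # serve only the inside interface
domain-needed                  # don't forward bare hostnames upstream
bogus-priv                     # don't forward RFC1918 reverse lookups
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-option=option:router,192.168.1.1
```

sqm-scripts or a plain tc cake setup on the WAN side is independent of this; dnsmasq only handles the naming and addressing.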
Testing with RRUL or various high-end web speed tests, I get full 1 Gb/s (usually >950 Mb/s throughput) download performance through it, with minimal bufferbloat (A+ on the speed tests that measure bufferbloat). I also get full upload speed with no bufferbloat.
This, I believe, is a much slower board, with fewer cores, than the Odyssey. It never comes close to saturating one of the cores.
I long ago gave up on trying to reflash consumer WiFi routers to serve as home gateways. (And now that CPUs and memory are incredibly cheap, the proper architecture is not to bundle two unrelated functions into a single processor anyway; just have two boxes for the two functions.)
I do use them inside my premises as APs. Life is too short. As APs, they are limited by the damn WiFi chipsets and drivers, with their poor packet scheduling, which is not solved by Cake. That's a WiFi layer problem of queuing and scheduling in the MAC layer, and I think the WiFi chip vendors have been clueless for at least a decade, and show no sign of getting a clue, sad to say. They live in proprietary land, and really have no interest in fixing the MAC layer as long as they can claim extreme throughput in an artificial scenario between two points with no cross traffic.
On Tuesday, July 6, 2021 10:26pm, "Dave Taht" <dave.taht at gmail.com> said:
> On Tue, Jul 6, 2021 at 3:32 PM Aaron Wood <woody77 at gmail.com> wrote:
> > I'm running an Odyssey from Seeed Studios (celeron J4125 with dual i211), and
> it can handle Cake at 1Gbps on a single core (which it needs to, because OpenWRT's
> i211 support still has multiple receive queues disabled).
> Not clear if that is shaped or not? Line rate is easy on processors of
> that class or better, but shaped?
> some points:
> On inbound shaping especially, it is still best to lock network traffic
> to a single core on low-end platforms.
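[As a concrete sketch of what "locking to a single core" means here: steer the NIC's interrupt and receive-packet steering onto one CPU. The device name and IRQ number below are placeholders; the real values come from /proc/interrupts on the machine in question.]

```shell
# Pin the NIC's RX interrupt and receive processing to CPU0 only.
# "eth0" and IRQ 27 are placeholders -- check /proc/interrupts for the
# actual interface name and IRQ number on your hardware.
echo 1 > /proc/irq/27/smp_affinity                   # CPU bitmask: CPU0
echo 1 > /sys/class/net/eth0/queues/rx-0/rps_cpus    # RPS to CPU0 as well
```

[These writes require root and real hardware; they are configuration, not a portable script.]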
> Cake itself is not multicore, although the design essentially is. We
> did some work towards trying to make it shape across multiple cores
> and multiple hardware queues. If the locking contention could be
> minimized (e.g. via RCU), I felt a win was possible here, but a bigger win
> would be to eliminate "mirred" from the ingress path entirely.
> Even multiple transmit queues remain kind of dicey in Linux, and
> actually tend to slow network processing in most cases I've tried at
> gbit line rates. They also add latency, as (1) BQL is MIAD, not AIMD,
> so it stays "stuck" at a "good" level for a long time, and (2) each hw
> queue gets an additive fifo at this layer, so where you might need
> only 40k to keep a single hw queue busy, you end up with 160k with 4
> hw queues. This problem is getting worse and worse (64 queues are
> common in newer hardware, 1000s in really new hardware) and a revisit
> to how BQL does things in this case would be useful. Ideally it would
> share state (with a cross-core variable and atomic locks) as to how
> much total buffering was actually needed "down there" across all the
> queues, but without trying it, I worry that that would end up costing
> a lot of cpu cycles.
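[BQL's per-queue state is visible in sysfs, which makes the additive per-queue buffering easy to observe directly. Device name below is an example:]

```shell
# Each hardware TX queue keeps its own BQL state; with N queues the
# per-queue limits add up. "eth0" is an example device name.
for q in /sys/class/net/eth0/queues/tx-*; do
    echo "$q: limit=$(cat "$q"/byte_queue_limits/limit) bytes"
done
# A queue's BQL limit can also be capped by writing .../byte_queue_limits/limit_max.
```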
> Feel free to experiment with multiple transmit queues locked to other
> cores via the smp_affinity bits under /proc/irq/ (using /proc/interrupts
> to find the queue IRQ numbers). I'm sure these
> MUST be useful on some platform, but I think most of the use for
> multiple hw queues is when a locally processing application is
> getting the data, not when it is being routed.
> Ironically, I guess, the shorter your queues the higher likelihood a
> given packet will remain in l2 or even l1 cache.
> > On Tue, Jun 22, 2021 at 12:44 AM Giuseppe De Luca <dropheaders at gmx.com> wrote:
> >> Also a PC Engines APU4 will do the job
> >> (https://inonius.net/results/?userId=17996087f5e8 - this is a
> >> 1gbit/1gbit, with Openwrt/sqm-scripts set to 900/900. ISP is Sony NURO
> >> in Japan). Will follow this thread to see if some interesting device
> >> pops up :)
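[For reference, the "Openwrt/sqm-scripts set to 900/900" above corresponds to an /etc/config/sqm stanza roughly like the following. Values are in kbit/s; the interface name is a placeholder, and this is hedged from sqm-scripts' usual config format rather than the poster's actual file:]

```
config queue 'wan'
        option interface 'eth1'
        option enabled '1'
        option download '900000'
        option upload '900000'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
```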
> >> On 6/22/2021 6:12 AM, Sebastian Moeller wrote:
> >> >
> >> > On 22 June 2021 06:00:48 CEST, Stephen Hemminger
> <stephen at networkplumber.org> wrote:
> >> >> Is there any consumer hardware that can actually keep up and do
> >> >> AQM at 1Gbit?
> >> > Over in the OpenWrt forums the same question pops up
> >> > routinely, about once per week. The best answer ATM seems to be a combination of a
> >> > Raspberry Pi 4B with a decent USB3 gigabit ethernet dongle, a managed switch, and
> >> > any capable (OpenWrt) AP of the user's liking. With 4 ARM A72 cores it will
> >> > traffic-shape up to a gigabit, as reported by multiple users.
> >> >
> >> >
> >> >> It seems everyone is obsessed with gamer WiFi 6, but these can only
> >> >> do 300 Mbit single stream with any kind of QoS.
> >> > IIUC most commercial home routers/APs bet on offload engines to do
> >> > most of the heavy lifting, but as far as I understand only the NSS cores have a
> >> > shaper and fq_codel module....
> >> >
> >> >
> >> >> It doesn't help that all the local ISPs claim 10 Mbit upload even with
> >> >> 1G download.
> >> >> Is this a head-end provisioning problem, or related to DOCSIS 3.0 (or
> >> >> later) modems?
> >> > For DOCSIS the issue seems to be an unfortunate frequency split
> >> > between up- and downstream, and the use of lower-efficiency coding schemes.
> >> > Over here the incumbent cable ISP provisions 50 Mbps for
> >> > upstream, and plans to increase that to 100 Mbps once the upstream is switched to
> >> > DOCSIS 3.1.
> >> > I believe one issue is that most of the upstream is required
> >> > for the reverse ACK traffic of the download, and hence it cannot be
> >> > oversubscribed too much... but I think we have real DOCSIS experts on the list,
> >> > so I will stop my speculation here...
> >> >
> >> > Regards
> >> > Sebastian
> >> >
> >> >> _______________________________________________
> >> >> Bloat mailing list
> >> >> Bloat at lists.bufferbloat.net
> >> >> https://lists.bufferbloat.net/listinfo/bloat
> Dave Täht CTO, TekLibre, LLC