[Cake] Advantages to tightly tuning latency

Luca Muscariello muscariello at ieee.org
Thu Apr 23 08:29:30 EDT 2020


On Thu, Apr 23, 2020 at 1:57 PM Toke Høiland-Jørgensen <toke at redhat.com>
wrote:

> Maxime Bizon <mbizon at freebox.fr> writes:
>
> > On Wednesday 22 Apr 2020 à 07:48:43 (-0700), Dave Taht wrote:
> >
> > Hello,
> >
> >> > Free has been using SFQ since 2005 (if I remember correctly).
> >> > They announced the wide deployment of SFQ in the free.fr newsgroup.
> >> > Wi-Fi in the free.fr router was not as good, though.
> >>
> >> They're working on it. :)
> >
> > yes indeed.
> >
> > Switching to the softmac approach, so now mac80211 will do rate control
> > and scheduling (using the wake_tx_queue model).
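
For readers unfamiliar with the model: under wake_tx_queue the driver
stops keeping deep internal queues; mac80211 holds the frames (with
FQ-CoDel applied) and the driver pulls them on demand when it has room
in hardware. A minimal sketch of the driver side, with hypothetical
mydrv_* helpers standing in for real hardware code:

    #include <net/mac80211.h>

    struct mydrv_priv;                                 /* hypothetical driver state */
    int  mydrv_tx_ring_space(struct mydrv_priv *priv); /* hypothetical */
    void mydrv_tx_submit(struct mydrv_priv *priv, struct sk_buff *skb);

    /* Called by mac80211 when a txq has frames to send.  The driver
     * pulls only as much as the hardware can take right now; the rest
     * stays queued (and AQM'd) inside mac80211. */
    static void mydrv_wake_tx_queue(struct ieee80211_hw *hw,
                                    struct ieee80211_txq *txq)
    {
            struct mydrv_priv *priv = hw->priv;
            struct sk_buff *skb;

            while (mydrv_tx_ring_space(priv) > 0) {
                    skb = ieee80211_tx_dequeue(hw, txq);
                    if (!skb)
                            break;
                    mydrv_tx_submit(priv, skb);  /* hand frame to hardware */
            }
    }

    static const struct ieee80211_ops mydrv_ops = {
            /* ... */
            .wake_tx_queue = mydrv_wake_tx_queue,
    };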
> >
> > For 5 GHz, we use ath10k.
>
> That is awesome! Please make sure you include the AQL patch for ath10k;
> it really works wonders, as Dave demonstrated:
>
>
> https://lists.bufferbloat.net/pipermail/make-wifi-fast/2020-March/002721.html
>
> >> I am very, very happy for y'all. Fiber has always been the sanest
> >> thing. Is there an SFP+ GPON card yet that I can plug into a
> >> conventional open source router?
> >
> > FYI Free.fr uses 10G-EPON, not GPON.
> >
> > Also, most deployments use an additional piece of terminal equipment,
> > called an "ONT" or "ONU", that handles the PON part and exposes an
> > Ethernet port where the operator CPE is plugged in. So we are back to
> > the early days of DSL, where the hardest part (scheduling) is done
> > inside a black box. That makes it easier to replace the operator CPE
> > with your own standard Ethernet router, though.
> >
> > At least SoCs with integrated PON (supporting all flavours
> > GPON/EPON/..)  are starting to be deployed. Nothing is available in
> > open source.
> >
> > Also note that it's not just kernel drivers; you also need a
> > higher-level OAM stack to make it all work, and there are a lot of
> > existing standards (DPoE for EPON, OMCI for GPON, ...), all with
> > interop challenges.
>
> It always bugged me that there was no open source support for these
> esoteric protocols and standards. It would seem like an obvious place to
> pool resources, but I guess proprietary vendors are going to keep doing
> their thing :/
>
> >> > The challenge becomes keeping up with these link rates in software,
> >> > as there is a lot of hardware offloading.
> >
> > Yes, that's our pain point: that's what the SoC vendors deliver, and
> > you have to use it because there is no alternative.
> >
> > It's not entirely the SoC vendors' fault, though.
> >
> > 15 years ago, your average SoC's CPU would be something like a 200 MHz
> > MIPS. Linux's standard forwarding path (softirq => routing+netfilter =>
> > qdisc) was too slow for this, with too much cache footprint/overhead. So
> > every vendor started building an alternative forwarding path in their
> > hardware and never looked back.
> >
> > Nowadays, the baseline SoC CPU would be an ARM Cortex-A53 at ~1 GHz,
> > which, with a non-crappy network driver and internal fabric, should be
> > able to route 1 Gbit/s with out-of-the-box kernel forwarding.
> >
> > But that's too late. SoC vendors compete against each other, and the
> > big telcos need a way to tell which SoC is better to make a buying
> > decision. So synthetic benchmarks have become the norm, and since
> > everybody was able to fill their pipe with 1500-byte packets, the
> > benchmarks have moved to unrealistic 64-byte packets (so-called
> > wirespeed).
>

Yes, I'm not working on these kinds of platforms anymore,
but I do remember the pain.
Hardware offloading may also have unexpected behaviours
with stateful offloads: a flow starts in the slow path and
then moves to the fast path in hardware. Out-of-order
delivery at this transition can be nasty for a TCP
connection. A packet loss is even worse.
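
To make the hazard concrete, here is a minimal sketch of the handover
(hypothetical flow-table API, not any real SoC SDK):

    #include <stdbool.h>

    struct flow {
            bool     hw_offloaded;  /* set once hardware forwards the flow */
            unsigned sw_pkts;       /* packets seen on the software path   */
    };

    /* hypothetical helpers */
    void forward_in_software(void *pkt);
    void hw_install_flow(struct flow *f);

    #define OFFLOAD_THRESHOLD 8     /* offload after a few slow-path packets */

    void handle_packet(struct flow *f, void *pkt)
    {
            if (f->hw_offloaded)
                    return;                   /* fast path: hardware forwards */

            forward_in_software(pkt);         /* slow path: may still be queued */

            if (++f->sw_pkts >= OFFLOAD_THRESHOLD) {
                    /* Packets still sitting in software queues can now be
                     * overtaken by packets the hardware forwards directly,
                     * so the receiver sees reordering; a drop in the dying
                     * software queue looks like congestion loss on top. */
                    hw_install_flow(f);
                    f->hw_offloaded = true;
            }
    }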



> >
> > If you don't have hardware acceleration for forwarding, you don't
> > exist in those benchmarks and will not sell your chipset. Also, they
> > have invested so much in their alternative network stack that it's
> > difficult to stop (huge R&D teams). That being said, they do have a
> > point: when speeds go above 1 Gbit/s, the kernel becomes the
> > bottleneck.
> >
> > For Free.fr's 10 Gbit/s offer, we had to develop an alternative
> > (software) forwarding path using a polling-mode model (DPDK style);
> > otherwise our (albeit powerful) ARM Cortex-A72 at 2 GHz could not
> > forward more than 2 Gbit/s.
>
> We're working on that in kernel land - ever heard of XDP? On big-iron
> servers we have no issues pushing 10s and 100s of Gbps in software
> (well, the latter only given enough cores to throw at the problem :)).
> There's not a lot of embedded platform support as of yet, but we do
> have some people in the ARM world working on that.
>
> Personally, I do see embedded platforms as an important (future) use
> case for XDP, though, in particular for CPEs. So I would be very
> interested in hearing details about your particular platform, and your
> DPDK solution, so we can think about what it will take to achieve the
> same with XDP. If you're interested in this, please feel free to reach
> out :)
>
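
For readers who haven't met it: XDP programs run at driver RX, before
any skb is allocated, which is where the speed comes from. About the
smallest possible example (a sketch) just returns a verdict:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Runs for every received frame, before skb allocation.  XDP_PASS
     * hands the frame to the normal stack; XDP_DROP, XDP_TX and
     * XDP_REDIRECT are the other main verdicts. */
    SEC("xdp")
    int xdp_pass_all(struct xdp_md *ctx)
    {
            /* packet parsing/filtering/rewriting would go here */
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";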
> > And going multicore/RSS does not fly when the test case is a
> > single-stream TCP session, which is what most speedtest applications
> > use (Ookla only recently added a multi-connection test).
>
> Setting aside the fact that those single-stream tests ought to die a
> horrible death, I do wonder if it would be feasible to do a bit of
> 'optimising for the test'? With XDP we do have the ability to steer
> packets between CPUs based on arbitrary criteria, and while it is not as
> efficient as hardware-based RSS it may be enough to achieve line rate
> for a single TCP flow?
>
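
The steering Toke describes could be sketched in XDP like this: pick a
CPU from the flow and bounce each frame there through a CPUMAP,
approximating RSS in software (illustrative only; a real program would
parse and hash the 5-tuple instead of the placeholder key):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define NR_WORKERS 4

    struct {
            __uint(type, BPF_MAP_TYPE_CPUMAP);
            __uint(max_entries, NR_WORKERS);  /* one slot per worker CPU */
            __type(key, __u32);
            __type(value, __u32);             /* per-CPU queue size */
    } cpu_map SEC(".maps");

    SEC("xdp")
    int steer_by_flow(struct xdp_md *ctx)
    {
            /* stand-in for a real 5-tuple hash */
            __u32 cpu = ctx->rx_queue_index % NR_WORKERS;

            /* Enqueue the frame to the chosen CPU; the skb is built and
             * the rest of the stack runs there, off the RX CPU. */
            return bpf_redirect_map(&cpu_map, cpu, 0);
    }

    char _license[] SEC("license") = "GPL";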


Toke, yes, I was implicitly thinking about XDP, but I have
not yet read about any experience using it in CPEs.

DPDK, netmap and other kernel-bypass approaches may be an
option, but you lose all the qdiscs.
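
For context, the polling-mode model looks roughly like this (a generic
DPDK-style sketch, not Free.fr's actual code). One core busy-polls the
RX ring instead of taking interrupts, so the per-packet cost is a
function call rather than a softirq plus qdisc traversal, which is
also exactly why the qdiscs are lost:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    /* Forward frames from rx_port to tx_port, busy-polling forever. */
    static void poll_loop(uint16_t rx_port, uint16_t tx_port)
    {
            struct rte_mbuf *pkts[BURST];

            for (;;) {
                    uint16_t n = rte_eth_rx_burst(rx_port, 0, pkts, BURST);
                    if (n == 0)
                            continue;          /* spin, no interrupts */

                    /* route lookup / header rewrite would happen here */

                    uint16_t sent = rte_eth_tx_burst(tx_port, 0, pkts, n);
                    while (sent < n)           /* free what didn't fit */
                            rte_pktmbuf_free(pkts[sent++]);
            }
    }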




>
> -Toke
>