[Cake] Advantages to tightly tuning latency
toke at redhat.com
Thu Apr 23 14:35:15 EDT 2020
Maxime Bizon <mbizon at freebox.fr> writes:
> On Thursday 23 Apr 2020 à 18:42:11 (+0200), Toke Høiland-Jørgensen wrote:
>> Didn't make it in until 5.5, unfortunately... :(
>> I can try to produce a patch that you can manually apply on top of 5.4
>> if you're interested?
> I could do it, but the thing I'm more worried about is the lack of
> test coverage from everyone else.
Yeah, I guess you'd be on the hook for backporting any follow-ups
yourself if you do that; maybe better to wait for the next longterm
kernel release, then...
>> Anyhow, my larger point was that we really do want to enable such use
>> cases for XDP; but we are lacking the details of what exactly is missing
>> before we can get to something that's useful / deployable. So any
>> details you could share about what feature set you are supporting in
>> your own 'fast path' implementation would be really helpful. As would
>> details about the hardware platform you are using. You can send them
>> off-list if you don't want to make it public, of course :)
> there is no hardware specific feature used, it's all software
I meant more details of your SOC platform. You already said it's
ARM-based, so I guess the most important missing piece is which (Linux)
driver the Ethernet device(s) use?
> imagine this "simple" setup, pretty much what anyone's home router is:
> <br0> with <eth0> + <wlan0> inside, private IPv4 address
> <wan0.vlan> with IPv6, vlan interface over <wan0>
> <map0> with IPv4, MAP-E tunnel over <wan0.vlan>
> - IPv6 routing between <br0> and <wan0.vlan>
> - IPv4 routing + NAT between <br0> and <map0>
> iptables would be filled with usual rules, per interface ALLOW rules
> in FORWARD chain, DNAT rules in PREROUTING to access LAN from WAN...
> and then you want this to be fast :)
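For concreteness, the kind of ruleset described above might look something like this (interface names are taken from the setup; the addresses, ports, and rule details are invented for illustration, not the actual config):

```shell
# Per-interface ALLOW rules in the FORWARD chain (illustrative only):
iptables -A FORWARD -i br0 -o map0 -j ACCEPT
iptables -A FORWARD -i map0 -o br0 \
         -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -i br0 -o wan0.vlan -j ACCEPT

# DNAT rule in PREROUTING to reach a LAN host from the WAN
# (192.168.1.10:80 is a made-up example target):
iptables -t nat -A PREROUTING -i map0 -p tcp --dport 8080 \
         -j DNAT --to-destination 192.168.1.10:80
```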
> What we do is build a "flow" table on top of conntrack, so with a
> single lookup we find the flow, the destination interface, and what
> modifications to apply to the packet (L3 address to change, encap to
> add/remove, etc etc)
> Then we do this lookup more or less early in RX path, on our oldest
> platform we even had to do this from the ethernet driver, and do TX
> from there too, skipping qdisc layer and allowing cache maintenance
> hacks (partial invalidation and wback)
This sounds pretty much like what you'd do with an XDP program: Packet comes
in -> XDP program runs, parses the headers, does a flow lookup, modifies
the packet and redirects it out the egress interface. All in one go,
kernel never even builds an skb for the packet.
You can build most of that with XDP today, but you'd need to implement
all the lookups yourself using BPF maps; having a hook into the kernel
conntrack / flow tables would help with that. I guess I should look into
what happened with that hook.
Oh, and we also need to solve queueing in XDP; it's all line rate ATM,
which is obviously not ideal for a CPE :)
> nftables with flowtables seems to have developed something that
> could replace our flow cache, but I'm not sure if it can handle our
> tunneling scheme yet. It even has a notion of offloaded flow for
> hardware that can support it.
Well, the nice thing about XDP is that you can just implement any custom
encapsulation that is not supported by the kernel yourself :)
> If you add an XDP offload to it, with an option to do the
> lookup/modification/tx at the layer you want, depending on the
> performance you need, whether you want qdisc... that would give you
> pretty much the same thing we use today, but with a cleaner design.
Yup, I think so. What does your current solution do with packets that
are destined for the WiFi interface, BTW? Just punt them to the regular
networking stack?
>> Depends on the TCP stack (I think).
> I guess Linux deals with OFO better, but unfortunately that's not the
> main OS used by our subscribers...
Yeah, you really should do something about that ;)
>> Steam is perhaps a bad example as that is doing something very much like
>> bittorrent AFAIK; but point taken, people do occasionally run
>> single-stream downloads and want them to be fast. I'm just annoyed that
>> this becomes the *one* benchmark people run, to the exclusion of
>> everything else that has a much larger impact on the overall user
>> experience :/
> that one is easy
> convince ookla to add some kind of "latency under load" metric, and
> have them report it as a big red flag when too high, and even better
> add scary messages like "this connection is not suitable for online
> gaming"
> subscribers will bug telco, then telco will bug SoC vendors
Heh. Easy in theory, yeah. I do believe people on this list have tried
to convince them; no luck thus far :/