Date: Thu, 23 Apr 2020 19:31:11 +0200
From: Maxime Bizon
To: Toke Høiland-Jørgensen
Cc: Dave Taht, Cake List
Subject: Re: [Cake] Advantages to tightly tuning latency

On Thursday 23 Apr 2020 at 18:42:11 (+0200), Toke Høiland-Jørgensen wrote:

> Didn't make it in until 5.5, unfortunately... :(
>
> I can try to produce a patch that you can manually apply on top of 5.4
> if you're interested?

I could do it, but the thing I'm more worried about is the lack of test
coverage from everyone else.

> Anyhow, my larger point was that we really do want to enable such use
> cases for XDP; but we are lacking the details of what exactly is missing
> before we can get to something that's useful / deployable. So any
> details you could share about what feature set you are supporting in
> your own 'fast path' implementation would be really helpful. As would
> details about the hardware platform you are using. You can send them
> off-list if you don't want to make it public, of course :)

There is no hardware-specific feature used, it's all software.

Imagine this "simple" setup, pretty much what anyone's home router is
doing:

 - a LAN bridge with the local interfaces inside, private IPv4 addressing
 - a WAN interface with IPv6, a vlan interface over the physical uplink
 - a WAN interface with IPv4, a MAP-E tunnel over the IPv6 interface

then:

 - IPv6 routing between the LAN and the IPv6 WAN interface
 - IPv4 routing + NAT between the LAN and the IPv4 WAN interface

iptables would be filled with the usual rules: per-interface ALLOW rules
in the FORWARD chain, DNAT rules in PREROUTING to reach the LAN from the
WAN, and so on... and then you want this to be fast :)

What we do is build a "flow" table on top of conntrack, so that a single
lookup gives us the flow, the destination interface, and the
modifications to apply to the packet (L3 addresses to change, encap to
add/remove, etc.).

We then do this lookup more or less early in the RX path; on our oldest
platform we even had to do it from the ethernet driver, and do TX from
there too, skipping the qdisc layer and allowing cache maintenance hacks
(partial invalidation and writeback).

nftables with flowtables seems to have developed something that could
replace our flow cache, but I'm not sure it can handle our tunneling
scheme yet. It even has a notion of offloaded flows for hardware that can
support them.

If you add an XDP offload to that, with an option to do the
lookup/modification/TX at whatever layer you want, depending on the
performance you need and whether you want the qdisc layer... that would
give you pretty much what we use today, but with a cleaner design.
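For concreteness, a plain software flowtable looks roughly like this; a
sketch only, with made-up table and device names ("lan0"/"wan0"), and
whether the fastpath can follow our MAP-E encap is exactly the part I
have not verified:

    # sketch only: "fastpath", "ft", "lan0" and "wan0" are placeholder names
    table inet fastpath {
        flowtable ft {
            hook ingress priority 0; devices = { lan0, wan0 };
            # add "flags offload;" here for NICs that can take the
            # flow in hardware
        }
        chain forward {
            type filter hook forward priority 0; policy accept;
            # once conntrack has seen the connection, later packets in
            # the flow bypass the classical forwarding path
            ip protocol { tcp, udp } flow add @ft
        }
    }

That covers the plain routed/NAT case; teaching it about the encap/decap
(or growing an XDP datapath underneath it) is the interesting part.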
> Depends on the TCP stack (I think).

I guess Linux deals with OFO (out-of-order delivery) better, but
unfortunately that's not the main OS used by our subscribers...

> Steam is perhaps a bad example as that is doing something very much like
> bittorrent AFAIK; but point taken, people do occasionally run
> single-stream downloads and want them to be fast. I'm just annoyed that
> this becomes the *one* benchmark people run, to the exclusion of
> everything else that has a much larger impact on the overall user
> experience :/

That one is easy: convince Ookla to add some kind of "latency under load"
metric, have them report it as a big red flag when it is too high, and
even better, add scary messages like "this connection is not suitable for
online gaming".

Subscribers will bug the telco, then the telco will bug the SoC vendors.

--
Maxime