Date: Mon, 25 Mar 2019 16:43:48 +0100 (CET)
From: Mikael Abrahamsson
To: Dave Taht
cc: Sebastian Moeller, ecn-sane@lists.bufferbloat.net
Subject: Re: [Ecn-sane] FQ in the core

On Mon, 25 Mar 2019, Dave Taht wrote:

> 4) The biggest cpu overhead for any of this stuff is per-tenant (in
> the dc) or per customer shaping. This benefits a lot from a hardware

Agreed. I'd say a typical deployment will allow 4-8 queues per tenant.
If you need to shape customers, then you need a per-customer queue, and typically these linecards will have enough queues to do 4-8 per customer. This rules out FQ, but it does allow things like WRED/PIE or something else on these few queues. So if we can skip bringing FQ back into the discussion all the time, I agree we have a productive path forward that might actually have a good chance of going into hardware.

A lot of deployments I've seen do bidirectional shaping in the "BNG", which will have one of these linecards with 128k queues per 10G port. ISPs will put many thousands of customers on this kind of port. There is no flow-identification machinery to put things into queues, but it can probably match on bits in the header to put traffic into different queues.

So this is where PIE and L4S come from (I imagine); they approach it from the side of "what can we do in this kind of hw".

So who do we know who knows more about ASIC/NPU design who can help us with that?

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se
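For context on why PIE fits this kind of hardware: its per-queue control law is just a couple of multiply-adds per update interval, with no per-flow state. A toy sketch of the drop-probability update from RFC 8033 (constants are the RFC defaults, but the auto-scaling and burst-allowance details are omitted; this is not any vendor's implementation):

```python
# Toy sketch of PIE's periodic drop-probability update (RFC 8033,
# simplified). Real implementations run this per queue roughly every
# 15 ms; the p-dependent scaling of ALPHA/BETA is omitted here.

TARGET = 0.015   # target queuing delay in seconds (RFC default: 15 ms)
ALPHA = 0.125    # weight on deviation of current delay from target
BETA = 1.25      # weight on the trend (change in delay since last update)

def pie_update(p, qdelay, qdelay_old):
    """One update of drop probability p from the measured queue delay."""
    p += ALPHA * (qdelay - TARGET) + BETA * (qdelay - qdelay_old)
    return min(max(p, 0.0), 1.0)   # clamp to [0, 1]

# A queue sitting 10 ms above target with rising delay pushes p upward:
p = pie_update(0.0, qdelay=0.025, qdelay_old=0.020)   # -> 0.0075
```

The point is that the whole per-queue state is one probability and one previous delay sample, which is why a few queues per customer with PIE is plausible on a linecard where per-flow FQ is not.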