From: MUSCARIELLO Luca IMT/OLN
Date: Wed, 25 Feb 2015 18:34:56 +0100
To: Mikael Abrahamsson
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] RED against bufferbloat

On 02/25/2015 05:09 PM, Mikael Abrahamsson wrote:
> On Wed, 25 Feb 2015, MUSCARIELLO Luca IMT/OLN wrote:
>
>> Doing FQ in silicon is easy. It must be very easy, as I did it myself
>> in a MIPS Ikanos Vx185 chipset and I am not a hardware expert. This
>> was for a CPE with 5x1Gbps ports.
>
> I guess we have different definitions of what "silicon" is. Mine is
> "ASIC". If it has a MIPS CPU (and the CPU does forwarding), then it's
> not "silicon", then it's done in CPU.

It is a MIPS-based board with a central unit and several programmable
ASICs as accelerators. The fast path is not managed by the central unit.

> I keep hearing from vendors that queues are expensive. The smallest FQ
> implementation that seems to be reasonable has 32 queues. Let's say we
> have 5k customers per port on a BNG; that equates to 160k queues.
>
> Compare that to an efficient active ethernet p-t-p ETTH deployment
> without a BNG at all, where the access L2 switch is the one doing
> policing. What would be the cost to equip this device with FQ per port?
>
>> If you go a little deeper in the network and you pick an OLT you won't
>
> I don't do PON.
>
>> find much intelligence. A little deeper, in a local aggregation
>> router (all vendors), you'll find what you would need to implement FQ.
>
> If it's a BNG, yes. These linecards with hundreds of thousands of
> queues are a lot more expensive than a linecard without these queues.
> Usually today there are 3-4 queues per customer; now we want to expand
> this to 32 or more. What is the cost increase for this?

Yes, I have the BNG in mind. Downstream equipment does not have enough
resources on a per-user basis.

However, I do not think that static per-flow queues are the right
implementation for FQ. That requires a lot of resources just for
instantiation, and if the hardware polls the queues the cost can be very
high for the user fanout and the rates we are considering here. A single
FQ instantiation has to consume no more memory than a single FIFO
instantiation: per-flow queues should be managed as virtual queues, on a
per-customer basis.
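To make "virtual queues" concrete, here is a minimal C sketch of the
kind of structure I have in mind, close in spirit to what fq_codel does
in Linux: one shared packet pool with one FIFO-style byte cap, and a
flow that is nothing more than a list head plus a DRR deficit. The names
and sizes (NFLOWS, QUANTUM, the tail-drop policy) are illustrative
assumptions, not any vendor's design, and packets are assumed to be at
most QUANTUM bytes:

#include <stdint.h>
#include <stdlib.h>

#define NFLOWS  1024   /* hash buckets: flow state is a few words each */
#define QUANTUM 1514   /* DRR quantum: one full-size Ethernet frame    */

struct pkt {
    struct pkt *next;
    uint32_t    len;
    uint32_t    hash;  /* 5-tuple hash, computed by the fast path      */
    /* payload lives in the shared buffer pool, omitted here           */
};

struct flow {          /* a "virtual queue": no dedicated buffer       */
    struct pkt  *head, *tail;
    struct flow *next_active;
    int          deficit;
};

struct fq {
    struct flow  flows[NFLOWS];
    struct flow *active_head, *active_tail; /* backlogged flows, FIFO  */
    uint32_t     backlog, limit;            /* one shared byte cap     */
};

static void push_active(struct fq *q, struct flow *f)
{                      /* append flow to the round-robin list          */
    f->next_active = NULL;
    if (q->active_tail)
        q->active_tail->next_active = f;
    else
        q->active_head = f;
    q->active_tail = f;
}

static void pop_active(struct fq *q)
{                      /* detach the flow at the head of the list      */
    q->active_head = q->active_head->next_active;
    if (!q->active_head)
        q->active_tail = NULL;
}

static void fq_enqueue(struct fq *q, struct pkt *p)
{
    struct flow *f = &q->flows[p->hash % NFLOWS];

    if (q->backlog + p->len > q->limit) {
        free(p);       /* same aggregate drop point as a single FIFO   */
        return;
    }
    p->next = NULL;
    if (f->tail) {
        f->tail->next = p;         /* flow already backlogged          */
    } else {
        f->head = p;               /* idle flow: activate on demand    */
        f->deficit = QUANTUM;
        push_active(q, f);
    }
    f->tail = p;
    q->backlog += p->len;
}

static struct pkt *fq_dequeue(struct fq *q)  /* deficit round robin    */
{
    for (;;) {
        struct flow *f = q->active_head;
        struct pkt  *p;

        if (!f)
            return NULL;           /* nothing backlogged anywhere      */
        if (f->deficit <= 0) {     /* quantum spent: refill and rotate */
            pop_active(q);
            f->deficit += QUANTUM;
            push_active(q, f);
            continue;
        }
        p = f->head;
        f->head = p->next;
        if (!f->head) {            /* flow drained: its state shrinks  */
            f->tail = NULL;        /* back to four idle words          */
            pop_active(q);
        }
        f->deficit -= (int)p->len;
        q->backlog  -= p->len;
        return p;
    }
}

The point is that flow state is four words per hash bucket, while the
packet memory and the drop point are exactly those of a single FIFO, so
the number of queues stops being the cost driver. fq_codel in Linux
works essentially this way, plus CoDel per flow and a new/old flow
distinction.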
> If you put a BNG in there with lots of intelligence then the cost of
> adding the FQ machinery might not be too bad. If you don't have the
> BNG at all, what is the cost then? I still believe it will be cheaper
> to try to do this in the CPE.

I think this should also be done in the CPE, but not in ingress for the
downlink, if that's what you meant. Only in egress for the uplink.

>> Some "positive" view: access with Gbps (up to 1Gbps) with a range of
>> RTTs (1ms to 100ms) will need smarter mechanisms in the equipment, as
>> inefficiencies will be crystal clear and the business consequences
>> will be substantially different.
>
> Please elaborate. I'd say FIFO is less harmful at these speeds because
> of TCP inefficiencies, meaning most end systems won't come up to high
> enough transfer rates to saturate that part of the network. Now the
> bottleneck will move elsewhere.

I assume you want TCP to be able to do 1Gbps if this is what you sell to
the customers. Also, if you sell 1Gbps, you might want to do shaping
instead of policing in the BNG, because the BDP of a 1Gbps flow can be
pretty high depending on the RTT, and downstream equipment (OLT, switch,
DSLAM...) can have much smaller queues than what you have in a BNG. That
is where inefficiencies come from. If in addition you have a mix of
applications in this customer queue, then you'll need FQ, because the
traffic mix can be very heterogeneous and flow isolation becomes
necessary.
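To put rough numbers on the BDP point, using the RTT range I quoted
above:

    BDP = rate * RTT
    at 1 Gbit/s:    1 ms  ->  125 KB
                   20 ms  ->  2.5 MB
                  100 ms  -> 12.5 MB

A policer simply discards that in-flight burst when the downstream queue
is short, while a shaper in the BNG absorbs it, which is why shaping is
the safer choice at these rates.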