From: Dave Taht
To: "aqm@ietf.org", bloat
Date: Fri, 14 Mar 2014 07:06:50 -0700
Subject: [Bloat] stochastic hashing in hardware with a limited number of queues

The thread on netdev starting here:

http://comments.gmane.org/gmane.linux.network/307532

was pretty interesting: a research group at SUNY looked hard at the behavior of a 64 hw queue system
running giant flows:

http://www.fsl.cs.sunysb.edu/~mchen/fast14poster-hashcast-portrait.pdf

They ran smack into the birthday problem inherent in a small number of queues. And also a bug (now fixed).

The conclusion of the thread was amusing: the new sch_fq scheduler running on a single hardware queue (together with a string of fixes over the past year for TCP small queues and TSO offloads) performed as well as the multi-queue implementation... with utter fairness.

"On Sun, Mar 9, 2014 at 9:44 AM, Eric Dumazet wrote:
>
> Multiqueue is not a requirement in your case. You can easily reach line
> rate with a single queue on a 10GbE NIC.

I repeated the experiment 10 times using one tx queue with FQ, and all clients get a fair share of the bandwidth. The overall throughput showed no difference between the single-queue case and the mq case, and the throughput in both cases is close to the line rate."

Merely because a feature is available in the hardware does not mean it should be used. Certainly multiple hw queues are a good idea for some traffic mixes, but not for the circumstances of this particular test series.

-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
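The birthday-problem effect the SUNY group hit can be sketched numerically. This is a hypothetical illustration (not code from the thread), assuming flows hash uniformly and independently into the hardware queues:

```python
from math import prod

def p_collision(flows: int, queues: int = 64) -> float:
    """Probability that at least two of `flows` flows hash into the
    same one of `queues` queues, assuming a uniform hash."""
    # Classic birthday-problem product: chance all flows land in
    # distinct queues, then take the complement.
    p_all_distinct = prod((queues - i) / queues for i in range(flows))
    return 1.0 - p_all_distinct

if __name__ == "__main__":
    for n in (2, 10, 20):
        print(f"{n} flows into 64 queues: "
              f"P(collision) = {p_collision(n):.3f}")
```

With only 10 bulk flows spread over 64 queues, the chance of at least one collision is already above 50%, which is why a handful of elephant flows so easily end up sharing a queue and losing their fair share.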
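For reference, the single-queue fq setup Eric describes can be reproduced with something like the following sketch (the device name eth0 is a placeholder; sch_fq shipped in Linux 3.12, and reducing the NIC to one tx queue is typically done with ethtool's channel controls):

```shell
# Optionally collapse the NIC to a single combined tx/rx queue
# (supported on NICs that implement set-channels)
ethtool -L eth0 combined 1

# Install fq as the root qdisc in place of the default mq
tc qdisc replace dev eth0 root fq

# Inspect per-qdisc statistics to confirm fq is active
tc -s qdisc show dev eth0
```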