From: Jonathan Morton
Date: Thu, 2 May 2013 15:04:13 +0300
To: Simon Barber
Cc: codel@lists.bufferbloat.net, cerowrt-devel@lists.bufferbloat.net, bloat@lists.bufferbloat.net
Subject: Re: [Codel] [Bloat] Latest codel, fq_codel, and pie sim study from cablelabs now available

On 2 May, 2013, at 5:20 am, Simon Barber wrote:

> Or one could use more queues in SFQ, so that the chance of 2 streams sharing a queue is small.

CableLabs actually did try that - increasing the number of queues - and found that it made things worse. This, I think, extends to true "fair queueing", with flows identified explicitly rather than stochastically. The reason is that with a very large number of flows, the bandwidth (or the packet throughput) is still shared evenly between them, and there is not enough bandwidth in the VoIP flow's share to allow it to work correctly. With a relatively small number of flow buckets, the responsive flows hashed to the same bucket get out of the way of the unresponsive VoIP flow.

In short, a very large number of flow buckets prioritises BitTorrent over anything latency-sensitive, because BitTorrent uses a very large number of individual flows.
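To make that arithmetic concrete, here is a minimal Python sketch. The link rate, flow count and VoIP bandwidth figure are illustrative assumptions, not CableLabs' measured numbers; the point is only that a per-flow fair share collapses below what a VoIP flow needs once a large swarm is present, whereas a small number of stochastic buckets still leaves the VoIP flow's bucket with a workable share.

```python
# Illustrative arithmetic only; all numbers below are assumptions.

LINK_KBPS = 10_000       # assumed bottleneck rate: 10 Mbit/s
VOIP_NEEDS_KBPS = 100    # assumed VoIP codec rate plus overhead
TORRENT_FLOWS = 500      # a large BitTorrent swarm

# "True" fair queueing: every flow, including each BitTorrent connection,
# gets an equal share of the link.
per_flow_share = LINK_KBPS / (TORRENT_FLOWS + 1)
print(f"per-flow fair share: {per_flow_share:.1f} kbit/s "
      f"({'enough' if per_flow_share >= VOIP_NEEDS_KBPS else 'too little'} for VoIP)")

# SFQ with a small number of buckets: bandwidth is shared per bucket, so the
# VoIP flow's bucket gets 1/BUCKETS of the link, and the responsive TCP flows
# hashed into the same bucket back off and leave that share to the VoIP flow.
BUCKETS = 16
per_bucket_share = LINK_KBPS / BUCKETS
print(f"per-bucket share with {BUCKETS} buckets: {per_bucket_share:.1f} kbit/s "
      f"({'enough' if per_bucket_share >= VOIP_NEEDS_KBPS else 'too little'} for VoIP)")
```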
By contrast, putting all the BitTorrent flows into one bucket (or a depressed-priority queue with its own SFQ buckets), or else elevating the VoIP traffic explicitly to a prioritised queue, would share the bandwidth more favourably for the VoIP flow, allowing it to use as much as it needed. Either, or indeed both simultaneously, would do the job reasonably well, although an elevated priority queue should be bandwidth-limited to a fraction of capacity to avoid the temptation of abuse by bulk flows. Then there would be no performance objection to using a large number of flow buckets.

I can easily see a four-tier system working for most consumers, just so long as the traffic for each tier can be identified - each tier would have its own fq_codel queue:

1) Network control traffic, e.g. DNS, ICMP, even SYNs and pure ACKs - max 1/16th bandwidth, top priority

2) Latency-sensitive unresponsive flows, e.g. VoIP and gaming - max 1/4 bandwidth, high priority

3) Ordinary bulk traffic, e.g. web browsing, email, general-purpose protocols - no bandwidth limit, normal priority

4) Background traffic, e.g. BitTorrent - no bandwidth limit, low priority, voluntarily marked, competes at 1:4 with normal.

Obviously, the classification system implementing that must have some idea of what bandwidth is actually available at any given moment, but it is not necessary to explicitly restrict the top tiers' bandwidth when the link is otherwise idle. Practical algorithms could be found to approximate the correct behaviour on a saturated link, while simply letting all traffic through on an unsaturated link. Basic installations could already do this using HTB and assuming a link bandwidth. (A rough classification sketch follows below.)

Even better, of course, would be some system that allows BitTorrent to yield as though it were a smaller number of flows than it really is. The "swarm" behaviour is very unusual among network protocols. LEDBAT via uTP already does a good job on a per-flow basis, but since it's all under the control of a single application, the necessary information should be available at that point. I am reminded of the way Azureus adjusts global bandwidth limits - both incoming and outgoing - to match reality, based on both periodic and continuous measurements.

 - Jonathan Morton
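As a rough illustration of the four-tier idea above, here is a minimal Python sketch of the classification step. The port numbers, DSCP values and packet-size thresholds are assumptions chosen for illustration only; a real deployment (for instance HTB classes each feeding an fq_codel instance) would use whatever markings and heuristics the operator trusts, and would enforce the per-tier caps only when the link is saturated.

```python
# Toy classifier for the four tiers described above.
# Heuristics (ports, DSCP codepoints, size thresholds) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Packet:
    proto: str            # "tcp", "udp" or "icmp"
    dport: int
    length: int
    tcp_flags: str = ""   # e.g. "S" for a bare SYN
    dscp: int = 0

def classify(pkt: Packet) -> int:
    """Return tier 1 (top priority) .. 4 (background)."""
    # Tier 1: network control - DNS, ICMP, bare SYNs and pure ACKs.
    if pkt.proto == "icmp" or pkt.dport == 53:
        return 1
    if pkt.proto == "tcp" and (pkt.tcp_flags == "S" or pkt.length <= 64):
        return 1
    # Tier 2: latency-sensitive unresponsive flows - VoIP and gaming,
    # approximated here by an EF DSCP marking or small UDP packets.
    if pkt.dscp == 46 or (pkt.proto == "udp" and pkt.length <= 300):
        return 2
    # Tier 4: voluntarily marked background traffic, e.g. DSCP CS1.
    if pkt.dscp == 8:
        return 4
    # Tier 3: everything else is ordinary bulk traffic.
    return 3

# Per-tier limits as fractions of link capacity, applied only on a saturated
# link; tiers 3 and 4 are uncapped, with tier 4 competing at 1:4 against tier 3.
TIER_CAP = {1: 1 / 16, 2: 1 / 4, 3: None, 4: None}

if __name__ == "__main__":
    print(classify(Packet(proto="udp", dport=53, length=80)))               # 1: DNS
    print(classify(Packet(proto="udp", dport=3478, length=172)))            # 2: VoIP-sized UDP
    print(classify(Packet(proto="tcp", dport=443, length=1500)))            # 3: bulk
    print(classify(Packet(proto="tcp", dport=6881, length=1500, dscp=8)))   # 4: marked background
```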