Date: Thu, 21 Mar 2013 07:21:52 +1100
From: grenville armitage
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Solving bufferbloat with TCP using packet delay
Message-ID: <514A1A60.2090006@swin.edu.au>

On 03/21/2013 02:36, Steffan Norberhuis wrote:
> Hello Everyone,
>
> For a project for the Delft Technical University myself and 3
> students are writing a review paper on the buffer bloat problem and
> its possible solutions.

My colleagues have been dabbling with delay-based CC algorithms, with
FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/) if
that's of any interest. Some thoughts:

- When delay-based TCPs share bottlenecks with loss-based TCPs, the
delay-based TCPs are punished. Hard. They back off as queuing delay
builds, while the loss-based flow(s) blissfully continue to push the
queue to full (drop).
Everyone sharing the bottleneck sees latency fluctuations, bounded by
the bottleneck queue's effective 'length' (set by physical RAM limits
or an operator-configurable threshold).

- The previous point suggests perhaps a hybrid TCP which uses
delay-based control, but switches (briefly) to loss-based control if
it detects the bottleneck queue is being hammered by other,
loss-based TCP flows. Challenging questions arise as to what triggers
switching between delay-based and loss-based modes.

- Reducing a buffer's length requires meddling with the bottleneck(s)
(new firmware or new devices). Deploying delay-based TCPs requires
meddling with endpoints (OS upgrade/patch). There are many more of
the latter than the former.

- But yes, in self-contained networks where the end hosts can all be
told to run a delay-based CC algorithm, delay-based CC can mitigate
the impact of bloated buffers in your bottleneck network devices.
Such homogeneous environments do exist, but the Internet is quite
different.

- Alternatively, if one could classify delay-based CC flows into one
queue and loss-based CC flows into another queue at each bottleneck,
the first point above might not be such a problem. I also want a pink
pony ;) (Of course, once we're considering tweaking the bottlenecks
with classifiers and multiple queues, we might as well continue the
surgery and reduce the bloated buffers too.)

cheers,
gja
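P.S. For what it's worth, the hybrid delay/loss idea above might be
sketched roughly like this. All names and thresholds here are invented
purely for illustration; this is not any of the CAIA implementations,
just a toy model of one possible switching trigger:

```python
# Toy model of a hybrid congestion controller: delay-based by default,
# falling back to loss-based behaviour when competing loss-based flows
# keep the queue full despite our back-offs. All constants and the
# trigger heuristic are invented for illustration.

BASE_RTT = 0.040          # assumed propagation delay (s)
DELAY_THRESH = 0.010      # queuing delay that triggers a back-off (s)
SWITCH_AFTER = 3          # consecutive futile back-offs before switching

class HybridCC:
    def __init__(self):
        self.cwnd = 10.0           # congestion window, in packets
        self.mode = "delay"
        self.futile_backoffs = 0

    def on_ack(self, rtt_sample, loss):
        qdelay = rtt_sample - BASE_RTT
        if loss:
            # Loss is honoured in either mode: multiplicative decrease.
            self.cwnd = max(2.0, self.cwnd / 2)
            return
        if self.mode == "delay":
            if qdelay > DELAY_THRESH:
                # Back off gently as queuing delay builds.
                self.cwnd = max(2.0, self.cwnd * 0.9)
                self.futile_backoffs += 1
                if self.futile_backoffs >= SWITCH_AFTER:
                    # Queue stays full despite repeated back-offs:
                    # likely competing loss-based flows, so stop
                    # yielding and compete on their terms.
                    self.mode = "loss"
            else:
                self.cwnd += 1.0 / self.cwnd   # additive increase
                self.futile_backoffs = 0
        else:
            # Loss-based mode: plain AIMD, delay signal ignored.
            self.cwnd += 1.0 / self.cwnd
```

The hard part, as noted above, is the trigger: counting "futile"
back-offs is just one guess, and deciding when (if ever) to switch
back to delay-based mode is equally open.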