From: Simon Barber
Date: Wed, 03 Apr 2013 17:10:22 -0700
To: Jonathan Morton
Cc: bloat
Subject: Re: [Bloat] Solving bufferbloat with TCP using packet delay
Message-ID: <515CC4EE.5050904@superduper.net>

Is your TCP mod available? It would be useful on Android phones, to keep
background downloads (e.g. updates) from disrupting applications such as
Skype.

Simon

On 03/20/2013 06:01 PM, Jonathan Morton wrote:
> One beneficial approach is to focus on the receive side rather than the
> send side. It is possible to implement a delay-based algorithm there,
> where it will coexist naturally with a loss-based system on the send
> side, and also with AQM and FQ at the bottleneck link if present.
>
> I did this to make the behaviour of a 3G modem tolerable; it was
> exhibiting extreme (tens of seconds) delays on the downlink through the
> traffic shaper on the provider side. The algorithm simply combined the
> latency measurement with the current receive window size to calculate
> bandwidth, then chose a new receive window size based on that. It
> worked sufficiently well.
>
> The approach is a logical development of receive window sizing
> algorithms which simply measure how long and fat the network is, and
> size the window to encompass that statistic. In fact I implemented it
> by modifying the basic algorithm in Linux, rather than adding a new
> module.
>
>  - Jonathan Morton
>
> On Mar 21, 2013 1:16 AM, "Stephen Hemminger" wrote:
>
> On Thu, 21 Mar 2013 07:21:52 +1100, grenville armitage wrote:
>
> > On 03/21/2013 02:36, Steffan Norberhuis wrote:
> > > Hello Everyone,
> > >
> > > For a project for the Delft Technical University myself and 3
> > > students are writing a review paper on the buffer bloat problem and
> > > its possible solutions.
> >
> > My colleagues have been dabbling with delay-based CC algorithms,
> > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
> > if that's of any interest.
> >
> > Some thoughts:
> >
> >  - When delay-based TCPs share bottlenecks with loss-based TCPs,
> >    the delay-based TCPs are punished. Hard. They back off as
> >    queuing delay builds, while the loss-based flow(s) blissfully
> >    continue to push the queue to full (drop). Everyone sharing the
> >    bottleneck sees latency fluctuations, bounded by the bottleneck
> >    queue's effective 'length' (set by physical RAM limits or an
> >    operator-configurable threshold).
> >
> >  - The previous point suggests perhaps a hybrid TCP which uses
> >    delay-based control, but switches (briefly) to loss-based control
> >    if it detects the bottleneck queue is being hammered by other,
> >    loss-based TCP flows. Challenging questions arise as to what
> >    triggers switching between delay-based and loss-based modes.
> >
> >  - Reducing a buffer's length requires meddling with the
> >    bottleneck(s) (new firmware or new devices). Deploying
> >    delay-based TCPs requires meddling with endpoints (OS
> >    upgrade/patch). Many more of the latter than the former.
> >
> >  - But yes, in self-contained networks where the end hosts can all
> >    be told to run a delay-based CC algorithm, delay-based CC can
> >    mitigate the impact of bloated buffers in your bottleneck network
> >    devices. Such homogeneous environments do exist, but the Internet
> >    is quite different.
> >
> >  - Alternatively, if one could classify delay-based CC flows into
> >    one queue and loss-based CC flows into another queue at each
> >    bottleneck, the first point above might not be such a problem. I
> >    also want a pink pony ;)  (Of course, once we're considering
> >    tweaking the bottlenecks with classifiers and multiple queues, we
> >    might as well continue the surgery and reduce the bloated buffers
> >    too.)
>
> Everyone has to go through the phase of thinking "it can't be that
> hard, I can invent a better TCP congestion algorithm." But it is hard,
> and the delay-based algorithms are fundamentally flawed because they
> see reverse-path delay and cross traffic as false positives. The
> hybrid ones all fall back to loss under "interesting times", so they
> really don't buy much.
>
> Really not convinced that bufferbloat will be solved by TCP. You can
> make a TCP algorithm that causes worse latency than Cubic or Reno very
> easily. But doing better is hard, especially since TCP really can't
> assume much about its underlying network. There may be random delays
> and packet loss (wireless), there may be spikes in RTT, and sessions
> may be long- or short-lived. And you can't assume the whole world is
> running your algorithm.
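
For anyone who wants to experiment before a patch surfaces, here is a rough
user-space sketch of the receive-side idea Jonathan describes above. It is
not his actual Linux change; the target-delay policy, the constants, and the
names are my own guesses at one way to turn "combine the latency measurement
with the current receive window to estimate bandwidth, then pick a new
window" into code:

# Rough sketch of a receive-side, delay-based window policy along the
# lines described above.  Not the actual kernel modification; the 100 ms
# queueing allowance and the clamp values are illustrative assumptions.

def next_rcv_window(rwnd_bytes, rtt_s, base_rtt_s,
                    target_queue_delay_s=0.100,
                    min_rwnd=4 * 1460, max_rwnd=4 * 1024 * 1024):
    """Combine the latest RTT sample with the current receive window to
    estimate delivery rate, then size the next window so that only a
    bounded amount of queueing delay can build at the bottleneck."""
    # Throughput estimate: roughly one receive window delivered per RTT.
    bandwidth = rwnd_bytes / rtt_s                    # bytes per second
    # Window to cover the base path (BDP) plus a small queueing allowance.
    new_rwnd = bandwidth * (base_rtt_s + target_queue_delay_s)
    return int(min(max(new_rwnd, min_rwnd), max_rwnd))

if __name__ == "__main__":
    rwnd, base_rtt = 256 * 1024, 0.060    # 256 KiB window, 60 ms base RTT
    # Pretend the bottleneck queue inflates each successive RTT sample.
    for rtt in (0.060, 0.120, 0.400, 1.500):
        rwnd = next_rcv_window(rwnd, rtt, base_rtt)
        print("rtt %5.0f ms -> advertise rwnd %5d KiB" % (rtt * 1e3, rwnd // 1024))

The point is that the advertised window shrinks as the measured RTT inflates,
so the receiver stops asking for more data than the path can carry with a
bounded queue. The sender's loss-based congestion control keeps running but
is capped by that window, which is why the two coexist as Jonathan notes.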