From: Jonathan Morton
To: Stephen Hemminger
Cc: bloat
Date: Thu, 21 Mar 2013 03:01:22 +0200
Subject: Re: [Bloat] Solving bufferbloat with TCP using packet delay
One beneficial approach is to focus on the receive side rather than the
send side. It is possible to implement a delay-based algorithm there,
where it will coexist naturally with a loss-based system on the send
side, and also with AQM and FQ at the bottleneck link, if present.

I did this to make the behaviour of a 3G modem tolerable; it was
exhibiting extreme (tens of seconds) delays on the downlink through the
traffic shaper on the provider side. The algorithm simply combined the
latency measurement with the current receive window size to calculate
bandwidth, then chose a new receive window size based on that. It
worked sufficiently well.

The approach is a logical development of receive window sizing
algorithms which simply measure how long and fat the network is, and
size the window to encompass that statistic. In fact I implemented it
by modifying the basic algorithm in Linux, rather than adding a new
module.

 - Jonathan Morton

On Mar 21, 2013 1:16 AM, "Stephen Hemminger" wrote:
> On Thu, 21 Mar 2013 07:21:52 +1100
> grenville armitage wrote:
>
> > On 03/21/2013 02:36, Steffan Norberhuis wrote:
> > > Hello Everyone,
> > >
> > > For a project for the Delft University of Technology, three
> > > students and I are writing a review paper on the bufferbloat
> > > problem and its possible solutions.
> >
> > My colleagues have been dabbling with delay-based CC algorithms,
> > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
> > if that's of any interest.
> >
> > Some thoughts:
> >
> >   - When delay-based TCPs share bottlenecks with loss-based TCPs,
> >     the delay-based TCPs are punished. Hard. They back off as
> >     queuing delay builds, while the loss-based flow(s) blissfully
> >     continue to push the queue to full (drop). Everyone sharing
> >     the bottleneck sees latency fluctuations, bounded by the
> >     bottleneck queue's effective 'length' (set by physical RAM
> >     limits or an operator-configurable threshold).
> >
> >   - The previous point suggests perhaps a hybrid TCP which uses
> >     delay-based control, but switches (briefly) to loss-based
> >     control if it detects that the bottleneck queue is being
> >     hammered by other, loss-based TCP flows. Challenging questions
> >     arise as to what triggers switching between delay-based and
> >     loss-based modes.
> >
> >   - Reducing a buffer's length requires meddling with the
> >     bottleneck(s) (new firmware or new devices). Deploying
> >     delay-based TCPs requires meddling with endpoints (OS
> >     upgrade/patch). There are many more of the latter than the
> >     former.
> >
> >   - But yes, in self-contained networks where the end hosts can
> >     all be told to run a delay-based CC algorithm, delay-based CC
> >     can mitigate the impact of bloated buffers in your bottleneck
> >     network devices. Such homogeneous environments do exist, but
> >     the Internet is quite different.
> >
> >   - Alternatively, if one could classify delay-based CC flows into
> >     one queue and loss-based CC flows into another queue at each
> >     bottleneck, the first point above might not be such a problem.
> >     I also want a pink pony ;)  (Of course, once we're considering
> >     tweaking the bottlenecks with classifiers and multiple queues,
> >     we might as well continue the surgery and reduce the bloated
> >     buffers too.)
>
> Everyone has to go through the phase of thinking
>   "it can't be that hard, I can invent a better TCP congestion algorithm"
> But it is hard, and the delay-based algorithms are fundamentally
> flawed because they see reverse-path delay and cross traffic as
> false positives. The hybrid ones all fall back to loss under
> "interesting times", so they really don't buy much.
>
> Really not convinced that bufferbloat will be solved by TCP.
> You can make a TCP algorithm that causes worse latency than Cubic or
> Reno very easily. But doing better is hard, especially since TCP
> really can't assume much about its underlying network. There may be
> random delays and packet loss (wireless), there may be spikes in
> RTT, and sessions may be long- or short-lived. And you can't assume
> the whole world is running your algorithm.
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
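[Editor's note: the receive-side idea described at the top of this message, combining a latency measurement with the current receive window to estimate bandwidth, then resizing the window, can be sketched roughly as below. This is a hypothetical illustration, not the actual Linux patch; the class, method, and parameter names are invented, and the 25% headroom constant is an assumption.]

```python
class DelayBasedReceiveWindow:
    """Sketch of a delay-based receive window sizer.

    Tracks the lowest RTT seen (an estimate of propagation delay),
    infers delivered bandwidth from the current window and the latest
    RTT sample, and sizes the next window to the bandwidth-delay
    product at the base RTT plus a little headroom.
    """

    def __init__(self, mss=1448, min_window=4, max_window=1024):
        self.mss = mss                  # bytes per segment
        self.min_window = min_window    # segments
        self.max_window = max_window    # segments
        self.base_rtt = float("inf")    # lowest RTT seen so far, seconds
        self.window = min_window        # current advertised window, segments

    def on_rtt_sample(self, rtt):
        """Update the advertised window from one RTT sample (seconds)."""
        self.base_rtt = min(self.base_rtt, rtt)
        # The sender can have at most one receive window in flight per
        # RTT, so the achieved bandwidth is bounded by window / rtt.
        bandwidth = (self.window * self.mss) / rtt          # bytes/sec
        # Target the bandwidth-delay product at the *base* RTT, plus
        # 25% headroom so the sender can still probe for more bandwidth.
        bdp_segments = (bandwidth * self.base_rtt) / self.mss
        target = int(bdp_segments * 1.25)
        self.window = max(self.min_window, min(self.max_window, target))
        return self.window
```

Note the intended behaviour: while the measured RTT stays near the base RTT the headroom lets the window grow, but once queuing delay inflates the RTT, the implied bandwidth (window / rtt) drops and the window shrinks, draining the bottleneck queue rather than filling it.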