From: Maarten de Vries
Date: Tue, 26 Mar 2013 14:10:15 +0100
To: bloat
Subject: Re: [Bloat] Solving bufferbloat with TCP using packet delay

I won't deny that there are problems with delay-based congestion control, but at least some of the same problems also apply to AQM.

In the presence of greedy UDP flows, AQM strongly favours those flows. Sure, the more packets a flow sends, the more of its packets will be dropped. But only TCP will lower its congestion window; the greedy UDP flow will happily continue filling the buffer. The effect is that the UDP flow ends up using most of the available bandwidth. Yes, the queue will be shorter, so the TCP flow will see lower delay, but it will also get much lower throughput.

Of course, the same applies to delay-based congestion control: the greedy flow will still use most of the bandwidth, and in addition the delay will be higher. The point remains that neither AQM nor delay-based congestion control can provide a fair outcome when greedy or misbehaving flows are present. Unless, of course, there are multiple queues for the different types of flows, but yeah, where's that pink pony? Let's make it a unicorn while we're at it.

And yes, there are many more endpoints than switches/routers. But I imagine the endpoints are also much more homogeneous. I could be wrong about this, since I don't know too much about the network equipment used by ISPs. Either way, it seems to me that most endpoints run either Windows, Linux (or Android), a BSD variant or something made by Apple.
And I would imagine most embedded systems (the remaining endpoints that don't run a consumer OS) aren't connected directly to the internet and can't wreak much havoc. Consumer operating systems are regularly updated as it is, so sneaking in a new TCP variant should be *relatively* easy. Again, I might be wrong, but these are just my thoughts.

In short, I'm not saying there are no problems; I'm saying it might be too easy to dismiss the idea as ineffective prematurely.

Kind regards,
Maarten de Vries

> On Thu, 21 Mar 2013 07:21:52 +1100
>> grenville armitage <garmitage@swin.edu.au> wrote:
>>
>> > On 03/21/2013 02:36, Steffan Norberhuis wrote:
>> > > Hello Everyone,
>> > >
>> > > For a project for the Delft Technical University myself and 3
>> > > students are writing a review paper on the buffer bloat problem and
>> > > its possible solutions.
>> >
>> > My colleagues have been dabbling with delay-based CC algorithms,
>> > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
>> > if that's of any interest.
>> >
>> > Some thoughts:
>> >
>> >   - When delay-based TCPs share bottlenecks with loss-based TCPs,
>> >       the delay-based TCPs are punished. Hard. They back off
>> >       as queuing delay builds, while the loss-based flow(s)
>> >       blissfully continue to push the queue to full (drop).
>> >       Everyone sharing the bottleneck sees latency fluctuations,
>> >       bounded by the bottleneck queue's effective 'length' (set
>> >       by physical RAM limits or an operator-configurable threshold).
>> >
>> >   - The previous point suggests perhaps a hybrid TCP which uses
>> >       delay-based control, but switches (briefly) to loss-based
>> >       control if it detects the bottleneck queue is being
>> >       hammered by other, loss-based TCP flows. Challenging
>> >       questions arise as to what triggers switching between
>> >       delay-based and loss-based modes.
>> >
>> >   - Reducing a buffer's length requires meddling with the
>> >       bottleneck(s) (new firmware or new devices).
>> >       Deploying
>> >       delay-based TCPs requires meddling with endpoints (OS
>> >       upgrade/patch). Many more of the latter than the former.
>> >
>> >   - But yes, in self-contained networks where the end hosts can all
>> >       be told to run a delay-based CC algorithm, delay-based CC
>> >       can mitigate the impact of bloated buffers in your bottleneck
>> >       network devices. Such homogeneous environments do exist, but
>> >       the Internet is quite different.
>> >
>> >   - Alternatively, if one could classify delay-based CC flows into one
>> >       queue and loss-based CC flows into another queue at each
>> >       bottleneck, the first point above might not be such a problem.
>> >       I also want a pink pony ;)  (Of course, once we're considering
>> >       tweaking the bottlenecks with classifiers and multiple queues, we
>> >       might as well continue the surgery and reduce the bloated buffers too.)
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloa
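To make my greedy-UDP point above concrete, here is a toy discrete-time simulation I sketched. It is deliberately crude: the "AQM" is just a linear drop-probability queue (not RED or CoDel), and the two senders are a constant-rate blaster and a bare-bones AIMD window, so all the constants are my own invention. Still, it shows the unresponsive flow crowding out the responsive one:

```python
import random

random.seed(1)

CAPACITY = 10       # packets the bottleneck can serve per tick (assumed)
QUEUE_LIMIT = 50    # queue capacity in packets (assumed)
TICKS = 10_000

queue = []          # FIFO of flow names
cwnd = 1.0          # window of the TCP-like (AIMD) sender
delivered = {"udp": 0, "tcp": 0}

def aqm_accept():
    """Linear-drop AQM: drop probability grows with queue occupancy."""
    if len(queue) >= QUEUE_LIMIT:
        return False
    return random.random() >= len(queue) / QUEUE_LIMIT

for _ in range(TICKS):
    # Greedy UDP offers a full bottleneck's worth of packets every tick,
    # no matter how many get dropped.
    for _ in range(CAPACITY):
        if aqm_accept():
            queue.append("udp")

    # The AIMD sender reacts to loss: additive increase, halve on drop.
    tcp_loss = False
    for _ in range(int(cwnd)):
        if aqm_accept():
            queue.append("tcp")
        else:
            tcp_loss = True
    cwnd = max(1.0, cwnd / 2) if tcp_loss else cwnd + 0.1

    # The bottleneck drains up to CAPACITY packets per tick.
    for _ in range(min(CAPACITY, len(queue))):
        delivered[queue.pop(0)] += 1

total = delivered["udp"] + delivered["tcp"]
print("UDP share: %.0f%%  TCP share: %.0f%%"
      % (100 * delivered["udp"] / total, 100 * delivered["tcp"] / total))
```

The AQM drops both flows' packets in proportion to their sending rates, yet only the AIMD sender backs off, so the UDP flow ends up with the lion's share of the delivered packets, which is exactly the unfairness I'm describing.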
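Grenville's first bullet (delay-based TCPs punished hard by loss-based ones) can be sketched the same way. Again a toy model, not a faithful Vegas/Reno implementation: the "delay-based" flow here simply halves its window whenever queue length (standing in for queuing delay) exceeds a small threshold, while the loss-based flow only reacts to drops; the thresholds and constants are assumptions for illustration:

```python
CAPACITY = 10          # packets served per tick (assumed)
QUEUE_LIMIT = 100      # drop-tail queue limit (assumed)
DELAY_THRESHOLD = 10   # queue length at which the delay-based flow backs off

TICKS = 10_000
queue = []
wnd = {"loss": 1.0, "delay": 1.0}
delivered = {"loss": 0, "delay": 0}

for _ in range(TICKS):
    # Both flows offer wnd packets into a drop-tail queue.
    loss_seen = {"loss": False, "delay": False}
    for flow in ("loss", "delay"):
        for _ in range(int(wnd[flow])):
            if len(queue) < QUEUE_LIMIT:
                queue.append(flow)
            else:
                loss_seen[flow] = True

    # Loss-based flow: AIMD, reacts to packet loss only.
    wnd["loss"] = max(1.0, wnd["loss"] / 2) if loss_seen["loss"] \
        else wnd["loss"] + 0.5

    # Delay-based flow: backs off as soon as queuing delay builds.
    if len(queue) > DELAY_THRESHOLD or loss_seen["delay"]:
        wnd["delay"] = max(1.0, wnd["delay"] / 2)
    else:
        wnd["delay"] += 0.5

    # Bottleneck drains up to CAPACITY packets per tick.
    for _ in range(min(CAPACITY, len(queue))):
        delivered[queue.pop(0)] += 1

total = delivered["loss"] + delivered["delay"]
print("loss-based share: %.0f%%  delay-based share: %.0f%%"
      % (100 * delivered["loss"] / total, 100 * delivered["delay"] / total))
```

The loss-based flow keeps the queue standing well above the delay threshold, so the delay-based flow is pinned at its minimum window and gets only a sliver of the bandwidth, while everyone pays the queuing delay the loss-based flow creates.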
