* [Bloat] Solving bufferbloat with TCP using packet delay
@ 2013-03-20 15:36 Steffan Norberhuis
2013-03-20 15:55 ` Dave Taht
` (4 more replies)
0 siblings, 5 replies; 15+ messages in thread
From: Steffan Norberhuis @ 2013-03-20 15:36 UTC (permalink / raw)
To: bloat
Hello Everyone,
For a project at Delft University of Technology, three fellow students and I are
writing a review paper on the bufferbloat problem and its possible
solutions. I have also subscribed to this mailing list and see that a lot of
the proposed solutions involve AQM.
But there is hardly any talk of solving bufferbloat with a TCP variant that
uses packet delay to determine the send rate. We did not come across any
papers arguing that these TCP variants are not a good solution, and when we
asked several professors whether delay-based TCP would be a good solution,
we did not get a conclusive answer.
In our view, AQM requires a lot of new hardware to deploy, whereas a TCP
variant would perhaps be easier to roll out and might also be able to solve
bufferbloat.
So I have a few questions I would like to ask you:
- Is TCP using packet delay considered part of the solution for
bufferbloat?
- What are the problems of the TCP delay variants that keep them from solving
bufferbloat?
- What are the drawbacks of the TCP delay variants that would favor AQM
over TCP?
- What are the advantages of the TCP delay variants that would favor TCP
over AQM?
Best regards,
Steffan Norberhuis
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 15:36 [Bloat] Solving bufferbloat with TCP using packet delay Steffan Norberhuis
@ 2013-03-20 15:55 ` Dave Taht
2013-03-20 16:12 ` Oliver Hohlfeld
` (3 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: Dave Taht @ 2013-03-20 15:55 UTC (permalink / raw)
To: Steffan Norberhuis; +Cc: bloat
On Wed, Mar 20, 2013 at 11:36 AM, Steffan Norberhuis
<steffan@norberhuis.nl> wrote:
> Hello Everyone,
>
> For a project for the Delft Technical University myself and 3 students are
> writing a review paper on the buffer bloat problem and its possible
> solutions. I also subscribed to this mailinglist and see alot of proposed
> solutions to be AQM.
>
> But hardly any talk about solving buffer bloat by using a TCP variant that
> that uses packet delay as a way to determine the send rate. We did not come
> across any papers that argue that these TCP variants are not a good
> solution. We went to several professors with the question if TCP using
> packet delay was not a good solution. But we did not get a concise answer.
> In our view AQM needs alot of new hardware to be implemented and a TCP
> variant would perhaps be easier to implement and is also able to solve
> bufferbloat.
LPCC (low-priority congestion control) has been pursued for a decade. The
new AQM stuff outperforms it in nearly every respect.
I'd put together what I'd hoped to be a foundational paper on it, in
conjunction with some LEDBAT researchers. I couldn't get it published
in full; the full version is here:
www.telecom-paristech.fr/~drossi/paper/rossi13tma-b.pdf
I have not had a chance to work on the successor paper, running the
same sorts of tests using sfqred or fq_codel. I have tons of data,
just no time to process it and structure the experiments repeatably.
The issue has also been discussed on this list and others, for ages.
>
> So I have a few questions I would like to ask you:
> - Is TCP using packet delay considered as part of the solution for
> bufferbloat?
I think paced TCP shows more promise than any of the delay-based TCPs,
at least at present, on the edge, for things like DASH traffic. There
are other possibilities being discussed in the IETF RMCAT group.
> - What are the problems of TCP delay variants that keep it from solving
> bufferbloat?
>
> - What are the drawbacks of the TCP delay variants that would favor AQM over
> TCP?
Nothing that we know of competes effectively with TCP based on packet
loss/ECN marking. A few come fairly close.
> - What are the advantages of TCP delay varaints that would favor TCP over
> AQM?
You know, at this point my answer is: install Linux, get the RRUL
test, enable every known variant of TCP, and try them with and without
the AQMs that have been developed and are now available in Linux, with
whatever variety of traffic you like, and get back to us with the
results. It'll only take a day or so to try it out. See the ICCRG
slides for how to get started, or the Stanford and MIT presentations.
http://netseminar.stanford.edu
http://www.ietf.org/proceedings/86/slides/slides-86-iccrg-0.pdf
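To make that concrete, a minimal version of the experiment could look like the
sketch below. This is my illustration, not Dave's exact recipe: the interface
name (eth0), the netperf server host, and the netperf-wrapper invocation are
assumptions, and the commands need root.

```shell
# List the congestion-control variants this kernel exposes:
sysctl net.ipv4.tcp_available_congestion_control

# Load and select a delay-based variant, e.g. Vegas:
sudo modprobe tcp_vegas
sudo sysctl -w net.ipv4.tcp_congestion_control=vegas

# Run RRUL against a netperf server with an AQM on the bottleneck...
sudo tc qdisc replace dev eth0 root fq_codel
netperf-wrapper -H netperf.example.com -l 60 rrul

# ...and again with the default FIFO for comparison:
sudo tc qdisc replace dev eth0 root pfifo_fast
netperf-wrapper -H netperf.example.com -l 60 rrul
```

Repeating this loop over each entry in tcp_available_congestion_control gives
the comparison Dave asks for.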
> Best regards,
>
> Steffan Norberhuis
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 15:36 [Bloat] Solving bufferbloat with TCP using packet delay Steffan Norberhuis
2013-03-20 15:55 ` Dave Taht
@ 2013-03-20 16:12 ` Oliver Hohlfeld
2013-03-20 16:35 ` Michael Richardson
` (2 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: Oliver Hohlfeld @ 2013-03-20 16:12 UTC (permalink / raw)
To: bloat
Steffan,
> I also subscribed to this mailinglist and see alot of proposed
> solutions to be AQM.
Well, AQM is one possible answer. While AQM has been discussed for years, it
never found widespread deployment; proper RED configuration was close to
black magic. Based on preliminary results, CoDel seems a promising
candidate to bring AQM back into the game (at least for edge routers).
However, the most basic fix is reducing the buffer size in excessive
buffer configurations (bufferbloat is mostly a problem at the edge;
buffers in the core are often reasonably sized). When buffers cannot be
changed, AQM provides a good fix.
Another aspect is managing congestion in the network (e.g., by QoS
mechanisms). We recently investigated the problem from the perspective
of end-users (see [1]) and found congestion to be a key aspect that
reduces user satisfaction.
[1] BufferBloat: How Relevant? A QoE Perspective on Buffer Sizing.
http://downloads.ohohlfeld.com/paper/bufferbloat-qoe-tr.pdf
> But hardly any talk about solving buffer bloat by using a TCP variant
> that that uses packet delay as a way to determine the send rate. We did
> not come across any papers that argue that these TCP variants are not a
> good solution.
I do have a few concerns:
- There is no flag day that would make users switch to new TCP variants.
There are plenty of TCP flavors in use already. How would you ensure
deployment of a delay-sensitive variant?
- While TCP traffic is dominant in the Internet's traffic mix, many
real-time (or delay sensitive) applications use UDP. In these cases,
your fix would not apply.
> In our view AQM needs alot of new hardware to be implemented and
> a TCP variant would perhaps be easier to implement and is also able to
> solve bufferbloat.
Think also about the widespread deployment of a new TCP variant. How
easy is it to get a TCP update deployed on all the different software
stacks that are out there? To me, it appears practically infeasible.
> So I have a few questions I would like to ask you:
> - Is TCP using packet delay considered as part of the solution for
> bufferbloat?
>
> - What are the problems of TCP delay variants that keep it from solving
> bufferbloat?
>
> - What are the drawbacks of the TCP delay variants that would favor AQM
> over TCP?
>
> - What are the advantages of TCP delay varaints that would favor TCP
> over AQM?
Sounds like a nice research problem. I'd love to see an extensive
evaluation on this.
Oliver
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 15:36 [Bloat] Solving bufferbloat with TCP using packet delay Steffan Norberhuis
2013-03-20 15:55 ` Dave Taht
2013-03-20 16:12 ` Oliver Hohlfeld
@ 2013-03-20 16:35 ` Michael Richardson
2013-03-20 20:21 ` grenville armitage
2013-04-03 18:14 ` Juliusz Chroboczek
4 siblings, 0 replies; 15+ messages in thread
From: Michael Richardson @ 2013-03-20 16:35 UTC (permalink / raw)
To: Steffan Norberhuis; +Cc: bloat
>>>>> "Steffan" == Steffan Norberhuis <steffan@norberhuis.nl> writes:
Steffan> But hardly any talk about solving buffer bloat by using a
Steffan> TCP variant that that uses packet delay as a way to
Steffan> determine the send rate. We did not come across any papers
Steffan> that argue that these TCP variants are not a good
Steffan> solution. We went to several professors with the question
Steffan> if TCP using packet delay was not a good solution. But we
Steffan> did not get a concise answer. In our view AQM needs alot
Steffan> of new hardware to be implemented and a TCP variant would
Steffan> perhaps be easier to implement and is also able to solve
Steffan> bufferbloat.
I'm not going to argue the protocol at all (because I don't know the
details of what you propose), but rather the economics.
1) Deploying a new TCP is a multi-decade effort.
Sure, a new Linux kernel with a new default might have significant (80%)
data centre presence in 12 months or so, and a really smart change to
the protocol might work even when only one end is upgraded.
However, significant numbers of systems out there will essentially
never get a firmware patch, and will continue to operate until they
wear out. I'm thinking not just of desktop computers in public
libraries, but of media devices like the Nintendo Wii (mine runs Netflix).
Most of the owners of these devices have little knowledge and little
ability to fix them, or even to know that the device needs to be fixed.
As long as there are even a few bad senders, and there are excessive
buffers out there, the bad senders will fill the buffers and ruin it
for the rest of us. Many excessive buffers are unseen at layer 3.
2) Fixing routers is not as difficult as you think.
First, most equipment gets replaced and/or redeployed (perhaps via an
eBay event) every three years or so.
Second, most equipment is field upgradable, and even a lot of the 10G
forwarding engines have upgradeable firmware to implement things like
queues.
Third, and most importantly, in markets which are not distorted by
idiotic regulation, ISPs get paid based upon their performance.
Bufferbloat, once people can measure it, will impact the bottom
line of an ISP, particularly those that are reaching into the business
data/VoIP pie.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 15:36 [Bloat] Solving bufferbloat with TCP using packet delay Steffan Norberhuis
` (2 preceding siblings ...)
2013-03-20 16:35 ` Michael Richardson
@ 2013-03-20 20:21 ` grenville armitage
2013-03-20 23:16 ` Stephen Hemminger
2013-04-03 18:14 ` Juliusz Chroboczek
4 siblings, 1 reply; 15+ messages in thread
From: grenville armitage @ 2013-03-20 20:21 UTC (permalink / raw)
To: bloat
On 03/21/2013 02:36, Steffan Norberhuis wrote:
> Hello Everyone,
>
> For a project for the Delft Technical University myself and 3
> students are writing a review paper on the buffer bloat problem and
> its possible solutions.
My colleagues have been dabbling with delay-based CC algorithms,
with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
if that's of any interest.
Some thoughts:
- When delay-based TCPs share bottlenecks with loss-based TCPs,
the delay-based TCPs are punished. Hard. They back off
as queuing delay builds, while the loss-based flow(s)
blissfully continue to push the queue to full (drop).
Everyone sharing the bottleneck sees latency fluctuations,
bounded by the bottleneck queue's effective 'length' (set
by physical RAM limits or operator-configurable threshold).
- The previous point suggests perhaps a hybrid TCP which uses
delay-based control, but switches (briefly) to loss-based
control if it detects the bottleneck queue is being
hammered by other, loss-based TCP flows. Challenging
questions arise as to what triggers switching between
delay-based and loss-based modes.
- Reducing a buffer's length requires meddling with the
bottleneck(s) (new firmware or new devices). Deploying
delay-based TCPs requires meddling with endpoints (OS
upgrade/patch). Many more of the latter than the former.
- But yes, in self-contained networks where the end hosts can all
be told to run a delay-based CC algorithm, delay-based CC
can mitigate the impact of bloated buffers in your bottleneck
network devices. Such homogeneous environments do exist, but
the Internet is quite different.
- Alternatively, if one could classify delay-based CC flows into one
queue and loss-based CC flows into another queue at each
bottleneck, the first point above might not be such a problem.
I also want a pink pony ;) (Of course, once we're considering
tweaking the bottlenecks with classifiers and multiple queues, we
might as well continue the surgery and reduce the bloated buffers too.)
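Grenville's first point can be seen in a toy fluid model. The following is
purely my own illustration (nothing from the thread); the capacity, buffer
depth, and delay threshold are arbitrary assumptions chosen only to make the
dynamic visible.

```python
# One loss-based (AIMD) flow and one delay-based flow share a FIFO
# bottleneck. All units are packets per RTT; the numbers are arbitrary.

CAPACITY = 100          # packets the link drains per RTT
BUFFER = 200            # FIFO depth before tail drop
DELAY_THRESHOLD = 0.1   # queue/capacity ratio at which the
                        # delay-based flow backs off

def simulate(rtts=500, tail=100):
    """Return the two flows' average cwnd over the last `tail` RTTs."""
    cwnd_loss, cwnd_delay, queue = 10.0, 10.0, 0.0
    loss_sum = delay_sum = 0.0
    for t in range(rtts):
        queue = max(0.0, queue + cwnd_loss + cwnd_delay - CAPACITY)
        if queue > BUFFER:                   # tail drop: only the
            queue = BUFFER                   # loss-based flow halves
            cwnd_loss = max(1.0, cwnd_loss / 2)
        else:
            cwnd_loss += 1.0                 # additive increase
        if queue / CAPACITY > DELAY_THRESHOLD:
            cwnd_delay = max(1.0, cwnd_delay - 1.0)  # sees delay, yields
        else:
            cwnd_delay += 1.0
        if t >= rtts - tail:
            loss_sum += cwnd_loss
            delay_sum += cwnd_delay
    return loss_sum / tail, delay_sum / tail

loss_avg, delay_avg = simulate()
print(f"loss-based avg cwnd: {loss_avg:.1f}, "
      f"delay-based avg cwnd: {delay_avg:.1f}")
```

The delay-based flow ends up with a small fraction of the capacity: exactly
the "punished, hard" outcome described above.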
cheers,
gja
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 20:21 ` grenville armitage
@ 2013-03-20 23:16 ` Stephen Hemminger
2013-03-21 1:01 ` Jonathan Morton
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: Stephen Hemminger @ 2013-03-20 23:16 UTC (permalink / raw)
To: grenville armitage; +Cc: bloat
On Thu, 21 Mar 2013 07:21:52 +1100
grenville armitage <garmitage@swin.edu.au> wrote:
>
>
> On 03/21/2013 02:36, Steffan Norberhuis wrote:
> > Hello Everyone,
> >
> > For a project for the Delft Technical University myself and 3
> > students are writing a review paper on the buffer bloat problem and
> > its possible solutions.
>
> My colleagues have been dabbling with delay-based CC algorithms,
> with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
> if that's of any interest.
>
> Some thoughts:
>
> - When delay-based TCPs share bottlenecks with loss-based TCPs,
> the delay-based TCPs are punished. Hard. They back-off,
> as queuing delay builds, while the loss-based flow(s)
> blissfully continue to push the queue to full (drop).
> Everyone sharing the bottleneck sees latency fluctuations,
> bounded by the bottleneck queue's effective 'length' (set
> by physical RAM limits or operator-configurable threshold).
>
> - The previous point suggests perhaps a hybrid TCP which uses
> delay-based control, but switches (briefly) to loss-based
> control if it detects the bottleneck queue is being
> hammered by other, loss-based TCP flows. Challenging
> questions arise as to what triggers switching between
> delay-based and loss-based modes.
>
> - Reducing a buffer's length requires meddling with the
> bottleneck(s) (new firmware or new devices). Deploying
> delay-based TCPs requires meddling with endpoints (OS
> upgrade/patch). Many more of the latter than the former.
>
> - But yes, in self-contained networks where the end hosts can all
> be told to run a delay-based CC algorithm, delay-based CC
> can mitigate the impact of bloated buffers in your bottleneck
> network devices. Such homogeneous environments do exist, but
> the Internet is quite different.
>
> - Alternatively, if one could classify delay-based CC flows into one
> queue and loss-based CC flows into another queue at each
> bottleneck, the first point above might not be such a problem.
> I also want a pink pony ;) (Of course, once we're considering
> tweak the bottlenecks with classifiers and multiple queues,
> might as continue the surgery and reduce the bloated buffers too.)
Everyone has to go through the phase of thinking
"it can't be that hard, I can invent a better TCP congestion algorithm"
But it is hard, and the delay-based algorithms are fundamentally
flawed because they see reverse-path delay and cross traffic as false
positives. The hybrid ones all fall back to loss under "interesting
times", so they really don't buy much.
I'm really not convinced that bufferbloat will be solved by TCP.
You can make a TCP algorithm that causes worse latency than CUBIC or Reno
very easily. But doing better is hard, especially since TCP really
can't assume much about its underlying network. There may be random
delays and packet loss (wireless), there may be spikes in RTT, and
sessions may be long- or short-lived. And you can't assume the whole
world is running your algorithm.
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 23:16 ` Stephen Hemminger
@ 2013-03-21 1:01 ` Jonathan Morton
2013-03-26 13:10 ` Maarten de Vries
2013-04-04 0:10 ` Simon Barber
2013-03-21 8:26 ` Hagen Paul Pfeifer
2013-04-03 18:16 ` Juliusz Chroboczek
2 siblings, 2 replies; 15+ messages in thread
From: Jonathan Morton @ 2013-03-21 1:01 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: bloat
One beneficial approach is to focus on the receive side rather than the
send side. It is possible to implement a delay based algorithm there, where
it will coexist naturally with a loss based system on the send side, and
also with AQM and FQ at the bottleneck link if present.
I did this to make the behaviour of a 3G modem tolerable, which was
exhibiting extreme (tens of seconds) delays on the downlink through the
traffic shaper on the provider side. The algorithm simply combined the
latency measurement with the current receive window size to calculate
bandwidth, then chose a new receive window size based on that. It worked
sufficiently well.
The approach is a logical development of receive window sizing algorithms
which simply measure how long and fat the network is, and size the window
to encompass that statistic. In fact I implemented it by modifying the
basic algorithm in Linux, rather than adding a new module.
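As I read the description, the core calculation could be sketched as below;
the function name, the headroom factor, and the details are my guesses at the
idea, not Jonathan's actual kernel patch.

```python
def next_rcv_window(current_window_bytes, measured_rtt_s, base_rtt_s,
                    headroom=1.2):
    """Estimate path bandwidth from the current window and the measured
    RTT, then size the next receive window to keep the pipe full at the
    uncongested (base) RTT, plus a little headroom."""
    bandwidth = current_window_bytes / measured_rtt_s   # bytes/second
    return int(bandwidth * base_rtt_s * headroom)

# A bloated 3G downlink: a 64 KiB window currently sees a 2 s RTT,
# while the unloaded RTT is 150 ms. The advertised window shrinks
# drastically, which drains the provider-side queue.
print(next_rcv_window(65536, 2.0, 0.15))
```

Because this runs on the receiver, it throttles a loss-based sender without
any change at the sending end, which is why it coexists with AQM and FQ.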
- Jonathan Morton
On Mar 21, 2013 1:16 AM, "Stephen Hemminger" <stephen@networkplumber.org>
wrote:
> On Thu, 21 Mar 2013 07:21:52 +1100
> grenville armitage <garmitage@swin.edu.au> wrote:
>
> >
> >
> > On 03/21/2013 02:36, Steffan Norberhuis wrote:
> > > Hello Everyone,
> > >
> > > For a project for the Delft Technical University myself and 3
> > > students are writing a review paper on the buffer bloat problem and
> > > its possible solutions.
> >
> > My colleagues have been dabbling with delay-based CC algorithms,
> > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
> > if that's of any interest.
> >
> > Some thoughts:
> >
> > - When delay-based TCPs share bottlenecks with loss-based TCPs,
> > the delay-based TCPs are punished. Hard. They back-off,
> > as queuing delay builds, while the loss-based flow(s)
> > blissfully continue to push the queue to full (drop).
> > Everyone sharing the bottleneck sees latency fluctuations,
> > bounded by the bottleneck queue's effective 'length' (set
> > by physical RAM limits or operator-configurable threshold).
> >
> > - The previous point suggests perhaps a hybrid TCP which uses
> > delay-based control, but switches (briefly) to loss-based
> > control if it detects the bottleneck queue is being
> > hammered by other, loss-based TCP flows. Challenging
> > questions arise as to what triggers switching between
> > delay-based and loss-based modes.
> >
> > - Reducing a buffer's length requires meddling with the
> > bottleneck(s) (new firmware or new devices). Deploying
> > delay-based TCPs requires meddling with endpoints (OS
> > upgrade/patch). Many more of the latter than the former.
> >
> > - But yes, in self-contained networks where the end hosts can all
> > be told to run a delay-based CC algorithm, delay-based CC
> > can mitigate the impact of bloated buffers in your bottleneck
> > network devices. Such homogeneous environments do exist, but
> > the Internet is quite different.
> >
> > - Alternatively, if one could classify delay-based CC flows into one
> > queue and loss-based CC flows into another queue at each
> > bottleneck, the first point above might not be such a problem.
> > I also want a pink pony ;) (Of course, once we're considering
> > tweak the bottlenecks with classifiers and multiple queues,
> > might as continue the surgery and reduce the bloated buffers too.)
>
> Everyone has to go through the phase of thinking
> "it can't be that hard, I can invent a better TCP congestion algorithm"
> But it is hard, and the delay based algorithms are fundamentally
> flawed because they see reverse path delay and cross traffic as false
> positives. The hybrid ones all fall back to loss under "interesting
> times" so they really don't buy much.
>
> Really not convinced that Bufferbloat will be solved by TCP.
> You can make a TCP algorithm that causes worse latency than Cubic or Reno
> very easily. But doing better is hard, especially since TCP really
> can't assume much about its underlying network. There maybe random
> delays and packet loss (wireless), there maybe spikes in RTT and
> sessions maybe long or short lived. And you can't assume the whole
> world is running your algorithm.
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 23:16 ` Stephen Hemminger
2013-03-21 1:01 ` Jonathan Morton
@ 2013-03-21 8:26 ` Hagen Paul Pfeifer
2013-04-03 18:16 ` Juliusz Chroboczek
2 siblings, 0 replies; 15+ messages in thread
From: Hagen Paul Pfeifer @ 2013-03-21 8:26 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: bloat
* Stephen Hemminger | 2013-03-20 16:16:22 [-0700]:
>Everyone has to go through the phase of thinking
> "it can't be that hard, I can invent a better TCP congestion algorithm"
>But it is hard, and the delay based algorithms are fundamentally
>flawed because they see reverse path delay and cross traffic as false
>positives. The hybrid ones all fall back to loss under "interesting
>times" so they really don't buy much.
>
>Really not convinced that Bufferbloat will be solved by TCP.
>You can make a TCP algorithm that causes worse latency than Cubic or Reno
>very easily. But doing better is hard, especially since TCP really
>can't assume much about its underlying network. There maybe random
>delays and packet loss (wireless), there maybe spikes in RTT and
>sessions maybe long or short lived. And you can't assume the whole
>world is running your algorithm.
+1. Plus: bufferbloat is a queue problem (at the link layer, say); the right
way is to address the problem at that level. Sure, the network and transport
layers are involved and a key factor. But a pure (probably delay-based) TCP
congestion control solution does not solve the problem: we also have to deal
with (greedy) UDP applications as well (in an ideal world, DCCP).
Imagine a pure-UDP setup: one greedy UDP application (a media stream), and
now try to ping a host. You will experience the same bufferbloat problems.
Hagen
--
http://protocollabs.com
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-21 1:01 ` Jonathan Morton
@ 2013-03-26 13:10 ` Maarten de Vries
2013-03-26 13:24 ` Jonathan Morton
2013-04-04 0:10 ` Simon Barber
1 sibling, 1 reply; 15+ messages in thread
From: Maarten de Vries @ 2013-03-26 13:10 UTC (permalink / raw)
To: bloat
I won't deny that there are problems with delay-based congestion control,
but at least some of the same problems also apply to AQM.
In the presence of greedy UDP flows, AQM would heavily favor those flows.
Sure, the more packets a flow sends, the more of its packets will be
dropped. But only TCP will lower its congestion window; the greedy UDP
flow will happily continue filling the buffer. The effect is that the
UDP flow will use most of the available bandwidth. Yes, the queue will be
shorter, so the TCP flow will see a lower delay, but it will also have a
much lower throughput.
Of course, the same applies to delay-based congestion control: the
greedy flow will still use most of the bandwidth, and in addition the delay
will be higher. The point remains that neither AQM nor delay-based
congestion control can provide a fair solution when there are greedy or
stupid flows present. Unless of course there are multiple queues for the
different types of flows, but yeah, where's that pink pony? Let's make it a
unicorn while we're at it.
And yes, there are many more endpoints than switches/routers. But I imagine
they are also much more homogeneous. I could be wrong about this, since I
don't know too much about the network equipment used by ISPs. Either way, it
seems to me that most endpoints run either Windows, Linux (or
Android), a BSD variant, or something made by Apple. And I would imagine
most embedded systems (the remaining endpoints that don't run a consumer
OS) aren't connected directly to the internet and won't be able to wreak
much havoc. Consumer operating systems are regularly updated as it is, so
sneaking in a new TCP variant should be *relatively* easy. Again, I might
be wrong, but these are just my thoughts.
In short, I'm not saying there are no problems, but it might be too easy to
dismiss the idea as ineffective.
Kind regards,
Maarten de Vries
> On Thu, 21 Mar 2013 07:21:52 +1100
>> grenville armitage <garmitage@swin.edu.au> wrote:
>>
>> >
>> >
>> > On 03/21/2013 02:36, Steffan Norberhuis wrote:
>> > > Hello Everyone,
>> > >
>> > > For a project for the Delft Technical University myself and 3
>> > > students are writing a review paper on the buffer bloat problem and
>> > > its possible solutions.
>> >
>> > My colleagues have been dabbling with delay-based CC algorithms,
>> > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
>> > if that's of any interest.
>> >
>> > Some thoughts:
>> >
>> > - When delay-based TCPs share bottlenecks with loss-based TCPs,
>> > the delay-based TCPs are punished. Hard. They back-off,
>> > as queuing delay builds, while the loss-based flow(s)
>> > blissfully continue to push the queue to full (drop).
>> > Everyone sharing the bottleneck sees latency fluctuations,
>> > bounded by the bottleneck queue's effective 'length' (set
>> > by physical RAM limits or operator-configurable threshold).
>> >
>> > - The previous point suggests perhaps a hybrid TCP which uses
>> > delay-based control, but switches (briefly) to loss-based
>> > control if it detects the bottleneck queue is being
>> > hammered by other, loss-based TCP flows. Challenging
>> > questions arise as to what triggers switching between
>> > delay-based and loss-based modes.
>> >
>> > - Reducing a buffer's length requires meddling with the
>> > bottleneck(s) (new firmware or new devices). Deploying
>> > delay-based TCPs requires meddling with endpoints (OS
>> > upgrade/patch). Many more of the latter than the former.
>> >
>> > - But yes, in self-contained networks where the end hosts can all
>> > be told to run a delay-based CC algorithm, delay-based CC
>> > can mitigate the impact of bloated buffers in your bottleneck
>> > network devices. Such homogeneous environments do exist, but
>> > the Internet is quite different.
>> >
>> > - Alternatively, if one could classify delay-based CC flows into one
>> > queue and loss-based CC flows into another queue at each
>> > bottleneck, the first point above might not be such a problem.
>> > I also want a pink pony ;) (Of course, once we're considering
>> > tweak the bottlenecks with classifiers and multiple queues,
>> > might as continue the surgery and reduce the bloated buffers too.)
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-26 13:10 ` Maarten de Vries
@ 2013-03-26 13:24 ` Jonathan Morton
0 siblings, 0 replies; 15+ messages in thread
From: Jonathan Morton @ 2013-03-26 13:24 UTC (permalink / raw)
To: Maarten de Vries; +Cc: bloat
On 26 Mar, 2013, at 3:10 pm, Maarten de Vries wrote:
> Unless of course there are multiple queues for the different types of flows, but yeah, where's that pink pony?
Ah, but fq_codel *does* cope with that sort of problem relatively well. It doesn't need to distinguish types of flows, just flows in general.
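For what it's worth, the flow-isolation half of what fq_codel does can be
sketched in a few lines. This is my simplification: the real qdisc also runs
CoDel on each sub-queue, uses a different hash, and schedules the queues with
deficit round-robin.

```python
import hashlib
from collections import deque

NUM_QUEUES = 1024  # fq_codel's default number of flow queues

def flow_queue(src, dst, sport, dport, proto):
    """Map a flow's 5-tuple to a queue index. Every packet of the same
    flow lands in the same queue, so one flow's backlog cannot sit in
    front of another flow's packets."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_QUEUES

queues = [deque() for _ in range(NUM_QUEUES)]

# Enqueue a packet from a greedy UDP stream; a sparse flow (a ping, say)
# will almost certainly hash to a different queue and be served promptly.
queues[flow_queue("10.0.0.1", "10.0.0.2", 5004, 5004, "udp")].append("payload")
```

This is why no classifier is needed: isolation falls out of hashing flows into
separate queues, whatever congestion control each flow runs.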
- Jonathan Morton
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 15:36 [Bloat] Solving bufferbloat with TCP using packet delay Steffan Norberhuis
` (3 preceding siblings ...)
2013-03-20 20:21 ` grenville armitage
@ 2013-04-03 18:14 ` Juliusz Chroboczek
4 siblings, 0 replies; 15+ messages in thread
From: Juliusz Chroboczek @ 2013-04-03 18:14 UTC (permalink / raw)
To: Steffan Norberhuis; +Cc: bloat
> - Is TCP using packet delay considered as part of the solution for
> bufferbloat?
Not on this list, apparently. However, there was a fair amount
of research on that approach in the late noughties. I'd suggest
searching for papers on TCP Vegas, TCP-LP, and LEDBAT.
In particular, LEDBAT is very widely deployed as part of uTP. (Your
students are probably using it right now for making a backup of their
DVD collection.)
> - What are the problems of TCP delay variants that keep it from
> solving bufferbloat?
The main issue is the so-called late joiner advantage: these
algorithms tend to give an unfair advantage to flows that only joined
when the network was already congested, as compared to older flows.
I don't know if the issue has been solved yet, it's been some time
since I last looked at that stuff.
> - What are the drawbacks of the TCP delay variants that would favor
> AQM over TCP?
> - What are the advantages of TCP delay varaints that would favor TCP
> over AQM?
Please be aware that they are not competitors: the two can coexist in
the same network. I would say that there is consensus that AQM is
necessary, and that delay-based congestion control is a nice thing to
have for low-priority traffic (e.g. uTP). In other words, you cannot
do without AQM, but that certainly doesn't mean there aren't people
interested in deploying low-delay congestion control.
-- Juliusz
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-20 23:16 ` Stephen Hemminger
2013-03-21 1:01 ` Jonathan Morton
2013-03-21 8:26 ` Hagen Paul Pfeifer
@ 2013-04-03 18:16 ` Juliusz Chroboczek
2013-04-03 18:23 ` Hagen Paul Pfeifer
2 siblings, 1 reply; 15+ messages in thread
From: Juliusz Chroboczek @ 2013-04-03 18:16 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: bloat
> But it is hard, and the delay based algorithms are fundamentally
> flawed because they see reverse path delay and cross traffic as false
> positives.
I'm not sure what you mean. All modern delay-based congestion
algorithms that I am aware of use one-way delay as an indicator of
congestion, not RTT.
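For readers unfamiliar with the technique, a controller in the LEDBAT mould
(RFC 6817) might look roughly like this. The constants and class name are my
illustrative assumptions; a real implementation filters the base delay over a
sliding window rather than keeping a global minimum.

```python
TARGET = 0.100   # seconds of queuing delay the flow is willing to add
GAIN = 1.0       # cwnd gain per RTT at full off-target
MSS = 1448       # bytes

class LedbatLikeSender:
    def __init__(self):
        self.base_delay = float("inf")  # lowest one-way delay seen
        self.cwnd = 2 * MSS             # congestion window, bytes

    def on_ack(self, one_way_delay_s, bytes_acked):
        """Update cwnd from a one-way delay sample. Using one-way delay
        (sender timestamp echoed by the receiver) keeps reverse-path
        queuing out of the congestion signal."""
        self.base_delay = min(self.base_delay, one_way_delay_s)
        queuing = one_way_delay_s - self.base_delay
        off_target = (TARGET - queuing) / TARGET
        self.cwnd += GAIN * off_target * bytes_acked * MSS / self.cwnd
        self.cwnd = max(self.cwnd, 2 * MSS)   # never below two segments

sender = LedbatLikeSender()
sender.on_ack(0.050, MSS)   # empty queue: cwnd grows
sender.on_ack(0.300, MSS)   # 250 ms of queuing: cwnd shrinks
```

The sign of off_target is what makes the flow yield: any queuing beyond the
target shrinks the window instead of growing it.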
-- Juliusz
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-04-03 18:16 ` Juliusz Chroboczek
@ 2013-04-03 18:23 ` Hagen Paul Pfeifer
2013-04-03 19:35 ` Juliusz Chroboczek
0 siblings, 1 reply; 15+ messages in thread
From: Hagen Paul Pfeifer @ 2013-04-03 18:23 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
* Juliusz Chroboczek | 2013-04-03 20:16:50 [+0200]:
>> But it is hard, and the delay based algorithms are fundamentally
>> flawed because they see reverse path delay and cross traffic as false
>> positives.
>
>I'm not sure what you mean. All modern delay-based congestion
>algorithms that I am aware of use one-way delay as an indicator of
>congestion, not RTT.
It depends on what you define as modern, Juliusz. Vegas? YeAH-TCP? Nope.
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-04-03 18:23 ` Hagen Paul Pfeifer
@ 2013-04-03 19:35 ` Juliusz Chroboczek
0 siblings, 0 replies; 15+ messages in thread
From: Juliusz Chroboczek @ 2013-04-03 19:35 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: bloat
>> All modern delay-based congestion algorithms that I am aware of use
>> one-way delay as an indicator of congestion, not RTT.
> It depends on what you define as modern [...] Vegas? YeAH-TCP? Nope.
Vegas is some 20 years old, if memory serves. I'm not familiar with YeAH-TCP.
(TCP-LP and LEDBAT both use one-way delay.)
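For reference, the core of the LEDBAT controller (RFC 6817) is a linear adjustment around a fixed queuing-delay target. A simplified sketch in units of MSS (the actual spec works in bytes and scales by bytes newly acked):

```python
# Simplified LEDBAT-style linear controller, after RFC 6817.
TARGET = 0.100   # target queuing delay, seconds (RFC 6817 default)
GAIN = 1.0       # at most one MSS of change per RTT

def on_ack(cwnd, queuing_delay):
    """Return the new cwnd (in MSS) after one ACK."""
    off_target = (TARGET - queuing_delay) / TARGET
    return max(1.0, cwnd + GAIN * off_target / cwnd)
```

Below the target the window grows; above it the window shrinks, so the flow yields before the buffer fills. That same property is what lets loss-based flows crowd it out at a shared bottleneck.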
-- Juliusz
* Re: [Bloat] Solving bufferbloat with TCP using packet delay
2013-03-21 1:01 ` Jonathan Morton
2013-03-26 13:10 ` Maarten de Vries
@ 2013-04-04 0:10 ` Simon Barber
1 sibling, 0 replies; 15+ messages in thread
From: Simon Barber @ 2013-04-04 0:10 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Is your TCP mod available? It would be useful on Android phones, to
prevent background downloads (e.g. updates) from disrupting
interactive applications (e.g. Skype).
Simon
On 03/20/2013 06:01 PM, Jonathan Morton wrote:
> One beneficial approach is to focus on the receive side rather than the
> send side. It is possible to implement a delay based algorithm there,
> where it will coexist naturally with a loss based system on the send
> side, and also with AQM and FQ at the bottleneck link if present.
>
> I did this to make tolerable the behaviour of a 3G modem that was
> exhibiting extreme (tens of seconds) delays on the downlink through the
> traffic shaper on the provider side.
> latency measurement with the current receive window size to calculate
> bandwidth, then chose a new receive window size based on that. It worked
> sufficiently well.
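A rough sketch of the receive-side idea just described (the function name, units, and the 50 ms delay budget are my assumptions, not the actual Linux modification being asked about):

```python
# Receive-side delay-based control: estimate bandwidth from the current
# window and measured latency, then size the next advertised window so
# the sender cannot keep much standing queue downstream.
def next_rwnd(rwnd_bytes, measured_rtt, base_rtt, target_delay=0.050):
    """Pick the next advertised receive window, in bytes."""
    bw = rwnd_bytes / measured_rtt           # rough bandwidth estimate
    # Budget: base RTT plus a small allowed queuing delay.
    return int(bw * (base_rtt + target_delay))
```

Because the receive window caps what the sender may have in flight, this works against an unmodified loss-based sender, which is what makes the receive side an attractive deployment point.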
>
> The approach is a logical development of receive window sizing
> algorithms which simply measure how long and fat the network is, and
> size the window to encompass that statistic. In fact I implemented it by
> modifying the basic algorithm in Linux, rather than adding a new module.
>
> - Jonathan Morton
>
> On Mar 21, 2013 1:16 AM, "Stephen Hemminger" <stephen@networkplumber.org> wrote:
>
> On Thu, 21 Mar 2013 07:21:52 +1100
> grenville armitage <garmitage@swin.edu.au> wrote:
>
> >
> >
> > On 03/21/2013 02:36, Steffan Norberhuis wrote:
> > > Hello Everyone,
> > >
> > > For a project for the Delft Technical University myself and 3
> > > students are writing a review paper on the buffer bloat problem and
> > > its possible solutions.
> >
> > My colleagues have been dabbling with delay-based CC algorithms,
> > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
> > if that's of any interest.
> >
> > Some thoughts:
> >
> > - When delay-based TCPs share bottlenecks with loss-based TCPs,
> > the delay-based TCPs are punished. Hard. They back-off,
> > as queuing delay builds, while the loss-based flow(s)
> > blissfully continue to push the queue to full (drop).
> > Everyone sharing the bottleneck sees latency fluctuations,
> > bounded by the bottleneck queue's effective 'length' (set
> > by physical RAM limits or operator-configurable threshold).
> >
> > - The previous point suggests perhaps a hybrid TCP which uses
> > delay-based control, but switches (briefly) to loss-based
> > control if it detects the bottleneck queue is being
> > hammered by other, loss-based TCP flows. Challenging
> > questions arise as to what triggers switching between
> > delay-based and loss-based modes.
> >
> > - Reducing a buffer's length requires meddling with the
> > bottleneck(s) (new firmware or new devices). Deploying
> > delay-based TCPs requires meddling with endpoints (OS
> > upgrade/patch). Many more of the latter than the former.
> >
> > - But yes, in self-contained networks where the end hosts can all
> > be told to run a delay-based CC algorithm, delay-based CC
> > can mitigate the impact of bloated buffers in your bottleneck
> > network devices. Such homogeneous environments do exist, but
> > the Internet is quite different.
> >
> > - Alternatively, if one could classify delay-based CC flows into one
> > queue and loss-based CC flows into another queue at each
> > bottleneck, the first point above might not be such a problem.
> > I also want a pink pony ;) (Of course, once we're considering
> > tweaking the bottlenecks with classifiers and multiple queues, we
> > might as well continue the surgery and reduce the bloated buffers too.)
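The starvation effect in the first bullet of the quoted list can be reproduced with a toy round-by-round model (window sizes in packets; every constant here is invented for illustration, not taken from any real TCP):

```python
# A loss-based flow grows until the queue tail-drops, while a
# delay-based flow backs off as soon as queuing delay appears.
QUEUE_LIMIT = 100          # packets the bottleneck can buffer

loss_cwnd, delay_cwnd = 10, 10
for _ in range(50):
    queue = max(0, loss_cwnd + delay_cwnd - 20)   # 20 pkts drain per RTT
    if queue >= QUEUE_LIMIT:
        loss_cwnd //= 2                           # tail drop: halve
    else:
        loss_cwnd += 1                            # AIMD growth
    if queue > 5:
        delay_cwnd = max(1, delay_cwnd - 1)       # delay signal: yield
    else:
        delay_cwnd += 1
# The delay-based flow is driven to its minimum window while the
# loss-based flow keeps the queue occupied.
```

The delay-based flow ends up pinned at one packet; the loss-based flow takes essentially the whole bottleneck.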
>
> Everyone has to go through the phase of thinking
> "it can't be that hard, I can invent a better TCP congestion
> algorithm"
> But it is hard, and the delay based algorithms are fundamentally
> flawed because they see reverse path delay and cross traffic as false
> positives. The hybrid ones all fall back to loss under "interesting
> times" so they really don't buy much.
>
> Really not convinced that Bufferbloat will be solved by TCP.
> You can make a TCP algorithm that causes worse latency than Cubic or
> Reno
> very easily. But doing better is hard, especially since TCP really
> can't assume much about its underlying network. There may be random
> delays and packet loss (wireless), there may be spikes in RTT, and
> sessions may be long- or short-lived. And you can't assume the whole
> world is running your algorithm.
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
>
end of thread, other threads:[~2013-04-04 0:10 UTC | newest]
Thread overview: 15+ messages
2013-03-20 15:36 [Bloat] Solving bufferbloat with TCP using packet delay Steffan Norberhuis
2013-03-20 15:55 ` Dave Taht
2013-03-20 16:12 ` Oliver Hohlfeld
2013-03-20 16:35 ` Michael Richardson
2013-03-20 20:21 ` grenville armitage
2013-03-20 23:16 ` Stephen Hemminger
2013-03-21 1:01 ` Jonathan Morton
2013-03-26 13:10 ` Maarten de Vries
2013-03-26 13:24 ` Jonathan Morton
2013-04-04 0:10 ` Simon Barber
2013-03-21 8:26 ` Hagen Paul Pfeifer
2013-04-03 18:16 ` Juliusz Chroboczek
2013-04-03 18:23 ` Hagen Paul Pfeifer
2013-04-03 19:35 ` Juliusz Chroboczek
2013-04-03 18:14 ` Juliusz Chroboczek