* [Bloat] sweet tcp
@ 2013-07-09 17:38 Jaume Barcelo
2013-07-09 17:56 ` Stephen Hemminger
2013-07-09 19:10 ` Eric Dumazet
0 siblings, 2 replies; 6+ messages in thread
From: Jaume Barcelo @ 2013-07-09 17:38 UTC (permalink / raw)
To: bloat
Hi,
I was explaining the bufferbloat problem to some undergrad students
showing them the "Bufferbloat: Dark Buffers in the Internet" paper. I
asked them to find a solution for the problem and someone pointed at
Fig. 1 and said "That's easy. All you have to do is to operate in the
sweet point where the throughput is maximum and the delay is minimum".
It seemed to me that it was a good idea, and I tried to think of a way
to force TCP to operate close to the optimal point. The goal is to
increase the congestion window until it is larger than the optimal
one, and at that point start decreasing the congestion window until it
is lower than the optimal one.
To be more specific, at any given time TCP would be either increasing
or decreasing the congestion window. In other words, it would be moving
in one direction (right or left) along the x axis of Fig. 1 of Gettys'
paper. Each RTT, performance is measured in terms of delay and
throughput. If there is a performance improvement, we keep moving in
the same direction; if there is a performance loss, we change
direction.
I tried to explain the algorithm here:
https://github.com/jbarcelo/sweet-tcp-paper/blob/master/document.pdf?raw=true
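The per-RTT rule described above can be sketched in a few lines. This is only a toy illustration of the search idea, not code from the linked draft; the performance metric (throughput divided by delay, a "power"-style score) and all names are illustrative assumptions:

```python
def performance(throughput, delay):
    """Toy utility: reward throughput, penalize delay (power metric)."""
    return throughput / delay

def sweet_step(cwnd, direction, prev_score, throughput, delay, step=1):
    """One per-RTT update of the congestion window.

    Keep moving in the current direction (+1 grow, -1 shrink) while the
    performance score improves; reverse direction when it worsens.
    Returns (new_cwnd, new_direction, new_score).
    """
    score = performance(throughput, delay)
    if score < prev_score:          # performance loss: reverse direction
        direction = -direction
    cwnd = max(1, cwnd + direction * step)
    return cwnd, direction, score
```

Each call corresponds to one RTT's worth of measurement, so the window oscillates around the knee of the throughput/delay curve rather than filling the buffer.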
I am not an expert on TCP, so I decided to share it with this list to
get some expert opinions.
Thanks,
Jaume
* Re: [Bloat] sweet tcp
2013-07-09 17:38 [Bloat] sweet tcp Jaume Barcelo
@ 2013-07-09 17:56 ` Stephen Hemminger
2013-07-09 19:10 ` Eric Dumazet
1 sibling, 0 replies; 6+ messages in thread
From: Stephen Hemminger @ 2013-07-09 17:56 UTC (permalink / raw)
To: Jaume Barcelo; +Cc: bloat
On Tue, 9 Jul 2013 19:38:40 +0200
Jaume Barcelo <jaume.barcelo@upf.edu> wrote:
> To be more specific, at any given time TCP would be either increasing
> or decreasing the congestion window. In other words, it would be moving
> in one direction (right or left) along the x axis of Fig. 1 of Gettys'
> paper. Each RTT, performance is measured in terms of delay and
> throughput. If there is a performance improvement, we keep moving in
> the same direction; if there is a performance loss, we change
> direction.
TCP cannot get a reliable measure of RTT. This is the whole reason
that delay-based algorithms are not widely deployed and don't work
outside the lab. The issue is that other traffic perturbs RTT
measurements, which therefore have huge variance on a real network
(more noise than signal).
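For concreteness, delay-based schemes usually do not act on raw samples but filter them first, e.g. with a windowed minimum, which suppresses upward noise spikes but cannot recover a signal that is mostly noise. A generic sketch (not from this thread; window size is an arbitrary assumption):

```python
from collections import deque

def windowed_min_rtt(samples, window=8):
    """Yield the minimum RTT over a sliding window of recent samples.

    A windowed minimum discards transient queuing spikes caused by
    cross traffic, at the cost of reacting slowly to real path changes.
    """
    recent = deque(maxlen=window)   # deque drops the oldest sample itself
    for s in samples:
        recent.append(s)
        yield min(recent)
```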
* Re: [Bloat] sweet tcp
2013-07-09 17:38 [Bloat] sweet tcp Jaume Barcelo
2013-07-09 17:56 ` Stephen Hemminger
@ 2013-07-09 19:10 ` Eric Dumazet
2013-07-09 19:33 ` Hannes Frederic Sowa
1 sibling, 1 reply; 6+ messages in thread
From: Eric Dumazet @ 2013-07-09 19:10 UTC (permalink / raw)
To: Jaume Barcelo; +Cc: bloat
On Tue, 2013-07-09 at 19:38 +0200, Jaume Barcelo wrote:
> Hi,
>
> [...]
>
> I tried to explain the algorithm here:
> https://github.com/jbarcelo/sweet-tcp-paper/blob/master/document.pdf?raw=true
>
> I am not an expert on TCP, so I decided to share it with this list to
> get some expert opinions.
Are you familiar with existing delay-based algorithms?
A well-known one is TCP Vegas.
The problem is that it would work well only if all flows used it.
Alas, a lot of flows (or non-flow traffic) will still use Reno/CUBIC
(or no congestion control at all), and they will clamp the flows that
are willing to reduce delays.
So that's definitely not 'easy' ...
* Re: [Bloat] sweet tcp
2013-07-09 19:10 ` Eric Dumazet
@ 2013-07-09 19:33 ` Hannes Frederic Sowa
2013-07-09 21:37 ` Jaume Barcelo
2013-07-16 1:15 ` Lawrence Stewart
0 siblings, 2 replies; 6+ messages in thread
From: Hannes Frederic Sowa @ 2013-07-09 19:33 UTC (permalink / raw)
To: Eric Dumazet; +Cc: bloat
On Tue, Jul 09, 2013 at 12:10:48PM -0700, Eric Dumazet wrote:
> On Tue, 2013-07-09 at 19:38 +0200, Jaume Barcelo wrote:
> > [...]
>
> Are you familiar with existing delay based algorithms ?
>
> A known one is TCP Vegas.
>
> [...]
>
> So that's definitely not 'easy' ...
FreeBSD recently imported a new CC algorithm. From the commit msg[0]:
Import an implementation of the CAIA Delay-Gradient (CDG) congestion control
algorithm, which is based on the 2011 v0.1 patch release and described in the
paper "Revisiting TCP Congestion Control using Delay Gradients" by David Hayes
and Grenville Armitage. It is implemented as a kernel module compatible with the
modular congestion control framework.
CDG is a hybrid congestion control algorithm which reacts to both packet loss
and inferred queuing delay. It attempts to operate as a delay-based algorithm
where possible, but utilises heuristics to detect loss-based TCP cross traffic
and will compete effectively as required. CDG is therefore incrementally
deployable and suitable for use on shared networks.
In collaboration with: David Hayes <david.hayes at ieee.org> and
Grenville Armitage <garmitage at swin edu au>
MFC after: 4 days
Sponsored by: Cisco University Research Program and FreeBSD Foundation
I haven't had time to play with it myself yet.
[0] http://svnweb.freebsd.org/base/head/sys/netinet/cc/cc_cdg.c?revision=252504&view=markup
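The delay-gradient idea the commit message describes can be sketched roughly as follows. This is a simplified illustration of the concept from the Hayes and Armitage paper, not the cc_cdg.c code: the sender reacts to the *gradient* of measured RTT rather than its absolute value, backing off probabilistically as delay rises. The parameter G, the smoothing, and all names here are assumptions:

```python
import math
import random

G = 3.0  # gradient scaling parameter (illustrative value only)

def backoff_probability(gradient):
    """Probability of backing off given an RTT gradient per interval.

    Falling or flat delay never triggers a backoff; the steeper the
    rise, the more likely a multiplicative decrease becomes.
    """
    if gradient <= 0:
        return 0.0
    return 1.0 - math.exp(-gradient / G)

def on_interval(cwnd, rtt_now, rtt_prev):
    """Per-interval decision: back off on rising delay, else grow cwnd."""
    gradient = rtt_now - rtt_prev
    if random.random() < backoff_probability(gradient):
        return max(1, cwnd // 2)  # delay-inferred multiplicative decrease
    return cwnd + 1               # otherwise additive increase
```

Reacting to the gradient rather than a delay threshold is what lets CDG tolerate loss-based cross traffic: a standing full queue has a flat gradient, so CDG stops backing off and falls back to competing on loss.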
* Re: [Bloat] sweet tcp
2013-07-09 19:33 ` Hannes Frederic Sowa
@ 2013-07-09 21:37 ` Jaume Barcelo
2013-07-16 1:15 ` Lawrence Stewart
1 sibling, 0 replies; 6+ messages in thread
From: Jaume Barcelo @ 2013-07-09 21:37 UTC (permalink / raw)
Cc: bloat
Thanks for your insights :)
I have to read that paper by Hayes and Armitage. It seems very promising.
Cheers
On Tue, Jul 9, 2013 at 9:33 PM, Hannes Frederic Sowa
<hannes@stressinduktion.org> wrote:
> On Tue, Jul 09, 2013 at 12:10:48PM -0700, Eric Dumazet wrote:
> > [...]
>
> FreeBSD recently imported a new CC algorithm. From the commit msg[0]:
>
> [...]
>
> I had no time to play with it myself, yet.
>
> [0] http://svnweb.freebsd.org/base/head/sys/netinet/cc/cc_cdg.c?revision=252504&view=markup
* Re: [Bloat] sweet tcp
2013-07-09 19:33 ` Hannes Frederic Sowa
2013-07-09 21:37 ` Jaume Barcelo
@ 2013-07-16 1:15 ` Lawrence Stewart
1 sibling, 0 replies; 6+ messages in thread
From: Lawrence Stewart @ 2013-07-16 1:15 UTC (permalink / raw)
To: Hannes Frederic Sowa; +Cc: David Hayes, bloat
On 07/10/13 05:33, Hannes Frederic Sowa wrote:
> On Tue, Jul 09, 2013 at 12:10:48PM -0700, Eric Dumazet wrote:
> > [...]
>
> FreeBSD recently imported a new CC algorithm. From the commit msg[0]:
>
> [...]
>
> I had no time to play with it myself, yet.
>
> [0] http://svnweb.freebsd.org/base/head/sys/netinet/cc/cc_cdg.c?revision=252504&view=markup
FYI, I'm the guy who did the import of CDG into FreeBSD - I work at CAIA
with Grenville, and formerly with David, who is now at the University of
Oslo.
CDG will ship as a loadable kernel module in FreeBSD 9.2-RELEASE due out
early September. If anyone is keen to play with it in the meantime, the
post 20130707 9.1-STABLE, 9.2-PRERELEASE and 10-CURRENT snapshots have
the code. You can grab snapshot ISOs or virtual disk images which
contain CDG from:
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/ISO-IMAGES/9.1/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/ISO-IMAGES/9.2/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/ISO-IMAGES/10.0/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/VM-IMAGES/20130713/10.0-CURRENT/amd64/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/VM-IMAGES/20130713/9.2-PRERELEASE/amd64/
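On a snapshot with the code, enabling CDG should follow the usual modular congestion control conventions; a sketch, assuming the standard cc(4) sysctl names (check the installed man pages on your snapshot):

```shell
# Load the CDG congestion control module (requires root).
kldload cc_cdg
# Confirm it is registered with the modular CC framework.
sysctl net.inet.tcp.cc.available
# Select CDG as the system-wide default algorithm.
sysctl net.inet.tcp.cc.algorithm=cdg
```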
I should draw everyone's attention to the caveat in the BUGS section of
the cc_cdg(4) man page:
The underlying algorithm and parameter values are still a work
in progress and may not be optimal for some network scenarios.
I.e., CDG is the focus of active research and development, and we have
unreleased algorithm/parameter improvements that will be rolled into
the public FreeBSD code as they pass internal muster.
We would certainly welcome any feedback and experiences from anyone who
plays with it.
Cheers,
Lawrence
end of thread, other threads:[~2013-07-16 1:15 UTC | newest]
Thread overview: 6+ messages
2013-07-09 17:38 [Bloat] sweet tcp Jaume Barcelo
2013-07-09 17:56 ` Stephen Hemminger
2013-07-09 19:10 ` Eric Dumazet
2013-07-09 19:33 ` Hannes Frederic Sowa
2013-07-09 21:37 ` Jaume Barcelo
2013-07-16 1:15 ` Lawrence Stewart