[Bloat] sweet tcp
Hannes Frederic Sowa
hannes at stressinduktion.org
Tue Jul 9 15:33:09 EDT 2013
On Tue, Jul 09, 2013 at 12:10:48PM -0700, Eric Dumazet wrote:
> On Tue, 2013-07-09 at 19:38 +0200, Jaume Barcelo wrote:
> > Hi,
> >
> > I was explaining the bufferbloat problem to some undergrad students
> > showing them the "Bufferbloat: Dark Buffers in the Internet" paper. I
> > asked them to find a solution for the problem and someone pointed at
> > Fig. 1 and said "That's easy. All you have to do is to operate at the
> > sweet point where the throughput is maximum and the delay is minimum".
> >
> > It seemed to me that it was a good idea, and I tried to think of a way
> > to force TCP to operate close to the optimal point. The goal is to
> > increase the congestion window until it is larger than the optimal
> > one, and at that point to start decreasing the congestion window until
> > it is lower than the optimal one.
> >
> > To be more specific, TCP would at any time be either increasing or
> > decreasing the congestion window. In other words, it would be moving
> > in one direction (right or left) along the x-axis of Fig. 1 of
> > Gettys's paper.
> > Each RTT, the performance is measured in terms of delay and
> > throughput. If there is a performance improvement, we keep moving in
> > the same direction. If there is a performance loss, we reverse
> > direction.
> >
> > I tried to explain the algorithm here:
> > https://github.com/jbarcelo/sweet-tcp-paper/blob/master/document.pdf?raw=true
> >
> > I am not an expert on TCP, so I decided to share it with this list to
> > get some expert opinions.
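
To make the proposed loop concrete, here is a minimal sketch of the idea
as I read it. All names are hypothetical, and folding the two
measurements into a single throughput/delay score ("power") is my
assumption, not something the proposal specifies:

    /* Sketch of the direction-switching loop described above.
     * "power" = throughput / delay is one way to combine the two
     * per-RTT measurements into a single score (an assumption). */
    struct sweet_state {
        double cwnd;        /* congestion window, in packets */
        double step;        /* cwnd change applied each RTT */
        int    dir;         /* +1 = growing, -1 = shrinking */
        double prev_power;  /* score measured one RTT ago */
    };

    /* Called once per RTT with the measured throughput and delay. */
    static void sweet_update(struct sweet_state *s,
                             double throughput, double delay)
    {
        double power = throughput / delay;

        /* Better than last RTT: keep moving in the same direction.
         * Worse: turn around. */
        if (power < s->prev_power)
            s->dir = -s->dir;

        s->cwnd += s->dir * s->step;
        if (s->cwnd < 1.0)
            s->cwnd = 1.0;  /* never drop below one packet */

        s->prev_power = power;
    }

How noisy the per-RTT throughput and delay samples are, and how the
score should be smoothed before flipping direction, are the details such
a scheme would stand or fall on.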
>
> Are you familiar with existing delay-based algorithms?
>
> A known one is TCP Vegas.
>
> The problem is that it would work well only if all flows used it.
>
> Alas, a lot of flows (or non-flow traffic) will still use Reno/cubic
> (or no congestion control at all), and they will clamp flows that are
> willing to reduce delays.
>
> So that's definitely not 'easy' ...
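
For reference, the core of Vegas is a per-RTT estimate of how many of
the flow's own packets are sitting in the bottleneck queue, roughly as
follows (simplified; alpha and beta are the usual packet-count
thresholds, 2 and 4 in the Linux module):

    /* Simplified core of the TCP Vegas window update, once per RTT.
     * base_rtt is the minimum RTT seen; rtt is the latest sample. */
    static unsigned int vegas_update(unsigned int cwnd,
                                     double base_rtt, double rtt)
    {
        const double alpha = 2.0, beta = 4.0;

        /* Estimated packets queued at the bottleneck:
         * (expected rate - actual rate) * base_rtt. */
        double queued = (cwnd / base_rtt - cwnd / rtt) * base_rtt;

        if (queued < alpha)
            cwnd += 1;          /* queue near empty: probe for more */
        else if (queued > beta && cwnd > 1)
            cwnd -= 1;          /* queue building: back off */
        /* between alpha and beta: hold */

        return cwnd;
    }

The clamping Eric describes falls out directly: a competing Reno/cubic
flow keeps the queue, and hence rtt, high, so "queued" stays above beta
and the Vegas flow keeps shrinking while the loss-based flow absorbs the
freed capacity.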
FreeBSD recently imported a new CC algorithm. From the commit msg[0]:
Import an implementation of the CAIA Delay-Gradient (CDG) congestion control
algorithm, which is based on the 2011 v0.1 patch release and described in the
paper "Revisiting TCP Congestion Control using Delay Gradients" by David Hayes
and Grenville Armitage. It is implemented as a kernel module compatible with the
modular congestion control framework.
CDG is a hybrid congestion control algorithm which reacts to both packet loss
and inferred queuing delay. It attempts to operate as a delay-based algorithm
where possible, but utilises heuristics to detect loss-based TCP cross traffic
and will compete effectively as required. CDG is therefore incrementally
deployable and suitable for use on shared networks.
In collaboration with: David Hayes <david.hayes at ieee.org> and
Grenville Armitage <garmitage at swin edu au>
MFC after: 4 days
Sponsored by: Cisco University Research Program and FreeBSD Foundation
I haven't had time to play with it myself yet.
[0] http://svnweb.freebsd.org/base/head/sys/netinet/cc/cc_cdg.c?revision=252504&view=markup
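
The central trick in CDG, as I understand the paper, is to react to the
*gradient* of the delay rather than its absolute value, and to back off
probabilistically so that small but sustained gradients still trigger
occasional backoffs. A stripped-down sketch (the real module also tracks
max-RTT gradients, a shadow window, and loss-mode heuristics; G = 3.0 is
just a demo value for the paper's scaling parameter):

    #include <math.h>
    #include <stdlib.h>

    /* Decide once per measurement interval whether to back off,
     * based on how much the minimum RTT grew since the last one. */
    static int cdg_should_backoff(double rtt_min_prev, double rtt_min_now)
    {
        const double G = 3.0;                 /* gradient scaling */
        double gradient = rtt_min_now - rtt_min_prev;

        if (gradient <= 0.0)
            return 0;       /* delay flat or falling: no backoff */

        /* Back off with probability 1 - exp(-gradient / G). */
        double p = 1.0 - exp(-gradient / G);
        return ((double)rand() / RAND_MAX) < p;
    }

On a backoff the real algorithm scales cwnd by a fixed factor (0.7 in
the paper, if I remember correctly) rather than halving it.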