From: Lawrence Stewart <lstewart@room52.net>
To: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: David Hayes <david.hayes@ieee.org>, bloat@lists.bufferbloat.net
Subject: Re: [Bloat] sweet tcp
Date: Tue, 16 Jul 2013 11:15:01 +1000 [thread overview]
Message-ID: <51E49E95.7060102@room52.net> (raw)
In-Reply-To: <20130709193309.GA1272@order.stressinduktion.org>
On 07/10/13 05:33, Hannes Frederic Sowa wrote:
> On Tue, Jul 09, 2013 at 12:10:48PM -0700, Eric Dumazet wrote:
>> On Tue, 2013-07-09 at 19:38 +0200, Jaume Barcelo wrote:
>>> Hi,
>>>
>>> I was explaining the bufferbloat problem to some undergrad students
>>> showing them the "Bufferbloat: Dark Buffers in the Internet" paper. I
>>> asked them to find a solution for the problem and someone pointed at
>>> Fig. 1 and said "That's easy. All you have to do is to operate in the
>>> sweet point where the throughput is maximum and the delay is minimum".
>>>
>>> It seemed to me that it was a good idea and I tried to think a way to
>>> force TCP to operate close to the optimal point. The goal is to
>>> increase the congestion window until it is larger than the optimal
>>> one. At that point, start decreasing the congestion window until it is
>>> lower than the optimal point.
>>>
>>> To be more specific, TCP would be at any time increasing or decreasing
>>> the congestion window. In other words, it will be moving in one
>>> direction (right or left) along the x axis of Fig. 1 of Getty's paper.
>>> Each RTT, the performance is measured in terms of delay and
>>> throughput. If there is a performance improvement, we keep moving in
>>> the same direction. If there is a performance loss, we change the
>>> direction.
>>>
>>> I tried to explain the algorithm here:
>>> https://github.com/jbarcelo/sweet-tcp-paper/blob/master/document.pdf?raw=true
>>>
>>> I am not an expert on TCP, so I decided to share it with this list to
>>> get some expert opinions.
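For concreteness, the per-RTT search described above can be sketched as a tiny hill-climb (hypothetical names; "power" stands in for whatever combined throughput/delay metric is measured each RTT):

```python
# Hypothetical sketch of the per-RTT "sweet spot" search described above.
# The window keeps moving in one direction while the measured performance
# ("power") improves, and reverses direction when it degrades.
def sweet_spot_step(cwnd, direction, power, prev_power, step=1):
    """One per-RTT update; returns (new_cwnd, new_direction)."""
    if power < prev_power:        # performance loss: change direction
        direction = -direction
    return max(1, cwnd + direction * step), direction
```

Run against an idealised concave power curve this oscillates around the optimum rather than settling on it exactly, which already hints at why real delay-based schemes need filtering and noise tolerance.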
>>
>> Are you familiar with existing delay-based algorithms?
>>
>> A known one is TCP Vegas.
>>
>> The problem is that it would work well only if all flows used it.
>>
>> Alas, lots of flows (or non-flow traffic) will still use Reno/CUBIC (or
>> no congestion control at all), and they will clamp flows that are
>> willing to reduce delays.
>>
>> So that's definitely not 'easy' ...
>
> FreeBSD recently imported a new CC algorithm. From the commit msg[0]:
>
> Import an implementation of the CAIA Delay-Gradient (CDG) congestion control
> algorithm, which is based on the 2011 v0.1 patch release and described in the
> paper "Revisiting TCP Congestion Control using Delay Gradients" by David Hayes
> and Grenville Armitage. It is implemented as a kernel module compatible with the
> modular congestion control framework.
>
> CDG is a hybrid congestion control algorithm which reacts to both packet loss
> and inferred queuing delay. It attempts to operate as a delay-based algorithm
> where possible, but utilises heuristics to detect loss-based TCP cross traffic
> and will compete effectively as required. CDG is therefore incrementally
> deployable and suitable for use on shared networks.
>
> In collaboration with: David Hayes <david.hayes at ieee.org> and
> Grenville Armitage <garmitage at swin edu au>
> MFC after: 4 days
> Sponsored by: Cisco University Research Program and FreeBSD Foundation
>
> I had no time to play with it myself, yet.
>
> [0] http://svnweb.freebsd.org/base/head/sys/netinet/cc/cc_cdg.c?revision=252504&view=markup
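For intuition, the delay-gradient idea at CDG's core can be sketched roughly like this (a simplification of the Hayes/Armitage scheme, not the FreeBSD code; the scaling parameter G and the function name are assumptions for illustration):

```python
import math
import random

def cdg_backoff(rtt_gradient, G=3.0, rng=random.random):
    """Decide probabilistically whether to back off cwnd this round.

    rtt_gradient is the smoothed per-round change in measured RTT; a
    positive gradient suggests a queue is building somewhere on the path.
    The backoff probability grows as 1 - exp(-gradient/G), so small
    gradients rarely trigger a backoff while sustained queue growth
    almost certainly does.
    """
    if rtt_gradient <= 0:
        return False              # queue stable or draining: no delay backoff
    return rng() < 1.0 - math.exp(-rtt_gradient / G)
```

Reacting to the *gradient* of delay rather than its absolute value is what lets the heuristics detect loss-based cross traffic and fall back to competing with it, per the commit message above.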
FYI, I'm the guy who did the import of CDG into FreeBSD - I work at CAIA
with Grenville, and formerly with David, who is now at the University of Oslo.
CDG will ship as a loadable kernel module in FreeBSD 9.2-RELEASE due out
early September. If anyone is keen to play with it in the meantime, the
post-20130707 9.1-STABLE, 9.2-PRERELEASE and 10-CURRENT snapshots have
the code. You can grab snapshot ISOs or virtual disk images which
contain CDG from:
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/ISO-IMAGES/9.1/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/ISO-IMAGES/9.2/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/ISO-IMAGES/10.0/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/VM-IMAGES/20130713/10.0-CURRENT/amd64/
ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/VM-IMAGES/20130713/9.2-PRERELEASE/amd64/
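Once you're running one of those snapshots, enabling CDG should be a couple of commands (a sketch of the usual modular-CC workflow; see cc(4) and cc_cdg(4) for the authoritative details):

```shell
# Load the CDG module (add cc_cdg_load="YES" to /boot/loader.conf to
# have it loaded at boot instead)
kldload cc_cdg

# Confirm it shows up, then select it as the default algorithm
sysctl net.inet.tcp.cc.available
sysctl net.inet.tcp.cc.algorithm=cdg
```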
I should draw everyone's attention to the caveat in the BUGS section of
the cc_cdg(4) man page:
The underlying algorithm and parameter values are still a work
in progress and may not be optimal for some network scenarios.
i.e. CDG is the focus of active research and development and we have
unreleased algorithm/parameter improvements which will be rolled into
the FreeBSD public code as they pass internal muster.
We would certainly welcome any feedback and experiences from anyone who
plays with it.
Cheers,
Lawrence