[Bloat] TCP vegas vs TCP cubic

Dave Täht d at taht.net
Wed Feb 2 11:29:16 EST 2011


Thx for the feedback. I've put up more information on the wiki at:

http://www.bufferbloat.net/projects/bloat/wiki/Experiment_-_TCP_cubic_vs_TCP_vegas

(At least netnews had a "C"ancel message option. Wikis are safer to use
 before your first cup of coffee)

Justin McCann <jneilm at gmail.com> writes:

> On Wed, Feb 2, 2011 at 10:20 AM, Dave Täht <d at taht.net> wrote:
>> Can I surmise that TCP cubic is like a dragster, able to go really fast
>> in one direction down a straightaway, and TCP vegas more like an 80s
>> model MR2, maneuverable, but underpowered?
>
> There are some parameters to tune, essentially setting the number of
> packets you want queued in the network at any one time (see
> http://neal.nu/uw/linux-vegas/). I haven't messed with it much myself,
> but you might try to increase those just a bit -- if Vegas

I am reading now.

> underestimates the queue size and the queue empties, you'll never get
> the throughput. Ideally there would always be exactly one packet in
> the bottleneck queue.

What a happy day that would be.
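For my own notes while I read: as I understand the Vegas idea, with the
Linux module's default alpha/beta of 2 and 4 packets, the per-RTT
decision is roughly the python sketch below. This is illustrative only,
not the kernel source.

    # Sketch of the Vegas cwnd decision as I understand it; alpha/beta
    # defaults match the Linux tcp_vegas module parameters (in packets).
    def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
        expected = cwnd / base_rtt             # rate if nothing queued
        actual = cwnd / rtt                    # rate actually achieved
        diff = (expected - actual) * base_rtt  # ~ our packets in queues
        if diff < alpha:
            return cwnd + 1                    # queue near empty: speed up
        if diff > beta:
            return cwnd - 1                    # queue building: back off
        return cwnd                            # in the band: hold steady

So bumping alpha/beta, as you suggest, just lets Vegas tolerate a few
more of its own packets sitting in the bottleneck queue before it backs
off.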

>
> But I think your results are pretty much expected with Vegas, since it
> uses the increase in queuing latency as an early congestion indicator.
> If everyone used it, we might be better off, but other congestion
> algorithms aren't fair to Vegas since they wait until there are packet
> drops to notice congestion.

My thought was that if the wireless side of a given stack could use it,
life might ultimately be better on that front, for the people that
upload stuff.
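(For what it's worth, one bulk upload can ask for vegas per-socket
without flipping the whole stack over; roughly the python sketch below.
It assumes the tcp_vegas module is loaded and listed in
net.ipv4.tcp_allowed_congestion_control, and the host/file names are
placeholders.)

    # Hypothetical sketch: run one upload over vegas while everything
    # else on the box stays on the default (cubic). The setsockopt
    # fails if vegas isn't loaded/allowed.
    import socket

    TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)  # 13 on Linux
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"vegas")
    s.connect(("upload.example.net", 9999))   # placeholder sink
    with open("bigfile", "rb") as f:          # placeholder payload
        s.sendall(f.read())
    s.close()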

>> On a failed hunch, I also re-ran the tests with a much larger
>> congestion window:
> I think you mean larger send/receive buffers instead of congestion
> window? I'll bet the Vegas parameters are keeping the congestion

Correction noted. Coffee needed.

> window smaller than your send/receive buffer sizes, so they aren't
> limiting you in the first place, so no improvement.

I'll take a packet trace next time I run the test.
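(To be precise about which knob I was turning: the send/receive
buffers, per-socket something like the sketch below, sizes arbitrary.
If Vegas is holding cwnd well under that, bumping it can't help, as you
say. Note Linux doubles whatever you ask for, capped against
net.core.wmem_max / rmem_max.)

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 262144)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)
    # getsockopt reports the effective (doubled, possibly capped) size
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))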

>
> The web100 patches (web100.org) are great for getting into the details
> of how TCP is working. If you don't want to apply them yourself, you
> can try the Live CD of perfSONAR-PS (http://psps.perfsonar.net/). It
> might be useful to have an NDT
> (http://www.internet2.edu/performance/ndt/) server running on your
> home network, or use one at M-Lab. It doesn't need much resource-wise
> beyond the web100 patches.

Excellent suggestions. Building now. (It seems to want Java, and I
don't think the little clients I have on this testbed can handle that
well.)

At the moment my little testbed is fairly flexible and my queue of
things to test is quite large.

I have bloat-reducing patches for all the devices in the picture except
for the laptop's, which is proving painful to look at.

At the moment, I'd like to be getting useful, interesting, *repeatable*
results for a variety of well-defined latency + throughput tests with...
stock firmware, and then be able to re-run the interesting series
against more custom configurations.
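As a strawman for "repeatable": saturate the uplink with one bulk flow
and sample handshake latency to the AP once a second, along the lines
of the python sketch below. The addresses are placeholders (a wired
sink box behind the AP, assumed to be running some discard-style
listener, plus the AP's own web port for the probe).

    # Rough latency-under-load probe; SINK and PROBE are placeholders.
    import socket, threading, time

    SINK = ("172.30.42.2", 9999)   # wired box, discard-style listener
    PROBE = ("172.30.42.1", 80)    # the AP: handshake time only

    def upload(stop):
        s = socket.create_connection(SINK)
        chunk = b"\0" * 65536
        while not stop.is_set():
            s.sendall(chunk)       # keep the uplink saturated
        s.close()

    stop = threading.Event()
    threading.Thread(target=upload, args=(stop,), daemon=True).start()
    for _ in range(60):            # one latency sample per second
        t0 = time.time()
        socket.create_connection(PROBE, timeout=5).close()
        print("%6.1f ms" % ((time.time() - t0) * 1000.0))
        time.sleep(1)
    stop.set()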

I've only deployed the first patch on the wndr3700 thus far. It was
*amazing*. 

>
>    Justin

-- 
Dave Taht
http://nex-6.taht.net


