[Bloat] TCP vegas vs TCP cubic

Richard Scheffenegger rscheff at gmx.at
Wed Feb 2 13:36:42 PST 2011


I guess so, but only with a limited set of tunables:

Control Panel -> All Control Panel Items -> Network and Sharing Center ->
  Change Adapter Settings -> (Select Adapter, right click, Properties)
    Configure -> Advanced

Then you get a list of settings the driver allows you to tweak (or at 
least display).

Interesting for us:
  Receive Buffers (256 on a Marvell Yukon)
  Transmit Buffers (256)
  Interrupt Moderation (also adds latency for slow flows).
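
For comparison, the rough Linux-side equivalents of those three knobs live 
in ethtool; something like the following (interface name assumed to be 
eth0, and exact parameter support varies by driver):

  ethtool -g eth0                # show RX/TX ring buffer sizes
  ethtool -G eth0 rx 64 tx 64    # shrink the rings
  ethtool -c eth0                # show interrupt coalescing settings
  ethtool -C eth0 rx-usecs 0     # effectively disable interrupt moderation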

Regards,
   Richard


----- Original Message ----- 
From: "Dave "Täht"" <d at taht.net>
To: "Richard Scheffenegger" <rscheff at gmx.at>
Cc: "Justin McCann" <jneilm at gmail.com>; "bloat" 
<bloat at lists.bufferbloat.net>
Sent: Wednesday, February 02, 2011 8:16 PM
Subject: Re: [Bloat] TCP vegas vs TCP cubic



Thx. Wiki'd:

http://www.bufferbloat.net/projects/bloat/wiki/Windows_Tips

Is there a Windows equivalent of ethtool?

"Richard Scheffenegger" <rscheff at gmx.at> writes:

> All the Windows Vista / Windows 7 users out there can enable
> "Compound TCP" (CTCP), a hybrid TCP congestion control approach:
>
> netsh int tcp set global congestionprovider=ctcp
>
> and, while you're at it, also enable ECN (congestion feedback without
> packet loss):
>
> netsh int tcp set global ecncapability=enabled
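>
> Both settings can be verified afterwards with
>
> netsh int tcp show global
>
> which lists the ECN capability and the add-on congestion control
> provider among the global TCP parameters. (The rough Linux counterparts,
> for reference, are the sysctls net.ipv4.tcp_ecn and
> net.ipv4.tcp_congestion_control.)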
>
> If enough end users enable ECN, core providers may be inclined to
> deploy AQM with packet marking too... And since home gateways are the
> devices most prone to ECN implementation bugs, a complete disconnect
> from the internet (rather than certain sites being unreachable) is
> quite easy to diagnose at that end.
>
> I have been running with ECN for a couple of months now, and so far I
> have yet to encounter a site that consistently fails with ECN. Actually,
> enabling ECN gives you more SYN retries (3x ECN+SYN, then 3x plain SYN),
> so in a heavily congested / bufferbloated environment your small flows
> have a higher chance of getting through eventually.
>
> Regards,
>   Richard
>
>
>
> ----- Original Message ----- 
> From: "Dave "Täht"" <d at taht.net>
> To: "Justin McCann" <jneilm at gmail.com>
> Cc: "bloat" <bloat at lists.bufferbloat.net>
> Sent: Wednesday, February 02, 2011 5:29 PM
> Subject: Re: [Bloat] TCP vegas vs TCP cubic
>
>
>>
>> Thx for the feedback. I've put up more information on the wiki at:
>>
>> http://www.bufferbloat.net/projects/bloat/wiki/Experiment_-_TCP_cubic_vs_TCP_vegas
>>
>> (At least netnews had a "C"ancel message option. Wikis are safer to use
>> before your first cup of coffee)
>>
>> Justin McCann <jneilm at gmail.com> writes:
>>
>>> On Wed, Feb 2, 2011 at 10:20 AM, Dave Täht <d at taht.net> wrote:
>>>> Can I surmise that TCP cubic is like a dragster, able to go really fast
>>>> in one direction down a straightaway, and TCP vegas more like an 80s
>>>> model MR2, maneuverable, but underpowered?
>>>
>>> There are some parameters to tune, essentially setting the number of
>>> packets you want queued in the network at any one time (see
>>> http://neal.nu/uw/linux-vegas/). I haven't messed with it much myself,
>>> but you might try to increase those just a bit -- if Vegas
>>
>> I am reading now.
>>
>>> underestimates the queue size and the queue empties, you'll never get
>>> the throughput. Ideally there would always be exactly one packet in
>>> the bottleneck queue.
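>>>
>>> (On Linux, the in-tree tcp_vegas module exposes those as module
>>> parameters (alpha, beta, gamma); nudging them up would look roughly
>>> like
>>>
>>>   modprobe tcp_vegas alpha=3 beta=6
>>>   sysctl -w net.ipv4.tcp_congestion_control=vegas
>>>   # or, with the module already loaded:
>>>   echo 3 > /sys/module/tcp_vegas/parameters/alpha
>>>
>>> with the values picked purely for illustration.)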
>>
>> What a happy day that would be.
>>
>>>
>>> But I think your results are pretty much expected with Vegas, since it
>>> uses the increase in queuing latency as an early congestion indicator.
>>> If everyone used it, we might be better off, but other congestion
>>> control algorithms aren't fair to Vegas, since they don't notice
>>> congestion until there are packet drops.
>>
>> My thought was that if the wireless side of a given stack could use
>> it, life might be better on that front. Ultimately. For people that
>> upload stuff.
>>
>>>> On a failed hunch, I also re-ran the tests with a much larger
>>>> congestion window:
>>> I think you mean larger send/receive buffers instead of congestion
>>> window? I'll bet the Vegas parameters are keeping the congestion
>>
>> Correction noted. Coffee needed.
>>
>>> window smaller than your send/receive buffer sizes, so the buffers
>>> aren't the limiting factor in the first place, hence no improvement.
>>
>> I'll take a packet trace next time I run the test.
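>> (Probably something along the lines of
>>
>>   tcpdump -i wlan0 -s 128 -w vegas-run.pcap
>>
>> on each end, with the interface and file name invented here; the small
>> snaplen keeps the header-only captures compact.)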
>>
>>>
>>> The web100 patches (web100.org) are great for getting into the details
>>> of how TCP is working. If you don't want to apply them yourself, you
>>> can try the Live CD of perfSONAR-PS (http://psps.perfsonar.net/). It
>>> might be useful to have an NDT
>>> (http://www.internet2.edu/performance/ndt/) server running on your
>>> home network, or use one at M-Lab. It doesn't need much resource-wise,
>>> apart from the web100 patches.
>>
>> Excellent suggestions. Building now. (It seems to want Java, and I don't
>> think the little clients I have on this testbed can handle that well.)
>>
>> At the moment my little testbed is fairly flexible and my queue of
>> things to test is quite large.
>>
>> I have bloat-reducing patches for all the devices in the picture except
>> for the laptop's, which is proving to be painful to look at.
>>
>> At the moment, I'd like to be getting useful, interesting,
>> *repeatable* results for a variety of well-defined latency + throughput
>> tests with... stock firmware, and then be able to re-run the interesting
>> series against more custom configurations.
>>
>> I've only deployed the first patch on the wndr3700 thus far. It was
>> *amazing*.
>>
>>>
>>>    Justin
>>
>> -- 
>> Dave Taht
>> http://nex-6.taht.net
>> _______________________________________________
>> Bloat mailing list
>> Bloat at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>

-- 
Dave Taht
http://nex-6.taht.net 


