General list for discussing Bufferbloat
* [Bloat] TCP vegas vs TCP cubic
From: Dave Täht @ 2011-02-02 15:20 UTC (permalink / raw)
  To: bloat


On a suggestion from one of the posters to jg's blog, I took a look at
tcp vegas. The results I got were puzzling. 

With tcp cubic, I typically get 71Mbit/sec and all the side effects of
bufferbloat with a single stream.

With vegas turned on, a single stream peaks at around 20Mbit.

10 vegas streams did about 55Mbit in total. 

Can I surmise that TCP cubic is like a dragster, able to go really fast
in one direction down a straightaway, and TCP vegas more like an 80s
model MR2, maneuverable, but underpowered?

The testbed network: http://nex-6.taht.net/images/housenet.png 
The test path: laptop->nano-m->nano-m->openrd 
    (I note that this path almost never exhibits packet loss) 

Most of the machines on the path are running with minimal txqueues, and
their DMA buffers are set as low as they will go.
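
For reference, a rough sketch of what that looks like on the Linux boxes
(interface name and values here are illustrative, and the ring knob depends
on driver support):

ip link set dev eth0 txqueuelen 8    # shrink the interface transmit queue
ethtool -G eth0 tx 64 rx 64          # shrink the driver DMA rings, where supported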

The tests:


With cubic:

openrd:$ iperf -s
laptop:$ iperf -t 60 -c openrd

With vegas (on both laptop and server):

modprobe tcp_vegas
echo vegas > /proc/sys/net/ipv4/tcp_congestion_control

openrd:$ iperf -s
laptop:$ iperf -t 60 -c openrd &
laptop:$ ping
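
A quick sanity check that Vegas is actually in effect on both ends (sketch):

cat /proc/sys/net/ipv4/tcp_congestion_control   # should print "vegas"
sysctl net.ipv4.tcp_congestion_control          # same thing via sysctl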

On a failed hunch, I also re-ran the tests with a much larger
congestion window:

echo 8388608 > /proc/sys/net/core/rmem_max   # 8 MB, to match iperf -w8m
echo 8388608 > /proc/sys/net/core/wmem_max

iperf -w8m -s

It made no net difference.

-- 
Dave Taht
http://nex-6.taht.net


* Re: [Bloat] TCP vegas vs TCP cubic
From: Justin McCann @ 2011-02-02 16:05 UTC (permalink / raw)
  To: Dave Täht; +Cc: bloat

On Wed, Feb 2, 2011 at 10:20 AM, Dave Täht <d@taht.net> wrote:
> Can I surmise that TCP cubic is like a dragster, able to go really fast
> in one direction down a straightaway, and TCP vegas more like an 80s
> model MR2, maneuverable, but underpowered?

There are some parameters to tune, essentially setting the number of
packets you want queued in the network at any one time (see
http://neal.nu/uw/linux-vegas/). I haven't messed with it much myself,
but you might try to increase those just a bit -- if Vegas
underestimates the queue size and the queue empties, you'll never get
the throughput. Ideally there would always be exactly one packet in
the bottleneck queue.
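
A sketch of where those knobs live on Linux, assuming the stock tcp_vegas
module (alpha/beta are the lower/upper bounds on packets Vegas tries to keep
queued in the path; the defaults are small, roughly 2 and 4):

modprobe tcp_vegas
grep . /sys/module/tcp_vegas/parameters/*        # show alpha, beta, gamma
echo 3 > /sys/module/tcp_vegas/parameters/alpha  # values illustrative -- nudge the target up a bit
echo 6 > /sys/module/tcp_vegas/parameters/beta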

But I think your results are pretty much expected with Vegas, since it
uses the increase in queuing latency as an early congestion indicator.
If everyone used it, we may be better off, but other congestion
algorithms aren't fair to Vegas since they wait until there are packet
drops to notice congestion.


> On a failed hunch, I also re-ran the tests with a much larger
> congestion window:
>
> echo 8388608 > /proc/sys/net/core/rmem_max   # 8 MB, to match iperf -w8m
> echo 8388608 > /proc/sys/net/core/wmem_max
>
> iperf -w8m -s
>
> It made no net difference.

I think you mean larger send/receive buffers instead of congestion
window? I'll bet the Vegas parameters are keeping the congestion
window smaller than your send/receive buffer sizes, so they aren't
limiting you in the first place, so no improvement.
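
A quick way to check that from the sender while iperf runs (a sketch; the
exact fields shown depend on the iproute2 version):

ss -tinm    # find the iperf connection; compare its cwnd (-i) with the socket buffer sizes (-m)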

The web100 patches (web100.org) are great for getting into the details
of how TCP is working. If you don't want to apply them yourself, you
can try the Live CD of perfSONAR-PS (http://psps.perfsonar.net/). It
might be useful to have an NDT
(http://www.internet2.edu/performance/ndt/) server running on your
home network, or use one at M-Lab. It doesn't need much resource-wise
beyond the web100 patches.

   Justin


* Re: [Bloat] TCP vegas vs TCP cubic
From: Dave Täht @ 2011-02-02 16:29 UTC (permalink / raw)
  To: Justin McCann; +Cc: bloat


Thx for the feedback. I've put up more information on the wiki at:

http://www.bufferbloat.net/projects/bloat/wiki/Experiment_-_TCP_cubic_vs_TCP_vegas

(At least netnews had a "C"ancel message option. Wikis are safer to use
 before your first cup of coffee)

Justin McCann <jneilm@gmail.com> writes:

> On Wed, Feb 2, 2011 at 10:20 AM, Dave Täht <d@taht.net> wrote:
>> Can I surmise that TCP cubic is like a dragster, able to go really fast
>> in one direction down a straightaway, and TCP vegas more like an 80s
>> model MR2, maneuverable, but underpowered?
>
> There are some parameters to tune, essentially setting the number of
> packets you want queued in the network at any one time (see
> http://neal.nu/uw/linux-vegas/). I haven't messed with it much myself,
> but you might try to increase those just a bit -- if Vegas

I am reading now.

> underestimates the queue size and the queue empties, you'll never get
> the throughput. Ideally there would always be exactly one packet in
> the bottleneck queue.

What a happy day that would be.

>
> But I think your results are pretty much expected with Vegas, since it
> uses the increase in queuing latency as an early congestion indicator.
> If everyone used it, we may be better off, but other congestion
> algorithms aren't fair to Vegas since they wait until there are packet
> drops to notice congestion.

My thought was that if the wireless side of a given stack used it, life
might ultimately be better on that front, at least for people who upload
stuff.

>> On a failed hunch, I also re-ran the tests with a much larger
>> congestion window:
> I think you mean larger send/receive buffers instead of congestion
> window? I'll bet the Vegas parameters are keeping the congestion

Correction noted. Coffee needed.

> window smaller than your send/receive buffer sizes, so they aren't
> limiting you in the first place, so no improvement.

I'll take a packet trace next time I run the test.

>
> The web100 patches (web100.org) are great for getting into the details
> of how TCP is working. If you don't want to apply them yourself, you
> can try the Live CD of perfSONAR-PS (http://psps.perfsonar.net/). It
> might be useful to have an NDT
> (http://www.internet2.edu/performance/ndt/) server running on your
> home network, or use one at M-Lab. It doesn't need much resource-wise
> beyond the web100 patches.

Excellent suggestions. Building now. (It seems to want Java, and I don't
think the little clients I have on this testbed can handle that well.)

At the moment my little testbed is fairly flexible and my queue of
things to test is quite large.

I have bloat-reducing patches for all the devices in the picture except
for the laptop's driver, which is proving to be painful to look at.

At the moment, I'd like to be getting useful, interesting, *repeatable*
results for a variety of well-defined latency + throughput tests with
stock firmware, and then be able to re-run the interesting series against
more custom configurations.

I've only deployed the first patch on the wndr3700 thus far. It was
*amazing*. 

>
>    Justin

-- 
Dave Taht
http://nex-6.taht.net


* Re: [Bloat] TCP vegas vs TCP cubic
From: Richard Scheffenegger @ 2011-02-02 18:37 UTC (permalink / raw)
  To: Dave "Täht", Justin McCann; +Cc: bloat


Windows Vista / Windows 7 users can enable "Compound TCP", a hybrid TCP 
congestion control approach:

netsh int tcp set global congestionprovider=ctcp

and, while you're at it, also enable ECN (lossless congestion control 
feedback):

netsh int tcp set global ecncapability=enabled

If enough end users enable ECN, core providers may be inclined to deploy AQM 
with packet marking too... And since home gateways are the devices most prone 
to ECN implementation bugs, a full disconnect from the internet (rather than 
certain sites being unreachable) is quite easy to diagnose at that end.

I've been running with ECN for a couple of months now, and so far I have yet 
to encounter a site that will consistently fail with ECN. Actually, enabling 
ECN gives you more retries on the SYN (3x ECN+SYN, then 3x normal SYN), so in 
a heavily congested / bufferbloated environment, your small flows might get 
through eventually, with higher probability.
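
For the Linux boxes in the same situation, the rough equivalent is a sysctl 
(a sketch):

sysctl -w net.ipv4.tcp_ecn=1   # 1 = request ECN on outgoing connections and accept it on incoming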

Regards,
   Richard




* Re: [Bloat] TCP vegas vs TCP cubic
From: Dave Täht @ 2011-02-02 19:16 UTC (permalink / raw)
  To: Richard Scheffenegger; +Cc: bloat


Thx. Wiki'd: 

http://www.bufferbloat.net/projects/bloat/wiki/Windows_Tips

Is there a windows equivalent of ethtool?

"Richard Scheffenegger" <rscheff@gmx.at> writes:

> Windows Vista / Windows 7 users can enable "Compound TCP", a hybrid TCP
> congestion control approach:
>
> netsh int tcp set global congestionprovider=ctcp
>
> and, while you're at it, also enable ECN (lossless congestion control
> feedback):
>
> netsh int tcp set global ecncapability=enabled
>
> If enough end users enable ECN, core providers may be inclined to
> deploy AQM with packet marking too... And since home gateways are the
> devices most prone to ECN implementation bugs, a full disconnect from
> the internet (rather than certain sites being unreachable) is quite easy
> to diagnose at that end.
>
> I've been running with ECN for a couple of months now, and so far I have
> yet to encounter a site that will consistently fail with ECN. Actually,
> enabling ECN gives you more retries on the SYN (3x ECN+SYN, then 3x
> normal SYN), so in a heavily congested / bufferbloated environment, your
> small flows might get through eventually, with higher probability.
>
> Regards,
>   Richard

-- 
Dave Taht
http://nex-6.taht.net


* Re: [Bloat] TCP vegas vs TCP cubic
From: Jim Gettys @ 2011-02-02 20:01 UTC (permalink / raw)
  To: bloat

On 02/02/2011 02:16 PM, Dave Täht wrote:
>
> Thx. Wiki'd:
>
> http://www.bufferbloat.net/projects/bloat/wiki/Windows_Tips
>
> Is there a windows equivalent of ethtool?

Dunno what command line there may be.

Certainly the device driver I looked at had a UI that would let me 
control the ring buffer size, IIRC from my November experiments on Windows.
                   - Jim


* Re: [Bloat] TCP vegas vs TCP cubic
From: Richard Scheffenegger @ 2011-02-02 21:36 UTC (permalink / raw)
  To: Dave "Täht"; +Cc: bloat


I guess so, but only with limited tunables:

Control Panel -> All Control Panel Items -> Network and Sharing Center ->
  Change Adapter Settings -> (Select Adapter, right click, Properties)
    Configure -> Advanced

Then you get a list of things the driver allows you to tweak (or at least 
display).

Interesting for us:
  Receive Buffers (256 on a Marvell Yukon)
  Transmit Buffers (256)
  Interrupt Moderation (also adds latency for slow flows).
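
For comparison, the nearest Linux equivalents of those three (a sketch; the
interface name is illustrative, and support varies by driver):

ethtool -g eth0                # show RX/TX ring sizes (the Receive/Transmit Buffers above)
ethtool -G eth0 rx 64 tx 64    # shrink them where the driver allows
ethtool -C eth0 rx-usecs 0     # interrupt moderation / coalescing knobs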

Regards,
   Richard


----- Original Message ----- 
From: "Dave "Täht"" <d@taht.net>
To: "Richard Scheffenegger" <rscheff@gmx.at>
Cc: "Justin McCann" <jneilm@gmail.com>; "bloat" 
<bloat@lists.bufferbloat.net>
Sent: Wednesday, February 02, 2011 8:16 PM
Subject: Re: [Bloat] TCP vegas vs TCP cubic



Thx. Wiki'd:

http://www.bufferbloat.net/projects/bloat/wiki/Windows_Tips

Is there a windows equivalent of ethtool?

"Richard Scheffenegger" <rscheff@gmx.at> writes:

> For all the Windows Vista / Windows 7 Users around, they can enable
> "Compound TCP", which is a Hybrid TCP Congestion Control approach:
>
> netsh int tcp set global congestionprovider=ctcp
>
> and, while you're at it, also enable ECN (lossless congestion control
> feedback):
>
> netsh int tcp set global ecncapability=enabled
>
> If enough End Users enable ECN, core providers may be inclined to
> deploy AQM with Packet Marking too... And as Home Gateways are those
> which are prone to ECN implementation bugs, a full disconnect from the
> internet (rather than certain sites not reachable) is quite easy to
> diagnose at that end.
>
> Been running with ECN since a couple of months, and so far I have yet
> to encounter a site that will consistently fail with ECN. Actually,
> enabling ECN gives you more retries with the SYN (3x ECN+SYN, 3x
> normal SYN), so in a heavy congested / bufferbloated environment, your
> small flows might get through eventually, with higher probability.
>
> Regards,
>   Richard
>
>
>
> ----- Original Message ----- 
> From: "Dave "Täht"" <d@taht.net>
> To: "Justin McCann" <jneilm@gmail.com>
> Cc: "bloat" <bloat@lists.bufferbloat.net>
> Sent: Wednesday, February 02, 2011 5:29 PM
> Subject: Re: [Bloat] TCP vegas vs TCP cubic
>
>
>>
>> Thx for the feedback. I've put up more information on the wiki at:
>>
>> http://www.bufferbloat.net/projects/bloat/wiki/Experiment_-_TCP_cubic_vs_TCP_vegas
>>
>> (At least netnews had a "C"ancel message option. Wikis are safer to use
>> before your first cup of coffee)
>>
>> Justin McCann <jneilm@gmail.com> writes:
>>
>>> On Wed, Feb 2, 2011 at 10:20 AM, Dave Täht <d@taht.net> wrote:
>>>> Can I surmise that TCP cubic is like a dragster, able to go really fast
>>>> in one direction down a straightaway, and TCP vegas more like an 80s
>>>> model MR2, maneuverable, but underpowered?
>>>
>>> There are some parameters to tune, essentially setting the number of
>>> packets you want queued in the network at any one time (see
>>> http://neal.nu/uw/linux-vegas/). I haven't messed with it much myself,
>>> but you might try to increase those just a bit -- if Vegas
>>
>> I am reading now.
>>
>>> underestimates the queue size and the queue empties, you'll never get
>>> the throughput. Ideally there would always be exactly one packet in
>>> the bottleneck queue.
>>
>> What a happy day that would be.
>>
>>>
>>> But I think your results are pretty much expected with Vegas, since it
>>> uses the increase in queuing latency as an early congestion indicator.
>>> If everyone used it, we may be better off, but other congestion
>>> algorithms aren't fair to Vegas since they wait until there are packet
>>> drops to notice congestion.
>>
>> My thought was, is that if it were possible that the wireless side of a
>> given stack used it, life might be better on that front. Ultimately. For
>> people that upload stuff.
>>
>>>> On a failed hunch, I also re-ran the tests with a much larger
>>>> congestion window:
>>> I think you mean larger send/receive buffers instead of congestion
>>> window? I'll bet the Vegas parameters are keeping the congestion
>>
>> Correction noted. Coffee needed.
>>
>>> window smaller than your send/receive buffer sizes, so they aren't
>>> limiting you in the first place, so no improvement.
>>
>> I'll take a packet trace next time I run the test.
>>
>>>
>>> The web100 patches (web100.org) are great for getting into the details
>>> of how TCP is working. If you don't want to apply them yourself, you
>>> can try the Live CD of perfSONAR-PS (http://psps.perfsonar.net/). It
>>> might be useful to have an NDT
>>> (http://www.internet2.edu/performance/ndt/) server running on your
>>> home network, or use one at M-Lab. It doesn't need much resource-wise
>>> but the web100 patches.
>>
>> Excellent suggestions. Building now. (It seems to want java and I don't
>> think the little clients I have on this testbed can handle that well)
>>
>> At the moment my little testbed is fairly flexible and my queue of
>> things to test is quite large.
>>
>> I have bloat-reducing patches for all the devices in the picture except
>> for the laptop's , which is proving to be painful to look at.
>>
>> At the moment, I'd like to be getting, useful, interesting,
>> *repeatable* results for a variety of well defined latency + throughput
>> tests with... stock firmware and then be able to re-run the interesting
>> series(s) against more custom configurations.
>>
>> I've only deployed the first patch on the wndr3700 thus far. It was
>> *amazing*.
>>
>>>
>>>    Justin
>>
>> -- 
>> Dave Taht
>> http://nex-6.taht.net
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>

-- 
Dave Taht
http://nex-6.taht.net 


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [Bloat] TCP vegas vs TCP cubic
From: Dave Taht @ 2011-02-03 17:53 UTC (permalink / raw)
  To: Justin McCann; +Cc: bloat

re: vegas
On Wed, Feb 2, 2011 at 9:05 AM, Justin McCann <jneilm@gmail.com> wrote:

> There are some parameters to tune, essentially setting the number of
> packets you want queued in the network at any one time (see
> http://neal.nu/uw/linux-vegas/). I haven't messed with it much myself,
> but you might try to increase those just a bit -- if Vegas
> underestimates the queue size and the queue empties, you'll never get
> the throughput. Ideally there would always be exactly one packet in
> the bottleneck queue.
>
> But I think your results are pretty much expected with Vegas, since it
> uses the increase in queuing latency as an early congestion indicator.
> If everyone used it, we may be better off, but other congestion
> algorithms aren't fair to Vegas since they wait until there are packet
> drops to notice congestion.

After talking to the original author (who suggested that I also look
at TCP veno) ...

I suspect that Vegas is actually doing a *great* job measuring queuing
latency, which is artificially high due to the network path under test
doing 13 TX_Retries.
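
A sketch of how to inspect and lower that where the driver exposes it
(wireless-tools syntax; support varies a lot by driver, and the value is
illustrative):

iwconfig wlan0 | grep -i retry    # show the current retry limit
iwconfig wlan0 retry limit 4      # try a lower limit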

I did some testing on a de-bloated device and vegas actually did
slightly better than cubic (noted over on bloat-devel), but I will need
to either get my radio speed ratcheted down or update more radios to a
de-bloated condition before testing further.


-- 
Dave Täht


* Re: [Bloat] TCP vegas vs TCP cubic
From: Seth Teller @ 2011-02-03 18:34 UTC (permalink / raw)
  To: bloat


For completeness, are the commands needed to undo

   netsh int tcp set global congestionprovider=ctcp
   netsh int tcp set global ecncapability=enabled

these:

   netsh int tcp set global congestionprovider=tcp
   netsh int tcp set global ecncapability=disabled

?
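
(It may be that the stock value is actually congestionprovider=none rather
than tcp; either way, the current settings can be noted before changing
anything with:

netsh int tcp show global
)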


* Re: [Bloat] TCP vegas vs TCP cubic
From: Juliusz Chroboczek @ 2011-02-04  9:51 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

> I did some testing on a de-bloated device and vegas actually did
> slightly better than cubic

That's strange.  Vegas should not be reacting to the base delay, only to
jitter.
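
(For reference, the Vegas rule is roughly: estimate
diff = cwnd * (RTT - baseRTT) / RTT, the number of packets it believes are
sitting in queues along the path, then grow cwnd while diff < alpha and back
off once diff > beta. So it keys off the delay above baseRTT, not baseRTT
itself.)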

--Juliusz






* Re: [Bloat] TCP vegas vs TCP cubic
From: Dave Täht @ 2011-02-04 15:18 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: bloat


Juliusz Chroboczek <jch@pps.jussieu.fr> writes:

>> I did some testing on a de-bloated device and vegas actually did
>> slightly better than cubic
>
> That's strange.  Vegas should not be reacting to the base delay, only to
> jitter.

The insanely high TX_RETRY setting (13) on the first (nanostation)
network might have had something to do with it. The debloated device (a
wndr3700) had TX_RETRY=4.

It remains puzzling.

I am doing a new build of openwrt for the nanostation net (incorporating
babel-1.1 - thx!), but I ran into problems elsewhere in getting the build
done, so I'm a ways from being able to play with this comprehensively.

And I'd like to expose TX_RETRY to userspace at some point.
And it would be cool to get more drivers - like iwlagn - debloated.

>
> --Juliusz

-- 
Dave Taht
http://nex-6.taht.net
