[Bloat] Best practices for paced TCP on Linux?
rick.jones2 at hp.com
Mon Apr 16 10:05:08 PDT 2012
On 04/14/2012 02:06 PM, Roger Jørgensen wrote:
> On Sat, Apr 14, 2012 at 2:35 AM, Rick Jones<rick.jones2 at hp.com> wrote:
>> On 04/06/2012 03:21 PM, Steinar H. Gunderson wrote:
>>> On Fri, Apr 06, 2012 at 02:49:38PM -0700, Dave Taht wrote:
>>>> However in your environment you will need the beefed up SFQ that is in
>>>> and BQL. If you are not saturating that 10GigE card, you can turn off
>>>> as well.
>>> We're not anywhere near saturating our 10GigE card, and even if we did, we
>>> could add at least one 10GigE card more.
>> TSO/GSO isn't so much about saturating the 10 GbE NIC as it is avoiding
>> saturating the CPU(s) driving the 10 GbE NIC. That is, they save trips down
>> the protocol stack, saving CPU cycles. So, if you are not saturating one or
>> more of the CPUs in the system, disabling TSO/GSO should not affect your
>> ability to drive bits out the NIC.
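For reference, a sketch of how one would check and toggle those offloads on Linux with ethtool (the interface name eth0 is an assumption; substitute your 10 GbE device):

```shell
# Show current offload settings, including
# tcp-segmentation-offload (TSO) and generic-segmentation-offload (GSO)
ethtool -k eth0

# Disable TSO and GSO on the interface (requires root);
# re-enable with "on" if CPU utilization climbs too high
ethtool -K eth0 tso off gso off
```

Watching per-CPU utilization (e.g. with mpstat) before and after is the way to see whether the extra trips down the stack actually matter on your hardware.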
> What will happen in a virtual only environment when all the VM's got
> more than one 10Gbps and you push close to 10Gbps through each VM?
> like heavy iperf between lots of the VM's?
I don't know, I run netperf :)
> Unless the platform does something that should start to saturate some
> of the CPU cores in the entire platform.
If the VMs are all on the same system, or there are enough 10 GbEs, then
yes, that would probably start to saturate the CPUs, perhaps even with
TSO on (if the VMs have that on their emulated interfaces). Probably
lots of time spent moving data around. If, though, all the VMs are
talking out the one 10 Gbps pipe, even with bloat the TCP connections
(I'm assuming TCP) will get backed-off and the CPUs won't be at
saturation.
But the ease with which things like TSO/GSO and LRO/GRO can hide a
multitude of path-length sins is why I prefer to use aggregate,
burst-mode TCP_RR to measure scalability - lots and lots of trips up and
down the protocol stack.
(I should probably switch the example to TCP_RR -
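A sketch of what an aggregate, burst-mode TCP_RR run can look like (the target address and the burst/instance counts are illustrative; the -b burst option requires netperf to have been built with --enable-burst):

```shell
# Launch four concurrent netperf TCP_RR streams against a netserver
# at 192.168.1.10, each keeping 64 transactions in flight (-b 64),
# with TCP_NODELAY set (-D), for 30 seconds (-l 30).
for i in 1 2 3 4; do
  netperf -H 192.168.1.10 -t TCP_RR -l 30 -- -b 64 -D &
done
wait
```

Because each transaction is small, throughput here is dominated by trips up and down the protocol stack rather than by data movement, which is what makes it a useful scalability probe.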