[Bloat] Burst Loss

Rick Jones rick.jones2 at hp.com
Fri May 13 16:47:48 EDT 2011


On Fri, 2011-05-13 at 12:32 -0700, Denton Gentry wrote:

>   NICs seem to be responding by hashing incoming 5-tuples to
> distribute flows across cores.

When I first kicked netperf out onto the Internet, when 10
Megabits/second was really fast, people started asking me "Why can't I
get link-rate on a single-stream netperf test?"  The answer was "Because
you don't have enough CPU horsepower, but perhaps the next processor
will." Then when 100BT happened, people asked me "Why can't I get
link-rate on a single-stream netperf test?"  And the answer was the
same.  Then when 1 GbE happened, people asked me "Why can't I get
link-rate on a single-stream netperf test?"  And the answer was the
same, tweaked slightly to suggest they get a NIC with CKO.  Then when 10
GbE happened people asked me "Why can't I get link-rate on a
single-stream netperf test?" And the answer was "Because you don't have
enough CPU, try a NIC with TSO and LRO."

Based on the past 20 years I am quite confident that when 40 and 100 GbE
NICs appear for end systems, I will again be asked "Why can't I get
link-rate on a single-stream netperf test?"  While, indeed, the world is
not just unidirectional bulk flows (if it were, netperf and its
request-response tests would never have come into being to replace
ttcp), even after decades it is still something people seem to expect.
There must be some value to high performance unidirectional transfer.

Only now the cores aren't going to have gotten any faster, and spreading
incoming 5-tuples across cores isn't going to help a single stream.
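
(A minimal sketch, in Python, of the idea - not any particular NIC's
actual RSS function: the receive queue is chosen purely from the flow's
5-tuple, so every packet of a single stream keeps landing on the same
queue and core no matter how fast that stream runs.)

    import hashlib

    def pick_queue(src_ip, dst_ip, src_port, dst_port, proto, n_queues=8):
        # Hash the 5-tuple and pick a receive queue; real NICs use a
        # Toeplitz-style hash, this just illustrates the flow-to-queue mapping.
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_queues

    # One flow, one queue - adding cores does nothing for this single stream:
    print(pick_queue("10.0.0.1", "10.0.0.2", 34567, 5001, "tcp"))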

So, the "answer" will likely end-up being to add still more complexity -
either in the applications to use multiple streams, or to push the full
stack into the NIC. Adde parvum parvo manus acervus erit. But, by
Metcalf, we will have preserved the sacrosanct Ethernet maximum frame
size.
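
(Purely illustrative, and with hypothetical host, port and stream count:
a minimal Python sketch of the "more complexity in the application"
route, striping a payload across several TCP connections so that each
flow presents a different 5-tuple and can land on a different
queue/core.  Reassembly on the receiving side is left out.)

    import socket
    import threading

    def send_stripe(host, port, data):
        # One TCP connection per stripe; each connection is its own 5-tuple.
        with socket.create_connection((host, port)) as s:
            s.sendall(data)

    def parallel_send(host, port, payload, n_streams=4):
        chunk = -(-len(payload) // n_streams)   # ceiling division
        pieces = [payload[i:i + chunk] for i in range(0, len(payload), chunk)]
        threads = [threading.Thread(target=send_stripe, args=(host, port, p))
                   for p in pieces]
        for t in threads:
            t.start()
        for t in threads:
            t.join()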

Crossing emails a bit, Kevin wrote about the 6X increase in latency.  It
is a 6X increase in *potential* latency *if* someone actually enables
the larger MTU.  And yes, the "We want to be on the Top 500 list" types
do worry about latency, and some, perhaps even many, of them use
Ethernet instead of InfiniBand (which does, BTW, offer at least the
illusion of a quite large MTU to IP).  But a sanctioned way to run a
larger MTU over Ethernet does not *force* them to use it if they want
to make the explicit latency vs. overhead trade-off.  As it stands,
those who do not worry about micro- or nanoseconds are forced off the
standard in the name of preserving something for those who do.  (And
with 100 GbE we would be talking about nanosecond differences - the 12
and 72 usec of 1 GbE become 120 and 720 nanoseconds at 100 GbE, the
realm of a processor cache miss, because memory latency hasn't gotten
much better and likely won't.)
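
(The arithmetic behind those figures, as a quick Python check:
serialization time is simply frame bits divided by link rate, ignoring
preamble and inter-frame gap.)

    for rate_bps, label in ((1e9, "1 GbE"), (100e9, "100 GbE")):
        for frame_bytes in (1500, 9000):
            usec = frame_bytes * 8 / rate_bps * 1e6
            print(f"{label}: {frame_bytes}-byte frame = {usec:g} usec on the wire")

    # 1 GbE:   1500 bytes -> 12 usec,    9000 bytes -> 72 usec
    # 100 GbE: 1500 bytes -> 0.12 usec (120 ns), 9000 bytes -> 0.72 usec (720 ns)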

And, are transaction or SAN latencies actually measured in microseconds
or nanoseconds?  If "transactions" are OLTP, those things are measured
in milliseconds and even whole seconds (TPC), and spinning rust (yes,
but not SSDs) still has latencies measured in milliseconds.

rick jones


>  
>         And while it isn't the strongest point in the world, one might
>         even argue that the need to use TSO/LRO to achieve performance
>         hinders new transport protocol adoption - the presence of NIC
>         offloads for only TCP (or UDP) leaves a new transport protocol
>         (perhaps SCTP) at a disadvantage.
> 
> 
>   True, and even UDP seems to be often blocked for anything other than
> DNS.
