From: Rick Jones
Reply-To: rick.jones2@hp.com
To: Kevin Gross
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Burst Loss
Date: Fri, 13 May 2011 07:35:21 -0700
Message-ID: <1305297321.8149.549.camel@tardy>
List-Id: General list for discussing Bufferbloat

On Thu, 2011-05-12 at 23:00 -0600, Kevin Gross wrote:
> One of the principal reasons jumbo frames have not been standardized
> is due to latency concerns. I assume this group can appreciate the
> IEEE holding ground on this.

Thus far at least, bloaters are fighting to eliminate tens of
milliseconds of queuing delay.
I don't think this list is worrying about the tens of microseconds
difference between the transmission time of a 9000 byte frame and a
1500 byte frame at 1 GbE, or the single-digit microseconds difference
at 10 GbE. The "let's try to get onto the Top 500 list" crowd might,
but official sanction for a 9000 byte (or larger) MTU doesn't mean it
*must* be used.

> For a short time, servers with gigabit NICs suffered but smarter NICs
> were developed (TSO, LRO, other TLAs) and OSes upgraded to support
> them, and I believe it is no longer a significant issue.

Are TSO and LRO going to be sufficient at 40 and 100 GbE? Cores aren't
getting any faster, only more plentiful. And while it isn't the
strongest point in the world, one might even argue that the need to use
TSO/LRO to achieve performance hinders the adoption of new transport
protocols - the presence of NIC offloads for only TCP (or UDP) leaves a
new transport protocol (perhaps SCTP) at a disadvantage.

rick jones

> Kevin Gross
>
> On Thu, May 12, 2011 at 10:31 AM, Fred Baker wrote:
>
> > On May 9, 2011, at 11:06 AM, Rick Jones wrote:
> >
> > > GSO/TSO can be thought of as a symptom of standards bodies
> > > (e.g. the IEEE) refusing to standardize an increase in frame
> > > sizes. Put another way, they are a "poor man's jumbo frames."
> >
> > I'll agree, but only half; once the packets are transferred on
> > the local wire, any jumbo-ness is lost. GSO/TSO mostly squeezes
> > interframe gaps out of the wire and perhaps limits the amount of
> > work the driver has to do. The real value of an end-to-end (IP)
> > jumbo frame is that the receiving system experiences less
> > interrupt load - a 9K frame replaces half a dozen 1500 byte
> > frames, and as a result the receiver experiences 1/5 or 1/6 of
> > the interrupts. Given that it has to save state, activate the
> > kernel thread, and at least enqueue and perhaps acknowledge the
> > received message, reducing interrupt load on the receiver makes
> > it far more effective.
> > This has the greatest effect on multi-gigabit file transfers.

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
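[The serialization-time and interrupt-count figures in the thread above are easy to sanity-check. Here is a back-of-the-envelope sketch in Python using the frame sizes and link rates the posters mention; the function names are illustrative, not from any post.]

```python
# Sanity-check of the numbers discussed in the thread.

def serialization_us(frame_bytes, link_bps):
    """Time to put one frame on the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

GBE, TEN_GBE = 1e9, 10e9

# Extra wire time a 9000-byte jumbo frame costs over a 1500-byte frame.
delta_1g = serialization_us(9000, GBE) - serialization_us(1500, GBE)
delta_10g = serialization_us(9000, TEN_GBE) - serialization_us(1500, TEN_GBE)

print(f"1 GbE:  {delta_1g:.0f} us extra")   # 60 us -> "tens of microseconds"
print(f"10 GbE: {delta_10g:.0f} us extra")  # 6 us  -> "single-digit microseconds"

# Interrupt reduction from jumbo frames: one 9K frame carries the
# payload of six 1500-byte frames, so roughly 1/6 the interrupts
# (ignoring NIC interrupt coalescing).
print(f"frames replaced per jumbo frame: {9000 // 1500}")  # 6
```

Sixty microseconds at 1 GbE versus tens of milliseconds of queuing delay is three orders of magnitude, which is the point Rick is making.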