From: Jim Gettys
Organization: Bell Labs
Date: Thu, 05 May 2011 14:34:37 -0400
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Burst Loss
Message-ID: <4DC2EDBD.1070508@freedesktop.org>
In-Reply-To: <6E25D2CF-D0F0-4C41-BABC-4AB0C00862A6@pnsol.com>

On 05/05/2011 12:49 PM, Neil Davies wrote:
> On the issue of loss - we did a study of the UK's ADSL access network back
> in 2006, over several weeks, looking at the loss and delay that was
> introduced into the bi-directional traffic.
>
> We found that the delay variability (that bit left over after you've taken
> out the effects of geography and line sync rates) was broadly the same over
> the half dozen locations we studied - it was there all the time, at the
> same level of variance; what did vary by time of day was the loss rate.
>
> We also found out, at the time much to our surprise - but we understand why
> now - that loss was broadly independent of the offered load; we used a
> constant data rate (with either fixed or variable packet sizes).
>
> We found that loss rates were in the range 1% to 3% (which is what would be
> expected from a large number of TCP streams contending for a limiting
> resource).
> As for burst loss, yes, it does occur - but it could be argued that this is
> more the fault of the sending TCP stack than of the network.
>
> This phenomenon was well covered in the academic literature in the '90s (if
> I remember correctly, folks at INRIA led the way) - it is all down to the
> nature of random processes and how you observe them.
>
> Back-to-back packets see higher loss rates than packets more spread out in
> time. Consider a pair of packets, back to back, arriving over a 1 Gbit/sec
> link into a queue being serviced at 34 Mbit/sec. The first packet being
> 'lost' is equivalent to saying that the first packet 'observed' the queue
> full - the system's state is no longer a random variable; it is known to be
> full. The second packet (let's assume it is also a full-sized one) 'makes
> an observation' of the state of that queue about 12 us later - but that is
> only 3% of the time it takes to service such a large packet at 34 Mbit/sec.
> The system has not had any time to 'relax' anywhere near back to its steady
> state; it is highly likely that it is still full.
>
> Fixing this makes a phenomenal difference to the goodput (with the usual
> delay effects that implies); we've even built and deployed systems with
> this sort of engineering embedded (deployed as a network 'wrap') that mean
> that end users can sustainably (days on end) achieve effective throughput
> better than 98% of the (transmission-media-imposed) maximum. What we had
> done is make the network behave closer to the underlying statistical
> assumptions made in TCP's design.
>
> Neil

Good point: in phone conversations with Van Jacobson, he made the point that
we'd really like the hardware to allow scheduling of packet transmission, to
allow proper pacing of packets, to avoid clumping and to smooth the flow.
    - Jim
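To put rough numbers on Neil's example, and on what pacing would buy (just a
back-of-the-envelope sketch in Python; the 1500-byte full-sized packet is my
assumption, the two link rates are the ones Neil quotes):

    # Back-to-back arrival spacing vs. bottleneck service time
    PKT_BITS  = 1500 * 8      # assumed full-sized packet
    FAST_LINK = 1e9           # 1 Gbit/sec arrival link
    SLOW_LINK = 34e6          # 34 Mbit/sec bottleneck service rate

    gap     = PKT_BITS / FAST_LINK   # spacing of back-to-back arrivals
    service = PKT_BITS / SLOW_LINK   # time to drain one packet at 34 Mbit/sec

    print("arrival gap  : %6.1f us" % (gap * 1e6))            # ~12 us
    print("service time : %6.1f us" % (service * 1e6))        # ~353 us
    print("gap/service  : %5.1f %%" % (100 * gap / service))  # ~3.4 %

The second packet samples the queue after only ~3% of one service time, so
the queue has had almost no chance to drain; a sender paced at the bottleneck
rate would instead space its packets roughly a whole service time (~353 us)
apart.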
> On 5 May 2011, at 17:10, Stephen Hemminger wrote:
>
>> On Thu, 05 May 2011 12:01:22 -0400
>> Jim Gettys wrote:
>>
>>> On 04/30/2011 03:18 PM, Richard Scheffenegger wrote:
>>>> I'm curious, has anyone done some simulations to check if the
>>>> following qualitative statement holds true, and if so, what the
>>>> quantitative effect is:
>>>>
>>>> With bufferbloat, the TCP congestion control reaction is unduly
>>>> delayed. When it finally happens, the TCP stream is likely facing a
>>>> "burst loss" event - multiple consecutive packets get dropped. Worse
>>>> yet, the sender with the lowest RTT across the bottleneck will likely
>>>> start to retransmit while the (tail-drop) queue is still overflowing.
>>>>
>>>> And a lost retransmission means a major setback in bandwidth (except
>>>> for Linux with bulk transfers and SACK enabled), as the standard
>>>> (RFC-documented) behaviour asks for an RTO (1 sec nominally, 200-500 ms
>>>> typically) to recover such a lost retransmission...
>>>>
>>>> The second part (more important as an incentive to the ISPs, actually):
>>>> how does the fraction of goodput vs. throughput change when AQM schemes
>>>> are deployed and TCP CC reacts in a timely manner? Small ISPs have to
>>>> pay for their upstream volume, regardless of whether that is "real"
>>>> work (goodput) or unnecessary retransmissions.
>>>>
>>>> When I was at a small cable ISP in Switzerland last week, sure enough
>>>> bufferbloat was readily observable (17 ms -> 220 ms after 30 sec of a
>>>> bulk transfer), but at first they had the "not our problem" view, until
>>>> I started discussing burst loss / retransmissions / goodput vs.
>>>> throughput - with the latter point being a real commercial incentive to
>>>> them. (They promised to check whether AQM would be available in the
>>>> CPE / CMTS, and to put latency bounds in their tenders going forward.)
>>>>
>>> I wish I had a good answer to your very good questions. Simulation
>>> would be interesting, though real data is more convincing.
>>>
>>> I haven't looked in detail at all that many traces to try to get a feel
>>> for how much bandwidth waste there actually is, and more formal studies
>>> like Netalyzr, SamKnows, or the Bismark project would be needed to
>>> quantify the loss on the network as a whole.
>>>
>>> I did spend some time last fall with the traces I've taken. In those,
>>> I've typically been seeing 1-3% packet loss in the main TCP transfers.
>>> On the wireless trace I took, I saw 9% loss, but whether that is
>>> bufferbloat-induced loss or not, I don't know (the data is out there for
>>> those who might want to dig). And as you note, the losses are
>>> concentrated in bursts (probably due to the details of Cubic, so I'm
>>> told).
>>>
>>> I've had anecdotal reports (and some first-hand experience) of much
>>> higher loss rates, for example from Nick Weaver at ICSI; but I believe
>>> in playing things conservatively with any numbers I quote, and I've not
>>> gotten consistent results when I've tried, so I just report what's in
>>> the packet captures I did take.
>>>
>>> A phenomenon that could be occurring is that during congestion avoidance
>>> (until TCP loses its cookies entirely and probes for a higher operating
>>> point), TCP is carefully timing its packets to keep the buffers almost
>>> exactly full, so that competing flows (in my case, simple pings) are
>>> likely to arrive just when there is no buffer space to accept them, and
>>> therefore you see higher losses on them than you would on the single
>>> flow I've been tracing and getting loss statistics from.
>>>
>>> People who want to look into this further would be a great help.
>>>     - Jim
>>
>> I would not put a lot of trust in measuring loss with pings.
>> I heard that some ISPs do different processing on the ICMP packets used
>> for ping. They either prioritize them high to provide artificially good
>> response times (better marketing numbers), or prioritize them low since
>> they aren't useful traffic. There are also filters that only allow N ICMP
>> requests per second, which means repeated probes will be dropped.
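(For what it's worth, the per-second ICMP filters Stephen mentions are often
some form of token bucket; a toy sketch, with made-up numbers, of why a rapid
ping train sees drops that a slow one doesn't:)

    # Toy token-bucket rate limiter - illustrative only, numbers are made up.
    def probes_passed(rate_limit, burst, probe_interval, n_probes):
        """Count probes admitted by a bucket refilled at rate_limit tokens/sec."""
        tokens, passed, now, last = float(burst), 0, 0.0, 0.0
        for _ in range(n_probes):
            tokens = min(burst, tokens + (now - last) * rate_limit)  # refill
            last = now
            if tokens >= 1.0:         # a token is available: probe passes
                tokens -= 1.0
                passed += 1
            now += probe_interval     # next probe arrives
        return passed

    # Limiter: 2 ICMP/sec sustained, burst of 5 tokens.
    print(probes_passed(2, 5, probe_interval=1.0, n_probes=60))  # slow probes: all pass
    print(probes_passed(2, 5, probe_interval=0.1, n_probes=60))  # fast probes: ~3 in 4 dropped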