From: richard
To: Jonathan Morton
Date: Sat, 12 Mar 2011 14:23:59 -0800
Message-Id: <1299968639.31851.30.camel@amd.pacdat.net>
In-Reply-To: <462034BF-919D-4AE4-BA58-EA98C95D870F@gmail.com>
References: <16808EAB-2F52-4D32-8A8C-2AE09CD4D103@gmail.com>
	<1299899959.1835.10.camel@amd.pacdat.net>
	<10491D5A-AA1B-4F41-99A9-15A0C06ADF25@gmail.com>
	<1299902651.31981.7.camel@amd.pacdat.net>
	<462034BF-919D-4AE4-BA58-EA98C95D870F@gmail.com>
Content-Type: text/plain
Mime-Version: 1.0
X-Mailer: Evolution 2.26.3
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Measuring latency-under-load consistently
List-Id: General list for discussing Bufferbloat

On Sat, 2011-03-12 at 23:57 +0200, Jonathan Morton wrote:
> On 12 Mar, 2011, at 6:04 am, richard wrote:
>
> > OK - you make a good case for a new measure, as my understanding of
> > jitter is latency related and typically measured at the link level
> > (udp) rather than at the application level.
> >
> > I infer then that this will do things like impact the CPU load and
> > disk load, and might for example introduce "ringing" or harmonics
> > into such sub systems if/when applications end up "in sync" due to
> > being "less smooth" in their data output to the lower level IP
> > levels.
>
> I'm not sure how significant those effects would be, compared to
> simple data starvation at the client.  Most Web servers operate with
> all the frequently-accessed data in RAM (via disk cache) and serve
> many clients at once or in quick succession, whose network paths
> don't have the same bottlenecks in general.
>
> It was my understanding that UDP-based protocols tended to tolerate

my bad - meant ICMP (as in ping)

> packet loss through redundancy and graceful degradation rather than
> retransmission, though there are always exceptions to the rule.  So a
> video streaming server would be transmitting smoothly, with the
> client giving feedback on how much data had been received and how
> much packet loss it was experiencing.  Even if that status
> information is considerably delayed, I don't see why load spikes at
> the server should occur.
some of the video servers I deal with are using TCP (Windows Media, for
example, unless you configure it otherwise) but in general you're
right: the general rule with concurrent (as opposed to multicast)
unicast streams is that the server clocks the outbound and cares little
about whether the packets actually get there - that's the receiver's
problem.

> A fileserver, on the other hand, would not care very much.  Even if
> the TCP window has grown to a megabyte, it takes longer to seek disk
> heads than to read that much off the platter, so these lumps would be
> absorbed by the normal readahead and elevator algorithms anyway.
> However, large TCP windows do consume RAM in both server and client,
> and with a sufficient number of simultaneous clients, that could
> theoretically cause trouble.  Constraining TCP windows to near the
> actual BDP is more efficient all around.

Yes - have had RAM exhaustion problems on busy servers with large video
files - major headache.

> > It will be affected by session drops due to timeouts as well as the
> > need to "fill the pipe" on a reconnect in such applications as
> > streaming video (my area) so that a key frame can come into the
> > system and restart the interrupted video play.
>
> In the event of a TCP session drop, I think I will consider it a test
> failure and give zero scores across the board.  Sufficient delay or
> packet loss to cause that indicates a pretty badly broken network.

Yup - that's what the problem is all right.

> With that said, I can think of a case where it is likely to happen.
> Remember that I was seeing 30 seconds of buffering on a 500kbps 3G
> link... now what happens if the link drops to GPRS speeds?  There
> would be over a megabyte of data queued up behind a 50Kbps link (at
> best).  No wonder stuff basically didn't work when that happened.
>
> - Jonathan

Your insights into this are great - thanks :)

richard

-- 
Richard C. Pitt         Pacific Data Capture
rcpitt@pacdat.net       604-644-9265
http://digital-rag.com  www.pacdat.net

PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73
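The buffer arithmetic in Jonathan's GPRS scenario, and the contrast
with a BDP-sized window, can be sketched roughly as follows. The
500kbps/30s and 50Kbps figures come from the thread; the 600 ms GPRS
round-trip time is an assumed, illustrative value, and the helper
names are mine:

```python
# Back-of-the-envelope numbers for the buffering scenario discussed
# above.  The 500 kbps / 30 s and 50 kbps figures are from the thread;
# the 600 ms GPRS RTT is an assumed, illustrative value.

def drain_time_s(queued_bytes: float, rate_bps: float) -> float:
    """Seconds needed to empty a queue at the given link rate."""
    return queued_bytes * 8 / rate_bps

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: roughly the most data that needs to be
    in flight to keep the pipe full."""
    return rate_bps * rtt_s / 8

# 30 seconds of buffered data on a 500 kbps 3G link:
queued = 500_000 * 30 / 8             # 1,875,000 bytes - "over a megabyte"

# If the link then falls back to GPRS (~50 kbps at best), that stale
# backlog drains at the new rate:
drain = drain_time_s(queued, 50_000)  # 300 seconds

# A TCP window held near the actual BDP of the GPRS link would instead
# keep only a few kilobytes queued:
bdp = bdp_bytes(50_000, 0.6)          # 3,750 bytes
```

Five minutes to drain a stale megabyte before any fresh data arrives is
consistent with "stuff basically didn't work" when the link dropped.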