From: Jim Gettys
Organization: Bell Labs
Date: Fri, 27 Jan 2012 12:10:23 -0500
To: Paul Gleichauf
Cc: bloat
Subject: Re: [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
Message-ID: <4F22DA7F.6090302@freedesktop.org>
References: <4F21B1E8.9010504@freedesktop.org>

On 01/26/2012 03:48 PM, Paul Gleichauf wrote:
> Jim,
>
> I suggest you also look at YouTube. The last time I checked, their
> initial firehose behavior was more pronounced than Hulu's, and it may
> have had more to do with deliberately designed server behavior to
> support a specific usage pattern for viewing their videos. Without
> server logs, it's hard to tell how much the intermediate network
> nodes contribute, vs. bufferbloat, vs. the server's rate limiting.
> It appears as if YouTube may want to move on to the next relatively
> short request as fast as possible, rather than rate-limiting in
> pulses to serve longer-running streams such as a movie or a
> television show.
>
> Here are a couple of old traces to make the point. Both are
> Flash-based on the same platform, but different content.

I will. But Simon Leinen on Google+ also pointed me at:

"For a characterization of the behaviors of various video streaming
services, see 'Network Characteristics of Video Streaming Traffic' by
Rao et al., CoNEXT 2011
(http://conferences.sigcomm.org/co-next/2011/program.html)"

In the short term, I need to understand what current TCP systems
*should* do in the face of long idle periods, which invalidate the
congestion information. I fear these may be large line-rate bursts
hitting the broadband head end, and then propagating to the home
router queues... under some circumstances (e.g. the RTO gets driven
up due to bloat, so the congestion window doesn't come down very fast
in the idle periods).
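If I read RFC 2861 ("TCP Congestion Window Validation") correctly,
the sender is supposed to halve cwnd for each RTO that elapses while
the connection sits idle, never going below the restart window (Linux
gates roughly this behaviour behind the
net.ipv4.tcp_slow_start_after_idle sysctl). A toy sketch of that
decay rule, with made-up numbers, shows why an inflated RTO defeats
it:

#!/usr/bin/env python3
# Sketch of the RFC 2861 decay rule: halve cwnd for each RTO the
# connection sits idle, never dropping below the restart window.
# Illustrative only; real stacks differ in detail.

def cwnd_after_idle(cwnd, idle_time, rto, restart_cwnd=4):
    """Return cwnd (in segments) after idle_time seconds of silence."""
    while idle_time > rto and cwnd > restart_cwnd:
        cwnd = max(cwnd // 2, restart_cwnd)
        idle_time -= rto
    return cwnd

# A 10-second gap between video bursts with a healthy 300 ms RTO
# decays a 100-segment window back to the restart window...
print(cwnd_after_idle(cwnd=100, idle_time=10.0, rto=0.3))  # -> 4

# ...but if bloat has driven the RTO up to 5 seconds, the window
# barely comes down, and the next burst goes out near line rate.
print(cwnd_after_idle(cwnd=100, idle_time=10.0, rto=5.0))  # -> 50

If that is what deployed stacks actually do, the 10-second bursts
from these players would hit the head end at close to full window
size whenever bloat has inflated the RTO. But I may well be
misreading the RFC; corrections welcome.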
Help from people who know TCP better than I do (which isn't all that
well) would be gratefully appreciated. My immediate reaction to these
bursts was "boy, we may need someone (not me) to write a BCP on how
to build more network-friendly video players".... There's been a
significant discussion on Google+ as a result since then.

>
> YouTube (paragliding clip):
>
> Hulu (SNL clip):
>
> Paul
>
> On Jan 26, 2012, at 12:04 PM, Jim Gettys wrote:
>
>> Since we're rapidly getting to the point of needing to test AQM
>> algorithms (as enough of the underbrush has been cleared away), I've
>> been thinking a bit about how to test them.
>>
>> I've been inspired to think about what a test scenario for both
>> simulation and actual tests should look like, given both the changes
>> in web browsers and the advent of "streaming" media, which add to
>> the classic full-speed "elephant" flow.
>>
>> I feel I have some idea of how HTTP and web browsers work, given my
>> experience of the 1990's; if people have any real data (taken from
>> the edge, please!) I'd like to update my knowledge. But that traffic
>> is going to be bursty, and because browsers use so many connections,
>> most of those packets will not be under congestion avoidance (would
>> that SPDY and HTTP/1.1 pipelining get deployed quickly...).
>>
>> And classic elephant flows we know...
>>
>> I finally thought today to just use CeroWrt/OpenWrt's nice traffic
>> plotting stuff to get a feel for a number of video players'
>> behaviour (I happened to use an iPad; I sampled Netflix, Hulu, and
>> IMDB).
>>
>> As I expected, a large amount of data is transferred as fast as
>> possible at the beginning, to try to hide performance problems (much
>> of which is being caused by bufferbloat, of course). These bursts
>> will clearly drive latencies well up. What's interesting is what
>> happens after that: rather than settling down once they've had a
>> period to observe how fast they can transfer data, both Netflix and
>> Hulu appear to "burst" their later transfers. The data is not
>> "streamed" at all; rather, it is sent in full-rate bursts
>> approximately every 10 seconds thereafter (which will induce bursts
>> of latency). IMDB seems to buffer yet more than Netflix and Hulu
>> (watching HD movie clips); I should experiment a bit more with it.
>>
>> The interesting question is: what do current operating
>> systems/standards do to the TCP window when idle? Anyone know? I'm
>> making a (possibly poor) presumption that they are using single TCP
>> connections; I should probably take some traces....
>> - Jim
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat