General list for discussing Bufferbloat
From: Jim Gettys <jg@freedesktop.org>
To: Paul Gleichauf <gleichauf@apple.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
Date: Fri, 27 Jan 2012 12:10:23 -0500	[thread overview]
Message-ID: <4F22DA7F.6090302@freedesktop.org> (raw)
In-Reply-To: <D671F0F0-7EC4-4875-983D-BB8BFF38BFDC@apple.com>

On 01/26/2012 03:48 PM, Paul Gleichauf wrote:
> Jim,
>
> I suggest you also look at YouTube. The last time I checked, their
> initial firehose behavior was more pronounced than Hulu's and may have
> had more to do with deliberately designed server behavior to support a
> specific usage pattern for viewing their videos. It's hard to tell
> without server logs how much the intermediate network nodes
> contribute, vs. bufferbloat, vs. the server's rate limiting.  It appears
> as if YouTube may want to move on to the next relatively short request
> as fast as possible, rather than rate-limiting in pulses to stream
> longer-running content such as a movie or a television show.
>
> Here are a couple of old traces to make the point. Both are
> Flash-based on the same platform, but different content.

I will.  But Simon Leinen
<https://plus.google.com/107530144481437668885> on Google+ also pointed me at
- "For a characterization of the behaviors of various video streaming
services, see "Network Characteristics of Video Streaming Traffic" by
Rao et al., CoNEXT 2011
(http://conferences.sigcomm.org/co-next/2011/program.html)"

In the short term, I need to understand what current TCP systems
*should* do in the face of long idle periods, which invalidate the
congestion information.  I fear the result may be large line-rate bursts
hitting the broadband head end and then propagating to the home router
queues under some circumstances (e.g. the RTO gets driven up due to
bloat, so the congestion window doesn't come down very fast during the
idle periods).  Help from people who know TCP better than I do (which
isn't all that well) would be gratefully appreciated.
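
To make this concrete, here is a rough sketch (mine, not any particular
stack's code) of the RFC 2861-style congestion window validation a
sender is *supposed* to apply after an application-limited idle period:
halve cwnd once per RTO of idleness, down to a restart window.  Linux
gates this behind the net.ipv4.tcp_slow_start_after_idle sysctl, so a
host with that disabled can legally send the whole old window as one
line-rate burst.  All the names and numbers below are illustrative
assumptions, not measurements.

    # Rough sketch of RFC 2861 congestion window validation after idle.
    # Everything here is an illustrative assumption, not code from any
    # real TCP implementation.

    def cwnd_after_idle(cwnd, idle_time, rto, initial_window=10):
        """Halve cwnd once per RTO elapsed while idle, but never below
        the restart window (taken here as the initial window)."""
        restart_window = min(initial_window, cwnd)
        for _ in range(int(idle_time // rto)):
            cwnd = max(cwnd // 2, restart_window)
        return cwnd

    # A player that idles ~10 s between bursts, behind a bloat-inflated
    # RTO of ~3 s, only gets three halvings: 100 -> 50 -> 25 -> 12
    # segments, so the next burst can still leave the server very fast.
    print(cwnd_after_idle(cwnd=100, idle_time=10.0, rto=3.0))   # 12

If the window is instead left untouched while idle (slow start after
idle disabled, or the idle period shorter than the RTO), every burst
arrives as one old window's worth of back-to-back packets at the head end.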

My immediate reaction to these bursts was "boy, we may need someone (not
me) to write a BCP on how to build more network-friendly video
players"....  There has been significant discussion on Google+ since
then as a result.
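
As a strawman for what such a BCP might suggest, here is a minimal
application-level pacing sketch: instead of writing a whole 10-second
segment to the socket at once, the player spreads the writes so the
send rate stays near the playback rate plus a small margin.  The chunk
size and margin are made-up numbers, and a real player would want
something adaptive rather than this open-loop sleep.

    # Illustrative only: naive open-loop pacing for a hypothetical player.
    import time

    def paced_send(sock, segment, playback_rate_bps, margin=1.25,
                   chunk=16 * 1024):
        """Write `segment` in chunk-sized pieces, sleeping between writes
        so the average send rate is roughly playback_rate_bps * margin."""
        target_bps = playback_rate_bps * margin
        for off in range(0, len(segment), chunk):
            piece = segment[off:off + chunk]
            sock.sendall(piece)
            time.sleep(len(piece) * 8 / target_bps)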


>
>
> YouTube (paragliding clip):
>
>
> Hulu (SNL clip):
>
> Paul
>
> On Jan 26, 2012, at 12:04 PM, Jim Gettys wrote:
>
>> Since we're rapidly getting to the point of needing to test AQM
>> algorithms (as enough of the underbrush has been cleared away), I've
>> been thinking a bit about how to test the algorithms.
>>
>> I've been inspired to think a bit about what a test scenario for both
>> simulation and actual tests should look like, given both the changes
>> in web browsers and the advent of "streaming" media to add to the
>> classic full-speed "elephant" flow.
>>
>> I feel I have some idea of how HTTP and web browsers work, given my
>> experience from the 1990s; if people have any real data (taken from the
>> edge, please!) I'd like to update my knowledge.  But the traffic is
>> going to be bursty, and because so many connections are used, most of
>> those packets will not be under congestion avoidance (would that SPDY
>> and HTTP/1.1 pipelining were deployed quickly...)
>>
>> And classic elephant flows we know...
>>
>> I finally thought today to just use CeroWrt/OpenWrt's nice traffic
>> plotting stuff to get a feel for a number of video players's behaviour
>> (I happened to use an iPad; I sampled Netflix, Hulu, and IMDB).
>>
>> As I expected, a large amount of data is transferred as fast as
>> possible at the beginning, to try to hide performance problems (much of
>> which are being caused by bufferbloat, of course).  These initial
>> transfers will clearly drive latencies well up.  What's interesting is
>> what happens after that: even having had a period to observe how fast
>> they are able to transfer data, both Netflix and Hulu appear to "burst"
>> their later transfers.  The data is not "streamed" at all; rather, it is
>> sent in full-rate bursts approximately every 10 seconds thereafter
>> (which will induce bursts of latency).  IMDB seems to buffer even more
>> than Netflix and Hulu (watching HD movie clips); I should experiment a
>> bit more with it.
>>
>> The interesting question is what do current operating systems/standards
>> do to the TCP window when idle?  Anyone know?  I'm making a (possibly
>> poor) presumption that they are using single TCP connections; I should
>> probably take some traces....
>>                        - Jim
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
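
P.S. To put rough numbers on the ~10-second bursts described above: a
back-of-the-envelope illustration, in which every figure is assumed
rather than taken from the traces.

    # Back-of-the-envelope illustration; all rates are assumptions.
    video_rate_bps = 5e6      # ~5 Mb/s HD stream (assumed)
    burst_period_s = 10       # ~10 s between bursts (observed above)
    downlink_bps   = 20e6     # broadband downlink (assumed)

    burst_bits   = video_rate_bps * burst_period_s   # ~50 Mb per burst
    drain_time_s = burst_bits / downlink_bps          # ~2.5 s to drain

    print(f"{burst_bits / 8 / 1e6:.1f} MB per burst, "
          f"up to ~{drain_time_s:.1f} s of queue at the bottleneck")

If a burst arrives near line rate and drains at the downlink rate, the
queueing delay it induces can approach that drain time, which is exactly
the sort of latency burst described above.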


Thread overview: 6+ messages
2012-01-26 20:04 Jim Gettys
2012-01-26 20:14 ` Justin McCann
2012-01-26 20:48 ` Paul Gleichauf
2012-01-27 17:10   ` Jim Gettys [this message]
2012-01-27 18:11     ` Eggert, Lars
2012-01-27 21:42     ` Rick Jones
