[Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.

Paul Gleichauf gleichauf at apple.com
Thu Jan 26 15:48:30 EST 2012


Jim,

I suggest you also look at YouTube. The last time I checked, their initial firehose behavior was more pronounced than Hulu's, and it may have had more to do with deliberately designed server behavior to support a specific usage pattern for viewing their videos. Without server logs it's hard to tell how much the intermediate network nodes contribute, versus bufferbloat, versus the server's rate limiting.  It appears as if YouTube may want to move on to the next relatively short request as fast as possible, rather than rate limiting in pulses for longer-running streams such as a movie or a television show.
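
To make the contrast concrete, here is a minimal sketch of the two
delivery schedules as a traffic generator for AQM testing might replay
them. This is an illustration only: the path rate, video bitrate, and
burst period below are assumptions, not numbers taken from the traces.

    # Toy illustration of two delivery schedules a video server might
    # use.  All constants are made up for the example; real values
    # would come from traces.

    RATE_MBPS = 20.0    # assumed path rate during a burst
    VIDEO_MBPS = 4.0    # assumed average encoded bitrate
    DURATION_S = 60     # seconds of playback to schedule

    def firehose(duration_s):
        """Send everything as fast as the path allows, then stop:
        roughly the YouTube-style pattern described above."""
        busy = VIDEO_MBPS * duration_s / RATE_MBPS
        return [(0.0, busy, RATE_MBPS)]   # (start, length, rate)

    def pulsed(duration_s, period_s=10.0):
        """Send a full-rate burst every period_s seconds, each burst
        carrying period_s worth of video: roughly the pulsed rate
        limiting described above."""
        burst = period_s * VIDEO_MBPS / RATE_MBPS
        return [(float(t), burst, RATE_MBPS)
                for t in range(0, duration_s, int(period_s))]

    if __name__ == "__main__":
        for name, sched in (("firehose", firehose(DURATION_S)),
                            ("pulsed", pulsed(DURATION_S))):
            print(name)
            for start, length, rate in sched:
                print("  t=%5.1fs  %g Mb/s for %.1fs" % (start, rate, length))

Note that the pulsed schedule idles between bursts, which is exactly
where Jim's question below about the TCP window during idle periods
comes in.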

Here are a couple of old traces to make the point. Both are Flash-based players captured on the same platform, but with different content.


YouTube (paragliding clip):
[trace image scrubbed: https://lists.bufferbloat.net/pipermail/bloat/attachments/20120126/8e8919c9/attachment-0004.tiff]

Hulu (SNL clip):
[trace image scrubbed: https://lists.bufferbloat.net/pipermail/bloat/attachments/20120126/8e8919c9/attachment-0005.tiff]


Paul

On Jan 26, 2012, at 12:04 PM, Jim Gettys wrote:

> Since we're rapidly getting to the point of needing to test AQM
> algorithms (as enough of the underbrush has been cleared away), I've
> been thinking a bit about how to test the algorithms.
> 
> I've been inspired to think a bit about what a test scenario for both
> simulation and actual tests should look like, given both the changes
> in web browsers and the advent of "streaming" media, in addition to
> the classic full-speed "elephant" flow.
> 
> I feel I have some idea of how HTTP and web browsers work, given my
> experience from the 1990s; if people have any real data (taken from
> the edge, please!) I'd like to update my knowledge.  But web traffic
> is going to be bursty, and because it uses so many connections, most
> of those packets will never be under congestion avoidance (would that
> SPDY and HTTP/1.1 pipelining were deployed quickly...)
> 
> And classic elephant flows we know...
> 
> I finally thought today to just use CeroWrt/OpenWrt's nice traffic
> plotting stuff to get a feel for a number of video players' behaviour
> (I happened to use an iPad; I sampled Netflix, Hulu, and IMDB).
> 
> As I expected, a large amount of data is transferred as fast as
> possible at the beginning, to try to hide performance problems (much
> of which is being caused by bufferbloat, of course).  This will
> clearly drive latencies well up.  What's interesting is what happens
> after that: rather than pacing their later transfers based on the
> transfer rate they have had a chance to observe, both Netflix and
> Hulu appear to "burst" them.  The data is not "streamed" at all;
> rather, it is sent in full-rate bursts approximately every 10 seconds
> thereafter (which will induce bursts of latency).  IMDB seems to
> buffer even more than Netflix and Hulu (watching HD movie clips); I
> should experiment a bit more with it.
> 
> The interesting question is what current operating systems/standards
> do to the TCP window when idle.  Anyone know?  I'm making a (possibly
> poor) presumption that they are using single TCP connections; I should
> probably take some traces....
>                        - Jim
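
Regarding Jim's point above about web traffic: a crude way to
reproduce it in a test is to open several short-lived connections in
parallel per "page", so that most packets are sent during slow start
and never reach congestion avoidance. A minimal sketch; the host name
and object paths are placeholders for a local test server, and six
parallel connections mirrors the usual per-host browser limit:

    # Browser-like burst: fetch many small objects over short-lived
    # parallel connections against a (placeholder) test server.
    import http.client
    import threading

    HOST = "testserver.local"                  # placeholder test server
    PATHS = ["/obj%d" % i for i in range(24)]  # 24 small objects
    PARALLEL = 6                               # typical per-host limit

    def fetch(path):
        # Connection errors are expected against the placeholder host.
        try:
            conn = http.client.HTTPConnection(HOST, timeout=5)
            conn.request("GET", path)
            conn.getresponse().read()
            conn.close()
        except OSError:
            pass

    def page_load():
        # One page-load-like burst: PARALLEL connections at a time.
        for i in range(0, len(PATHS), PARALLEL):
            threads = [threading.Thread(target=fetch, args=(p,))
                       for p in PATHS[i:i + PARALLEL]]
            for t in threads:
                t.start()
            for t in threads:
                t.join()

    if __name__ == "__main__":
        page_load()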
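
On Jim's closing question about what current stacks do to the TCP
window when idle: RFC 2861 ("TCP Congestion Window Validation")
proposes decaying the congestion window during idle periods, and Linux
implements this behind the net.ipv4.tcp_slow_start_after_idle sysctl
(enabled by default), which resets cwnd to the restart window after an
idle period longer than one RTO. Other stacks would have to be checked
from packet traces. A minimal check on a Linux host:

    # Check whether this Linux host restarts slow start after a TCP
    # connection goes idle (RFC 2861 congestion window validation).
    from pathlib import Path

    KNOB = Path("/proc/sys/net/ipv4/tcp_slow_start_after_idle")

    if KNOB.exists():
        if KNOB.read_text().strip() == "1":
            print("cwnd is reset after idle; each burst restarts in "
                  "slow start.")
        else:
            print("cwnd is retained across idle; bursts may resume at "
                  "full rate.")
    else:
        print("Not Linux (or /proc unavailable); take a packet trace.")

If the players really do reuse a single persistent connection, a stack
with this behaviour will re-enter slow start on every 10-second burst,
which shapes the leading edge of each burst. A packet trace of one
playback session (e.g. with tcpdump -w) would confirm whether a single
connection is reused.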
