General list for discussing Bufferbloat
* [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
@ 2012-01-26 20:04 Jim Gettys
  2012-01-26 20:14 ` Justin McCann
  2012-01-26 20:48 ` Paul Gleichauf
  0 siblings, 2 replies; 6+ messages in thread
From: Jim Gettys @ 2012-01-26 20:04 UTC (permalink / raw)
  To: bloat

Since we're rapidly getting to the point of needing to test AQM
algorithms (as enough of the underbrush has been cleared away), I've
been thinking a bit about how to test the algorithms.

Given both the changes in web browsers and the advent of "streaming"
media, added to the classic full-speed "elephant" flow, I've been
inspired to think a bit about what a test scenario for both simulation
and actual tests should look like.

I have some idea of how HTTP and web browsers work from my experience
in the 1990s; if people have any real data (taken from the edge,
please!) I'd like to update my knowledge. But browser traffic is going
to be bursty, and because so many connections are used, most of those
packets will not be under congestion avoidance (would that SPDY and
HTTP/1.1 pipelining were deployed quickly...)

And classic elephant flows we know...

I finally thought today to just use CeroWrt/OpenWrt's nice
traffic-plotting tools to get a feel for a number of video players'
behaviour (I happened to use an iPad; I sampled Netflix, Hulu, and
IMDB).

As I expected, a large amount of data is transferred as fast as
possible at the beginning, to try to hide performance problems (much of
which is being caused by bufferbloat, of course). These initial
transfers will clearly drive latencies well up. What's interesting is
what happens after that: rather than smoothing their transfers, having
had a period to observe how fast they are able to transfer data, both
Netflix and Hulu appear to "burst" their later transfers. The data is
not "streamed" at all; rather, it is sent in full-rate bursts
approximately every 10 seconds thereafter (which will induce bursts of
latency). IMDB seems to buffer even more than Netflix and Hulu
(watching HD movie clips); I should experiment a bit more with it.
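For AQM test purposes, that observed pattern can be sketched as a
simple schedule generator. This is only a rough sketch: the fill time,
burst period, and rates below are illustrative assumptions, not
measured values.

```python
def video_burst_schedule(duration_s, fill_s=20.0, burst_period_s=10.0,
                         line_rate_bps=20e6, avg_rate_bps=4e6):
    """Return (start_s, end_s, rate_bps) send intervals mimicking the
    observed player behaviour: an initial full-rate buffer fill, then a
    short line-rate burst every ~10 s whose duty cycle averages out to
    the video bitrate."""
    schedule = [(0.0, fill_s, line_rate_bps)]  # initial fast fill
    # Seconds of sending per period needed to average avg_rate_bps:
    on_time = burst_period_s * avg_rate_bps / line_rate_bps
    t = fill_s
    while t < duration_s:
        schedule.append((t, t + on_time, line_rate_bps))
        t += burst_period_s
    return schedule
```

Feeding such a schedule to a traffic generator would reproduce the
periodic latency spikes described above.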

The interesting question is what do current operating systems/standards
do to the TCP window when idle?  Anyone know?  I'm making a (possibly
poor) presumption that they are using single TCP connections; I should
probably take some traces....
                        - Jim


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
  2012-01-26 20:04 [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players Jim Gettys
@ 2012-01-26 20:14 ` Justin McCann
  2012-01-26 20:48 ` Paul Gleichauf
  1 sibling, 0 replies; 6+ messages in thread
From: Justin McCann @ 2012-01-26 20:14 UTC (permalink / raw)
  To: Jim Gettys; +Cc: bloat

On Thu, Jan 26, 2012 at 3:04 PM, Jim Gettys <jg@freedesktop.org> wrote:
>...
> The interesting question is what do current operating systems/standards
> do to the TCP window when idle?  Anyone know?  I'm making a (possibly
> poor) presumption that they are using single TCP connections; I should
> probably take some traces....

You're looking for "congestion window validation" or "congestion
window decay" from RFC2861.

I believe the function you're looking for in Linux is tcp_cwnd_validate()
  http://lxr.linux.no/linux+v3.2.2/net/ipv4/tcp_output.c#L1287
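The RFC 2861 behaviour can be modelled roughly as follows. This is a
simplified sketch loosely based on Linux's tcp_cwnd_restart(); the
function name, units, and restart-window handling here are assumptions,
not the kernel's actual code.

```python
def cwnd_after_idle(cwnd, restart_cwnd, idle_ms, rto_ms):
    """Halve cwnd once for each RTO the connection sat idle, never
    dropping below the restart window (simplified RFC 2861 decay)."""
    while idle_ms > rto_ms and cwnd > restart_cwnd:
        cwnd >>= 1          # one halving per elapsed RTO
        idle_ms -= rto_ms
    return max(cwnd, restart_cwnd)
```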

    Justin

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
  2012-01-26 20:04 [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players Jim Gettys
  2012-01-26 20:14 ` Justin McCann
@ 2012-01-26 20:48 ` Paul Gleichauf
  2012-01-27 17:10   ` Jim Gettys
  1 sibling, 1 reply; 6+ messages in thread
From: Paul Gleichauf @ 2012-01-26 20:48 UTC (permalink / raw)
  To: Jim Gettys; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 3122 bytes --]

Jim,

I suggest you also look at YouTube. The last time I checked, its initial firehose behavior was more pronounced than Hulu's, and may have had more to do with deliberately designed server behavior to support a specific usage pattern for viewing their videos. Without server logs, it's hard to tell how much the intermediate network nodes contribute, versus bufferbloat, versus the server's rate limiting. It appears as if YouTube may want to move on to the next relatively short request as fast as possible, rather than rate-limiting in pulses to serve longer-running streams such as a movie or a television show.

Here are a couple of old traces to make the point. Both are Flash-based on the same platform, but different content.


YouTube (paragliding clip):

[inline throughput graph; see attachment PastedGraphic-1.tiff]

Hulu (SNL clip):

[inline throughput graph; see attachment PastedGraphic-2.tiff]

Paul

On Jan 26, 2012, at 12:04 PM, Jim Gettys wrote:

> Since we're rapidly getting to the point of needing to test AQM
> algorithms (as enough of the underbrush has been cleared away), I've
> been thinking a bit about how to test the algorithms.
> 
> Given both the changes in web browsers and the advent of "streaming"
> media, added to the classic full-speed "elephant" flow, I've been
> inspired to think a bit about what a test scenario for both simulation
> and actual tests should look like.
> 
> I have some idea of how HTTP and web browsers work from my experience
> in the 1990s; if people have any real data (taken from the edge,
> please!) I'd like to update my knowledge. But browser traffic is going
> to be bursty, and because so many connections are used, most of those
> packets will not be under congestion avoidance (would that SPDY and
> HTTP/1.1 pipelining were deployed quickly...)
> 
> And classic elephant flows we know...
> 
> I finally thought today to just use CeroWrt/OpenWrt's nice
> traffic-plotting tools to get a feel for a number of video players'
> behaviour (I happened to use an iPad; I sampled Netflix, Hulu, and
> IMDB).
> 
> As I expected, a large amount of data is transferred as fast as
> possible at the beginning, to try to hide performance problems (much of
> which is being caused by bufferbloat, of course). These initial
> transfers will clearly drive latencies well up. What's interesting is
> what happens after that: rather than smoothing their transfers, having
> had a period to observe how fast they are able to transfer data, both
> Netflix and Hulu appear to "burst" their later transfers. The data is
> not "streamed" at all; rather, it is sent in full-rate bursts
> approximately every 10 seconds thereafter (which will induce bursts of
> latency). IMDB seems to buffer even more than Netflix and Hulu
> (watching HD movie clips); I should experiment a bit more with it.
> 
> The interesting question is what do current operating systems/standards
> do to the TCP window when idle?  Anyone know?  I'm making a (possibly
> poor) presumption that they are using single TCP connections; I should
> probably take some traces....
>                        - Jim
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


[-- Attachment #2.1: Type: text/html, Size: 4188 bytes --]

[-- Attachment #2.2: PastedGraphic-1.tiff --]
[-- Type: image/tiff, Size: 117858 bytes --]

[-- Attachment #2.3: PastedGraphic-2.tiff --]
[-- Type: image/tiff, Size: 141900 bytes --]

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
  2012-01-26 20:48 ` Paul Gleichauf
@ 2012-01-27 17:10   ` Jim Gettys
  2012-01-27 18:11     ` Eggert, Lars
  2012-01-27 21:42     ` Rick Jones
  0 siblings, 2 replies; 6+ messages in thread
From: Jim Gettys @ 2012-01-27 17:10 UTC (permalink / raw)
  To: Paul Gleichauf; +Cc: bloat

On 01/26/2012 03:48 PM, Paul Gleichauf wrote:
> Jim,
>
> I suggest you also look at YouTube. The last time I checked, its
> initial firehose behavior was more pronounced than Hulu's, and may
> have had more to do with deliberately designed server behavior to
> support a specific usage pattern for viewing their videos. Without
> server logs, it's hard to tell how much the intermediate network nodes
> contribute, versus bufferbloat, versus the server's rate limiting. It
> appears as if YouTube may want to move on to the next relatively short
> request as fast as possible, rather than rate-limiting in pulses to
> serve longer-running streams such as a movie or a television show.
>
> Here are a couple of old traces to make the point. Both are
> Flash-based on the same platform, but different content.

I will.  But Simon Leinen
<https://plus.google.com/107530144481437668885> on Google+ also pointed
me at: "For a characterization of the behaviors of various video
streaming services, see 'Network Characteristics of Video Streaming
Traffic' by Rao et al., CoNEXT 2011
(http://conferences.sigcomm.org/co-next/2011/program.html)"

In the short term, I need to understand what current TCP systems
*should* do in the face of long idle periods, which invalidate the
congestion information.  I fear the result may be large line-rate
bursts hitting the broadband head end, and then propagating to the home
router queues... under some circumstances (e.g. the RTO gets driven up
due to bloat, so the congestion window doesn't come down very fast in
the idle periods).  Help from people who know TCP better than I do
(which isn't all that well) would be gratefully appreciated.
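To put rough numbers on that worry: a simplified RFC 2861-style model
assumes one cwnd halving per RTO of idle time, so a bloat-inflated RTO
directly reduces how far cwnd decays between bursts. All values below
are illustrative assumptions, not measurements.

```python
def halvings_during_idle(idle_s, rto_s):
    """Number of cwnd halvings during an idle gap, at one per RTO
    (simplified RFC 2861-style decay; real stacks bound this)."""
    return int(idle_s // rto_s)

# A 10 s inter-burst gap with a sane 250 ms RTO decays cwnd thoroughly,
# while an RTO inflated to 2.5 s by bloated buffers barely decays it:
sane = halvings_during_idle(10.0, 0.25)    # 40 halvings
bloated = halvings_during_idle(10.0, 2.5)  # 4 halvings
```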

My immediate reaction to these bursts was "boy, we may need someone
(not me) to write a BCP on how to build more network-friendly video
players"....  There's been a significant discussion on Google+ as a
result since then.


>
>
> YouTube (paragliding clip):
>
>
> Hulu (SNL clip):
>
> Paul
>
> On Jan 26, 2012, at 12:04 PM, Jim Gettys wrote:
>
>> Since we're rapidly getting to the point of needing to test AQM
>> algorithms (as enough of the underbrush has been cleared away), I've
>> been thinking a bit about how to test the algorithms.
>>
>> Given both the changes in web browsers and the advent of "streaming"
>> media, added to the classic full-speed "elephant" flow, I've been
>> inspired to think a bit about what a test scenario for both
>> simulation and actual tests should look like.
>>
>> I have some idea of how HTTP and web browsers work from my experience
>> in the 1990s; if people have any real data (taken from the edge,
>> please!) I'd like to update my knowledge. But browser traffic is
>> going to be bursty, and because so many connections are used, most of
>> those packets will not be under congestion avoidance (would that SPDY
>> and HTTP/1.1 pipelining were deployed quickly...)
>>
>> And classic elephant flows we know...
>>
>> I finally thought today to just use CeroWrt/OpenWrt's nice
>> traffic-plotting tools to get a feel for a number of video players'
>> behaviour (I happened to use an iPad; I sampled Netflix, Hulu, and
>> IMDB).
>>
>> As I expected, a large amount of data is transferred as fast as
>> possible at the beginning, to try to hide performance problems (much
>> of which is being caused by bufferbloat, of course). These initial
>> transfers will clearly drive latencies well up. What's interesting is
>> what happens after that: rather than smoothing their transfers,
>> having had a period to observe how fast they are able to transfer
>> data, both Netflix and Hulu appear to "burst" their later transfers.
>> The data is not "streamed" at all; rather, it is sent in full-rate
>> bursts approximately every 10 seconds thereafter (which will induce
>> bursts of latency). IMDB seems to buffer even more than Netflix and
>> Hulu (watching HD movie clips); I should experiment a bit more with it.
>>
>> The interesting question is what do current operating systems/standards
>> do to the TCP window when idle?  Anyone know?  I'm making a (possibly
>> poor) presumption that they are using single TCP connections; I should
>> probably take some traces....
>>                        - Jim
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net <mailto:Bloat@lists.bufferbloat.net>
>> https://lists.bufferbloat.net/listinfo/bloat
>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
  2012-01-27 17:10   ` Jim Gettys
@ 2012-01-27 18:11     ` Eggert, Lars
  2012-01-27 21:42     ` Rick Jones
  1 sibling, 0 replies; 6+ messages in thread
From: Eggert, Lars @ 2012-01-27 18:11 UTC (permalink / raw)
  To: Jim Gettys; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 490 bytes --]

On Jan 27, 2012, at 9:10, Jim Gettys wrote:
> I fear these may be large line rate bursts
> hitting the broadband head end, and then propagating to the home router
> queues... under some circumstances (e.g. the RTO gets driven up due to
> bloat, so the congestion window doesn't come down very fast in the idle
> periods).  Help from people who know TCP better than I do (which isn't
> all that well), gratefully appreciated.

Justin McCann already provided the reference to RFC2861...

Lars

[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 4361 bytes --]

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players.
  2012-01-27 17:10   ` Jim Gettys
  2012-01-27 18:11     ` Eggert, Lars
@ 2012-01-27 21:42     ` Rick Jones
  1 sibling, 0 replies; 6+ messages in thread
From: Rick Jones @ 2012-01-27 21:42 UTC (permalink / raw)
  To: Jim Gettys; +Cc: bloat

> In the short term, I need to understand what current TCP systems
> *should* do in the face of long idle periods, which invalidate the
> congestion information.

Is your glass half-empty, or does it have a leak?  Then TCP should
decay or reset its cwnd while idle.

Is your glass half-full?  Then TCP should leave the cwnd alone.

Both are ass-u-me-ing something about the state of the path between the 
sender and the receiver.

rick jones

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2012-01-27 21:42 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-01-26 20:04 [Bloat] Traffic patterns for AQM testing.... Bursty behaviour of video players Jim Gettys
2012-01-26 20:14 ` Justin McCann
2012-01-26 20:48 ` Paul Gleichauf
2012-01-27 17:10   ` Jim Gettys
2012-01-27 18:11     ` Eggert, Lars
2012-01-27 21:42     ` Rick Jones

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox