[Bloat] review: Deployment of RITE mechanisms, in use-case trial testbeds report part 1

Alan Jenkins alan.christopher.jenkins at gmail.com
Sun Feb 28 08:33:38 EST 2016

On 27/02/2016, Dave Täht <dave at taht.net> wrote:
> On 2/26/16 3:23 AM, De Schepper, Koen (Nokia - BE) wrote:
>> Hi Wes,
>> Just to let you know that we are still working on AQMs that support
>> scalable (L4S) TCPs.
>> We could present some of our latest results (if there will be a meeting in
>> Buenos Aires, otherwise in Berlin?)
>> * Performance of HTTP Adaptive Video Streaming (HAS) with different TCP's
>> and AQMs
>>    o HAS is currently ~30% of Internet traffic, but no AQM testing so far
>> has included it

> I am aware of several unpublished studies. There was also something that
> compared 1-3 HAS flows from several years back from stanford that I've
> longed to be repeated against these aqm technologies.
> https://reproducingnetworkresearch.wordpress.com/2014/06/03/cs244-14-confused-timid-and-unstable-picking-a-video-streaming-rate-is-hard/


>>    o the results are very poor with a particular popular AQM
> Define "very poor". ?

Heh.  After skimming the same sections as you, I think my restraint
must be vastly inferior to yours.

I didn't like to complain on the AQM list.  I think your point about
the measurements made for background web traffic is more interesting.
But it looks a bit weird, and I can imagine the results being misused.

>> Presenter: Inton Tsang
>> Duration: 10mins
>> Draft: Comparative testing of draft-ietf-aqm-pie-01,
>> draft-ietf-aqm-fq-codel-04, draft-briscoe-aqm-dualq-coupled
>> For experiment write-up, see Section 3 of
>> https://riteproject.files.wordpress.com/2015/12/rite-deliverable-3-3-public1.pdf

I wouldn't complain that I can't sustain 2056Kbps goodput when my fair
share of the shaped bandwidth is 2000Kbps.  The results might be
showing a significant degradation, or a marginal one that just pushes
over the boundary between the 2056k and 1427k encodes.  Which of those
conclusions you start from might be influenced by whether you're
developing a rival AQM, hmm.
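For concreteness, a toy sketch of that arithmetic: a rate-adaptive HAS
client picks the highest encode rung its goodput can sustain, so a fair
share just under 2056Kbps forces it down to the 1427k rung even though
almost nothing was "lost".  The 2056k and 1427k rungs are from the
report; the link rate, flow count, and the rest of the ladder are
made-up assumptions for illustration.

```python
def fair_share_kbps(link_kbps, n_flows):
    """Per-flow goodput under ideal fair scheduling (e.g. fq_codel/SFQ)."""
    return link_kbps / n_flows

def highest_sustainable_encode(goodput_kbps, ladder_kbps):
    """Highest encode rate a rate-adaptive HAS client can sustain;
    falls back to the lowest rung if none fits."""
    sustainable = [r for r in ladder_kbps if r <= goodput_kbps]
    return max(sustainable) if sustainable else min(ladder_kbps)

# 1427 and 2056 Kbps appear in the report; 436 and 928 are invented rungs.
ladder = [436, 928, 1427, 2056]
share = fair_share_kbps(4000, 2)   # assumed: 4 Mbps link, 2 competing flows
print(share)                                       # 2000.0
print(highest_sustainable_encode(share, ladder))   # 1427
```

The point being that a 56Kbps shortfall in goodput shows up as a
~30% drop in video bitrate, purely because of where the rungs sit.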

The HAS over TCP must be more bursty than a simple TCP download.  All
this could be explained by a competitive advantage in queues without
fair scheduling.  There's no effort to rule it out here, at least.

You can see in the first link above that HAS used to basically die in
competition with a simple TCP download.  Clearly it's been fixed for
this case - assuming currently common deployed queues.  SFQ looks
*very* different to those queues, as they point out, and you can see
it in the throughput graphs.  So it's possible there's an issue with
the closed-source HAS being tested.  Somehow I doubt the Google Videos
of this world would remain degraded if there were a real issue here.
(I know, game theory of incremental deployment, sigh.)

It can't be a coincidence that the variation below 1.5Mbps happens
when the HAS decides to switch the roles of the video and audio TCP
streams (!).  That it's below 1.5Mbps for the first 30s is a second
pointer, towards the handling of bursts related to slow start.

It's strange to just say "the upside with a possibility to peak in
throughput cannot happen".  What that means is you want a new HAS
stream to inflict this quality drop on an established one, so your new
stream doesn't have to start at the lower quality in order to build up
a buffer.

Sure, that's even more contrived.  And it's probably what you'd prefer
in practice: either you've got enough bandwidth for two streams, or
the established one is going to visibly downgrade anyway.  But that's
a path-dependent result, stemming from SFQ [1990] not being deployed
when HAS was being optimized.

It doesn't seem to be a clear demonstration of the fundamental
advantages of L4S / DualQ.


More information about the Bloat mailing list