[Bloat] The Remy paper (was: Re: CS244's work on netflix streaming)

Dave Taht dave.taht at gmail.com
Mon Jul 22 17:54:53 EDT 2013


On Mon, Jul 22, 2013 at 2:21 PM, Eric S. Raymond <esr at thyrsus.com> wrote:
> Dave Taht <dave.taht at gmail.com>:
>> Also the discussion of "remy" going on is pretty nifty.
>>
>> https://plus.google.com/u/0/103530621949492999968/posts/2L9e4kxo9y3
>
> Thanks for that link.  I've been mad curious what your take on the Remy
> paper was ever since I read it myself. Because while I didn't spot any
> obvious bogons, you know the domain far better than I do.

Well, at present it's a little too computationally intensive for this decade...


> The only thing about their methods that troubles me is a suspicion that the
> generated algorithms might be overspecialized - that is, very vulnerable
> to small changes in traffic statistics.

They point that out too, in (for example) fig 11. I LIKE fig 11; I
keep thinking about it - although codel is designed to run against
varying link rates, it makes sense to optimize for the most commonly
observed actual bandwidth, if possible, and this result shows that it
is possible (if not yet well understood). So, what if you could
generate a model showing that your bandwidth range at location Z is X,
optimize for that, download the result, and run it?
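
(A hand-wavy sketch of what I mean, in python - every name and number
here is made up for illustration, nothing is taken from the paper:
measure the bandwidth you actually see at a site, then install
whichever pre-computed parameter set was optimized closest to it.)

    # Hypothetical sketch: pick a pre-optimized congestion-control
    # parameter set for the bandwidth actually observed at a location.
    # All names and numbers are illustrative, not from Remy.
    def pick_optimized_cc(observed_mbps, pretrained):
        """pretrained maps a design-point bandwidth (Mbps) to a
        parameter blob computed offline for that bandwidth."""
        nearest = min(pretrained, key=lambda bw: abs(bw - observed_mbps))
        return pretrained[nearest]

    # e.g.: pretrained = {5: params_5, 20: params_20, 100: params_100}
    #       cc = pick_optimized_cc(measured_bandwidth_mbps, pretrained)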

I also like their presentation mechanism of inverting queue delay and
plotting it against throughput, so that the "best" result is up and to
the right (see figs 4 and 5). I'll use this in future versions of rrul
in particular.
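
(Something like this, as a minimal matplotlib sketch - the three data
points are invented, only the axis treatment is the point:)

    # Sketch of the "best is up and to the right" plot style:
    # throughput on the y-axis, queueing delay on a reversed log x-axis.
    import matplotlib.pyplot as plt

    results = [("cubic", 120, 9.4),            # (name, delay ms, Mbit/s)
               ("cubic+sfqcodel", 18, 9.1),
               ("something-remy-like", 12, 9.3)]

    fig, ax = plt.subplots()
    for name, delay, tput in results:
        ax.scatter(delay, tput)
        ax.annotate(name, (delay, tput))

    ax.set_xscale("log")
    ax.invert_xaxis()   # lower delay plotted further to the right
    ax.set_xlabel("queueing delay (ms, log scale, reversed)")
    ax.set_ylabel("throughput (Mbit/s)")
    ax.set_title("best is up and to the right")
    plt.show()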

The thing is, we are exploring a very difficult subject (congestion
control) with the equivalent of stone knives and bearskins. We have
all this computing power lying around, serving ads! Why not put it to
better use? Take, for comparison, the kind of work and compute
resources that have been poured into physics, and sigh.

While they restrict their argument to e2e computation, the same
techniques could apply to optimizing edge networks and interprovider
interconnects, the core, calculating better multi-path routing, and so
on... for which running a model of existing traffic, then implementing
it and iterating fairly rapidly, could be automated...

I don't mind being obsoleted...

"How should we design network protocols that free subnetworks and
links to evolve freely, ensuring that the endpoints will adapt prop-
erly no matter what the lower layers do? We believe that the best
way to approach this question is to take the design of specific algo-
rithmic mechanisms out of the hands of human designers (no matter
how sophisticated!), and make the end-to-end algorithm be a func-
tion of the desired overall behavior. "

...

I'm unfond of how g+ (and now email) splits up a conversation; here's
what I said on another thread of discussion:
https://plus.google.com/u/0/107942175615993706558/posts/8MRTLpRyAju

"My happiness at the paper is keyed on three factors: 1) identifying
what a perfect algorithm could be like and the boundaries for it is
inspiring as a goal,

2) and being able to study those results in search of inspiration... and

3) (if you didn't notice), just how well sfqcodel did across a wide
range of benchmarks. No, it wasn't as good as specialized algos that
took weeks of compute time, but it hit a sweet spot in most cases, and
it ain't my fault cubic is so aggressive. I thought of the result as
"kasperov vs deep blue", with clear win on the chess clock, to
kasperov." (kasperov being kathie, van and eric)

Now, I've been looking at bittorrent traffic of late - the principal
preliminary result is that traffic in slow start competes well against
it; long-duration traffic does less well, but still gets about double
its "fair share" (I still don't understand that result), and under
fq_codel, to compete like that, it drops a LOT of packets.

> --
>                 <a href="http://www.catb.org/~esr/">Eric S. Raymond</a>



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html


