So, it rests on a core assumption: that a stable arrangement of links with small persistent delays both can and should exist.

It's kind of like a fluid model, in many ways.

Thing is, I don't think it's desirable that persistent delays exist anywhere.

Now, autotuning CoDel is probably a good idea... but the implementation should be local. If you're a bottleneck, you may as well behave as if you're the only bottleneck, even if that isn't the case: in a distributed-bottleneck situation your actions will still help. A global distributed solver like the one proposed here could possibly work in a private network, but I can't see that solution flying in precisely the place you really need it: peering.
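To make the "purely local" point concrete, here's a minimal sketch of a controller that consults nothing but the local queue's sojourn times — no coordination with any other bottleneck. The constants and control law are CoDel's published defaults (RFC 8289); the class name and structure are my own illustration, not anything from the paper under discussion:

```python
import math

TARGET = 0.005     # 5 ms acceptable standing queue delay (CoDel default)
INTERVAL = 0.100   # 100 ms observation window (CoDel default)

class LocalCodel:
    """Drop decision driven entirely by local sojourn-time measurements."""

    def __init__(self):
        self.first_above_time = 0.0  # when delay first stayed above TARGET
        self.drop_next = 0.0         # scheduled time of next drop
        self.count = 0               # drops in the current dropping state
        self.dropping = False

    def should_drop(self, now, sojourn):
        """Decide whether to drop the packet dequeued at time `now`
        that sat `sojourn` seconds in this queue. Local state only."""
        if sojourn < TARGET:
            # Delay is acceptable: reset and leave the dropping state.
            self.first_above_time = 0.0
            self.dropping = False
            return False
        if self.first_above_time == 0.0:
            # Delay just went above target; give it one interval to drain.
            self.first_above_time = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above_time:
            # Delay persisted a full interval: enter the dropping state.
            self.dropping = True
            self.count = 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            # Control law: drop at an increasing rate while delay persists.
            self.count += 1
            self.drop_next += INTERVAL / math.sqrt(self.count)
            return True
        return False
```

Note that nothing here depends on whether other bottlenecks exist upstream or downstream, which is exactly why the same behaviour remains helpful in the distributed case.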

One way to do this locally would be to identify the impulse response of the stochastic DE describing the aggregate traffic running through the link, using something like https://arxiv.org/abs/1309.7857, and then tune for optimal behaviour of the resulting DE, along the lines of https://arxiv.org/abs/1510.08439.
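The identification step can be sketched in toy form: recover an unknown impulse response from observed input/output of a linear system by least squares. The referenced papers do something far more sophisticated (stochastic DEs, regularized kernel estimators); this only shows the shape of the idea, and every value below is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "link" dynamics: a geometrically decaying impulse response.
h_true = 0.5 ** np.arange(5)          # [1, 0.5, 0.25, 0.125, 0.0625]

# Excite the system with random traffic and observe the output.
u = rng.normal(size=200)              # input signal
y = np.convolve(u, h_true)[:len(u)]   # noiseless output, for clarity

# Build the convolution (Toeplitz) regressor matrix: column k is the
# input delayed by k samples, so y = U @ h for the true h.
n_taps = 5
U = np.column_stack([np.concatenate([np.zeros(k), u[:len(u) - k]])
                     for k in range(n_taps)])

# Least-squares estimate of the impulse response.
h_est, *_ = np.linalg.lstsq(U, y, rcond=None)
```

With noiseless data this recovers h exactly; the hard part in practice is doing it online, with noise, and for dynamics that aren't linear time-invariant — which is what the cited work is for.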


On Thu, Jul 19, 2018 at 8:42 AM Dave Taht <dave.taht@gmail.com> wrote:
there's so much math in this that I cannot make heads or tails of it.

https://www.eee.hku.hk/~kcleung/papers/conferences/bufferbloat_multi-bottleneck:INFOCOM_2018/PID5170809.pdf

--

Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
_______________________________________________
Codel mailing list
Codel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/codel