[Cake] Long-RTT broken again

Sebastian Moeller moeller0 at gmx.de
Tue Nov 3 07:41:50 EST 2015

Hi Toke,

On Nov 3, 2015, at 12:57, Toke Høiland-Jørgensen <toke at toke.dk> wrote:

> Sebastian Moeller <moeller0 at gmx.de> writes:
>> Well, I believe one of the next steps needs to be to expose limit to
>> user-space,
> No, just throwing the problem to user space is not a solution: it's a
> cop-out. We need to fix the issues so things actually work, and then
> *maybe* expose the value to userspace if there is a real need for it.
> Not just go "meh, let userspace sort it out"; that is horrible design.

	I guess all we can do is agree to disagree: the amount of memory a user is willing to dedicate to a specific piece of functionality is a policy question in my book. It is not a cop-out, as the amount of memory the user might require for other, perhaps more important, functionality cannot be deduced without the user's input, short of a perfect oracle.

>> which would have Toke allowed his measurements
> No, it would not: I'm not trying to test whether we have a qdisc that,
> through arcane configuration options, can be made to behave properly.

	Sorry, I misunderstood then; I thought the experiment was about target at long intervals/RTTs. I am genuinely sorry.

> That already exists in fq_codel+HTB. What we're doing here is trying to
> build a no-knobs qdisc here that works well in all the scenarios we can
> think of. So let's make sure it does, and then talk about whether a
> variable should be exposed *after* we've done proper auto-tuning.

	I am a biologist by training and profession, and in my world there is no perfect auto-tuning; there is only better or worse adaptation to the current environment, the optimality of which changes as that environment changes, driven at least partly by the attempts of the environment's inhabitants to improve their own well-being. In other words, I believe "good enough" is a reasonably achievable optimization goal, while perfect is not; one implication of that is that I would appreciate being able to turn off automatic optimization if I feel that it gets in the way. But I realize this is a very personal opinion not shared by others. Given that most of you have a much stronger CS background, I will not be too sad if I cannot convince all of you; I would be unhappy with myself if I did not at least try, though. It is, after all, not the first time this issue has come up; I seem to recall that we had this limit discussion already during the codel or fq_codel development.

>> and would have followed the example of most/all other leaf qdiscs and
>> put policy into user space where it arguably belongs…
> Packet limit is not policy, it's an implementation detail. If you don't
> have the memory to run at 100Mbps / 1 second, then *set those values
> lower*. You're not going to achieve it anyway if you don't have the
> buffer space.

	So much for no-knobs... Now the user on such a high-bandwidth, high-delay link will need to actively "lie" to cake about the link-specific parameters to avoid giving remote parties an easy way to OOM his/her router; this looks a bit like DDoS on steroids to me. It does not help that the increased worst-case memory demand is a (so far undocumented) side effect of setting otherwise unrelated parameters that describe properties a user might know a priori about her/his link. Seriously, is this as robust as we can make it? Also, we do not even report the worst-case memory consumption (or a number that strongly correlates with it), so this is not as user-friendly and obvious as it should be.
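	The worst-case memory demand being argued about here scales roughly with the bandwidth-delay product of the configured rate and interval. A back-of-the-envelope sketch (using the 100 Mbit/s / 1 s figures from the quoted text; ignoring per-packet skb overhead, which in practice makes the real number noticeably larger):

```python
# Rough worst-case queue memory for a shaped qdisc: about one
# bandwidth-delay product of payload can accumulate before any
# reasonable limit should kick in. Per-packet kernel bookkeeping
# (skb overhead) is deliberately ignored in this sketch.
def worst_case_queue_bytes(rate_bps, interval_s):
    """rate_bps: shaped rate in bits/s; interval_s: assumed RTT/interval."""
    return rate_bps / 8 * interval_s

# The 100 Mbit/s at 1 s interval example from the discussion:
mb = worst_case_queue_bytes(100e6, 1.0) / 1e6
print(mb, "MB")  # 12.5 MB of payload alone, before skb overhead
```

On a small home router with 32-64 MB of RAM, a figure like this is exactly the kind of number one might want reported back to the user rather than left implicit.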

> Same thing with the target parameter, BTW: The fact that we still
> haven't got it right is not an argument for exposing it to userspace,
> quite the contrary: If we, the experts, can't even get it right, why on
> earth would be expect users to?

	Because the fact that even we are struggling might indicate that there is no real one-size-fits-all value for target? Now, I agree that the issues we had are not really big conceptual ones but rather small implementation issues, but still... Anyway, exposing limit is the white whale I am chasing; I am fine with target somewhere between 5% and 10% of interval, once we figure out whether at low bandwidths target is supposed to grow to a larger fraction or not, that is...
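	The rule being debated can be sketched as follows. The 5-10%-of-interval fraction is from the discussion above; the low-bandwidth behavior (target must at least cover the serialization delay of a full-size packet, here with a 1.5-MTU safety factor) is an assumption for illustration, not cake's actual code:

```python
# Sketch of a target-from-interval rule: target is a fraction of
# interval on fast links, but on slow links it grows so that a single
# full-size packet's serialization delay does not trip the controller.
def pick_target(interval_s, rate_bps, mtu_bytes=1500, fraction=0.05):
    serialization_s = mtu_bytes * 8 / rate_bps   # time to transmit one MTU
    return max(interval_s * fraction, 1.5 * serialization_s)

print(pick_target(0.100, 100e6))  # fast link: ~5% of 100 ms, i.e. ~5 ms
print(pick_target(0.100, 500e3))  # 500 kbit/s: serialization-bound, ~36 ms
```

The open question in the thread is exactly the second case: whether, and how far, target should grow beyond the plain 5-10% fraction at low bandwidths.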

Best Regards

> -Toke
