[Cake] Long-RTT broken again

Toke Høiland-Jørgensen toke at toke.dk
Tue Nov 3 12:25:40 EST 2015

Sebastian Moeller <moeller0 at gmx.de> writes:

>> Right, well, in that case fixing the calculation to use the actual
>> packet size would make sense in any case?
> Would it? I thought actually using the amount of “pinned” kernel
> memory would be more relevant: if an ACK packet pins 2KB then it should
> be accounted as 2KB, IF the goal of the accounting is to avoid
> un-scheduled OOM, no? And if something like Dave’s patch kicks in that
> copies larger mostly empty skbs to smaller sizes, these packets then
> should be accounted with that smaller size. Anyway, I believe with
> default kernels there is a strong correlation between a packet count
> limit and a byte count limit…

No, a limit on a qdisc is in terms of the packets being transmitted. We
can't expect the user to know how much memory the kernel actually
allocates to an skb in order to configure their packet queue. If
someone configures a limit, they are going to do a BDP calculation,
input that, and complain when that doesn't work.

FWIW, this is what the in-kernel FIFO qdisc does when configured in byte mode:

		if (is_bfifo)
			limit *= psched_mtu(qdisc_dev(sch));
on init


	if (likely(sch->qstats.backlog + qdisc_pkt_len(skb) <= sch->limit))
		return qdisc_enqueue_tail(skb, sch);

on enqueue.

Which is the same data that Cake uses. Hmm, weird...
