moeller0 at gmx.de
Thu Oct 29 05:56:29 EDT 2015
On Oct 29, 2015, at 10:01 , Dave Taht <dave.taht at gmail.com> wrote:
> There has been so much traffic here that I can't summarize.
> A) But a bit on memory limits - the memory limit enforced in cake is a
> sanity check sort of limit. There are *no* allocations of memory in
> it. It will not fail itself, due to running out of memory except at
> init time.
This is all quite interesting and correct; the point I failed to make clearly, it seems, is that a 50MB limit on a 64MB router cannot be called sane or safe, no matter how you slice or dice it… It is one thing to have cake tell me that it thinks the limit I gave is too low to reach full throughput, as long as it accepts the given limit.
> So if you run out of memory elsewhere in the system, the normal set of
> bad things happen. cake's "sane - and, yes, could use more smarts"
> limits can reduce memory pressure elsewhere in the system by
> discarding things when it gets irrational, but packet memory tends to
> be fragmented and hard to recover cleanly in the first place.
To come back to my example: the big issue is that, unlike most other allocations, the queue is filled by packets generated by connected machines, so it would be sweet if it were harder for a mischievous user on my internal network to OOM my router… I guess what I want to say is that there is no one-size-fits-all safe limit; it really depends on the machine's memory at the lower end and on the expected traffic at the upper end.
> B) Similarly a HUGE waster of memory is small packets, which usually
> get a full slab of bytes (2k) to play with on the rx side. This
> problem got so bad in some testing that openwrt contains a clever (I
> would say that because I wrote it), patch that once we get to tons of
> packets more than we think is sane and we get close to running out of
> 2k slabs that we start reallocating packets to fit into much smaller
> slabs (like 512 bytes) and copying them into those.
Clever! This argues for a limit on the sum of skb->truesize over all queued packets instead of just the number of packets. On the other hand, a packet-count limit treats small and large packets the same, and since each packet consumes at most a full slab, the packet count should still correlate pretty well with the consumed memory. But for this to work, cake needs to interpret the limit as a hard maximum.
> so, briefly, memory allocation and release patterns are more complex
> than the discussion I sort of saw go by a over the last few days.
I know that I am just a layman, but the limit thing does not seem too complex; we are really just talking about a safety valve that does not need to be hyper-precise, only approximately correlated with the consumed memory. Currently, setting oldish cake up on both in- and egress puts the worst-case queue memory somewhere around 2 directions * 10240 packets * 2 KB = 40960 KB, which on a 64MB router is already larger than I would prefer and totally ineffective on a 32MB router ;). I am, by the way, fine with cake defaulting to whatever number is deemed good; all I want is to be able to tell cake something smaller if need be.
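For what it is worth, the worst-case figure above is just the product of the two directions, the default packet limit, and the full 2 KB rx slab that even a small packet occupies:

```shell
# Worst case with two cake instances at the old default packet limit,
# assuming every queued packet costs a full 2 KB slab:
directions=2; pkts=10240; slab_kb=2
echo "$((directions * pkts * slab_kb)) KB"   # → 40960 KB, i.e. 40 MB
```

On a 32MB router that worst case already exceeds total RAM, which is why a user-settable ceiling seems worth having.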
But then, cake is not my baby, and all I can do is try to convince Jonathan, Toke and you that while cake's guiding principle should be simplicity in use, that does not need to mean no knobs, as long as everything works fine in most cases without the need to touch those knobs ;)
> Dave Täht
> I just invested five years of my life to making wifi better. And,
> now... the FCC wants to make my work illegal for people to install.
> Cake mailing list
> Cake at lists.bufferbloat.net