* [Cake] tail drop on complete overload
From: Dave Taht @ 2015-04-12 22:16 UTC (permalink / raw)
To: cake
The full flow space search on complete overload is terribly expensive;
moreover, it is not deterministic and touches a lot of cache.
In a hardware implementation of cake we would probably drop this and
just do tail drop.
The ability to compile in tail drop for this case would be useful for
analyzing overload behavior, and would be easier on routers that are
being flooded.
#ifdef TEENY_ROUTERS
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
* Re: [Cake] tail drop on complete overload
From: Jonathan Morton @ 2015-04-12 23:42 UTC (permalink / raw)
To: Dave Taht; +Cc: cake
> On 13 Apr, 2015, at 01:16, Dave Taht <dave.taht@gmail.com> wrote:
>
> The full flow space search on complete overload is terribly expensive;
> moreover, it is not deterministic and touches a lot of cache.
I would be interested to see if anyone can actually trigger this in practice. It’s a slow-path function, which I assume would be rarely called - only when the queue is physically full. By then, the overload mechanisms would already be in operation, dropping packets instead of marking them. So it’s probably quite a lot easier to trigger it using a local sender than an external one - and if you *can* trigger it externally, then you’ve probably got enough CPU headroom to deal with it the right way.
Also, I don’t think it’s quite as expensive as you think. Note that instead of iterating over the lists of flows, it iterates over the array of flow *backlog* counters in each class. In the default configuration, that’s four 4KB arrays, accessed in ascending linear order (which on most practical memory architectures is the optimal pattern). This is simply to pinpoint the longest queue, which I believe is desirable behaviour.
I don’t see how it can be non-deterministic.
The actual drop is as efficient as it can be. Tail-drop would be *more* efficient, sure, since it just involves an early error return in enqueue(), but has its own behavioural risks.
It may be feasible - and in hardware would probably be easy - to keep track of the identity of the longest queue opportunistically. This would reduce the cost of identifying that queue when it is necessary to do so, at the cost of a little extra overhead on processing every packet. Is that worth it?
- Jonathan Morton
* Re: [Cake] tail drop on complete overload
From: Dave Taht @ 2015-04-13 15:50 UTC (permalink / raw)
To: Jonathan Morton; +Cc: cake
On Sun, Apr 12, 2015 at 4:42 PM, Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 13 Apr, 2015, at 01:16, Dave Taht <dave.taht@gmail.com> wrote:
>>
>> The full flow space search on complete overload is terribly expensive;
>> moreover, it is not deterministic and touches a lot of cache.
>
> I would be interested to see if anyone can actually trigger this in practice. It’s a slow-path function, which I assume would be rarely called - only when the queue is physically full. By then, the overload mechanisms would already be in operation, dropping packets instead of marking them. So it’s probably quite a lot easier to trigger it using a local sender than an external one - and if you *can* trigger it externally, then you’ve probably got enough CPU headroom to deal with it the right way.
A ping flood will trigger it trivially, but only briefly. A more
sophisticated attack would send UDP to many different ports.
> Also, I don’t think it’s quite as expensive as you think. Note that instead of iterating over the lists of flows, it iterates over the array of flow *backlog* counters in each class. In the default configuration, that’s four 4KB arrays, accessed in ascending linear order (which on most practical memory architectures is the optimal pattern). This is simply to pinpoint the longest queue, which I believe is desirable behaviour.
I note that most L1 dcaches are 32-64 KB in size. Wiping out the
dcache is not a good idea.
> I don’t see how it can be non-deterministic.
Cache misses make it non-deterministic.
> The actual drop is as efficient as it can be. Tail-drop would be *more* efficient, sure, since it just involves an early error return in enqueue(), but has its own behavioural risks.
I agree that it carries its own behavioral risks, and those should be examined.
> It may be feasible - and in hardware would probably be easy - to keep track of the identity of the longest queue opportunistically. This would reduce the cost of identifying that queue when it is necessary to do so, at the cost of a little extra overhead on processing every packet. Is that worth it?
Might be a good idea to try to keep the biggest queue ptr around, yes.
Would you like to sit through a typical Cloudflare presentation on the
kinds of DDoSes they see? The one I went through was right after my
talk at NZNOG.