And to clarify one more point from the discussion: it seems to me that some of us tend to equate the notion of "good queue" vs. "bad queue" used in the KN+VJ ACM Queue paper with my question about "good bursts". While the two are likely correlated (I have no argument about that for now), the notion of a "good burst" goes beyond the "good queue" defined in that paper. By their definition, a good queue is one that minimizes the standing queue (or gets rid of it entirely) while still allowing a certain amount of (sub-RTT? typically 100 ms) bursts, so that the link does not become under-utilized. That notion (again, I have no argument about its correctness for now) is different from my question about "good bursts", which is: once we manage to get rid of the standing queue, what types/sizes of bursts should I let the AQM X protect/handle?
I think your question has a problem in it.
Going back to my thought experiment, suppose that we have a queuing point whose egress speed is X and a sender that is sending data in a CBR fashion at (1 + epsilon)*X. In a very formal sense, the entire transmission stream is a single burst, and one could imagine hundreds or thousands of packets being sent and forwarded before the queue builds up to the point where AQM pushes back. In that case, I would expect an "acceptable burst" to be hundreds or thousands of packets. If on the other hand you have a new TCP session in slow-start that is using an intermediate link that is at that moment fully utilized and on the cusp of AQM pushing back, the new session is very likely to tip the balance, and a burst of a few packets might well push it over the top.
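To make the scale of that concrete, here is a rough back-of-the-envelope sketch (mine, not from the discussion above); the 1 Gb/s link, 1500-byte packets, and 5 ms CoDel-style target are assumed values, not anything the thought experiment specifies:

    # Illustrative arithmetic only, with assumed numbers: how long a CBR
    # overload of (1 + epsilon)*X can run before a CoDel-style AQM with a
    # 5 ms standing-queue target would start to push back on it.
    LINK_RATE_BPS = 1e9      # egress speed X, assumed 1 Gb/s
    PKT_BITS = 1500 * 8      # assumed MTU-sized packets
    AQM_TARGET_S = 0.005     # assumed 5 ms target queuing delay

    for epsilon in (0.001, 0.01, 0.1):
        # The queue fills at (1 + epsilon)*X and drains at X, so it grows at epsilon*X.
        growth_bps = epsilon * LINK_RATE_BPS
        # Backlog at which the queuing delay exceeds the AQM target.
        backlog_bits = AQM_TARGET_S * LINK_RATE_BPS
        time_to_pushback_s = backlog_bits / growth_bps   # = target / epsilon
        pkts_forwarded = LINK_RATE_BPS * time_to_pushback_s / PKT_BITS
        print(f"epsilon={epsilon}: {time_to_pushback_s * 1e3:.0f} ms, "
              f"~{pkts_forwarded:,.0f} packets forwarded before pushback")

With these assumed numbers, even a 10% overload forwards thousands of packets before the queue crosses the target, and smaller overloads take orders of magnitude more.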
So to my mind, the question isn't about the size of the burst. It is about the rate of onset and the effect of that burst on the latency, and the probability of loss, for itself and for competing sessions.
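A small sketch of the same point, again with assumed numbers (1 Gb/s link, 1500-byte packets, 5 ms target): the same 20-packet burst is either absorbed quietly or pushed over the AQM target purely depending on the queue already standing in front of it.

    # Rough sketch with assumed numbers: a burst's effect on latency depends
    # on the existing standing queue, not just on the burst's packet count.
    LINK_RATE_BPS = 1e9      # assumed 1 Gb/s link
    PKT_BITS = 1500 * 8      # assumed MTU-sized packets
    AQM_TARGET_S = 0.005     # assumed 5 ms AQM target

    def delay_after_burst(standing_pkts, burst_pkts):
        """Queuing delay seen by the last packet of the burst, in seconds."""
        return (standing_pkts + burst_pkts) * PKT_BITS / LINK_RATE_BPS

    for standing in (0, 300, 400):
        d = delay_after_burst(standing, burst_pkts=20)
        verdict = "over the AQM target" if d > AQM_TARGET_S else "absorbed quietly"
        print(f"{standing:3d}-pkt standing queue + 20-pkt burst -> "
              f"{d * 1e3:5.2f} ms ({verdict})")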
And it will never come down to a magic number N such that N is somehow "right", N-1 is "better", and N+1 is "over the top." There are no such magic numbers.