I think this is a very good addition to the discussion at the defense about the comparison between
SFQ with longest-queue drop and FQ_Codel.
A congestion-controlled protocol such as TCP, or others including QUIC, LEDBAT and so on,
needs at least a BDP's worth of data in the transmission queue to get full link efficiency, i.e. so that the queue never empties out.
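As a back-of-the-envelope illustration of that BDP requirement (the link rate and RTT below are made-up example numbers, not values from the discussion):

```python
# Minimal BDP sketch: the bandwidth-delay product is the smallest
# standing queue that keeps the link fully utilized for one
# congestion-controlled flow.
rate_bps = 100e6  # example link rate: 100 Mbit/s (illustrative)
rtt_s = 0.05      # example round-trip time: 50 ms (illustrative)

bdp_bytes = rate_bps * rtt_s / 8  # convert bits to bytes
print(bdp_bytes)  # roughly 625 kB for this example link
```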
This gives a rule of thumb for sizing buffers that is also very practical and, thanks to flow isolation, becomes very accurate.
The rule is:
1) Find a way to keep the number of backlogged flows at a reasonable value.
This largely depends on the minimum fair rate an application may need in the long term.
We briefly discussed some of the mechanisms available in the literature to achieve that.
2) Fix the largest RTT you want to serve at full utilization and size the buffer as BDP * N_backlogged.
Or the other way round: check how much memory you can use
in the router/line card/device and, for a fixed N, compute the largest RTT you can serve at full utilization.
3) Some additional memory must still be dimensioned for sparse flows, but this is not based on the BDP.
It is enough to compute the total utilization of the sparse flows and use the same simple model Toke has used
to compute the (de)prioritization probability.
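Step 2 and its inversion can be sketched as follows; the link rate, target RTT and N below are illustrative placeholders, not recommended values:

```python
# Illustrative buffer-sizing sketch for step 2 above.
# All parameter values are made-up examples.

def bdp_bytes(rate_bps, rtt_s):
    """Bandwidth-delay product in bytes."""
    return rate_bps * rtt_s / 8

def buffer_for_backlogged(rate_bps, max_rtt_s, n_backlogged):
    """Memory needed to keep N backlogged flows at full utilization."""
    return bdp_bytes(rate_bps, max_rtt_s) * n_backlogged

def max_rtt_for_memory(rate_bps, memory_bytes, n_backlogged):
    """The inverse: largest RTT served at full utilization
    for a fixed memory budget and a fixed N."""
    return memory_bytes * 8 / (rate_bps * n_backlogged)

rate = 1e9  # 1 Gbit/s link (illustrative)
rtt = 0.1   # largest RTT we choose to serve: 100 ms (illustrative)
n = 32      # target number of backlogged flows from step 1 (illustrative)

mem = buffer_for_backlogged(rate, rtt, n)
print(mem / 1e6)  # 400.0 (MB)

# The other way round: with a 64 MB line-card budget, what RTT can we cover?
print(max_rtt_for_memory(rate, 64e6, n) * 1000)  # 16.0 (ms)
```

Note that the per-flow sparse-flow memory of step 3 would be added on top of this and is not covered by the sketch.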
This procedure would allow sizing not only FQ_Codel but also SFQ.
It would be interesting to compare the two under this buffer sizing.
It would also be interesting to compare another mechanism that we mentioned during the defense:
AFD plus a sparse-flow queue, which is, BTW, already available in Cisco Nexus switches for data centres.
I think that the CoDel part would still provide the ECN marking feature, which none of the others can offer.
However, the others, and the last one especially, can be implemented in silicon at reasonable cost.