IMHO this is another example of 'batching helps to amortize fixed per-batch set-up/processing costs over more packets'. The trick, I would say, is to size the batches such that they do not introduce too much latency granularity. For a server that might be a coarser granularity than for a client or a home router; my gut feeling is that the acceptable batch size is tied to the required transmission/processing time of a batch.
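
To make that gut feeling concrete, here is a rough C sketch (not taken from any existing code; the function name and the 500 us budget below are illustrative assumptions) of deriving a batch size from a latency-granularity budget and the link rate:

#include <stdint.h>

/* Sketch: largest batch whose serialization time at the link rate
 * stays under a latency-granularity budget (budget_us is assumed). */
static uint32_t max_batch_bytes(uint64_t rate_bps, uint32_t budget_us)
{
	/* bytes = rate [bit/s] / 8 * budget [us] / 1e6 */
	uint64_t bytes = rate_bps / 8 * budget_us / 1000000;

	if (bytes < 1514)	/* at least one full-size Ethernet frame */
		bytes = 1514;
	if (bytes > 65536)	/* cap at a GSO superpacket */
		bytes = 65536;
	return (uint32_t)bytes;
}

With a 500 us budget this gives ~6250 bytes (about four MTU-sized packets) at 100 Mbit/s and ~62 KB (essentially a full GSO aggregate) at 1 Gbit/s, which is roughly the kind of granularity split one would expect between a home router and a server.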
In a sense, cake already takes this coarsely into account by disabling GSO splitting at rates >= 1 Gbps. Maybe we could also scale the quantum more aggressively with rate, but it is a tradeoff...
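
As a sketch of that tradeoff (not actual sch_cake code; only the ~1 Gbit/s split threshold mirrors cake's documented behavior, the quantum formula is purely an illustrative assumption):

#include <stdbool.h>
#include <stdint.h>

#define SPLIT_GSO_THRESHOLD_BPS 1000000000ULL	/* ~1 Gbit/s */

static bool should_split_gso(uint64_t rate_bps)
{
	/* Below ~1 Gbit/s a 64 KB superpacket adds > 0.5 ms of
	 * head-of-line delay, so split it back into MTU-sized packets. */
	return rate_bps && rate_bps < SPLIT_GSO_THRESHOLD_BPS;
}

static uint32_t scaled_quantum(uint64_t rate_bps)
{
	/* Hypothetical "more aggressive" scaling: one extra MTU of
	 * quantum per 100 Mbit/s, clamped to [1 MTU, 64 KB]. */
	uint64_t q = 1514ULL * (rate_bps / 100000000ULL + 1);

	return (uint32_t)(q > 65536 ? 65536 : q);
}

The larger the quantum, the more back-to-back bytes a flow gets per scheduling round, which amortizes per-dequeue cost at the price of coarser inter-flow latency granularity; hence the tradeoff.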


Regards

On 26 September 2022 03:19:30 CEST, Dave Taht via Bloat <bloat@lists.bufferbloat.net> wrote:
Some good counterarguments against FQ and pacing.

https://www.usenix.org/system/files/nsdi22-paper-ghasemirahni.pdf

--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC