[Bloat] Comcast & L4S

Sebastian Moeller moeller0 at gmx.de
Sun Feb 2 06:39:46 EST 2025


Hi Jonathan,

thanks for the graphs...

> On 2. Feb 2025, at 01:09, Jonathan Morton <chromatix99 at gmail.com> wrote:
> 
>> On 1 Feb, 2025, at 8:05 pm, Sebastian Moeller <moeller0 at gmx.de> wrote:
>> 
>>> about as tight as you can reasonably make it while still accommodating typical levels of link-level jitter.  
>> 
>> Not sure, in a LAN with proper back pressure I would guess lower than 5ms to be achievable. This does not need to go crazy low, so 1 ms would likely do well, with an interval of 10ms... or if 5 ms is truly a sweet spot, maybe decouple interval and target so these can be configured independently (in spite of the theory that recommends target to be 5-10% of interval).
> 
> Actually, the 5ms target is already too tight for efficient TCP operation on typical Internet paths - unless there is significant statistical multiplexing on the bottleneck link, which is rarely the case in a domestic context.  

I respectfully disagree. Even at 1 Gbps I assume we only go down to roughly 85% utilisation with a single flow. That is a trade-off I am happy to make...
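[Aside: that ~85% figure can be sanity-checked with a crude AIMD sawtooth model. This is my own back-of-the-envelope sketch, not anything from DelTiC or Cake; the 0.75 average-cwnd approximation is the classic Reno idealisation, and real CUBIC flows behave somewhat better:]

```python
# Crude single-flow model (an assumption, not a measurement):
# a Reno-style AIMD flow oscillates cwnd between W and W/2, so its
# average cwnd is ~0.75 * W.  With an AQM holding the standing queue
# to `target` seconds on a path with base RTT `rtt`, the peak window
# is roughly rate * (rtt + target), giving:
#   utilisation ~= min(1.0, 0.75 * (rtt + target) / rtt)

def aimd_utilisation(rtt_s: float, target_s: float) -> float:
    """Rough single-flow link utilisation under an AQM queue target."""
    return min(1.0, 0.75 * (rtt_s + target_s) / rtt_s)

# A 5 ms target keeps a single flow fully utilised up to a ~15 ms base
# RTT (target >= rtt/3), and degrades gracefully on longer paths:
for rtt_ms in (10, 15, 30, 50, 80):
    u = aimd_utilisation(rtt_ms / 1000, 0.005)
    print(f"RTT {rtt_ms:3d} ms -> utilisation {u:.0%}")
```

[On a ~50 ms Internet path this lands in the low-to-mid 80% range, which is roughly where the figure above comes from.]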


> Short RTTs on a LAN allow for achieving full throughput with the queue held this small, but remember that the concept of "LAN" also includes WiFi links whose median latency is orders of magnitude greater than that of switched Ethernet.  That's why I don't want to encourage going below 5ms too much.

Not wanting to be contrarian, but here I believe fixing WiFi is the better path forward.


> DelTiC actually reverts to the 25ms queue target that has historically been typical for AQMs targeting conventional TCP.

Not doubting one bit that 25ms makes a ton of sense for DelTiC, but where do these historical 25ms come from, and how was that number selected?

>  It adopts 5ms only for SCE marking.  This configuration works very well in testing so far:
> 
> <Screenshot 2024-12-06 at 9.36.25 pm.png>
> As for CPU efficiency, that is indeed something to keep in mind.  The scheduling logic in Cake got very complex in the end, and there are undoubtedly ways to avoid that with a fresh design.

Ah, that was not my main focus here. With 1600 Gbps Ethernet already on the horizon, I assume a shaper running out of CPU is not really avoidable; I am more interested in that shaper having a graceful, latency-conserving failure mode when it runs out of timely CPU access. Making scheduling more efficient is something I am fully behind, but I consider these two mostly orthogonal issues.

> 
>  - Jonathan Morton
> 


