[Cake] [Make-wifi-fast] [Bloat] The most wonderful video ever about bufferbloat

Sebastian Moeller moeller0 at gmx.de
Tue Oct 11 02:28:07 EDT 2022


Hi Bob,



On 10 October 2022 18:45:31 CEST, Bob McMahon <bob.mcmahon at broadcom.com> wrote:
>I think conflating bufferbloat with latency misses the subtle point in that
>bufferbloat is a measurement in memory units more than a measurement in
>time units. The first design flaw is a queue that is too big. 

[SM] I tend to describe the problem as on-path queues being "over-sized and under-managed". IMHO this makes it easier to see both potential approaches to addressing the consequences:
a) the backbone solution of working hard to never/rarely actually fill the buffer noticeably.
b) the better-manage-(developing)-queues solution.

While a crude systematization, this covers all the approaches to tackling the problem that I am aware of.
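To make option (b) concrete, here is a minimal sketch (in Python, with made-up constants) of the core "managed queue" idea behind CoDel-style AQMs: drop only when the queueing delay has stayed above a small target for a whole interval. This is a grossly simplified illustration, not the actual RFC 8289 state machine.

```python
# Sketch of option (b): manage the queue by packet sojourn time,
# CoDel-style (grossly simplified; see RFC 8289 for the real algorithm).

TARGET = 0.005    # 5 ms of standing queueing delay we tolerate (illustrative)
INTERVAL = 0.100  # delay must persist this long before we act (illustrative)

class TinyAQM:
    def __init__(self):
        self.first_above_time = None  # deadline set when delay first exceeds TARGET

    def should_drop(self, sojourn_time, now):
        """Return True once sojourn delay has stayed above TARGET for INTERVAL."""
        if sojourn_time < TARGET:
            # Queue drained below target: a good queue, disarm the timer.
            self.first_above_time = None
            return False
        if self.first_above_time is None:
            # Delay just crossed the target: arm the timer, don't drop yet.
            self.first_above_time = now + INTERVAL
            return False
        return now >= self.first_above_time
```

The point of the two thresholds is to leave short, transient bursts alone and only push back on a queue that has turned into a persistent standing queue.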

>This YouTube video analogy doesn't help one understand this important point.
>
>Another subtle point is that the video assumes AQM as the only solution and
>ignores others, i.e. pacing at the source(s) and/or faster service rates. 

[SM] Not all applications/use-cases interested in low latency are fully compatible with pacing. E.g., reaction-time-gated multiplayer on-line games tend to send and receive on a 'tick', which looks like natural pacing; except that, especially on receive, world-state updates can consist of quite a number of packets which the client needs ASAP, so these bursts should not be paced out over a tick.
A faster service rate is a solution that IMHO mostly moves the location of the problematic queue around; in fact the opposite, lowering the service rate, is how sqm achieves its goal of getting the problematic queue under its control. Also, on variable-rate links a faster service rate is a tricky proposition (as you well know).
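As a back-of-the-envelope illustration of the "lower the service rate" trick (all numbers made up): shaping slightly below the link's sync rate keeps the bottleneck, and hence the queue, in the local shaper where it can be managed, and per-packet link-layer overhead has to be accounted for so the shaper's idea of the rate matches the wire.

```python
# Back-of-the-envelope sketch (made-up numbers): shape below the sync
# rate so the local, managed shaper becomes the bottleneck, and account
# for per-packet link-layer overhead.

def shaper_rate_bps(sync_rate_bps, derating=0.95):
    """Shape to a fraction of sync so the queue forms in our shaper."""
    return sync_rate_bps * derating

def wire_bits_per_packet(payload_bytes, overhead_bytes=26):
    """Bits a packet actually occupies on the wire, given a hypothetical
    per-packet framing overhead (the real value depends on the link:
    Ethernet, PPPoE, VLAN tags, etc.)."""
    return (payload_bytes + overhead_bytes) * 8

if __name__ == "__main__":
    sync = 50_000_000                        # e.g. 50 Mbit/s downstream sync
    print(shaper_rate_bps(sync))             # shaper set to 47.5 Mbit/s
    print(wire_bits_per_packet(1500))        # 1526 bytes -> 12208 bits on the wire
```

Whether the de-rating is a single crude factor or replaced by exact per-packet overhead accounting is precisely the trade-off asked about further down in this thread.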

>A restaurant that lets one call ahead to put their name on the waitlist
>doesn't change the wait time. Just because a transport layer slowed down
>and hasn't congested a downstream queue doesn't mean the e2e latency
>performance will meet the gaming needs as an example. The delay is still
>there it's just not manifesting itself in a shared queue that may or may
>not negatively impact others using that shared queue.

[SM] +1, this is part of my criticism of how the L4S proponents tout their 1ms queueing-delay goal, ignoring that the receiver only cares about total delay and not so much about where exactly the delay was 'collected'.
However, experience with sqm makes me believe that trying to avoid naively shared queues can help a lot.
Trying, as L4S does, to change all senders to play more nicely with shared queues is simply less robust and reliable than actually enforcing the desired behavior at the bottleneck queue in a fine-grained and targeted fashion.
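The "enforce at the bottleneck" idea above can be sketched in a few lines (simplified Python; real fq_codel per RFC 8290 uses DRR with quantums plus CoDel per queue, this only shows the flow-isolation part): hash flows into separate queues and serve them round-robin, so no sender has to be trusted to behave.

```python
# Sketch of flow isolation at the bottleneck: hash flows into their own
# queues and serve them round-robin. A misbehaving flow then only builds
# a queue for itself. (Simplified; real fq_codel adds DRR + CoDel.)

from collections import deque

class TinyFQ:
    def __init__(self, nqueues=8):
        self.queues = [deque() for _ in range(nqueues)]
        self.rr = 0  # round-robin pointer over the queues

    def enqueue(self, flow_id, packet):
        # Flows hash (statistically) into distinct queues.
        self.queues[hash(flow_id) % len(self.queues)].append(packet)

    def dequeue(self):
        # Visit queues round-robin; return the first packet found.
        for _ in range(len(self.queues)):
            q = self.queues[self.rr]
            self.rr = (self.rr + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None
```

Even if one flow dumps a large burst, a sparse flow's packet still gets served within roughly one round-robin pass, which is exactly the latency isolation a shared FIFO cannot give.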

>
>Bob
>
>
>
>On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>make-wifi-fast at lists.bufferbloat.net> wrote:
>
>> Hi Erik,
>>
>>
>> > On Oct 10, 2022, at 11:32, Taraldsen Erik <erik.taraldsen at telenor.no>
>> wrote:
>> >
>> > On 10/10/2022, 11:09, "Sebastian Moeller" <moeller0 at gmx.de> wrote:
>> >
>> >    Nice!
>> >
>> >> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> cake at lists.bufferbloat.net> wrote:
>> >>
>> >> It took about 3 hours from when the video was released before we got the
>> first request to have SQM on the CPEs we manage as an ISP.  Finally
>> getting some customer response on the issue.
>> >
>> >       [SM] Will you be able to bump these requests to higher-ups and at
>> least change some perception of customer demand for tighter latency
>> performance?
>> >
>> > That would be the hope.
>>
>>         [SM] Excellent, hope this plays out as we wish for.
>>
>>
>> >  We actually have fq_codel implemented on the two latest generations of
>> DSL routers.  Use sync rate as input to set the rate.  Works quite well.
>>
>>         [SM] Cool, if I might ask: what fraction of the sync rate are you
>> setting the traffic shaper to, and are you doing fine-grained overhead
>> accounting (or simply folding that into a grand "de-rating" factor)?
>>
>>
>> > There is also a bit of traction around speedtest.net's inclusion of
>> latency under load internally.
>>
>>         [SM] Yes, although IIUC they are reporting the interquartile mean
>> for the two loaded latency estimates, which is pretty conservative and only
>> really "triggers" for massive, consistently elevated latency; so I expect
>> this to be great for detecting really bad cases, but I fear it is too
>> conservative and will make a number of problematic links look OK. But hey,
>> even that is leaps and bounds better than the old idle-latency-only reports.
>>
>>
>> > My hope is that some publication in Norway will pick up on that score
>> and do a test and get some mainstream publicity with the results.
>>
>>         [SM] Inside the EU the challenge is to get national regulators and
>> the BEREC to start bothering about latency-under-load at all, "some
>> mainstream publicity" would probably help here as well.
>>
>> Regards
>>         Sebastian
>>
>>
>> >
>> > -Erik
>> >
>> >
>> >
>>
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
