From: "Toke Høiland-Jørgensen" <toke@toke.dk>
To: Sebastian Moeller <moeller0@gmx.de>
Cc: cake@lists.bufferbloat.net
Subject: [Cake] Re: help request for cake on a large network
Date: Tue, 30 Sep 2025 11:23:16 +0200
Message-ID: <87v7l0mgzf.fsf@toke.dk>
In-Reply-To: <837EA4ED-26D3-4D83-84D9-5C0C75CFB80D@gmx.de>
Sebastian Moeller <moeller0@gmx.de> writes:
> Hi Toke,
>
>
>> On 30. Sep 2025, at 11:04, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>
>> David Lang <david@lang.hm> writes:
>>
>>> Sebastian Moeller wrote:
>>>
>>>> Hi David,
>>>>
>>>> while I have no real answer for your questions (due to never having had that kind of load in my home network ;) ) I would like to ask you to take scripted captures of `tc -s qdisc` for the wan interface at reasonably short intervals (say every 10 minutes?), as that might be just what we need to actually answer your question.
>>>
>>> I will do that, however the network is only up under load for 4 days a year, so
>>> it's a slow feedback loop :-)
>>>
>>> I would welcome any other suggestions for data to gather.
>>
>> Having queue statistics at as granular a scale as you can manage would
>> be cool. It's around 400 bytes of raw data per sample. Capturing that
>> every 100ms for four days is only around 1.4 GB of data; should
>> theoretically be manageable? :)
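>>
>> (To check the arithmetic: 400 B/sample x 10 samples/s x 4 x 86400 s
>> comes to roughly 1.38 GB before compression.)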
>>
>> Note that the 400 bytes is the in-kernel binary representation; the
>> output of `tc -s` is somewhat larger. Using JSON output (`tc -j -s`)
>> and compressing it should get the total down to something that
>> server-grade hardware can handle just fine.
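>>
>> A rough sketch of such a capture loop (untested; assumes GNU sleep for
>> sub-second intervals and that zstd is installed; adjust IFACE to the
>> actual wan interface):
>>
>>     #!/bin/sh
>>     # Sample the qdisc statistics every ~100ms, timestamp each
>>     # sample, and compress the JSON stream on the fly.
>>     IFACE=eth0
>>     while true; do
>>         date +%s.%N
>>         tc -j -s qdisc show dev "$IFACE"
>>         sleep 0.1
>>     done | zstd -q -o qdisc-stats.zst
>>
>> (The loop body takes nonzero time, so the real interval will be a bit
>> over 100ms; close enough for this purpose.)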
>>
>>>>> On 28. Sep 2025, at 13:06, David Lang <david@lang.hm> wrote:
>>>>>
>>>>> I'm starting to prepare for the next Scale conference and we are switching from Juniper routers to Linux routers. This gives me the ability to implement cake.
>>>>>
>>>>> One problem we have is classes that tell everyone 'go download this' that trigger hundreds of people to hammer the network at the same time (this is both a wifi and a network bandwidth issue, wifi is being worked on)
>>>>
>>>
>>>> So one issue might be that with several hundred users the default
>>>> compile-time number of queues (1024, IIRC) that cake will entertain
>>>> might be too small, even in light of the 8-way set-associative hashing
>>>> design. I believe this can be changed (within limits) only by modifying
>>>> the source and recompiling the kernel, if that should be needed at all.
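>>>>
>>>> (If memory serves, the constant to look for in the kernel tree is
>>>> CAKE_QUEUES in net/sched/sch_cake.c, e.g.:
>>>>
>>>>     $ grep 'define CAKE_QUEUES' net/sched/sch_cake.c
>>>>     #define CAKE_QUEUES (1024)
>>>>
>>>> but please double-check against the actual source.)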
>>>
>>> custom compiling a kernel is very much an option (and this sort of tweaking is
>>> the sort of thing I'm expecting to need to do)
>>>
>>> The conference is in March, so we have some time to think about this and
>>> customize things, just no chance to test before the show.
>>>
>>>> I wonder whether multi-queue cake would solve this to some degree, as I
>>>> assume each queue's cake instance would bring its own independent set of
>>>> 1024 bins?
>>>
>>> good thought
>>
>> While I certainly wouldn't mind having a large-scale test of the
>> multi-queue variant of cake, I don't really think it's necessary at 1G.
>> Assuming you're using server/desktop-grade hardware for the gateways,
>> CAKE should scale just fine to 1Gbit.
>>
>> Sebastian is right that the MQ variant will install independent CAKE
>> instances on each hardware queue, which will give you more flow queues.
>> However, the round-robin dequeueing among those queues will also be
>> completely independent, so you won't get fairness among them either
>> (only between the flows that share a HWQ).
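>>
>> For reference, in case someone does want to experiment along those
>> lines: much the same effect can be had with the stock mq qdisc and one
>> cake child per hardware queue, roughly like this (untested sketch;
>> interface name and rate are placeholders):
>>
>>     #!/bin/sh
>>     # Root mq qdisc, then an independent cake instance per TX queue.
>>     IFACE=eth0
>>     NTXQ=$(ls -d /sys/class/net/$IFACE/queues/tx-* | wc -l)
>>     tc qdisc replace dev $IFACE root handle 1: mq
>>     for i in $(seq 1 "$NTXQ"); do
>>         # mq class IDs are hexadecimal
>>         tc qdisc replace dev $IFACE parent 1:$(printf '%x' "$i") \
>>             cake bandwidth 1gbit besteffort
>>     done
>>
>> Note that each instance shapes independently, so the aggregate can
>> exceed the configured rate; and the fairness caveats above still apply.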
>>
>> As for collision probability, we actually have a calculation of this in
>> the original CAKE paper[0], in figure 1. With set-associative hashing,
>> collision probability only starts to rise around 500 simultaneous flows.
>> And bear in mind that these need to be active flows *from the PoV of the
>> router*. I.e., they need to all be actively transmitting data at the
>> same time; even with lots of users with active connections as seen from
>> the endpoint, the number of active flows in the router should be way
>> smaller (there's a paper discussing this that I can't find right now).
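>>
>> For a back-of-the-envelope version of that figure (my own idealised
>> model, not the paper's exact calculation): 1024 queues with 8-way
>> set-associativity gives 128 sets, and under a uniform hash a new flow
>> only collides when all 8 ways of its set are already occupied, so
>>
>>     P_coll(n) ~ P[Binomial(n, 1/128) >= 8]
>>               ~ 1 - sum_{i=0..7} e^(-L) L^i / i!,  with L = n/128
>>
>> For n = 500 that gives L ~ 3.9 and a collision probability of roughly
>> 5%, which matches where the knee shows up in figure 1.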
>
> Maybe Luca's "Evaluating the Number of Active Flows in a Scheduler
> Realizing Fair Statistical Bandwidth Sharing" from 2013?
> https://team.inria.fr/rap/files/2013/12/KMOR05b.pdf
Yup, that was exactly the one I was thinking about - thank you for
digging up the link!
(Adding back the list to Cc so others can see it too).
-Toke