From: Jonathan Morton
Date: Thu, 4 Mar 2021 04:47:31 +0200
To: Thomas Croghan
Cc: Cake@lists.bufferbloat.net
Subject: Re: [Cake] ISP Implementation

> On 4 Mar, 2021, at 3:54 am, Thomas Croghan wrote:
> 
> So, a beta of Mikrotik's RouterOS was released some time ago which finally has Cake built into it.
> 
> In testing, everything seems to be working; I'm just coming up with some questions that I haven't been able to answer.
> 
> Should there be any special considerations when Cake is being used in a setting where it's by far the most significant limiting factor to a connection?
> For example: --10 Gbps Fiber-- --10 Gbps Fiber-- [ISP Switch] --1 Gbps Fiber-- <500 Mbps Customer>
> 
> In this situation, very frequently the "" could be running Cake and doing the bandwidth limiting of the customer down to 1/2 (or even less) of the physical connectivity. A lot of the conversations here revolve around Cake being set up just below the bandwidth limits of the ISP, but that's not really going to be the case in a lot of the ISP world.

There shouldn't be any problems with that.  Indeed, Cake is *best* used as the bottleneck inducer with effectively unlimited inbound bandwidth, as is typically the case when debloating a customer's upstream link at the CPE.  In my own setup, I currently have GigE LAN feeding into a 2 Mbps Cake instance in that direction, to deal with a decidedly variable LTE last mile; this is good enough to permit reliable videoconferencing.

All you should need to do here is to filter each subscriber's traffic into a separate Cake instance, configured to the appropriate rate, and ensure that the underlying hardware has enough throughput to keep up.  (There's a rough tc sketch of this appended below.)

> Another question would be based on the above:
> 
> How well does Cake do with stacking instances? In some cases our above example could look more like this: -- [Some sort of limitation to 100 Mbps] -- -- 1 Gbps connection -- <25 Mbps Customer X 10>
> 
> In this situation, would it be helpful to Cake to have a "Parent Queue" that limits the total throughput of all customer traffic to 99-100 Mbps, and then "Child Queues" that respectively limit customers to their 25 Mbps? Or would it be better to just set up each customer queue at its limit and let Cake handle the times when the oversubscription has reared its ugly head?

Cake is not specifically designed to handle this case.  It is designed around the assumption that there is one bottleneck link to manage, though there may be several hosts that have equal rights to use as much of it as is available.  Ideally you would put one Cake or fq_codel instance immediately upstream of every link that may become saturated; in practice you might not have access to do so.

With that said, for the above topology you could use an ingress Cake instance to manage the backhaul bottleneck (using the "dual-dsthost" mode to share that bandwidth more or less fairly between subscribers), then a per-subscriber array of Cake instances on egress to handle that side, as above.  In the reverse direction you could invert this, with a per-subscriber tree on ingress and a backhaul-generic instance (using "dual-srchost" mode) on egress.  The actual location where queuing and ECN marking occurs would shift dynamically depending on where the limit exists, and that can be monitored via the qdisc stats.  (The first sketch below shows this arrangement.)

This sort of question has come up before, which sort of suggests that there's room for a qdisc specifically designed for this family of use cases.  Indeed, I think HTB is designed with structures like this in mind, though it uses markedly inferior shaping algorithms.  (The second sketch below shows that alternative.)  At this precise moment I'm occupied with the upcoming IETF (and my current project, Some Congestion Experienced), but there is a possibility I could adapt some of Cake's technology to an HTB-like structure later on.

 - Jonathan Morton
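P.S.  In case a concrete starting point is useful, here is a rough sketch of the downstream arrangement described above, in plain Linux tc syntax (RouterOS queue trees will of course look different).  Everything specific in it is illustrative rather than prescriptive: I'm assuming eth0 faces the 100 Mbps backhaul, eth1 faces the subscribers, and the addresses and rates are placeholders.  The backhaul bottleneck is handled by an ingress Cake instance on an IFB device with dual-dsthost isolation; each subscriber then gets their own shaped Cake instance on egress, hung off an HTB scaffold whose class rates are deliberately set above the link speed so that Cake, not HTB, remains the shaper.

  # Ingress side: redirect everything arriving from the backhaul to an IFB
  # device and let Cake shape it to just under the 100 Mbps limitation,
  # sharing it per destination host (i.e. per subscriber).
  modprobe ifb
  ip link add name ifb0 type ifb
  ip link set dev ifb0 up
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol all matchall \
          action mirred egress redirect dev ifb0
  tc qdisc add dev ifb0 root cake bandwidth 95mbit besteffort dual-dsthost ingress

  # Egress side (towards the subscribers): an HTB scaffold used purely for
  # classification, with one Cake instance per subscriber doing the actual
  # 25 Mbps shaping.  Two subscribers shown; repeat per customer.  The u32
  # filters here only match IPv4; IPv6 subscribers need matching ip6 rules.
  tc qdisc add dev eth1 root handle 1: htb default 2
  tc class add dev eth1 parent 1: classid 1:2 htb rate 1gbit
  tc qdisc add dev eth1 parent 1:2 fq_codel

  tc class add dev eth1 parent 1: classid 1:10 htb rate 1gbit
  tc qdisc add dev eth1 parent 1:10 cake bandwidth 25mbit besteffort
  tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
          match ip dst 192.0.2.10/32 flowid 1:10

  tc class add dev eth1 parent 1: classid 1:11 htb rate 1gbit
  tc qdisc add dev eth1 parent 1:11 cake bandwidth 25mbit besteffort
  tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
          match ip dst 192.0.2.11/32 flowid 1:11

The reverse (upstream) direction mirrors this: the dual-srchost instance would sit on eth0 egress, and the per-subscriber tree would hang off an IFB fed from eth1 ingress.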
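And here, equally hedged, is the HTB-style alternative mentioned above: the parent class enforces the shared 100 Mbps, the child classes hold each subscriber to 25 Mbps with a guaranteed minimum, and an unshaped Cake (or fq_codel) leaf on each child provides the flow isolation and AQM while HTB does the rate enforcement.  The same caveat applies: the names and numbers are illustrative only.

  tc qdisc add dev eth1 root handle 1: htb default 2
  tc class add dev eth1 parent 1:  classid 1:1 htb rate 95mbit ceil 95mbit   # shared backhaul
  tc class add dev eth1 parent 1:1 classid 1:2 htb rate 5mbit  ceil 95mbit   # catch-all
  tc qdisc add dev eth1 parent 1:2 fq_codel

  # Per-subscriber child classes: guaranteed 9 Mbps each, allowed to borrow
  # up to the 25 Mbps plan rate while the 95 Mbps parent has headroom.
  tc class add dev eth1 parent 1:1 classid 1:10 htb rate 9mbit ceil 25mbit
  tc qdisc add dev eth1 parent 1:10 cake unlimited besteffort
  tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
          match ip dst 192.0.2.10/32 flowid 1:10

  tc class add dev eth1 parent 1:1 classid 1:11 htb rate 9mbit ceil 25mbit
  tc qdisc add dev eth1 parent 1:11 cake unlimited besteffort
  tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
          match ip dst 192.0.2.11/32 flowid 1:11

Whether the result behaves as nicely as a pure Cake shaper is another matter; HTB's token-bucket shaping is part of what I mean by "inferior shaping algorithms", which is why I'd lean towards the first arrangement where practical.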