* [Cake] ISP Implementation
From: Thomas Croghan @ 2021-03-04 1:54 UTC
To: Cake

So, a beta of Mikrotik's RouterOS was released some time ago which finally has Cake built into it.

In testing, everything seems to be working; I'm just coming up with some questions that I haven't been able to answer.

Should there be any special considerations when Cake is being used in a setting where it's by far the most significant limiting factor to a connection? For example:

    <Internet> -- 10 Gbps Fiber -- <ISP Router> -- 10 Gbps Fiber -- [ISP Switch] -- 1 Gbps Fiber -- <500 Mbps Customer>

In this situation, very frequently the "<ISP Router>" could be running Cake and limiting the customer's bandwidth down to 1/2 (or even less) of the physical connectivity. A lot of the conversations here revolve around Cake being set up just below the bandwidth limits of the ISP, but that's not really going to be the case in a lot of the ISP world.

Another question, based on the above:

How well does Cake do with stacking instances? In some cases our above example could look more like this:

    <Internet> -- [Some sort of limitation to 100 Mbps] -- <ISP Router> -- 1 Gbps connection -- <25 Mbps Customer x 10>

In this situation, would it be helpful to Cake to have a "Parent Queue" that limits the total throughput of all customer traffic to 99-100 Mbps, then "Child Queues" that respectively limit customers to their 25 Mbps? Or would it be better to just set up each customer queue at its limit and let Cake handle the times when oversubscription has reared its ugly head?

To be honest I have a few more questions, but I don't think many people want to read pages and pages of my ignorance. If my question isn't too stupid, I would love to ask a few others.
* Re: [Cake] ISP Implementation
From: Jonathan Morton @ 2021-03-04 2:47 UTC
To: Thomas Croghan; +Cc: Cake

> On 4 Mar, 2021, at 3:54 am, Thomas Croghan <tcroghan@lostcreek.tech> wrote:
>
> So, a beta of Mikrotik's RouterOS was released some time ago which finally has Cake built into it.
>
> In testing, everything seems to be working; I'm just coming up with some questions that I haven't been able to answer.
>
> Should there be any special considerations when Cake is being used in a setting where it's by far the most significant limiting factor to a connection? For example:
>
>     <Internet> -- 10 Gbps Fiber -- <ISP Router> -- 10 Gbps Fiber -- [ISP Switch] -- 1 Gbps Fiber -- <500 Mbps Customer>
>
> In this situation, very frequently the "<ISP Router>" could be running Cake and limiting the customer's bandwidth down to 1/2 (or even less) of the physical connectivity. A lot of the conversations here revolve around Cake being set up just below the bandwidth limits of the ISP, but that's not really going to be the case in a lot of the ISP world.

There shouldn't be any problems with that. Indeed, Cake is *best* used as the bottleneck inducer with effectively unlimited inbound bandwidth, as is typically the case when debloating a customer's upstream link at the CPE. In my own setup, I currently have GigE LAN feeding into a 2 Mbps Cake instance in that direction, to deal with a decidedly variable LTE last mile; this is good enough to permit reliable videoconferencing.

All you should need to do here is to filter each subscriber's traffic into a separate Cake instance, configured to the appropriate rate, and ensure that the underlying hardware has enough throughput to keep up.

> Another question, based on the above:
>
> How well does Cake do with stacking instances? In some cases our above example could look more like this:
>
>     <Internet> -- [Some sort of limitation to 100 Mbps] -- <ISP Router> -- 1 Gbps connection -- <25 Mbps Customer x 10>
>
> In this situation, would it be helpful to Cake to have a "Parent Queue" that limits the total throughput of all customer traffic to 99-100 Mbps, then "Child Queues" that respectively limit customers to their 25 Mbps? Or would it be better to just set up each customer queue at its limit and let Cake handle the times when oversubscription has reared its ugly head?

Cake is not specifically designed to handle this case. It is designed around the assumption that there is one bottleneck link to manage, though there may be several hosts who have equal rights to use as much of it as is available. Ideally you would put one Cake or fq_codel instance immediately upstream of every link that may become saturated; in practice you might not have access to do so.

With that said, for the above topology you could use an ingress Cake instance to manage the backhaul bottleneck (using the "dual-dsthost" mode to more-or-less fairly share this bandwidth between subscribers), then a per-subscriber array of Cake instances on egress to handle that side, as above. In the reverse direction you could invert this, with a per-subscriber tree on ingress and a backhaul-generic instance (using "dual-srchost" mode) on egress. The actual location where queuing and ECN marking occurs would shift dynamically depending on where the limit exists, and that can be monitored via the qdisc stats.
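To make that concrete, here is a very rough sketch of the per-subscriber side on a plain Linux box (untested; the interface name, addresses, class handles and rates are only placeholders, and on RouterOS the analogue would presumably be a queue tree with a cake queue type). HTB is used purely as a scaffold to give each subscriber their own leaf, while the Cake instance on each leaf does the actual shaping:

    # eth1 faces the customers; HTB only classifies here, Cake does the shaping
    tc qdisc add dev eth1 root handle 1: htb default 999
    tc class add dev eth1 parent 1: classid 1:1 htb rate 950mbit ceil 950mbit

    # subscriber A, 25 Mbit/s plan
    tc class add dev eth1 parent 1:1 classid 1:10 htb rate 25mbit ceil 950mbit
    tc qdisc add dev eth1 parent 1:10 cake bandwidth 25mbit
    tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
        match ip dst 100.64.0.10/32 flowid 1:10

    # subscriber B, 25 Mbit/s plan
    tc class add dev eth1 parent 1:1 classid 1:11 htb rate 25mbit ceil 950mbit
    tc qdisc add dev eth1 parent 1:11 cake bandwidth 25mbit
    tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
        match ip dst 100.64.0.11/32 flowid 1:11

    # catch-all leaf for traffic that matches no subscriber filter
    tc class add dev eth1 parent 1:1 classid 1:999 htb rate 100mbit ceil 950mbit
    tc qdisc add dev eth1 parent 1:999 fq_codel

Each subscriber then gets their own set of Cake flow queues and AQM at their own plan rate, which is the per-subscriber array described above.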
This sort of question has come up before, which sort of suggests that there's room for a qdisc specifically designed for this family of use cases. Indeed, I think HTB is designed with stuff like this in mind, though it uses markedly inferior shaping algorithms. At this precise moment I'm occupied with the upcoming IETF (and my current project, Some Congestion Experienced), but there is a possibility I could adapt some of Cake's technology to an HTB-like structure later on.

 - Jonathan Morton
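For comparison, the "Parent Queue" / "Child Queues" layout from the original question is what HTB's class hierarchy expresses directly. A minimal, untested sketch with invented handles, interface and rates (only three of the ten subscribers shown):

    # 100 Mbps backhaul as the parent; each 25 Mbps subscriber is a child
    # that can borrow up to its ceiling when the parent has spare capacity
    tc qdisc add dev eth0 root handle 1: htb default 30
    tc class add dev eth0 parent 1: classid 1:1 htb rate 99mbit ceil 99mbit

    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 25mbit
    tc qdisc add dev eth0 parent 1:10 fq_codel

    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 10mbit ceil 25mbit
    tc qdisc add dev eth0 parent 1:20 fq_codel

    tc class add dev eth0 parent 1:1 classid 1:30 htb rate 10mbit ceil 25mbit
    tc qdisc add dev eth0 parent 1:30 fq_codel

    # plus one tc filter per subscriber directing their traffic to their class

HTB's token-bucket borrowing handles the oversubscribed case (the children's ceilings add up to more than the parent's rate), at the cost of the coarser shaping mentioned above, while fq_codel or Cake leaves keep each child's queue short.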
* Re: [Cake] ISP Implementation
From: Dave Taht @ 2021-03-04 2:51 UTC
To: Jonathan Morton; +Cc: Thomas Croghan, Cake List

Recently there was a thread on another bufferbloat list about a very interesting ISP approach using massively hashed tc filters + fq_codel or cake, which has code on github. I cannot for the life of me remember the name of the thread or the github repo right now.

On Wed, Mar 3, 2021 at 6:47 PM Jonathan Morton <chromatix99@gmail.com> wrote:
>
> [...]

--
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
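The "massively hashed tc filters" idea is, roughly, the classic u32 hashing-filter pattern, which keeps per-packet classification cost constant even with thousands of subscriber filters. An untested sketch, with the interface, address range, handles and bucket numbering all invented:

    # one 256-bucket hash table, keyed on the last octet of the destination IP
    # (destination address sits at offset 16 in the IP header)
    tc filter add dev eth1 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256
    tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32 ht 800:: \
        match ip dst 100.64.0.0/24 \
        hashkey mask 0x000000ff at 16 link 2:

    # one entry per subscriber, placed directly in its bucket
    # (0x2a == 42, so 100.64.0.42 lands in bucket 2:2a:)
    tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32 ht 2:2a: \
        match ip dst 100.64.0.42/32 flowid 1:42

Each flowid then points at a per-subscriber class whose leaf qdisc is fq_codel or Cake, so the lookup no longer walks a linear list of filters.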
* Re: [Cake] ISP Implementation
From: Jonathan Morton @ 2021-03-04 2:55 UTC
To: Dave Taht; +Cc: Thomas Croghan, Cake List

> On 4 Mar, 2021, at 4:51 am, Dave Taht <dave.taht@gmail.com> wrote:
>
> Recently there was a thread on another bufferbloat list about a very
> interesting ISP approach using massively hashed tc filters + fq_codel
> or cake, which has code on github. I cannot for the life of me remember
> the name of the thread or the github repo right now.

This, surely?

https://github.com/rchac/LibreQoS/

 - Jonathan Morton
* Re: [Cake] ISP Implementation
From: Dave Taht @ 2021-03-04 3:14 UTC
To: Jonathan Morton; +Cc: Thomas Croghan, Cake List

Yes, that. Can it be made to work with Cake?

On Wed, Mar 3, 2021 at 6:55 PM Jonathan Morton <chromatix99@gmail.com> wrote:
>
> > On 4 Mar, 2021, at 4:51 am, Dave Taht <dave.taht@gmail.com> wrote:
> >
> > Recently there was a thread on another bufferbloat list about a very
> > interesting ISP approach using massively hashed tc filters + fq_codel
> > or cake, which has code on github. I cannot for the life of me remember
> > the name of the thread or the github repo right now.
>
> This, surely?
>
> https://github.com/rchac/LibreQoS/
>
> - Jonathan Morton

--
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

dave@taht.net <Dave Täht> CTO, TekLibre, LLC Tel: 1-831-435-0729
* Re: [Cake] ISP Implementation
From: Jonathan Morton @ 2021-03-04 3:18 UTC
To: Dave Taht; +Cc: Thomas Croghan, Cake List

> On 4 Mar, 2021, at 5:14 am, Dave Taht <dave.taht@gmail.com> wrote:
>
> Yes, that. Can it be made to work with Cake?

The README says there is experimental support. I haven't looked at it closely.

 - Jonathan Morton
* Re: [Cake] ISP Implementation
From: Thomas Croghan @ 2021-03-04 6:31 UTC
To: Cake List

> Cake is *best* used as the bottleneck inducer with effectively unlimited inbound bandwidth

I kind of figured that Cake was designed to be the bottleneck, but I don't want to be telling people the wrong things.

I'll have to take another look at LibreQoS; maybe there's a way to duplicate their work, though I like the processor efficiency I have seen with Cake. (It could be the Mikrotik implementation, or my poor configuration of fq_codel, though...)

The issue I had with the LibreQoS model is that you are distancing yourself from the customer with the bandwidth limiter. In theory you want a bandwidth limiter limiting the upload traffic from your customer, and a bandwidth limiter right at your upstream connection to limit each customer's download bandwidth, so that your internal network infrastructure gets used efficiently and your equipment is prevented from being the source of bufferbloat. At least that's the running theory with the HTB bandwidth limiters that most people are running right now.

So, for my second question, it's probably going to be best to have a Cake instance on either side of the limitation. This would be preferable, right?

    <Theoretically unlimited bandwidth> -- <Cake instance limiting bandwidth going left to right> -- <Some sort of limit to 100 Mbps> -- <Cake instance limiting bandwidth going right to left> -- <10 x 25 Mbps Customers>

On Wed, Mar 3, 2021 at 8:18 PM Jonathan Morton <chromatix99@gmail.com> wrote:
>
> > On 4 Mar, 2021, at 5:14 am, Dave Taht <dave.taht@gmail.com> wrote:
> >
> > Yes, that. Can it be made to work with Cake?
>
> The README says there is experimental support. I haven't looked at it
> closely.
>
> - Jonathan Morton

--
Tommy Croghan
Lost Creek Tech
* Re: [Cake] ISP Implementation
From: Jonathan Morton @ 2021-03-04 8:14 UTC
To: Thomas Croghan; +Cc: Cake List

> On 4 Mar, 2021, at 8:31 am, Thomas Croghan <tcroghan@lostcreek.tech> wrote:
>
> This would be preferable, right?
>
>     <Theoretically unlimited bandwidth> -- <Cake instance limiting bandwidth going left to right> -- <Some sort of limit to 100 Mbps> -- <Cake instance limiting bandwidth going right to left> -- <10 x 25 Mbps Customers>

Yes, putting the Cake instances associated with the backhaul link upstream of the link in both directions like that is better for a number of reasons. You can still have the instances managing individual customers on the right-hand side, or even further to the right.

If the customer links are physically wider in the upstream direction than is made available to the customer, then there's no problem in doing all the per-customer work in an aggregated position. The difference (in the long run) between the traffic transmitted by the customer and that released to traverse the backhaul is limited to AQM activity on Not-ECT traffic, which will be small, unless they start flooding, in which case the overload protection will kick in and start dropping a lot of packets. This is also what you'd expect to see with a well-behaved policer in the same position.

 - Jonathan Morton
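Purely as an illustration of that placement, the backhaul-wide instances are ordinary egress Cake on the interface feeding the 100 Mbps link from each side, set slightly below the physical limit so the queue forms where Cake can control it. The interface names and the 97 Mbit figure here are invented:

    # upstream router: interface facing the 100 Mbps backhaul,
    # carrying traffic toward the customers (share per destination host)
    tc qdisc replace dev eth1 root cake bandwidth 97mbit dual-dsthost

    # ISP router: interface facing the internet,
    # carrying traffic from the customers (share per source host)
    tc qdisc replace dev eth0 root cake bandwidth 97mbit dual-srchost

The per-subscriber instances from the earlier sketch then stay on the customer-facing side, and the qdisc statistics on each instance show where queuing is actually happening at any given moment.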