* [Bloat] Comcast & L4S @ 2025-01-31 13:20 Rich Brown 2025-01-31 13:27 ` Sebastian Moeller 0 siblings, 1 reply; 14+ messages in thread From: Rich Brown @ 2025-01-31 13:20 UTC (permalink / raw) To: Rich Brown via Bloat Google Alerts sent me this: https://www.webpronews.com/comcasts-latency-leap-a-game-changer-in-network-performance/ Key quote: "Compatibility and Ecosystem: For L4S to have a significant impact, it requires an ecosystem where both the network infrastructure and the end-user devices support the standard..." Can anyone spell "boil the ocean"? :-) Or am I missing something? ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Bloat] Comcast & L4S 2025-01-31 13:20 [Bloat] Comcast & L4S Rich Brown @ 2025-01-31 13:27 ` Sebastian Moeller 2025-01-31 23:57 ` Dave Taht 0 siblings, 1 reply; 14+ messages in thread From: Sebastian Moeller @ 2025-01-31 13:27 UTC (permalink / raw) To: Rich Brown; +Cc: Rich Brown via Bloat Hi Rich, > On 31. Jan 2025, at 14:20, Rich Brown via Bloat <bloat@lists.bufferbloat.net> wrote: > > Google Alerts sent me this: https://www.webpronews.com/comcasts-latency-leap-a-game-changer-in-network-performance/ > > Key quote: "Compatibility and Ecosystem: For L4S to have a significant impact, it requires an ecosystem where both the network infrastructure and the end-user devices support the standard..." > > Can anyone spell "boil the ocean"? :-) > > Or am I missing someting? Well, the safety mechanisms in L4$ are, as a whole, laughably inadequate... this "design" essentially exposes a priority scheduler* without meaningful admission control to the open internet. This is so optimistically naive that it is almost funny again. I wish all the effort and hard work to make L4$ happen had been put into a reasonable design... but I at least learned one of the IETF's failure modes, and that is something valuable ;) *) Just because something is not a strict preemptive priority scheduler does not make it a good idea to expose it blindly... a conditional priority scheduler with, e.g., L4$'s 10:1 weight share can already do a lot of harm. > > > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat ^ permalink raw reply [flat|nested] 14+ messages in thread
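To put a rough number on the footnote's concern about a 10:1 conditional priority scheduler, here is a deliberately simplified Python sketch. The 10:1 weight is taken from the message above; the assumptions of a perfectly work-conserving weighted scheduler and a persistently backlogged, unresponsive L-queue load are illustrative only.

    def classic_share(l_weight: int = 10, c_weight: int = 1) -> float:
        # Fraction of the link left to the classic (C) queue when both queues
        # stay backlogged, under a simple weighted-sharing approximation.
        return c_weight / (l_weight + c_weight)

    # An unresponsive sender that keeps the L-queue busy can squeeze
    # well-behaved classic traffic down to roughly a tenth of the link:
    print(f"C-queue share: {classic_share():.0%}")  # -> 9%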
* Re: [Bloat] Comcast & L4S 2025-01-31 13:27 ` Sebastian Moeller @ 2025-01-31 23:57 ` Dave Taht 2025-02-01 0:40 ` David Collier-Brown 2025-02-01 13:35 ` Sebastian Moeller 0 siblings, 2 replies; 14+ messages in thread From: Dave Taht @ 2025-01-31 23:57 UTC (permalink / raw) To: Sebastian Moeller; +Cc: Rich Brown, Rich Brown via Bloat Here are the positives: For the first time, a major ISP has deployed the PIE AQM on all traffic. Before now Comcast was only doing that on the upstream. That's 99.99% of all current Comcast traffic getting an AQM on it. WIN. The L4S side being enabled will also result in some applications actually trying to use it for cloud gaming. There is a partnership with Valve, Meta, and Apple that implies we will perhaps see some VR and AR applications trying to use it. I look forward to a killer app. Negatives include explicit marking and potential DOS vectors as often discussed. I do feel that in order to keep up with the Joneses, we will have to add optional L4S marking to CAKE, which should outperform PIE (mark-head); I just wish I knew what the right level was - at 100Mbit it seemed 2ms was best. We also need to remove classic RFC3168-style marking and drop instead when the L4S bit is present - across the entire Linux and BSD ecosystem. There was an abortive attempt last year to get dualpi, accecn, and prague into mainstream Linux, but it stumbled over GSO handling, and has not been resubmitted. ACCECN seems to be making some progress. This makes it really hard to fool with this stuff. On Fri, Jan 31, 2025 at 5:27 AM Sebastian Moeller via Bloat <bloat@lists.bufferbloat.net> wrote: > > Hi Rich, > > > > On 31. Jan 2025, at 14:20, Rich Brown via Bloat <bloat@lists.bufferbloat.net> wrote: > > > > Google Alerts sent me this: https://www.webpronews.com/comcasts-latency-leap-a-game-changer-in-network-performance/ > > > > Key quote: "Compatibility and Ecosystem: For L4S to have a significant impact, it requires an ecosystem where both the network infrastructure and the end-user devices support the standard..." > > > > Can anyone spell "boil the ocean"? :-) > > > > Or am I missing someting? > > Well, the whole safety mechanisms in L4$ are laughably inadequate... this "design" essentially exposes a priority scheduler* without meaningful admission control to the open internet. This is so optimistically naive that it almost is funny again. I wish all the effort and hard work to make L4$ happen, would have been put in a reasonable design... but at least I learned one of the IETF's failure modes, and that is at least something valuable ;) > > > *) Just because something is not a strict preempting priority scheduler does not make it a good idea to expose it blindly... a conditional priority scheduler with e.g. L4$' weight share of 10:1 already can do a lot of harm. > > > > > > > > _______________________________________________ > > Bloat mailing list > > Bloat@lists.bufferbloat.net > > https://lists.bufferbloat.net/listinfo/bloat > > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat -- Dave Täht CSO, LibreQos ^ permalink raw reply [flat|nested] 14+ messages in thread
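A rough sketch of the signalling policy Dave describes, not CAKE's actual code: the 2ms figure is just the value he mentions for 100 Mbit, and the structure below is an assumption about how such a per-packet decision could look.

    from collections import namedtuple

    Packet = namedtuple("Packet", "ecn")
    ECT1, ECT0, NOT_ECT = "ECT(1)", "ECT(0)", "Not-ECT"

    def signal(pkt, sojourn_ms, codel_wants_signal, l4s_threshold_ms=2.0):
        """Return 'forward', 'CE-mark' or 'drop' for one packet at dequeue time."""
        if pkt.ecn == ECT1:
            if sojourn_ms > l4s_threshold_ms:
                return "CE-mark"          # shallow, immediate L4S-style mark
            # no RFC3168-style marking for ECT(1): drop is the robust signal
            return "drop" if codel_wants_signal else "forward"
        if codel_wants_signal:            # classic Codel/COBALT decision path
            return "CE-mark" if pkt.ecn == ECT0 else "drop"
        return "forward"

    print(signal(Packet(ECT1), sojourn_ms=3.0, codel_wants_signal=False))  # CE-mark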
* Re: [Bloat] Comcast & L4S 2025-01-31 23:57 ` Dave Taht @ 2025-02-01 0:40 ` David Collier-Brown 2025-02-01 0:49 ` Dave Taht 2025-02-01 13:35 ` Sebastian Moeller 1 sibling, 1 reply; 14+ messages in thread From: David Collier-Brown @ 2025-02-01 0:40 UTC (permalink / raw) To: bloat What Comcast/L4S is doing was once called, by a Polish colleague, "Peeing in the soup, so it smells more like me." --dave On 1/31/25 18:57, Dave Taht via Bloat wrote: > Here are the positives: > > For the first time, a major ISP has deployed the PIE AQM on all > traffic. Before now Comcast was only doing that on the upstream. > That´s 99.99% of all current comcast traffic getting an AQM on it. WIN. > > The L4S side being enabled will also result in some applications > actually trying to use it for cloud gaming. There is a partnership > with valve, > meta, and apple, that implies that we will perhaps see some VR and AR > applications trying to use it. I look forward to a killer app. > > Negatives include explicit marking and potential DOS vectors as often > discussed. I do feel that in order to keep up with the jonesies, > we will have to add optional l4s marking to CAKE, which should > outperform pie (mark-head), I just wish I knew what the right > level was - at 100Mbit it seemed at 2ms was best. We also need to > remove classic RFC3168 style marking and drop instead when the L4S bit > is present - across the entire linux and BSD ecosystem. > > There was an abortive attempt last year to get dualpi, accecn, and > prague into mainstream linux, but it stumbled over GSO handing, and > has not been resubmitted. ACCECN seems to be making some progress. > This makes it really hard to fool with this stuff. > > > > > > > > > On Fri, Jan 31, 2025 at 5:27 AM Sebastian Moeller via Bloat > <bloat@lists.bufferbloat.net> wrote: >> Hi Rich, >> >> >>> On 31. Jan 2025, at 14:20, Rich Brown via Bloat <bloat@lists.bufferbloat.net> wrote: >>> >>> Google Alerts sent me this: https://www.webpronews.com/comcasts-latency-leap-a-game-changer-in-network-performance/ >>> >>> Key quote: "Compatibility and Ecosystem: For L4S to have a significant impact, it requires an ecosystem where both the network infrastructure and the end-user devices support the standard..." >>> >>> Can anyone spell "boil the ocean"? :-) >>> >>> Or am I missing someting? >> Well, the whole safety mechanisms in L4$ are laughably inadequate... this "design" essentially exposes a priority scheduler* without meaningful admission control to the open internet. This is so optimistically naive that it almost is funny again. I wish all the effort and hard work to make L4$ happen, would have been put in a reasonable design... but at least I learned one of the IETF's failure modes, and that is at least something valuable ;) >> >> >> *) Just because something is not a strict preempting priority scheduler does not make it a good idea to expose it blindly... a conditional priority scheduler with e.g. L4$' weight share of 10:1 already can do a lot of harm. >> >> >>> >>> _______________________________________________ >>> Bloat mailing list >>> Bloat@lists.bufferbloat.net >>> https://lists.bufferbloat.net/listinfo/bloat >> _______________________________________________ >> Bloat mailing list >> Bloat@lists.bufferbloat.net >> https://lists.bufferbloat.net/listinfo/bloat > > -- David Collier-Brown, | Always do right. 
This will gratify System Programmer and Author | some people and astonish the rest davecb@spamcop.net | -- Mark Twain ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Bloat] Comcast & L4S 2025-02-01 0:40 ` David Collier-Brown @ 2025-02-01 0:49 ` Dave Taht 2025-02-01 14:33 ` Sebastian Moeller 0 siblings, 1 reply; 14+ messages in thread From: Dave Taht @ 2025-02-01 0:49 UTC (permalink / raw) To: David Collier-Brown; +Cc: bloat https://www.lightreading.com/cable-technology/comcast-wields-low-latency-as-broadband-differentiator On Fri, Jan 31, 2025 at 4:40 PM David Collier-Brown via Bloat <bloat@lists.bufferbloat.net> wrote: > > What Comcast/L4S is doing was once called, by a Polish colleague, > "Peeing in the soup, so it smells more like me." > > --dave > > On 1/31/25 18:57, Dave Taht via Bloat wrote: > > Here are the positives: > > > > For the first time, a major ISP has deployed the PIE AQM on all > > traffic. Before now Comcast was only doing that on the upstream. > > That´s 99.99% of all current comcast traffic getting an AQM on it. WIN. > > > > The L4S side being enabled will also result in some applications > > actually trying to use it for cloud gaming. There is a partnership > > with valve, > > meta, and apple, that implies that we will perhaps see some VR and AR > > applications trying to use it. I look forward to a killer app. > > > > Negatives include explicit marking and potential DOS vectors as often > > discussed. I do feel that in order to keep up with the jonesies, > > we will have to add optional l4s marking to CAKE, which should > > outperform pie (mark-head), I just wish I knew what the right > > level was - at 100Mbit it seemed at 2ms was best. We also need to > > remove classic RFC3168 style marking and drop instead when the L4S bit > > is present - across the entire linux and BSD ecosystem. > > > > There was an abortive attempt last year to get dualpi, accecn, and > > prague into mainstream linux, but it stumbled over GSO handing, and > > has not been resubmitted. ACCECN seems to be making some progress. > > This makes it really hard to fool with this stuff. > > > > > > > > > > > > > > > > > > On Fri, Jan 31, 2025 at 5:27 AM Sebastian Moeller via Bloat > > <bloat@lists.bufferbloat.net> wrote: > >> Hi Rich, > >> > >> > >>> On 31. Jan 2025, at 14:20, Rich Brown via Bloat <bloat@lists.bufferbloat.net> wrote: > >>> > >>> Google Alerts sent me this: https://www.webpronews.com/comcasts-latency-leap-a-game-changer-in-network-performance/ > >>> > >>> Key quote: "Compatibility and Ecosystem: For L4S to have a significant impact, it requires an ecosystem where both the network infrastructure and the end-user devices support the standard..." > >>> > >>> Can anyone spell "boil the ocean"? :-) > >>> > >>> Or am I missing someting? > >> Well, the whole safety mechanisms in L4$ are laughably inadequate... this "design" essentially exposes a priority scheduler* without meaningful admission control to the open internet. This is so optimistically naive that it almost is funny again. I wish all the effort and hard work to make L4$ happen, would have been put in a reasonable design... but at least I learned one of the IETF's failure modes, and that is at least something valuable ;) > >> > >> > >> *) Just because something is not a strict preempting priority scheduler does not make it a good idea to expose it blindly... a conditional priority scheduler with e.g. L4$' weight share of 10:1 already can do a lot of harm. 
> >> > >> > >>> > >>> _______________________________________________ > >>> Bloat mailing list > >>> Bloat@lists.bufferbloat.net > >>> https://lists.bufferbloat.net/listinfo/bloat > >> _______________________________________________ > >> Bloat mailing list > >> Bloat@lists.bufferbloat.net > >> https://lists.bufferbloat.net/listinfo/bloat > > > > > -- > David Collier-Brown, | Always do right. This will gratify > System Programmer and Author | some people and astonish the rest > davecb@spamcop.net | -- Mark Twain > > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat -- Dave Täht CSO, LibreQos ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Bloat] Comcast & L4S 2025-02-01 0:49 ` Dave Taht @ 2025-02-01 14:33 ` Sebastian Moeller 2025-02-01 14:51 ` Jonathan Morton 0 siblings, 1 reply; 14+ messages in thread From: Sebastian Moeller @ 2025-02-01 14:33 UTC (permalink / raw) To: Dave Täht; +Cc: David Collier-Brown, bloat > On 1. Feb 2025, at 01:49, Dave Taht via Bloat <bloat@lists.bufferbloat.net> wrote: > > https://www.lightreading.com/cable-technology/comcast-wields-low-latency-as-broadband-differentiator And... the press invariably gets it wrong: "Notable is Comcast's use of the Internet Engineering Task Force's Low Latency Low Loss Scalable Throughput (L4S) standards, which are used to process and support latency-sensitive traffic. LLD, which is part of the Low Latency DOCSIS specs, works by separating small, delay-sensitive, non-queue-building traffic (such as key clicks for an online game) from the primary (and much heavier) queue-building traffic that, for example, might carry a video stream or a large file upload or download." As far as I can tell, what LLD does is put both ECT(1)- and NQB-marked traffic into the same, single L-queue of its conditional priority scheduler (CableLabs terminology), so if a video stream uses ECT(1) (cloud gaming essentially sends video streams) LLD will NOT separate it from NQB-marked gaming traffic. Regards Sebastian > > > On Fri, Jan 31, 2025 at 4:40 PM David Collier-Brown via Bloat > <bloat@lists.bufferbloat.net> wrote: >> >> What Comcast/L4S is doing was once called, by a Polish colleague, >> "Peeing in the soup, so it smells more like me." >> >> --dave >> >> On 1/31/25 18:57, Dave Taht via Bloat wrote: >>> Here are the positives: >>> >>> For the first time, a major ISP has deployed the PIE AQM on all >>> traffic. Before now Comcast was only doing that on the upstream. >>> That´s 99.99% of all current comcast traffic getting an AQM on it. WIN. >>> >>> The L4S side being enabled will also result in some applications >>> actually trying to use it for cloud gaming. There is a partnership >>> with valve, >>> meta, and apple, that implies that we will perhaps see some VR and AR >>> applications trying to use it. I look forward to a killer app. >>> >>> Negatives include explicit marking and potential DOS vectors as often >>> discussed. I do feel that in order to keep up with the jonesies, >>> we will have to add optional l4s marking to CAKE, which should >>> outperform pie (mark-head), I just wish I knew what the right >>> level was - at 100Mbit it seemed at 2ms was best. We also need to >>> remove classic RFC3168 style marking and drop instead when the L4S bit >>> is present - across the entire linux and BSD ecosystem. >>> >>> There was an abortive attempt last year to get dualpi, accecn, and >>> prague into mainstream linux, but it stumbled over GSO handing, and >>> has not been resubmitted. ACCECN seems to be making some progress. >>> This makes it really hard to fool with this stuff. >>> >>> >>> >>> >>> >>> >>> >>> >>> On Fri, Jan 31, 2025 at 5:27 AM Sebastian Moeller via Bloat >>> <bloat@lists.bufferbloat.net> wrote: >>>> Hi Rich, >>>> >>>> >>>>> On 31. Jan 2025, at 14:20, Rich Brown via Bloat <bloat@lists.bufferbloat.net> wrote: >>>>> >>>>> Google Alerts sent me this: https://www.webpronews.com/comcasts-latency-leap-a-game-changer-in-network-performance/ >>>>> >>>>> Key quote: "Compatibility and Ecosystem: For L4S to have a significant impact, it requires an ecosystem where both the network infrastructure and the end-user devices support the standard..." 
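A minimal sketch of the classification behaviour described above: both ECT(1)-marked and NQB-marked packets land in the single low-latency (L) queue, everything else in the classic (C) queue. DSCP 45 is the codepoint proposed for NQB in the IETF draft; treat that value and the rest of this as an illustration of the argument, not as the DOCSIS specification.

    ECT1 = 0b01        # ECN codepoint claimed by L4S
    NQB_DSCP = 45      # NQB codepoint per the IETF draft (assumption for this sketch)

    def lld_queue(ecn: int, dscp: int) -> str:
        return "L" if ecn == ECT1 or dscp == NQB_DSCP else "C"

    # Consequence: an ECT(1) cloud-gaming video stream and NQB-marked key clicks
    # share the same queue, so they are not separated from each other.
    print(lld_queue(ecn=ECT1, dscp=0))        # heavy video stream -> 'L'
    print(lld_queue(ecn=0, dscp=NQB_DSCP))    # game key clicks    -> 'L'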
>>>>> >>>>> Can anyone spell "boil the ocean"? :-) >>>>> >>>>> Or am I missing someting? >>>> Well, the whole safety mechanisms in L4$ are laughably inadequate... this "design" essentially exposes a priority scheduler* without meaningful admission control to the open internet. This is so optimistically naive that it almost is funny again. I wish all the effort and hard work to make L4$ happen, would have been put in a reasonable design... but at least I learned one of the IETF's failure modes, and that is at least something valuable ;) >>>> >>>> >>>> *) Just because something is not a strict preempting priority scheduler does not make it a good idea to expose it blindly... a conditional priority scheduler with e.g. L4$' weight share of 10:1 already can do a lot of harm. >>>> >>>> >>>>> >>>>> _______________________________________________ >>>>> Bloat mailing list >>>>> Bloat@lists.bufferbloat.net >>>>> https://lists.bufferbloat.net/listinfo/bloat >>>> _______________________________________________ >>>> Bloat mailing list >>>> Bloat@lists.bufferbloat.net >>>> https://lists.bufferbloat.net/listinfo/bloat >>> >>> >> -- >> David Collier-Brown, | Always do right. This will gratify >> System Programmer and Author | some people and astonish the rest >> davecb@spamcop.net | -- Mark Twain >> >> _______________________________________________ >> Bloat mailing list >> Bloat@lists.bufferbloat.net >> https://lists.bufferbloat.net/listinfo/bloat > > > > -- > Dave Täht CSO, LibreQos > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Bloat] Comcast & L4S 2025-02-01 14:33 ` Sebastian Moeller @ 2025-02-01 14:51 ` Jonathan Morton 2025-02-01 17:06 ` Sebastian Moeller 0 siblings, 1 reply; 14+ messages in thread From: Jonathan Morton @ 2025-02-01 14:51 UTC (permalink / raw) To: Sebastian Moeller; +Cc: Dave Täht, David Collier-Brown, bloat > On 1 Feb, 2025, at 4:33 pm, Sebastian Moeller via Bloat <bloat@lists.bufferbloat.net> wrote: > > …Comcast's use of the Internet Engineering Task Force's Low Latency Low Loss Scalable Throughput (L4S) standards… That's the tail wagging the dog - but precisely the kind of non-specialist misunderstanding that L4S' hijacking of the IETF process was designed to foster. It's an EXPERIMENT that the IETF has been BROWBEATEN by Comcast into PERMITTING to occur. It is NOT a STANDARD, and it was NOT IETF-led. Every single design suggestion that IETF proposed, to improve coexistence with other schemes that ARE IETF standards, was resisted or outright ignored. As with NQB, Cake already does essentially what L4S requires, except for default-configured Codel being less than ideal as an AQM for producing congestion signals for a DCTCP-type response. I have no intention of modifying Cake to *specifically* accommodate L4S in any way. If their crap doesn't work properly in a standards-compliant environment, that's THEIR problem. - Jonathan Morton ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Bloat] Comcast & L4S 2025-02-01 14:51 ` Jonathan Morton @ 2025-02-01 17:06 ` Sebastian Moeller 2025-02-01 17:26 ` Jonathan Morton 0 siblings, 1 reply; 14+ messages in thread From: Sebastian Moeller @ 2025-02-01 17:06 UTC (permalink / raw) To: Jonathan Morton; +Cc: Dave Täht, David Collier-Brown, Rich Brown via Bloat Hi Jonathan, > On 1. Feb 2025, at 15:51, Jonathan Morton <chromatix99@gmail.com> wrote: > >> On 1 Feb, 2025, at 4:33 pm, Sebastian Moeller via Bloat <bloat@lists.bufferbloat.net> wrote: >> >> …Comcast's use of the Internet Engineering Task Force's Low Latency Low Loss Scalable Throughput (L4S) standards… > > That's the tail wagging the dog - but precisely the kind of non-specialist misunderstanding that L4S' hijacking of the IETF process was designed to foster. To me it made it fully clear that the IETF process is really a mess; I witnessed it going splendidly in other WGs, but that required good faith and gentlemanly sportsmanship from all sides. L4S/NQB showed how easily this process can be derailed... > It's an EXPERIMENT that the IETF has been BROWBEATEN by Comcast into PERMITTING to occur. It is NOT a STANDARD, and it was NOT IETF-led. Every single design suggestion that IETF proposed, to improve coexistence with other schemes that ARE IETF standards, was resisted or outright ignored. Well, my take is that Low Latency DOCSIS had already been mostly or even fully specified by that time, and so the only changes accepted were those that had zero implications for LLD... so mostly shuffling verbiage around. But yeah, the whole "ram this through the IETF" exercise smells partly like a method to create plausible deniability once things turn out not to be working all that well (and pessimistically assuming there is no real fix for the failure modes). > As with NQB, Cake already does essentially what L4S requires, except for default-configured Codel being less than ideal as an AQM for producing congestion signals for a DCTCP-type response. I have no intention of modifying Cake to *specifically* accommodate L4S in any way. If their crap doesn't work properly in a standards-compliant environment, that's THEIR problem. Now, as advocatus diaboli: the way CoDel works, we have interval and/or target as configurable parameters, and a trade-off between maintaining utilisation over the wider internet and keeping the signalling reactive for nearby flows. Maybe we could teach cake to allow a second set of interval/(automatically calculated) target to optimise for local versus non-local traffic, and use a proper (configurable and maskable) DSCP/TOS to steer packets into this? Maybe CS7 would do to signal intent for local delivery? > - Jonathan Morton ^ permalink raw reply [flat|nested] 14+ messages in thread
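A rough sketch of the "second parameter set" idea above: keep the default 100ms/5ms Codel tuning for general internet traffic, but let a configurable DSCP (CS7 = 56 here, purely as the example floated above) select a tighter interval/target pair for traffic declared to be local. The specific values are illustrative, not a proposal for defaults.

    from dataclasses import dataclass

    CS7 = 56  # DSCP codepoint mentioned above as a possible "local delivery" signal

    @dataclass
    class CodelParams:
        interval_ms: float
        target_ms: float

    WIDE_AREA = CodelParams(interval_ms=100.0, target_ms=5.0)  # current defaults
    LOCAL = CodelParams(interval_ms=10.0, target_ms=1.0)       # tighter second set

    def params_for(dscp: int, local_dscp: int = CS7) -> CodelParams:
        return LOCAL if dscp == local_dscp else WIDE_AREA

    print(params_for(CS7))  # CodelParams(interval_ms=10.0, target_ms=1.0)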
* Re: [Bloat] Comcast & L4S 2025-02-01 17:06 ` Sebastian Moeller @ 2025-02-01 17:26 ` Jonathan Morton 2025-02-01 18:05 ` Sebastian Moeller 0 siblings, 1 reply; 14+ messages in thread From: Jonathan Morton @ 2025-02-01 17:26 UTC (permalink / raw) To: Sebastian Moeller Cc: Dave Täht, David Collier-Brown, Rich Brown via Bloat > On 1 Feb, 2025, at 7:06 pm, Sebastian Moeller <moeller0@gmx.de> wrote: > >> As with NQB, Cake already does essentially what L4S requires, except for default-configured Codel being less than ideal as an AQM for producing congestion signals for a DCTCP-type response. I have no intention of modifying Cake to *specifically* accommodate L4S in any way. If their crap doesn't work properly in a standards-compliant environment, that's THEIR problem. > > Now, as advocatus diabolical, the way CoDel works we have interval and/or target as configurable parameters and a trade-off between maintaining utilisation over the wider internet and keeping the signalling reactive for closer by flows, maybe we could teach cake to allow a second set of interval/(automatically calculated) target to optimise for local and non local traffic, and use a proper (configurable and maskable) DSCP/TOS to steer packets into this? Maybe CS7 would do to signal its intent for local delivery? Codel's default 5ms target is already pretty tight, about as tight as you can reasonably make it while still accommodating typical levels of link-level jitter. And COBALT does already find and maintain the appropriate marking rate for DCTCP when required - it just takes a little while to ramp up, so there is a noticeable hump in the delay curve during flow startup. I don't see any low-hanging fruit there; Codel is simply not designed for that congestion response style. DelTiC is a bit more flexible in this respect. I don't however plan to add DelTiC to Cake. Rather, I'm building a new qdisc that does some of the same things as Cake, but using more advanced technology and generally learning some object lessons from the experience. - Jonathan Morton ^ permalink raw reply [flat|nested] 14+ messages in thread
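To illustrate the distinction Jonathan draws between a classic RFC3168-style response and a DCTCP-type ("scalable") response, here is a toy Python model; the g = 1/16 gain and the alpha-scaled reduction follow the published DCTCP description, but this is a sketch, not anyone's actual TCP stack.

    def classic_response(cwnd: float, any_ce_this_rtt: bool, beta: float = 0.5) -> float:
        """At most one multiplicative decrease per RTT, however many packets were marked."""
        return cwnd * beta if any_ce_this_rtt else cwnd

    class DctcpResponse:
        """Reduction proportional to the *fraction* of marked packets, which is why
        it needs a steady stream of shallow-threshold marks from the AQM."""
        def __init__(self, g: float = 1.0 / 16.0):
            self.alpha = 0.0
            self.g = g

        def on_rtt(self, cwnd: float, marked: int, acked: int) -> float:
            frac = marked / max(acked, 1)
            self.alpha = (1 - self.g) * self.alpha + self.g * frac
            return cwnd * (1 - self.alpha / 2) if marked else cwnd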
* Re: [Bloat] Comcast & L4S 2025-02-01 17:26 ` Jonathan Morton @ 2025-02-01 18:05 ` Sebastian Moeller 2025-02-02 0:09 ` Jonathan Morton 0 siblings, 1 reply; 14+ messages in thread From: Sebastian Moeller @ 2025-02-01 18:05 UTC (permalink / raw) To: Jonathan Morton; +Cc: Dave Täht, David Collier-Brown, Rich Brown via Bloat Hi Jonathan... > On 1. Feb 2025, at 18:26, Jonathan Morton <chromatix99@gmail.com> wrote: > >> On 1 Feb, 2025, at 7:06 pm, Sebastian Moeller <moeller0@gmx.de> wrote: >> >>> As with NQB, Cake already does essentially what L4S requires, except for default-configured Codel being less than ideal as an AQM for producing congestion signals for a DCTCP-type response. I have no intention of modifying Cake to *specifically* accommodate L4S in any way. If their crap doesn't work properly in a standards-compliant environment, that's THEIR problem. >> >> Now, as advocatus diabolical, the way CoDel works we have interval and/or target as configurable parameters and a trade-off between maintaining utilisation over the wider internet and keeping the signalling reactive for closer by flows, maybe we could teach cake to allow a second set of interval/(automatically calculated) target to optimise for local and non local traffic, and use a proper (configurable and maskable) DSCP/TOS to steer packets into this? Maybe CS7 would do to signal its intent for local delivery? > > Codel's default 5ms target is already pretty tight, I am more concerned about the 100ms interval (target is linked to that); waiting 100ms before engaging is not great if the true RTT is in the low single digits... > about as tight as you can reasonably make it while still accommodating typical levels of link-level jitter. Not sure; in a LAN with proper back pressure I would guess lower than 5ms is achievable. This does not need to go crazy low, so 1 ms would likely do well, with an interval of 10ms... or if 5 ms is truly a sweet spot, maybe decouple interval and target so these can be configured independently (in spite of the theory that recommends target be 5-10% of interval). > And COBALT does already find and maintain the appropriate marking rate for DCTCP when required - it just takes a little while to ramp up, so there is a noticeable hump in the delay curve during flow startup. I don't see any low-hanging fruit there; Codel is simply not designed for that congestion response style. Fair, and I am not after DCTCP style here (L4S would be) but simply allowing a parallel codel for a tighter target. > DelTiC is a bit more flexible in this respect. I don't however plan to add DelTiC to Cake. Rather, I'm building a new qdisc that does some of the same things as Cake, but using more advanced technology and generally learning some object lessons from the experience. Great! May I propose something for you to ponder, assuming DelTiC will also include a traffic shaper? One great thing about cake is the built-in traffic shaper, which makes setting it up a breeze. However, that shaper tends to be relatively CPU-hungry (as shapers tend to be), and once it runs itself out of CPU cycles it tends not to honor its latency target as well as HTB+fq_codel does. IIRC, with HTB+fq_codel, if you are CPU-limited, latency stays low and throughput takes a hit; with cake it is more that latency increases (not atrociously; even in that mode having cake is IMHO better than no cake) while throughput takes a smaller hit. 
Not sure that this is something that can be easily addressed, but IMHO I prefer HTB+fq_codel's behaviour in that regard. > > - Jonathan Morton ^ permalink raw reply [flat|nested] 14+ messages in thread
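To make the concern about the 100ms interval concrete, here is a back-of-envelope look at when Codel's control law emits its first few signals, assuming the sojourn time stays above target throughout; the real RFC 8289 state machine has more nuance, so take this as a rough sketch only.

    from math import sqrt

    def signal_times_ms(interval_ms: float, n: int = 5) -> list:
        # First signal only after one full interval above target; subsequent
        # signals are spaced interval/sqrt(count) apart.
        t, times = interval_ms, [interval_ms]
        for count in range(1, n):
            t += interval_ms / sqrt(count)
            times.append(round(t, 1))
        return times

    print(signal_times_ms(100.0))  # default interval: first signal only at 100ms
    print(signal_times_ms(10.0))   # a 10ms interval reacts an order of magnitude sooner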
* Re: [Bloat] Comcast & L4S 2025-02-01 18:05 ` Sebastian Moeller @ 2025-02-02 0:09 ` Jonathan Morton 2025-02-02 11:39 ` Sebastian Moeller 0 siblings, 1 reply; 14+ messages in thread From: Jonathan Morton @ 2025-02-02 0:09 UTC (permalink / raw) To: Sebastian Moeller Cc: Dave Täht, David Collier-Brown, Rich Brown via Bloat [-- Attachment #1: Type: text/plain, Size: 1536 bytes --] > On 1 Feb, 2025, at 8:05 pm, Sebastian Moeller <moeller0@gmx.de> wrote: > >> about as tight as you can reasonably make it while still accommodating typical levels of link-level jitter. > > Not sure, in a LAN with proper back pressure I would guess lower than 5ms to be achievable. This does not need to go crazy low, so 1 ms would likely do well, with an interval of 10ms... or if 5 ms is truly a sweet spot, maybe decouple interval and target so these can be configured independently (in spite of the theory that recommends target to be 5-10% of interval). Actually, the 5ms target is already too tight for efficient TCP operation on typical Internet paths - unless there is significant statistical multiplexing on the bottleneck link, which is rarely the case in a domestic context. Short RTTs on a LAN allow for achieving full throughput with the queue held this small, but remember that the concept of "LAN" also includes WiFi links whose median latency is orders of magnitude greater than that of switched Ethernet. That's why I don't want to encourage going below 5ms too much. DelTiC actually reverts to the 25ms queue target that has historically been typical for AQMs targeting conventional TCP. It adopts 5ms only for SCE marking. This configuration works very well in testing so far: As for CPU efficiency, that is indeed something to keep in mind. The scheduling logic in Cake got very complex in the end, and there are undoubtedly ways to avoid that with a fresh design. - Jonathan Morton [-- Attachment #2.1: Type: text/html, Size: 2624 bytes --] [-- Attachment #2.2: Screenshot 2024-12-06 at 9.36.25 pm.png --] [-- Type: image/png, Size: 338114 bytes --] ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [Bloat] Comcast & L4S 2025-02-02 0:09 ` Jonathan Morton @ 2025-02-02 11:39 ` Sebastian Moeller 2025-02-03 14:04 ` Jonathan Morton 0 siblings, 1 reply; 14+ messages in thread From: Sebastian Moeller @ 2025-02-02 11:39 UTC (permalink / raw) To: Jonathan Morton; +Cc: Dave Täht, David Collier-Brown, Rich Brown via Bloat Hi Jonathan, thanks for the graphs... > On 2. Feb 2025, at 01:09, Jonathan Morton <chromatix99@gmail.com> wrote: > >> On 1 Feb, 2025, at 8:05 pm, Sebastian Moeller <moeller0@gmx.de> wrote: >> >>> about as tight as you can reasonably make it while still accommodating typical levels of link-level jitter. >> >> Not sure, in a LAN with proper back pressure I would guess lower than 5ms to be achievable. This does not need to go crazy low, so 1 ms would likely do well, with an interval of 10ms... or if 5 ms is truly a sweet spot, maybe decouple interval and target so these can be configured independently (in spite of the theory that recommends target to be 5-10% of interval). > > Actually, the 5ms target is already too tight for efficient TCP operation on typical Internet paths - unless there is significant statistical multiplexing on the bottleneck link, which is rarely the case in a domestic context. I respectfully disagree; even at 1 Gbps we only go down to 85% utilisation with a single flow, I assume. That is a trade-off I am happy to make... > Short RTTs on a LAN allow for achieving full throughput with the queue held this small, but remember that the concept of "LAN" also includes WiFi links whose median latency is orders of magnitude greater than that of switched Ethernet. That's why I don't want to encourage going below 5ms too much. Not wanting to be contrarian, but here I believe fixing WiFi is the better path forward. > DelTiC actually reverts to the 25ms queue target that has historically been typical for AQMs targeting conventional TCP. Not doubting one bit that 25ms makes a ton of sense for DelTiC, but where do these historical 25ms come from and how was this number selected? > It adopts 5ms only for SCE marking. This configuration works very well in testing so far: > > <Screenshot 2024-12-06 at 9.36.25 pm.png> > As for CPU efficiency, that is indeed something to keep in mind. The scheduling logic in Cake got very complex in the end, and there are undoubtedly ways to avoid that with a fresh design. Ah, that was not my main focus here. With 1600 Gbps Ethernet already on the horizon, I assume a shaper running out of CPU is not really avoidable; I am more interested in that shaper having a graceful, latency-conserving failure mode when running out of timely CPU access. Making scheduling more efficient is something that I am fully behind, but I consider these two mostly orthogonal issues. > > - Jonathan Morton > ^ permalink raw reply [flat|nested] 14+ messages in thread
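One crude way to sanity-check a figure like "85%": average the congestion-window sawtooth between beta*(BDP+Q) and (BDP+Q) and compare it to the BDP. The 0.7 below is CUBIC's multiplicative-decrease factor; the rest (single flow, linear sawtooth, no slow start) is a simplifying assumption, so treat the output as a rough estimate rather than a measurement.

    def approx_utilisation(rtt_ms: float, q_target_ms: float, beta: float = 0.7) -> float:
        # Average window over the sawtooth, expressed in "ms of data" units,
        # compared against the baseline bandwidth-delay product.
        avg_cwnd = (1 + beta) / 2 * (rtt_ms + q_target_ms)
        return min(1.0, avg_cwnd / rtt_ms)

    print(f"{approx_utilisation(100, 5):.0%}")  # ~89% with a 5ms target at 100ms RTT
    print(f"{approx_utilisation(10, 5):.0%}")   # full utilisation on a short 10ms path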
* Re: [Bloat] Comcast & L4S 2025-02-02 11:39 ` Sebastian Moeller @ 2025-02-03 14:04 ` Jonathan Morton 0 siblings, 0 replies; 14+ messages in thread From: Jonathan Morton @ 2025-02-03 14:04 UTC (permalink / raw) To: Sebastian Moeller Cc: Dave Täht, David Collier-Brown, Rich Brown via Bloat >> Actually, the 5ms target is already too tight for efficient TCP operation on typical Internet paths - unless there is significant statistical multiplexing on the bottleneck link, which is rarely the case in a domestic context. > > I respectfully disagree, even at 1 Gbps we only go down to 85% utilisation with a single flow I assume. That is a trade-off I am happy to make... At 100ms RTT, yes. But you can see that Codel has disproportionately more trouble when the RTT increases a little more than that, and such paths are not uncommon when you look outside of our usual stomping grounds of Europe and North America. This happens because each congestion episode starts to lead to more than one Multiplicative Decrease due to congestion signalling, so the average cwnd falls below what would normally be expected. This would not typically occur at high statistical multiplexing. >> Short RTTs on a LAN allow for achieving full throughput with the queue held this small, but remember that the concept of "LAN" also includes WiFi links whose median latency is orders of magnitude greater than that of switched Ethernet. That's why I don't want to encourage going below 5ms too much. > > Not wanting top be contrarian, but here I believe fixing WiFi is the better path forward. Perhaps, but you'll need to change the fundamental collision-avoidance MAC design of WiFi to do that. Until someone (and it will take more than mere *individual* contributions) gets around to that and existing WiFi hardware mostly drops out of use, we have to design for its current behaviour. I'm not talking about the bufferbloat of some specific WiFi hardware here - we've already done all the technical work we can to fix that. It's the fundamental link protocol. >> DelTiC actually reverts to the 25ms queue target that has historically been typical for AQMs targeting conventional TCP. > > Not doubting one bit that 25ms makes a ton of sense for DelTic, but where do these historical 25ms come from and how was this number selected? Perhaps "historical" is putting it too strongly - it's only quite recently that AQM has used a time-based delay target at all. It is, however, the delay target that PIE uses. The graphs I attached arise from an effort to decide what "rightsize" actually means for a dumb FIFO buffer, in which it proved convenient to also test some AQMs. The classical rule is based on Reno behaviour, and in the absence of statistical multiplexing reduces to "buffer depth equal to baseline path length" to obtain 100% throughput. Updating this for CUBIC yields a rule of "buffer depth 3/7ths of baseline path length", which for a 100ms path would be around 40ms buffer. This is, again, for 100% throughput at steady state. Examining the detailed behaviour of CUBIC, we realised that approximately halving this would still yield reasonably good throughput, due to CUBIC's designed-in decelerating approach to the previous highest cwnd and, particularly, its intermittent use of "fast convergence" cycles in which the inflection point is placed halfway between the peak and trough of the sawtooth. That yields a buffer size of 3/14ths of the baseline RTT. 
On a 100ms path, 25ms gives a reasonable engineering margin on top of this rule, and is also small enough for VoIP to easily accommodate the jitter induced by a competing traffic load. Thus, in the graphs, you can see DelTiC staying consistently above 95% throughput at 100ms, and falling off relatively gracefully above that. Codel requires a path of 32ms or shorter to achieve that. Even PIE, with the same delay target as DelTiC, doesn't do as well - but that is due to its incorrect marking behaviour, which we have discussed at length before. >> As for CPU efficiency, that is indeed something to keep in mind. The scheduling logic in Cake got very complex in the end, and there are undoubtedly ways to avoid that with a fresh design. > > Ah, that was not my main focus here, with 1600 Gbps ethernet already in the horizon, I assume a shaper running out of CPU is not really avoidable, I am more interested in that shaper having a graceful latency-conserving failure mode when running out of timely CPU access. Making scheduling more efficient is something that I am fully behind, but I consider these two mostly orthogonal issues. I suppose there are two distinct meanings of "scheduling". One is deciding which packet to send next. The other is deciding WHEN the next packet can be sent. It's the latter that might be more complicated than necessary in Cake, and that complexity could easily result in exercising the kernel timer infrastructure more than required. However, I would also note that this behaviour is only seen on certain specific classes of hardware, and on that particular hardware I think there is another mechanism contributing to poor throughput. Cake's shaper architecture quite deliberately "pushes harder" when the throughput goes below the configured rate, and that manifests as higher CPU utilisation. HTB isn't as good at that. But the underlying reason may be a bottleneck in the I/O infrastructure between the CPU and the network. When a qdisc is not attached, this I/O bottleneck is bypassed. - Jonathan Morton ^ permalink raw reply [flat|nested] 14+ messages in thread
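A worked version of the sizing arithmetic above, for anyone wanting to reproduce the 3/7 and 3/14 figures: the condition is that the post-decrease window beta*(BDP+Q) still covers the baseline BDP, giving Q >= BDP*(1-beta)/beta. The halving step reflects the empirical observation about CUBIC's fast-convergence cycles described above, not something this little model derives.

    from fractions import Fraction

    def min_buffer_fraction(beta: Fraction) -> Fraction:
        """Minimum standing buffer as a fraction of the baseline BDP."""
        return (1 - beta) / beta

    reno = min_buffer_fraction(Fraction(1, 2))     # -> 1    ("buffer equal to path length")
    cubic = min_buffer_fraction(Fraction(7, 10))   # -> 3/7  (~43ms on a 100ms path)
    relaxed = cubic / 2                            # -> 3/14 (~21ms); 25ms then adds margin
    print(reno, cubic, relaxed, round(float(relaxed) * 100, 1))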
* Re: [Bloat] Comcast & L4S 2025-01-31 23:57 ` Dave Taht 2025-02-01 0:40 ` David Collier-Brown @ 2025-02-01 13:35 ` Sebastian Moeller 1 sibling, 0 replies; 14+ messages in thread From: Sebastian Moeller @ 2025-02-01 13:35 UTC (permalink / raw) To: Dave Täht; +Cc: Rich Brown via Bloat Hi Dave, > On 1. Feb 2025, at 00:57, Dave Taht <dave.taht@gmail.com> wrote: > > Here are the positives: > > For the first time, a major ISP has deployed the PIE AQM on all > traffic. Before now Comcast was only doing that on the upstream. > That´s 99.99% of all current comcast traffic getting an AQM on it. WIN. Yes, that might be the lasting positive of the L4$ experiment, but boy do I wish they had used a better AQM than their gimped version of PIE... (I seem to recall they ripped out the part of PIE that introduced some burst tolerance)... > > The L4S side being enabled will also result in some applications > actually trying to use it for cloud gaming. I own no stock in cloud gaming companies, nor do I use their services, so I am not sure whether that is something the whole internet has been waiting for. > There is a partnership > with valve, > meta, and apple, that implies that we will perhaps see some VR and AR > applications trying to use it. I look forward to a killer app. I would not hold my breath though... AR/VR has a hard enough time gaining meaningful market share with local applications (partly due to being pricey and cumbersome, I would guess), so I am not confident that adding the whole remote, latency-sensitive computation challenge on top is going to help. > Negatives include explicit marking and potential DOS vectors as often > discussed. Even more subtle: the way the L-queue is sensitive to bursts, I bet I can construct attack traffic that disrupts the low-latency/low-jitter promises of L4S for well-behaving traffic without sticking out like a sore thumb on any traffic monitoring... this thing is engineered on the principle of hopes and prayers, and on an absurd notion of "incentives" that team L4S always fudged in whichever way was convenient for a given argument. > I do feel that in order to keep up with the jonesies, Mmmh, do we really need to do this before the #L4S experiment has run its course? After all, my expectation is that it will peter out with a fizzle. > we will have to add optional l4s marking to CAKE, which should > outperform pie (mark-head), I just wish I knew what the right > level was - at 100Mbit it seemed at 2ms was best. This needs to be configurable... I would assume just like it already is in fq_codel. > We also need to > remove classic RFC3168 style marking and drop instead when the L4S bit > is present - across the entire linux and BSD ecosystem. IFF - and then like in fq_codel, where this is immensely configurable - let's not hardcode any special behaviour for ECT(1), at least not before we have solid evidence that this new ECT(1) response has staying power, no? > > There was an abortive attempt last year to get dualpi, accecn, and > prague into mainstream linux, but it stumbled over GSO handing, and > has not been resubmitted. I bet nobody really cares; all the movers and shakers of L4S development will shop their own SDKs anyway. But could you elaborate on how they stumbled over GSO? I thought cake gives a decent blueprint of how to do this (make it configurable). > ACCECN seems to be making some progress. I doubt that... my gut feeling is that reappropriating the ACE flags as an ACK counter might have some legs, but the AccECN options - I really see these as paper-ware, mostly. 
> This makes it really hard to fool with this stuff. What kind of fooling do you have in mind? By virtue of L4S defaulting to a single L-queue, all an attacker needs to be able to do to disturb it is get traffic into that queue (or even just the coupled C-queue - the joy of coupling) to cause mischief. Regards Sebastian P.S.: I really wish the laudable effort in deploying L4S - with the right things like organised plug-fests, a staged, monitored introduction and even the accompanying PR efforts - had been coupled with a better-engineered solution; then it would feel less of a "making pigs fly" exercise... > > > > > > > > On Fri, Jan 31, 2025 at 5:27 AM Sebastian Moeller via Bloat > <bloat@lists.bufferbloat.net> wrote: >> >> Hi Rich, >> >> >>> On 31. Jan 2025, at 14:20, Rich Brown via Bloat <bloat@lists.bufferbloat.net> wrote: >>> >>> Google Alerts sent me this: https://www.webpronews.com/comcasts-latency-leap-a-game-changer-in-network-performance/ >>> >>> Key quote: "Compatibility and Ecosystem: For L4S to have a significant impact, it requires an ecosystem where both the network infrastructure and the end-user devices support the standard..." >>> >>> Can anyone spell "boil the ocean"? :-) >>> >>> Or am I missing someting? >> >> Well, the whole safety mechanisms in L4$ are laughably inadequate... this "design" essentially exposes a priority scheduler* without meaningful admission control to the open internet. This is so optimistically naive that it almost is funny again. I wish all the effort and hard work to make L4$ happen, would have been put in a reasonable design... but at least I learned one of the IETF's failure modes, and that is at least something valuable ;) >> >> >> *) Just because something is not a strict preempting priority scheduler does not make it a good idea to expose it blindly... a conditional priority scheduler with e.g. L4$' weight share of 10:1 already can do a lot of harm. >> >> >>> >>> >>> _______________________________________________ >>> Bloat mailing list >>> Bloat@lists.bufferbloat.net >>> https://lists.bufferbloat.net/listinfo/bloat >> >> _______________________________________________ >> Bloat mailing list >> Bloat@lists.bufferbloat.net >> https://lists.bufferbloat.net/listinfo/bloat > > > > -- > Dave Täht CSO, LibreQos ^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2025-02-03 14:05 UTC | newest] Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2025-01-31 13:20 [Bloat] Comcast & L4S Rich Brown 2025-01-31 13:27 ` Sebastian Moeller 2025-01-31 23:57 ` Dave Taht 2025-02-01 0:40 ` David Collier-Brown 2025-02-01 0:49 ` Dave Taht 2025-02-01 14:33 ` Sebastian Moeller 2025-02-01 14:51 ` Jonathan Morton 2025-02-01 17:06 ` Sebastian Moeller 2025-02-01 17:26 ` Jonathan Morton 2025-02-01 18:05 ` Sebastian Moeller 2025-02-02 0:09 ` Jonathan Morton 2025-02-02 11:39 ` Sebastian Moeller 2025-02-03 14:04 ` Jonathan Morton 2025-02-01 13:35 ` Sebastian Moeller
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox