[Bloat] number of home routers with ingress AQM
Sebastian Moeller
moeller0 at gmx.de
Wed Apr 3 04:16:42 EDT 2019
Hi Ryan,
thanks for your insights, more below in-line.
> On Apr 3, 2019, at 01:23, Ryan Mounce <ryan at mounce.com.au> wrote:
>
> On Wed, 3 Apr 2019 at 00:05, Sebastian Moeller <moeller0 at gmx.de> wrote:
>>
>>
>>
>>> On Apr 2, 2019, at 15:15, Ryan Mounce <ryan at mounce.com.au> wrote:
>>>
>>> On Tue, 2 Apr 2019 at 22:08, Sebastian Moeller <moeller0 at gmx.de> wrote:
>>>>
>>>> I just wondered if anybody has a reasonable estimate of how many end-users actually employ fair-queueing AQMs with active ECN marking for ingress traffic @home? I am trying to understand whether the L4S approach of simply declaring these insignificant in number is justifiable.
>>>
>>> L4S people are concerned by RFC 3168 / "classic" ECN bottlenecks
>>> *without* fq.
>>
>> I know, but I believe that they misunderstand the issues resulting from post-bottleneck shaping, like ingress shaping on the remote side of the true bottleneck. The idea seems to be that sending at too high a rate is unproblematic because the AQM can simply queue up these packets and delay them accordingly. But in the ingress-shaper case these packets have already traversed the bottleneck, and one has already paid the bandwidth price to hoist them to the home; delaying or even dropping them at the AQM will not magically recover the time the packets took traversing the link.
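To put a toy number on that capacity-accounting argument (the rates below are illustrative assumptions, not measurements from this thread): every bit the ingress shaper drops has already consumed transmission time on the access link, so that time is lost no matter how the shaper reacts.

```python
def wasted_link_fraction(arrival_rate, admitted_rate, link_rate):
    """Fraction of bottleneck link time spent carrying traffic that a
    downstream ingress shaper later drops (and thus cannot recover).
    All rates in the same unit, e.g. Mbit/s."""
    return (arrival_rate - admitted_rate) / link_rate

# Example: an overshooting flow arrives at 40 Mbit/s over a 100 Mbit/s
# access link, but the home ingress shaper only admits 25 Mbit/s of it.
# The remaining 15 Mbit/s crossed the link only to be discarded.
print(wasted_link_fraction(40, 25, 100))  # 0.15 -> 15% of link time burned
```

The shaper still shields the local queue, but those 15% of link capacity are unavailable to competing flows during the overshoot.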
>
> My understanding is that with an AQM performing RFC 3168 style ECN
> marking, the L4S flows will build a standing queue within their flow
> queue before the AQM starts ECN marking.
I agree, so far so normal.
> At this point the L4S flow
> will be less responsive to marks with a linear (?) decrease rather
> than treating it like loss (multiplicative decrease), however the AQM
> will just keep on marking to keep it in check.
Again agreed, except that an AQM expecting a "classic" multiplicative-decrease response will not mark rapidly enough to slow down an L4S flow as quickly as other flows.
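A toy per-RTT simulation (my own sketch, with assumed parameters, not anything from the L4S drafts) illustrates why the same marking probability tames the two responses very differently: a flow that halves its window on a marked RTT settles at a much smaller window than one that scales back only by half the marked fraction, DCTCP-style.

```python
import random

def avg_cwnd(respond, p, rtts=20000, seed=1):
    """Average congestion window (in packets) under per-packet CE-marking
    probability p, for a given per-RTT response rule 'respond(cwnd, frac)'
    where frac is the fraction of this RTT's packets that were marked."""
    rng = random.Random(seed)
    cwnd, total = 10.0, 0.0
    for _ in range(rtts):
        pkts = int(cwnd)
        marked = sum(rng.random() < p for _ in range(pkts))
        if marked:
            cwnd = respond(cwnd, marked / pkts)
        else:
            cwnd += 1.0              # additive increase per unmarked RTT
        cwnd = max(cwnd, 1.0)
        total += cwnd
    return total / rtts

classic = lambda cwnd, frac: cwnd / 2.0              # RFC 3168: halve on a marked RTT
dctcp_like = lambda cwnd, frac: cwnd * (1 - frac / 2)  # scale with marked fraction

p = 0.02   # a marking rate the AQM chose expecting the 'classic' response
classic_avg = avg_cwnd(classic, p)
l4s_avg = avg_cwnd(dctcp_like, p)
print(f"classic avg cwnd: {classic_avg:.1f}, DCTCP-like avg cwnd: {l4s_avg:.1f}")
```

Under these assumptions the DCTCP-like flow holds a window several times larger at the same marking probability, which is exactly the under-response being discussed.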
> The fq shaper will
> continue to isolate it from other flows at the bottleneck and enforce
> fairness between flows (or in the case of an ingress shaper after the
> true bottleneck, continue to selectively apply drops/marks that
> effectively 'nudge' flows towards fairness).
Again agreed, except that the L4S flow will not respond as the AQM predicts/expects and will keep sending at a higher rate (at least the CE mark should stop the flow from increasing its TX bandwidth...), and hence will keep occupying the true bottleneck. In other words, fq isolation will work considerably worse than the rash judgment "misunderstanding the meaning of a CE mark does not matter for fq-AQMs" assumes.
>
> If there's no fq and there is RFC 3168 style ECN marking, the L4S
> flows assume that they're receiving aggressive DCTCP-style marking,
> respond less aggressively to each mark, and will starve conventional
> Reno-friendly flows.
Yes, except that in the above case effectively the same thing happens: the L4S flow still does not throttle back hard enough and will hence occupy more of the bottleneck bandwidth than intended.
> L4S people are relying on there being almost no
> such bottlenecks on the internet, and to be honest I think this is a
> fair assumption.
I disagree: a) I doubt they have a robust and reliable estimate of the number of such beasts, and b) I would like to see fewer assumptions and more measurements here ;)
> The most deployed single queue AQM may be PIE as
> mandated by DOCSIS 3.1, which had ECN marking disabled before the
> whole DualQ/L4S thing. Bob Briscoe thinks the main concern is people
> that have found old Cisco routers with RED and re-commissioned them...
Again, "thinks" and "assumes" do not fill me with warm and fuzzy feelings; I want to see cold hard data.
>
> As L4S flows would still be getting ECN marked in either case,
> there would be no dropping of packets that have already traversed the
> bottleneck in order to signal congestion. So long as there is fq to
> enforce fairness, I can't see any problem.
The problem is that fairness is enforced too late here, as the under-responsive L4S flow has already hogged more bottleneck bandwidth than intended.
>
> Sure, signalling congestion without loss doesn't mean that the packets
> haven't already spent more than their fair share of time at the
> previous bottleneck.
And that is my point.
> This is a more general problem with shaping
> ingress further downstream from the true bottleneck - and I don't see
> that it's any worse here than with any other TCP overshooting during
> slow start.
Just because ingress shaping has challenges, I see no reason to make them worse by design... The point I am making is neither against L4S in general nor against its quest for a different TCP CC with better congestion signaling; my concern is about the half-baked attempt to press a single codepoint into service where actually two codepoints (or rather an independent full bit) are required. I detest the fact that the project intends to simply argue away the issues this incomplete isolation causes instead of figuring out how to solve them properly. IMHO that means either making it possible for L4S and non-L4S flows to share TCP-friendly ECN-marking AQMs _fairly_, or finding a way to reliably distinguish the handiwork of an L4S-compliant AQM from that of a TCP-friendly one.
>
> There are certainly more real threats such as Akamai's FAST TCP.
I typically do not accept the argument that one rule violation justifies more rule violations ;)
> It's
> like BBR in that it attempts to detect and respond to congestion based
> on queuing latency,
Somebody should hand CC designers the CoDel paper and make them understand that this approach needs to be sensitive to changes in the 5-10 ms range to account for the presence of competent AQM on the path... At this point I see no real justification for pushing the same broken idea time and time again; it has never worked well, see TCP Vegas, LEDBAT, ...
> and tries to ignore low levels of random packet
> loss that don't occur due to congestion.
Yes, that is just as unfortunate a misdesign, but it is encouraging that BBRv2 actually tries to look harder at this issue and come up with a "loss model" to somehow disambiguate between different kinds of losses.
> Their implementation
> definitely presents issues for ingress shaping:
> https://forums.whirlpool.net.au/thread/2530363
This is why I call this mis-designed: the underlying model of the network in BBR obviously does not match reality, and in my world that means the model needs changing ;)
>
> I saw that thread years ago, and then eventually saw the issue for
> myself after changing ISP. Suddenly Windows Update was downloading
> from my ISP's embedded Akamai cache using FAST TCP, and was
> effectively unresponsive to cake's ingress shaping, instead filling
> the queue in my ISP's BNG.
I note that FAST TCP in all likelihood is not TCP-friendly.
>
> Fact is... ingress shaping is a hack.
That is harsh; I will give you sub-optimal and approximate, but it also works amazingly well given all the theoretical reasons why it should not.
> The real problem needs to be
> solved in the BNG, in the CMTS, in the DSLAM, etc.
But realistically it is not going to be solved in a way end-users would like, simply because the economic incentives of ISPs and end-users are not aligned.
In addition, the remote end might not even have enough information to perform the kind of shaping a user might desire; e.g., I use cake's isolation options to get approximate per-internal-IP fairness, but for IPv4 that requires access to the internal IP addresses, which are simply unknown at the upstream end.
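For the record, the kind of setup I mean looks roughly like the sketch below (interface names and the 95 Mbit/s rate are placeholders for your own link; the cake keywords `ingress`, `nat`, and `dual-dsthost` are the real ones): download traffic is redirected to an IFB device and shaped just below the true bottleneck rate, with cake consulting conntrack to recover the internal (post-NAT) IPv4 addresses for per-host fairness.

```shell
# Create an IFB device and redirect all ingress traffic from the WAN
# interface (eth0 here, adjust to taste) onto it.
ip link add name ifb4eth0 type ifb
ip link set ifb4eth0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all matchall \
    action mirred egress redirect dev ifb4eth0

# Shape on the IFB just below the real downstream rate. 'ingress' tells
# cake to account for the pre-drop arrival rate, 'nat' makes it look up
# internal addresses in conntrack, and 'dual-dsthost' then gives
# approximate per-internal-host fairness on downloads.
tc qdisc add dev ifb4eth0 root cake bandwidth 95mbit ingress nat dual-dsthost
```

This only works because the shaper sits inside the home where the NAT mapping is visible; the ISP's side of the link has no such knowledge, which is the asymmetry I am pointing at.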
> L4S is just one
> proposal and of course it deserves scrutiny before gobbling up the
> precious ECT(1) codepoint,
I could live with it eating ECT(1), as much as I prefer SCE's conceptual purity and seeming simplicity; my issue is that it gobbles up ECT(1) even though ECT(1) does not solve the problem L4S is using it for, and they know this, but try to just argue the issue away instead of at least trying to empirically quantify it.
> however it seems to have some traction with
> vendors and a chance at wide deployment with a view to address
> exactly this problem *at* the bottleneck.
Let's see how this plays out.
> For that, it also deserves
> consideration.
>
>> Why do I care? Because that ingress-shaping setup is what I use at home, and I have zero confidence that ISPs will come up with a solution to my latency desires that I am going to be happy with... And what I see from L4S mixes light with a lot of shadow.
>>
>>
>>> I don't think there would be any such ingress shapers
>>> configured on home gateways. Certainly not by anyone on this list...
>>> anyone running non-fq codel or flowblind cake for ingress shaping?
>>
>> As stated above, I believe fq to not be a reliable safety valve for the ingress shaping case.
>
> -Ryan