[Ecn-sane] [tsvwg] ECN CE that was ECT(0) incorrectly classified as L4S
Sebastian Moeller
moeller0 at gmx.de
Mon Aug 5 09:47:34 EDT 2019
Hello Ruediger,
> On Aug 5, 2019, at 13:47, <Ruediger.Geib at telekom.de> <Ruediger.Geib at telekom.de> wrote:
>
> Hi Sebastian,
>
> thanks. Three more remarks:
>
> I'm happy for any AQM design which comes at low implementation cost and allows me to add value to network operation (be it saving cost, be it enabling value added services). And I think the representatives of other operators are so too.
Yes, that is my premise too. The only cost-saving opportunity I can see in both proposals would be if they allowed running a fully saturated network without the adverse effects on latency and loss. Value-added services, sure, for the few users sensitive to latency under load. Maybe with the PR the 5G roll-out is getting in regards to low latency, it might be possible to convince more consumers that this is actually valuable?
>
> For most consumers, streaming is the most bandwidth hungry application.
I have no statistical numbers on this topic, but on the list of things that cause issues for latency sensitive home networks the following items come up repeatedly:
A) Streaming in (Youtube, Netflix, Amazon, ...)
B) Streaming out (aka twitch and friends)
C) File sharing with bittorrent (a slight challenge for FQ-AQMs due to lots of parallel flows and a misdesigned back-off mechanism (react to 100ms induced latency under load?))
D) OS updates (especially Windows Update, which, when leveraging P2P technology from a close-by CDN, was/is notorious for being a tad too aggressive)
How these stack up proportionally to each other, I have no clue. Typically the reports are that perceived interactivity in FPS games goes down the drain if any combination of A-D is concurrently active.
> I think someone now working for Google published research, that at the time when Internet access bandwidth no longer had an impact on the streaming quality, consumers started to lose interest in "access speed" as an important measure of quality of their Internet access. I think it takes a while until John Doe requires an n*100 Mbit/s Internet access, because any access below 100 Mbit/s causes congestion for the services consumed by John.
That is a good point. As far as I can see, ISPs in Germany still seem to leverage access rates as their main attraction (giga this and giga that), even though, as you note, higher rates have diminishing returns for most use cases.
>
> I see many people around me conveniently use their smartphone to access the Internet.
So do I, but I realize how laggy this feels even for "simple" browsing duty (though I accept that for the ease of use and immediacy)... And it is not guaranteed that the smartphone uses the mobile network; it might just as well use wifi. But the latency/bottleneck issue also exists with smartphones, where the variable-bandwidth nature of the radios and the opaqueness of 2-4G modems make things even less enjoyable than on fixed networks (I see multi-second stalls when browsing on a phone, versus 300ms on the fixed line without my AQM).
> A handheld display likely requires less bandwidth for an acceptable display quality, than a large screen.
For browsing, I am not sure that is a real point, given that smartphone display resolution crossed from high to ridiculous some years ago (at least without glasses).
> That doesn't mean the latter disappear. But maybe only one or two of them need to be served per consumer access. That will work with 100 Mbit/s or less for a while.
I agree. I run a family of 5 on a 50/10 link, including concurrent streaming (in SD), and thanks to employing a competent FQ-AQM on my router (ingress & egress) this works quite well even with interactive sessions. But without that AQM system the link feels noticeably worse... (and this is the reason why I want to see data that L4S senders will not invalidate the effectiveness of my setup)
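For reference, the kind of ingress/egress setup that sqm-scripts automates can be sketched with plain tc. The interface names (eth0, ifb4eth0) and rates here are illustrative assumptions for a 50/10 link, not a drop-in config:

```shell
# Egress: shape uploads slightly below the contracted 10 Mbit/s so the
# queue forms here, under cake's control, rather than in the modem/BNG.
tc qdisc replace dev eth0 root cake bandwidth 9500kbit besteffort

# Ingress: redirect incoming traffic through an IFB device and shape it
# slightly below the contracted 50 Mbit/s, pulling the downstream
# bottleneck queue onto the router where cake can manage it.
ip link add ifb4eth0 type ifb
ip link set ifb4eth0 up
tc qdisc replace dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all matchall \
    action mirred egress redirect dev ifb4eth0
tc qdisc replace dev ifb4eth0 root cake bandwidth 47500kbit besteffort
```

Shaving a few percent off the contracted rates is the usual trade: a little bandwidth in exchange for keeping the queue on the box that runs the AQM.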
Best Regards
Sebastian
> (other bandwidth hungry applications will arrive some day; I prefer the copper access lines in the ground to be replaced by fiber ones)
>
> Regards, Ruediger
>
> -----Ursprüngliche Nachricht-----
> Von: Sebastian Moeller <moeller0 at gmx.de>
> Gesendet: Montag, 5. August 2019 13:00
> An: Geib, Rüdiger <Ruediger.Geib at telekom.de>
> Cc: tcpm at ietf.org; ECN-Sane <ecn-sane at lists.bufferbloat.net>; tsvwg at ietf.org
> Betreff: Re: [Ecn-sane] [tsvwg] ECN CE that was ECT(0) incorrectly classified as L4S
>
> Hi Ruediger,
>
>
>> On Aug 5, 2019, at 09:26, <Ruediger.Geib at telekom.de> <Ruediger.Geib at telekom.de> wrote:
>>
>> Hi Sebastian,
>>
>> the access link is the bottleneck, that's what's to be expected.
>
> Mostly, though there are situations with 1 Gbps plans where it is not the actual access link but rather the CPE's Gigabit Ethernet LAN ports that are the true bottleneck. That does not substantially change the issue, however; it is still the upstream shaper/policer that needs to be worked around.
>
> As far as I know, in the operator world shapers have by and large replaced policers.
>
> Good to know, shapers are somewhat nicer to user traffic than hard policers, at least that is my interpretation.
>
>>
> A consecutive chain of narrower links results if the Home Gateway runs an additional ingress or egress shaper operating below the access bandwidth, if I get you right.
>
> Yes, as you state below, this is only true for the ingress direction; egress shaping works reliably and typically does not suffer from this. That said, if the egress link bandwidth is larger than a server's connection, this issue can also appear in the egress direction. For example, overly hot peering/transit links can cause downstream bottlenecks considerably narrower than the internet access link's upload direction, but that, while unfortunate, is not at the core of my issue.
>
>>
>> I understand that you aren't interested in having 300ms buffer delay and may be some jitter for a phone conversation using best effort transport.
>
> +1
>
>> A main driver for changes in consumer IP access features in Germany are publications of journals and regulators comparing IP access performance of different providers.
>
> Good to know,
>
>> Should one provider have an advantage over the others by deploying a solution as you (and Bob's team) work on, it likely will be generally deployed.
>
> I do not believe that these mechanisms are actually in play in the German market. As an example, for roughly a decade the DOCSIS ISPs have offered higher bandwidth for the same or less money than the incumbent telco, and yet they only managed to win ~30% of the customers among the ~75% of households they can reach, so only 0.75 * 0.3 = 22.5% market share, with the incumbent only reaching 250/40 for the masses while the DOCSIS ISPs offer 1000/50. And unlike latency, bandwidth (or rather rate) is a number that consumers understand intuitively.
> If anything will expedite the roll-out of L4S-style AQMs, it is the capability to use them to implement the "special services" that the EU net neutrality regulation explicitly allows, as that is a product that can actually be sold to customers. But I might be too pessimistic here.
>
>>
>> As far as I can see, latency aware consumers still are a minority and gamers seem to be a big group belonging here. Interest in well performing gaming seems to be growing, I guess (for me at least it's an impression rather than a clear trend).
>
> Put that way, I can see how ISPs could distinguish themselves from the rest by being gaming-friendly, but unless this results in gamers paying more, I fail to see the business case that management probably needs before green-lighting the funds required to implement this. This is where CableLabs' approach of mandating this in the specs is brilliant.
>
>>
>> I'd personally prefer an easy to deploy and operate standard solution offering Best Effort based transport being TCP friendly and at the same time congestion free for other flows at a BNG for traffic in access direction (and for similar devices in other architectures of course).
>>
>> Fighting bufferbloat in the upstream direction the way you describe it doesn't construct a chain of links which are consecutively narrower than the bottleneck link, I think.
>
> Yes, fully agreed. That said, an ISP's CPE should implement an AQM to really solve the latency issues for end-users. The initial L4S paper side-stepped that requirement by making sure the uplinks were not saturated during the tests, and states that this needs a real solution for a proper roll-out. In theory the ISP could do the uplink shaping on its end (and ISPs do this already to constrain users to their contracted rates), but as in the downstream case, running an AQM in front of a bottleneck, as opposed to behind it, makes everything much easier. Also, with uplinks typically << downlinks, the typically weak CPE CPUs will still be able to AQM the uplink, nicely distributing that computational load away from the BNG/BRAS big iron...
>
>
> Best Regards
> Sebastian
>
>>
>> Regards,
>>
>> Ruediger
>>
>>
>>
>>
>>
>> -----Ursprüngliche Nachricht-----
>> Von: Sebastian Moeller <moeller0 at gmx.de>
>> Gesendet: Freitag, 2. August 2019 15:15
>> An: Geib, Rüdiger <Ruediger.Geib at telekom.de>
>> Cc: Jonathan Morton <chromatix99 at gmail.com>; tcpm at ietf.org; ECN-Sane <ecn-sane at lists.bufferbloat.net>; tsvwg at ietf.org
>> Betreff: Re: [Ecn-sane] [tsvwg] ECN CE that was ECT(0) incorrectly classified as L4S
>>
>> Hi Ruediger,
>>
>>
>>
>>> On Aug 2, 2019, at 10:29, <Ruediger.Geib at telekom.de> <Ruediger.Geib at telekom.de> wrote:
>>>
>>> Hi Jonathan,
>>>
>>> could you provide a real world example of links which are consecutively narrower than sender access links?
>>
>> Just an example from a network you might be comfortable with: in DTAG's internet access network there typically are traffic-limiting elements at the BNGs (or at the BRAS for the legacy network). I am not 100% sure whether these are implemented as policers or shapers, but they tended to come with >= 300ms of buffering. Recently, the BNG/BRAS traffic shapers have started to use the message field of the PPPoE Auth ACK to transfer information about the TCP/IPv4 goodput end-users can expect on their link as a consequence of the BNG/BRAS's traffic limiter. In DOCSIS and GPON networks the traffic shaper seems mandated by the standards; in DSL networks it seems optional (but even without a shaper the limited bandwidth of the access link would be a natural traffic choke point).
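As an aside, the gap between a shaped rate and the signalled TCP/IPv4 goodput is just framing overhead. A back-of-the-envelope sketch (my own overhead assumptions for an Ethernet/PPPoE link, not DTAG's actual formula):

```python
def tcp_ipv4_goodput(shaped_mbps, mtu=1492, per_packet_overhead=26):
    """Estimate TCP/IPv4 goodput on a PPPoE link.

    mtu: IP MTU on a PPPoE link (1500 - 8 bytes PPPoE header).
    per_packet_overhead: bytes per full-size packet that the shaper counts
    on top of the IP packet (PPPoE + Ethernet framing here; an assumption,
    a real shaper may account at a different layer).
    """
    mss = mtu - 40                      # IPv4 + TCP headers, no options
    wire = mtu + per_packet_overhead    # bytes consumed per full-size packet
    return shaped_mbps * mss / wire

# A 50 Mbit/s shaped link yields roughly 47.8 Mbit/s of TCP payload.
print(round(tcp_ipv4_goodput(50), 1))
```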
>> Fritzbox home routers now use this information to automatically set egress (and, I believe, also ingress) traffic shaping on the CPE to reduce the bufferbloat users experience. I have no insight into what Telekom's own Speedport routers do, but I would not be surprised if they did the same (at least for egress).
>> As Jonathan and Dave mentioned, quite a number of end-users, especially the latency-sensitive ones, employ their own ingress and egress traffic shapers on their home routers, as the 300ms buffers of the BNGs are just not acceptable for any real-timish uses (VoIP, on-line twitch gaming; even for interactive sessions like ssh, 300ms of delay is undesirable). E.g. personally, I use an OpenWrt router with an FQ-AQM for both ingress and egress (based on Jonathan's excellent cake qdisc) that allows a family of 5 to happily share a 50/10 connection between video streaming and interactive use with very little interference between the users; the same link without the FQ-AQM active makes interactive applications feel as if submerged in molasses once the link gets saturated...
>> As far as I can tell there are a number of different solutions that offer home-router-based bi-directional traffic shaping to solve bufferbloat from home (well, not fully solve it, but remedy its consequences), including commercial options like evenroute's iqrouter and open-source options like OpenWrt (with sqm-scripts as the shaper package).
>> It is exactly this use case, and the fact that latency-sensitive users often opt for this solution, that causes me to ask the L4S crowd to actually measure the effect of L4S on RFC3168 FQ-AQMs in the exact configuration in which they are actually used today to remedy the same issue L4S wants to tackle.
>>
>> Best Regards
>> Sebastian
>>
>>
>>>
>>> I could figure out a small campus network which has a bottleneck at the Internet access and a second one connecting the terminal equipment. But in a small campus network, the individual terminal could very well have a higher LAN access bandwidth, than the campus - Internet connection (and then there's only one bottleneck again).
>>>
>>> There may be a tradeoff between simplicity and general applicability. Awareness of that tradeoff is important. To me, simplicity is the design aim.
>>>
>>> Regards,
>>>
>>> Ruediger
>>>
>>> -----Ursprüngliche Nachricht-----
>>> Von: tsvwg <tsvwg-bounces at ietf.org> Im Auftrag von Jonathan Morton
>>> Gesendet: Dienstag, 9. Juli 2019 17:41
>>> An: Bob Briscoe <ietf at bobbriscoe.net>
>>> Cc: tcpm IETF list <tcpm at ietf.org>; ecn-sane at lists.bufferbloat.net; tsvwg IETF list <tsvwg at ietf.org>
>>> Betreff: Re: [tsvwg] [Ecn-sane] ECN CE that was ECT(0) incorrectly classified as L4S
>>>
>>>> On 13 Jun, 2019, at 7:48 pm, Bob Briscoe <ietf at bobbriscoe.net> wrote:
>>>>
>>>> 1. It is quite unusual to experience queuing at more than one
>>>> bottleneck on the same path (the available capacities have to
>>>> be identical).
>>>
>>> Following up on David Black's comments, I'd just like to note that the above is not the true criterion for multiple sequential queuing.
>>>
>>> Many existing TCP senders are unpaced (aside from ack-clocking), including FreeBSD, resulting in potentially large line-rate bursts at the origin - especially during slow-start. Even in congestion avoidance, each ack will trigger a closely spaced packet pair (or sometimes a triplet). It is then easy to imagine, or to build a testbed containing, an arbitrarily long sequence of consecutively narrower links; upon entering each, the burst of packets will briefly collect in a queue and then be paced out at the new rate.
>>>
>>> TCP pacing does largely eliminate these bursts when implemented correctly. However, Linux's pacing and IW are specifically (and apparently deliberately) set up to issue a 10-packet line-rate burst on startup. This effect has shown up in SCE tests to the point where we had to patch this behaviour out of the sending kernel to prevent an instant exit from slow-start.
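Jonathan's point about a burst re-queuing at each consecutively narrower link can be sketched with a toy FIFO model. The rates (packets per millisecond) and burst size below are purely illustrative, not measurements of any real stack:

```python
def pace_burst(arrivals, rate):
    """Return FIFO departure times (ms) for packets arriving at
    `arrivals` on a link serving `rate` packets per ms."""
    service = 1.0 / rate
    departures, free_at = [], 0.0
    for t in arrivals:
        free_at = max(t, free_at) + service  # wait for the link, then serve
        departures.append(free_at)
    return departures

def peak_queue(arrivals, departures):
    """Maximum number of packets simultaneously held by the link."""
    events = sorted([(t, 1) for t in arrivals] + [(t, -1) for t in departures])
    q = peak = 0
    for _, delta in events:
        q += delta
        peak = max(peak, q)
    return peak

# A 10-packet back-to-back burst (100 pkt/ms at the origin) traverses
# links of 10, then 5, then 2 pkt/ms. Each link paces the burst out at
# its own rate, yet the next, slower link still sees a burst and briefly
# queues it, even though no link is oversubscribed on average.
arrivals = [i * 0.01 for i in range(10)]
for rate in (10, 5, 2):
    departures = pace_burst(arrivals, rate)
    print(rate, peak_queue(arrivals, departures))
    arrivals = departures  # the paced output becomes the next link's input
```

The peak queue stays well above one packet at every hop, which is exactly the multi-bottleneck queuing the quoted criterion rules out.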
>>>
>>> - Jonathan Morton
>>>
>>> _______________________________________________
>>> Ecn-sane mailing list
>>> Ecn-sane at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/ecn-sane
>>
>