[LibreQoS] [tsvwg] FYI: draft-livingood-low-latency-deployment-00.txt

Dave Taht dave.taht at gmail.com
Thu Dec 1 10:53:01 EST 2022


The framing below is extremely good. The key caveats are phrases like
L4S "implemented right", and the conflation of NQB's behaviors with
L4S's other behaviors. I don't have any real issue with the NQB
codepoint itself, but...

L4S has been shown to be extremely RTT unfair, with issues in slow
start, path jitter, latecomer disadvantages, the need for "policers",
and so on. The only implementation I've seen of an L4S AQM allocates
9/10s of the bandwidth to it...

Over here, in the network neutrality section, is a pretty good set of
links to existing law in the USA: https://libreqos.io/features/
It's a mess! A big legal problem is that there don't seem to be
agreed-upon definitions of what prioritization OR deprioritization mean
when it comes to packets versus applications.

On Thu, Dec 1, 2022 at 3:36 AM Yiannis Yiakoumis <gyiakoumis at gmail.com> wrote:
>
> Hi Jason,
>
> Some comments below.
>
> Section 2: ... NQB flows tend to be limited in their capacity needs - for example a DNS lookup will not need to consume the full capacity of an end user's connection - but the end user is highly sensitive to any delays.
>
>
> There is *at least* one popular application where this is not the case: interactive video streaming of content rendered at the edge/server (in the end, a high-quality 60 FPS video stream is transmitted).
>
> This can be read as implying that there is a significant trade-off between throughput and latency, and therefore that other apps won’t be incentivized to use a low-latency service. See the comment below on user-centric access control for more context on this.
>
> Maybe remove it altogether?
>
>
> Section 3 : Network Neutrality and Low Latency Networking
>
>
> The document frames the discussion on net neutrality as “L4S is not a fast lane” and “it doesn’t affect other apps’ throughput”. This is risky and somewhat flawed. Zero-rating services had the exact same characteristics (a new capability, orthogonal to throughput) and ended up regulated or banned in both the US and Europe.
>
>
> An alternative is “implemented right, L4S is fully aligned with NN and has no impact on user choice and competition”. In case you find it useful, here is a rewrite of Section 3 that roughly captures this.
>
>
> Network Neutrality (a.k.a.  Net Neutrality, and NN hereafter) is a concept that can mean a variety of things within a country, as well as between different countries, based on different opinions, market structures, business practices, laws, and regulations.  Generally speaking, NN means that Internet Service Providers (ISPs) should not limit user choice or affect competition between application providers. In the context of the United States' marketplace, it has come to mean that ISPs should not block, throttle, or deprioritize lawful application traffic, and should not engage in paid prioritization, among other things.  The meaning of NN can be complex and ever changing, so the specific details are out of scope for this document.  Despite that, NN concerns certainly bear on the deployment of new technologies by ISPs, at least in the US, and so should be taken into account in making deployment design decisions.
>
>
> The principles below describe guidelines for a user-centric, application-agnostic, and monetizable L4S that we believe is aligned with NN frameworks and interpretations, at least in the U.S. and Europe.
>
>
> Application-Agnostic L4S
>
>
> NN demands that all applications be treated the same by ISPs. As such, any application should be able to request access to an L4S service using the available marking techniques, and the network should forward packets through an L4S queue based only on such markings, without inferring or taking into consideration which application the packets originate from.
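>
> For concreteness, here is a minimal sketch (not part of the draft text) of what "requesting access using the available marking techniques" can look like from the application side, assuming a Linux host and a UDP socket. The codepoint values are taken from the L4S spec (ECT(1), RFC 9331) and the NQB draft (DSCP 45, which may change); the point is that the network classifies on these bits alone.
>
>     # Hedged sketch: an application marks its own traffic; the network
>     # classifies it into the low-latency queue based on the marking,
>     # never on which application the packets came from.
>     import socket
>
>     ECT_1 = 0x01     # ECN field value ECT(1), the L4S identifier (RFC 9331)
>     NQB_DSCP = 45    # DSCP recommended in the NQB draft; subject to change
>
>     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>
>     # Either mark for L4S (the sender must also run a scalable congestion
>     # controller), or mark for NQB (the sender promises not to build a queue).
>     sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)            # L4S
>     # sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, NQB_DSCP << 2)  # NQB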
>
>
> User-Centric Monetization
>
>
> To incentivize L4S deployment, ISPs should be able to monetize it. This can be controversial and perceived as paid prioritization. To avoid such conflicts, providers should charge users (and not application providers) for access to L4S, and follow the common charging regimes used for best-effort services. For example, different price points may be achieved by adjusting the throughput, monthly data allowance, or maximum latency of an L4S service. Providers should not limit the number or types of applications that can access L4S, as this would conflict with the application-agnostic requirement described above.
>
>
> User-Centric Access Control
>
>
> In some cases, an end-user might want to control which applications have access to their L4S service. For example, they might have a 1GB monthly data cap for L4S, and opt to use it only for a gaming application instead of video calls. Or they may want to prevent a legacy device that uses DSCP markings from accessing L4S, as this could possibly degrade its operation. When needed, access control should be user-centric, and respect the application-agnostic requirement described above.
>
>
> Access control that depends on application signature detection in a network router would violate the application-agnostic requirement, as it would be coarse and inaccurate, and would support only the limited set of popular applications recognized by the network vendor (as explained in Section X.X). In contrast, access control that uses an Operating System’s permission capabilities is compatible with the application-agnostic requirement, as any application can request such a permission and users can manage the decision. Similarly, a home WiFi router that allows users to control which devices can access L4S provides an application-agnostic mechanism for dealing with legacy devices.
>
>
> On Thu, Dec 1, 2022 at 3:26 AM Yiannis Yiakoumis <yiannis at selfienetworks.com> wrote:
>>
>>
>>
>> ---------- Forwarded message ---------
>> From: Bob Briscoe <in at bobbriscoe.net>
>> Date: Tue, Nov 8, 2022 at 4:19 AM
>> Subject: Re: [tsvwg] FYI: draft-livingood-low-latency-deployment-00.txt
>> To: Livingood, Jason <Jason_Livingood at comcast.com>, tsvwg at ietf.org <tsvwg at ietf.org>
>>
>>
>> Jason,
>>
>> Indeed, I didn't qualify my earlier statement properly; sry about that.
>> Latency done the L4S way is not a zero sum game.
>> But Latency done the 'traditional' way by prioritizing bandwidth (e.g. as with Diffserv) is zero-sum.
>>
>>
>> Bob
>>
>>
>> On 08/11/2022 10:42, Livingood, Jason wrote:
>>
>> Thanks for the feedback, Bob! I will put it in my queue to consider for the next update. I did want to say, however, that the sentence to which you object below was taken nearly verbatim from your email to me on August 18th “Unlike bandwidth priority, low latency is not a zero sum game - everyone can have lower latency at no-one else's expense.” – so you are essentially objecting to your own text. ;-) But I’ll go back and re-read that email and this one below, as perhaps I misunderstood your August suggestion (it’s been known to happen!).
>>
>>
>>
>> Thanks as always!
>> Jason
>>
>>
>>
>> From: tsvwg <tsvwg-bounces at ietf.org> on behalf of Bob Briscoe <in at bobbriscoe.net>
>> Date: Monday, November 7, 2022 at 17:39
>> To: "Livingood, Jason" <Jason_Livingood=40comcast.com at dmarc.ietf.org>, "tsvwg at ietf.org" <tsvwg at ietf.org>
>> Subject: Re: [tsvwg] FYI: draft-livingood-low-latency-deployment-00.txt
>>
>>
>>
>> Jason,
>>
>> From my experience, many experienced network engineers (and public policy people who have made the effort to learn how QoS really works) disagree with statements like:
>>
>>    unlike with
>>    bandwidth priority on a highly/fully utilized link, low latency is
>>    not a zero sum game - everyone can potentially have lower latency at
>>    no one else's expense.
>>
>> This is because, until L4S, low latency was generally provided by giving priority to a partition of the bandwidth of a link (and policing the rate into that partition).
>>
>> With such experts (and therefore in this draft), I have found it's necessary to start an explanation with a strongly negative heads up: "L4S does /not/ provide low latency in the same way as previous technologies like Diffserv." Otherwise, they have no reason to believe they need to understand something new here, and they assume it's just working like they think Diffserv works. Then, when they read statements like that quoted above, they assume it's just political word-games, and walk away muttering "Move along, nothing new here - same old technology; same old word-games".
>>
>> This is particularly hard to understand, because the L4S DualQ /does/ use prioritization internally. The difference is that the prioritization only applies over sub-RTT timescales, while on longer timescales, it is balanced by an equal and opposite force: congestion signals that cause the sender to yield to other traffic as if it had no priority.
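>>
>> (Aside: a rough sketch of that coupling, loosely following the DualPI2 structure in RFC 9332; the PI controller and the native L4S ramp marking are omitted, and the constants are only illustrative.)
>>
>>     # Sketch: the L4S queue gets strict scheduling priority, but its
>>     # ECN-marking probability is coupled to the Classic queue's congestion
>>     # level, so L4S senders yield as if they had no priority at all.
>>     K = 2.0   # coupling factor; RFC 9332 suggests a value around 2
>>
>>     def congestion_signals(p_base):
>>         """p_base: base probability from a PI controller on queuing delay."""
>>         p_classic = p_base ** 2          # drop/mark probability, Classic queue
>>         p_l4s = min(K * p_base, 1.0)     # coupled marking probability, L queue
>>         return p_classic, p_l4s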
>>
>> Then, that gets further confused, because that back-pressure relies on congestion control behaviour that is ultimately voluntary. Then one gets into discussions about the necessity of policing, and most people have forgotten what the original question was by this stage.
>>
>> And it's even more confused, because NQB and L4S are often lumped together, and NQB /does/ work like Diffserv (because it lacks the L4S congestion signals). However, NQB is intended not to be asserted by a network operator, because it's conditional on the sender's intended behaviour, which only the sender knows (altho the network could police it).
>>
>> In summary, it's all very subtle. But I believe it will help for this draft to explain how low latency has been done in the past (in a zero-sum way), and what is different here.
>>
>>
>> Bob
>>
>> On 24/10/2022 17:13, Livingood, Jason wrote:
>>
>> FYI - may be of interest to this group. Right now, this is an individual submission. Feel free to email me 1:1 if you have feedback or wish to suggest changes/additions.
>>
>>
>>
>> Jason
>>
>>
>>
>> On 10/24/22, 12:09, "internet-drafts at ietf.org" <internet-drafts at ietf.org> wrote:
>>
>>
>>
>>
>>
>>     A new version of I-D, draft-livingood-low-latency-deployment-00.txt
>>     has been successfully submitted by Jason Livingood and posted to the
>>     IETF repository.
>>
>>     Name:           draft-livingood-low-latency-deployment
>>     Revision:       00
>>     Title:          Comcast ISP Low Latency Deployment Design Recommendations
>>     Document date:  2022-10-24
>>     Group:          Individual Submission
>>     Pages:          10
>>     URL:            https://datatracker.ietf.org/doc/draft-livingood-low-latency-deployment/
>>
>>     Abstract:
>>        The IETF's Transport Area Working Group (TSVWG) has finalized
>>        experimental RFCs for Low Latency, Low Loss, Scalable Throughput
>>        (L4S) and new Non-Queue-Building (NQB) per hop behavior.  These
>>        documents do a good job of describing a new architecture and protocol
>>        for deploying low latency networking.  But as is normal for many such
>>        standards, especially those in experimental status, certain design
>>        decisions are ultimately left to implementers.  This document
>>        explores the potential implications of key deployment design
>>        decisions and makes recommendations for those decisions that may help
>>        drive adoption.
>>
>> --
>> ________________________________________________________________
>> Bob Briscoe                               http://bobbriscoe.net/
>>
>>


-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC

