On Tue, Feb 27, 2024 at 2:17 PM Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net> wrote:
Has any ISP or regulatory body set a standard for latency necessary to support interactive uses?

I think we can safely say no. Unfortunately, no.
  
It seems to me that a 2+ second delay is way too high, and even if it happens only occasionally, users may set up their systems to assume it may happen and compensate for it by adding their own buffering at the endpoints, thereby reducing embarrassing glitches.  Maybe this explains those long awkward pauses you commonly see when TV interviewers are trying to have a conversation with someone at a remote site via Zoom, Skype, et al.

Two-second delays happen more often than you'd think on "untreated" connections. I have seen fiber connections with 15 seconds of induced latency (latency due to buffering, not end-to-end distance). I have seen cable connections with 5 or 6 seconds of latency under *normal load*. This is in the last year alone.
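For a sense of scale, the arithmetic is straightforward: induced latency is just buffer depth divided by the drain rate of the bottleneck link. A quick sketch, with illustrative numbers rather than measurements from any particular network:

    # Induced latency from an oversized buffer: time to drain the backlog
    # at the bottleneck rate. Numbers are illustrative only.
    def induced_latency_s(buffer_bytes: int, drain_rate_bps: float) -> float:
        return buffer_bytes * 8 / drain_rate_bps

    # A 64 MB buffer ahead of a 50 Mbps bottleneck:
    print(induced_latency_s(64 * 2**20, 50e6))  # ~10.7 seconds
    # The same buffer ahead of a 1 Gbps link:
    print(induced_latency_s(64 * 2**20, 1e9))   # ~0.54 seconds

The same buffer that is merely annoying on a fast link is crippling on a slow one.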
 

In the early Internet days we assumed there would be a need for multiple types of service, such as "bulk transfer" and "interactive", similar to analogs in non-electronic transport systems (e.g., Air Freight versus Container Ship).  The "Type Of Service" field was put in the IP header as a placeholder for such mechanisms to be added to networks in the future.

In many other cases, high latency is the result of buffering at *every* change in link speed. As Preseem and LibreQoS have validated, even dynamic home and last-mile RF environments benefit significantly from flow isolation, better drops, and packet pacing, regardless of the ToS field.
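To make "flow isolation" concrete, here is a toy sketch in Python of the general idea (not the actual fq_codel/CAKE/LibreQoS implementation): per-flow queues served round-robin, so a sparse interactive flow is never stuck behind a bulk flow's backlog in a shared FIFO.

    # Toy flow isolation: per-flow queues, dequeued round-robin.
    from collections import deque, defaultdict

    class FlowIsolatingQueue:
        def __init__(self):
            self.queues = defaultdict(deque)   # flow_id -> queued packets
            self.order = deque()               # round-robin order of active flows

        def enqueue(self, flow_id, packet):
            if not self.queues[flow_id]:
                self.order.append(flow_id)     # flow becomes active
            self.queues[flow_id].append(packet)

        def dequeue(self):
            while self.order:
                flow_id = self.order.popleft()
                q = self.queues[flow_id]
                packet = q.popleft()
                if q:                          # still backlogged: go to the back
                    self.order.append(flow_id)
                return packet
            return None

    # A bulk flow with 1000 queued packets and one VoIP packet behind it:
    q = FlowIsolatingQueue()
    for i in range(1000):
        q.enqueue("bulk", f"bulk-{i}")
    q.enqueue("voip", "voip-0")
    print(q.dequeue(), q.dequeue())  # bulk-0 voip-0 -- the VoIP packet waits
                                     # one packet time, not a thousand.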

 

Of course, if network capacity were truly unlimited there would be no need now to provide different types of service.  But these latency numbers suggest that users' traffic demands still sometimes exceed network capacities.  Some of the network traffic is associated with interactive uses, and other traffic is doing tasks such as backups to some cloud.  Treating them uniformly seems like bad engineering as well as bad policy.

It's not quite as simple as "traffic demands… exceeding network capacities" once you take dynamic link rates into account. Packets are either on the wire or they are not, and "capacity" is an emergent phenomenon rather than something guaranteed end-to-end. Microbursts guarantee that packet rates will occasionally exceed link rates even on a high-capacity end-user connection fed by still faster core and interchange links. Treating types of traffic non-uniformly, when it depends on voluntary signals set by the traffic source or by off-net parties, is susceptible to the tragedy of the commons. So far we have decent compromises, such as treating traffic according to its behavior. If it walks and quacks like a duck…
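To illustrate the microburst point with hypothetical numbers: even a 1 Gbps access link fed by a 10 Gbps core sees a transient queue whenever a burst arrives at the core's line rate.

    # Hypothetical numbers; the shape of the result is what matters.
    burst_packets = 64
    packet_bytes = 1500
    core_rate_bps = 10e9        # upstream feed
    access_rate_bps = 1e9       # end-user link

    burst_bits = burst_packets * packet_bytes * 8       # 768,000 bits
    arrival_s = burst_bits / core_rate_bps              # ~77 us to arrive
    drained_bits = access_rate_bps * arrival_s          # drained while arriving
    peak_backlog_bits = burst_bits - drained_bits
    print(peak_backlog_bits / access_rate_bps * 1e6)    # ~691 us of queue

The access link is "fast enough" on average, yet a queue still forms; whether that queue stays at microseconds or grows into the seconds described above comes down to how the buffer is managed.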
 

I'm still not sure whether or not "network neutrality" regulations would preclude offering different types of service, if the technical mechanisms even implement such functionality.


Theoretically L4S could be a "paid add-on," so to speak, but at this point the overall market is the primary differentiator. As an end user, I will happily spend my dollars with an ISP that sells smaller plans with well-managed latency under load ("working latency," per Stuart Cheshire, I believe) rather than one that gives me gigabit or multi-gigabit service that falls on its face under load. It will take a long time before everyone has that option to choose, and, to your original question, better standardized metrics are needed.
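As a rough sketch of what such a metric might report (my framing of a working-latency-style measurement, not any standard's definition; the sample values are made up): RTT percentiles taken while the link is saturated, rather than idle ping times.

    import statistics

    # Made-up samples: idle RTTs vs. RTTs measured during a saturating upload.
    idle_rtts_ms = [12, 13, 12, 14, 13]
    loaded_rtts_ms = [35, 60, 75, 180, 420, 950, 1300, 2100]

    def working_latency_ms(samples, percentile=0.95):
        # Report a high percentile of latency under load, not the best case.
        ordered = sorted(samples)
        idx = min(len(ordered) - 1, int(percentile * len(ordered)))
        return ordered[idx]

    print(statistics.median(idle_rtts_ms))     # 13 -- looks great on a speed test
    print(working_latency_ms(loaded_rtts_ms))  # 2100 -- what a video call feels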

Our customers so far have not pressed us to productize good-latency-as-a-service; they regard it as essential to customer satisfaction and retention.

Jack

These are largely my opinions, not necessarily my employer's.
--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks