<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Feb 27, 2024 at 2:17 PM Jack Haverty via Nnagain <<a href="mailto:nnagain@lists.bufferbloat.net">nnagain@lists.bufferbloat.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><u></u>
<div>
Has any ISP or regulatory body set a standard for latency necessary
to support interactive uses?</div></blockquote><div><br></div><div>I think we can safely say no. Unfortunately, no.</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div> <br>
It seems to me that a 2+ second delay is way too high, and even if
it happens only occasionally, users may set up their systems to
assume it may happen and compensate for it by adding their own
buffering at the endpoints and thereby reduce embarrassing glitches.
Maybe this explains those long awkward pauses you commonly see when
TV interviewers are trying to have a conversation with someone at a
remote site via Zoom, Skype, et al.<br></div></blockquote><div><br></div><div>2-second delays happen more often than you'd think on "untreated" connections. I have seen fiber connections with 15 seconds of induced latency (latency due to buffering, not end-to-end distance). I have seen cable connections with 5 or 6 seconds of latency under *normal load*, all within the last year.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
<br>
In the early Internet days we assumed there would be a need for
multiple types of service, such as a "bulk transfer" and
"interactive", similar to analogs in the non-electronic transport
systems (e.g., Air Freight versus Container Ship). The "Type Of
Service" field was put in the IP header as a placeholder for such
mechanisms to be added to networks in the future,<br></div></blockquote><div><br></div><div>In many other cases, high latency is a result of buffering at *every* change in link speed. As Preseem and LibreQoS have validated, even dynamic home and last-mile RF environments benefit significantly from flow isolation, better drops and packet pacing, no matter the ToS field.</div><br class="gmail-Apple-interchange-newline"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
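</div></blockquote><div><br></div><div>For concreteness, here is a sketch of what "flow isolation, better drops and packet pacing" can look like on a Linux edge box, using the cake and fq_codel qdiscs. The interface name and shaping rate below are placeholders, not a recommendation for any particular link.</div><div>

```shell
# Illustrative only: "eth0" and the 100 Mbit rate are placeholders.

# CAKE combines per-flow isolation, AQM-driven drops, and a shaper
# that paces packets to the configured rate:
tc qdisc replace dev eth0 root cake bandwidth 100mbit

# Where the link rate isn't known, fq_codel alone still provides
# flow isolation and early drops at the point where the queue forms:
# tc qdisc replace dev eth0 root fq_codel

# Inspect backlog and drop counters:
tc -s qdisc show dev eth0
```

</div><div>Note that both qdiscs act on observed packet behavior alone; no ToS/DSCP marking is required, which is the point above.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>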
<br>
Of course if network capacity is truly unlimited there would be no
need now to provide different types of service. But these latency
numbers suggest that users' traffic demands are still sometimes
exceeding network capacities. Some of the network traffic is
associated with interactive uses, and other traffic is doing tasks
such as backups to some cloud. Treating them uniformly seems like
bad engineering as well as bad policy.<br></div></blockquote><div><br></div><div>It's not quite as simple as "traffic demands… exceeding network capacities" when you take into account dynamic link rates. Packets are either on the wire or they are not, and "capacity" is an emergent phenomenon rather than guaranteed end-to-end. Microbursts guarantee that packet rates will occasionally exceed link rates on even a high-capacity end-user connection fed by even faster core and interchange links. Treating types of traffic non-uniformly (when obeying other, voluntary, traffic- or offnet-generated signals) is susceptible to the tragedy of the commons. So far we have decent compromises, such as treating traffic according to its behavior. If it walks and quacks like a duck…</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
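</div></blockquote><div><br></div><div>A toy numeric sketch of the microburst point: even a link that is nearly idle on average still queues (and therefore delays) packets whenever an instantaneous burst exceeds the drain rate. The numbers here are invented purely for illustration.</div><div>

```shell
# Toy model: a 100-packet burst arrives at t=0 on a link that drains
# 10 packets per ms. Average utilization is low, yet a queue (and the
# delay it implies) persists for ~9 ms after the burst.
awk 'BEGIN {
  q = 0; drain = 10
  for (t = 0; t < 10; t++) {
    arrive = (t == 0 ? 100 : 0)   # the microburst
    q += arrive
    q -= (q < drain ? q : drain)  # drain up to 10 packets this ms
    printf "t=%dms queue=%d\n", t, q
  }
}'
```

</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>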
<br>
I'm still not sure whether or not "network neutrality" regulations
would preclude offering different types of service, if the technical
mechanisms even implement such functionality.<br>
<br></div></blockquote><div><br></div><div>Theoretically L4S could be a "paid add-on", so to speak, but at this point, the overall market is the primary differentiator — as an end user, I will happily spend my dollars with an ISP that serves smaller plans that have better-managed latency-under-load ("Working Latency" per Stuart Cheshire, I believe), than one that gives me gigabit or multi-gigabit that falls on its face when under load. It will take a long time before everyone has an option to choose — and to your original question, better standardized metrics are needed.</div><div><br></div><div>Our customers so far have not pressed us to productize good-latency-as-a-service; they regard it as essential to customer satisfaction and retention.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
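</div></blockquote><div><br></div><div>On the metrics point: latency-under-load is usually reported as the difference between RTT during a saturating transfer and the idle baseline. A minimal post-processing sketch, with made-up sample values standing in for real ping output:</div><div>

```shell
# Hypothetical RTT samples in ms; in practice these would come from
# ping while idle and ping during a bulk transfer.
idle="12 11 13 12 11"
loaded="180 210 195 220 205"

# median of whitespace-separated numbers on stdin
med() { tr ' ' '\n' | sort -n | awk '{a[NR]=$1} END {print a[int((NR+1)/2)]}'; }

delta=$(( $(echo $loaded | med) - $(echo $idle | med) ))
echo "${delta} ms of latency induced by load"
# prints "193 ms of latency induced by load"
```

</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>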
Jack<br></div>
Nnagain mailing list<br>
<a href="mailto:Nnagain@lists.bufferbloat.net" target="_blank">Nnagain@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/nnagain" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/nnagain</a><br>
</blockquote></div><br clear="all"><div>These are largely my opinions, not necessarily my employer's.</div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div>--</div><div>Jeremy Austin</div><div>Sr. Product Manager</div><div>Preseem | Aterlo Networks</div></div></div></div>