Network Neutrality is back! Let´s make the technical aspects heard this time!
* [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
@ 2024-02-26 15:06 Dave Taht
  2024-02-26 19:24 ` Jack Haverty
  2024-02-26 20:02 ` [NNagain] [M-Lab-Discuss] " rjmcmahon
  0 siblings, 2 replies; 15+ messages in thread
From: Dave Taht @ 2024-02-26 15:06 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!,
	Dave Taht via Starlink, Rpm, discuss,
	National Broadband Mapping Coalition

And...

Our bufferbloat.net submittal was cited multiple times! Thank you all
for participating in that process!

https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf

It is a long read, and it still starts off on the wrong foot (IMHO),
in particular by not understanding the difference between idle and
working latency.

It is my hope that by widening awareness of the real problems with
latency under load among policymakers and other submitters downstream
from this new FCC document, and by getting more of them to read what
we had to say, we will begin to make serious progress towards finally
fixing bufferbloat in the USA.

I do keep hoping that somewhere along the way, the costs of IPv4
address exhaustion and the IPv6 transition will also get raised to
the national level. [1]

We are still collecting signatures for what the bufferbloat project
members wrote, and have 1200 bucks in the kitty for further articles
and/or publicity. Thoughts appreciated as to where we can go next with
shifting the national debate about bandwidth in a better direction!
Next up, I think, would be trying to get a meeting and to make an ex
parte filing, and I wish we could do a live demonstration of it on
television as well as Feynman did here:

https://www.youtube.com/watch?v=raMmRKGkGD4

Our original posting is here:
https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit

Larry's wonderful post is here:
https://circleid.com/posts/20231211-its-the-latency-fcc

[1] How can we get more talking about IPv4 and IPv6, too? Will we have
to wait another year?

https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/

-- 
https://blog.cerowrt.org/post/2024_predictions/
Dave Täht CSO, LibreQos

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-26 15:06 [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out! Dave Taht
@ 2024-02-26 19:24 ` Jack Haverty
  2024-02-26 20:02 ` [NNagain] [M-Lab-Discuss] " rjmcmahon
  1 sibling, 0 replies; 15+ messages in thread
From: Jack Haverty @ 2024-02-26 19:24 UTC (permalink / raw)
  To: nnagain


[-- Attachment #1.1.1.1: Type: text/plain, Size: 1795 bytes --]

On 2/26/24 07:06, Dave Taht via Nnagain wrote:
> I wish we could do a live demonstration on
> television

It's been years, but when I talked with non-techie people to help them 
understand the difference between bandwidth and latency I used a 
familiar human-human task to get the ideas across.  E.g., take a group 
of people at the end of a work session, who are trying to reach a 
consensus about where to go for lunch.  When everyone's sitting around a 
table, a quick discussion can happen and a decision reached.   Then make 
them all go to separate offices scattered around the building and 
achieve a similar consensus, communicating only by sending short notes 
to each other carried by volunteer couriers (or perhaps 140-character 
SMS texts).   The difference in "latency" and its effect on the time it 
takes to finish the task becomes clear quickly.

It seems like one could orchestrate a similar demonstration targeted 
toward a non-technical audience -- e.g., members of some government 
committee meeting in a room versus the same meeting with participants 
scattered across the building and interactions performed by staffers 
running around as "the network".   Even if the staffers can convey huge 
stacks of information (high bandwidth), the time it takes for them to 
get from one member to others (latency) quickly becomes the primary 
constraint.

This was also a good way to illustrate to non-techies how web pages 
work, and why it sometimes takes a long time to get the entire page 
loaded.   Go to the library to get the text.   Now go to the art 
department to get the banners.   Now go to the photo archives to get the 
pictures.   Now go to that customer to get the ads you promised to 
show.  .....

Jack Haverty


[-- Attachment #1.1.1.2: Type: text/html, Size: 2237 bytes --]

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-26 15:06 [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out! Dave Taht
  2024-02-26 19:24 ` Jack Haverty
@ 2024-02-26 20:02 ` rjmcmahon
  2024-02-26 20:59   ` Jack Haverty
  2024-02-28 19:11   ` Fenwick Mckelvey
  1 sibling, 2 replies; 15+ messages in thread
From: rjmcmahon @ 2024-02-26 20:02 UTC (permalink / raw)
  To: Dave Taht
  Cc: Network Neutrality is back! Let´s make the technical
	aspects heard this time!,
	Dave Taht via Starlink, Rpm, discuss,
	National Broadband Mapping Coalition

Thanks for sharing this. I'm trying to find out what key metrics will
be used for this monitoring. I want to make sure iperf 2 can cover the
technical, traffic-related ones that make sense to a skilled network
operator, including a WiFi BSS manager. I didn't read all 327 pages,
though; from what I did read, I didn't see anything obvious. I assume
these types of KPIs may be in reference docs or something.

Thanks in advance for any help on this.
Bob
> And...
> 
> Our bufferbloat.net submittal was cited multiple times! Thank you all
> for participating in that process!
> 
> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
> 
> It is a long read, and does still start off on the wrong feet (IMHO),
> in particular not understanding the difference between idle and
> working latency.
> 
> It is my hope that by widening awareness of more of the real problems
> with latency under load to policymakers and other submitters
> downstream from this new FCC document, and more reading what we had to
> say, that we will begin to make serious progress towards finally
> fixing bufferbloat in the USA.
> 
> I do keep hoping that somewhere along the way in the future, the costs
> of IPv4 address exhaustion and the IPv6 transition, will also get
> raised to the national level. [1]
> 
> We are still collecting signatures for what the bufferbloat project
> members wrote, and have 1200 bucks in the kitty for further articles
> and/or publicity. Thoughts appreciated as to where we can go next with
> shifting the national debate about bandwidth in a better direction!
> Next up would be trying to get a meeting, and to do an ex-parte
> filing, I think, and I wish we could do a live demonstration on
> television about it as good as feynman did here:
> 
> https://www.youtube.com/watch?v=raMmRKGkGD4
> 
> Our original posting is here:
> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit
> 
> Larry's wonderful post is here:
> https://circleid.com/posts/20231211-its-the-latency-fcc
> 
> [1] How can we get more talking about IPv4 and IPv6, too? Will we have
> to wait another year?
> 
> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/
> 
> --
> https://blog.cerowrt.org/post/2024_predictions/
> Dave Täht CSO, LibreQos

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-26 20:02 ` [NNagain] [M-Lab-Discuss] " rjmcmahon
@ 2024-02-26 20:59   ` Jack Haverty
  2024-02-27  0:25     ` rjmcmahon
  2024-02-28 19:11   ` Fenwick Mckelvey
  1 sibling, 1 reply; 15+ messages in thread
From: Jack Haverty @ 2024-02-26 20:59 UTC (permalink / raw)
  To: nnagain


[-- Attachment #1.1.1.1: Type: text/plain, Size: 4639 bytes --]

I didn't study the whole report, but I didn't notice any metrics 
associated with *variance* of latency or bandwidth.  It's common for 
vendors to play games ("Lies, damn lies, and statistics!") to make their 
metrics look good.   A metric of latency that says something like "99%
less than N milliseconds" doesn't necessarily translate into
acceptable user performance.
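
To make that concrete, here is a toy Python sketch with invented
numbers (not measurements of any real network): a stream of requests
that is fast more than 99% of the time but stalls for several seconds
a few times an hour. The percentile summary looks fine; the max and
the spread are what the user actually notices.

import random, statistics

random.seed(1)

# Synthetic per-request latencies (ms): mostly ~20 ms, plus a handful
# of multi-second stalls, roughly "a few times per hour".
samples = [random.gauss(20, 5) for _ in range(3600)]
for i in random.sample(range(3600), 5):
    samples[i] = random.uniform(5000, 15000)   # 5-15 second stalls

samples.sort()
p99 = samples[int(0.99 * len(samples)) - 1]

print(f"mean     = {statistics.mean(samples):8.1f} ms")
print(f"99th pct = {p99:8.1f} ms")
print(f"max      = {max(samples):8.1f} ms")
print(f"stdev    = {statistics.stdev(samples):8.1f} ms")

The 99th percentile comes out around 30 ms, while the maximum and the
standard deviation expose the multi-second stalls the vendor summary
would hide.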

It's also important to look at the specific techniques used for taking 
measurements.  For example, if a measurement is performed every fifteen 
minutes, extrapolating that metric as representative of all the time
between measurements can lead to judgements that don't reflect what
the user actually experiences.
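
A similarly artificial Python sketch of the sampling problem: a probe
that measures once every fifteen minutes can sit right between two
short congestion events and report that nothing ever went wrong. The
timeline below is invented for illustration only.

# One hour of per-second latencies (ms), with two short 30-second
# congestion events.
latency = [20.0] * 3600
for start in (600, 2400):
    for t in range(start, start + 30):
        latency[t] = 3000.0          # 3 seconds of delay during the event

worst_continuous = max(latency)      # what a user could actually hit
probe = latency[::900]               # a probe sampling every 15 minutes
worst_sampled = max(probe)

print("worst latency experienced:   ", worst_continuous, "ms")
print("worst latency the probe saw: ", worst_sampled, "ms")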

In addition, there's a lot of mechanism between the ISPs' handling of 
datagrams and the end-user.   The users' experience is affected by how 
all of that mechanism interacts as underlying network behavior changes.  
When a TCP running in some host decides it needs to retransmit, or an 
interactive audio/video session discards datagrams because they arrive 
too late to be useful, the user sees unacceptable performance even 
though the network operators may think everything is running fine.   
Measurements from the end-users' perspective might indicate performance 
is quite different from what measurements at the ISP level suggest.

Gamers are especially sensitive to variance, but it also matters for
interactive uses such as telemedicine or remote operations.  A few
years ago I helped a friend do some tests for a gaming situation and
we discovered that the average latency was reasonably low, but
occasionally, perhaps a few times per hour, latency would increase to
tens of seconds.

In a game, that often means the player loses.  In a remote surgery it 
may mean horrendous outcomes.  As more functionality is performed "in 
the cloud" such situations will become increasingly common.

Jack Haverty


On 2/26/24 12:02, rjmcmahon via Nnagain wrote:
> Thanks for sharing this. I'm trying to find out what are the key 
> metrics that will be used for this monitoring. I want to make sure 
> iperf 2 can cover the technical, traffic related ones that make sense 
> to a skilled network operator, including a WiFi BSS manager. I didn't 
> read all 327 pages though, from what I did read, I didn't see anything 
> obvious. I assume these types of KPIs may be in reference docs or 
> something.
>
> Thanks in advance for any help on this.
> Bob
>> And...
>>
>> Our bufferbloat.net submittal was cited multiple times! Thank you all
>> for participating in that process!
>>
>> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
>>
>> It is a long read, and does still start off on the wrong feet (IMHO),
>> in particular not understanding the difference between idle and
>> working latency.
>>
>> It is my hope that by widening awareness of more of the real problems
>> with latency under load to policymakers and other submitters
>> downstream from this new FCC document, and more reading what we had to
>> say, that we will begin to make serious progress towards finally
>> fixing bufferbloat in the USA.
>>
>> I do keep hoping that somewhere along the way in the future, the costs
>> of IPv4 address exhaustion and the IPv6 transition, will also get
>> raised to the national level. [1]
>>
>> We are still collecting signatures for what the bufferbloat project
>> members wrote, and have 1200 bucks in the kitty for further articles
>> and/or publicity. Thoughts appreciated as to where we can go next with
>> shifting the national debate about bandwidth in a better direction!
>> Next up would be trying to get a meeting, and to do an ex-parte
>> filing, I think, and I wish we could do a live demonstration on
>> television about it as good as feynman did here:
>>
>> https://www.youtube.com/watch?v=raMmRKGkGD4
>>
>> Our original posting is here:
>> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit 
>>
>>
>> Larry's wonderful post is here:
>> https://circleid.com/posts/20231211-its-the-latency-fcc
>>
>> [1] How can we get more talking about IPv4 and IPv6, too? Will we have
>> to wait another year?
>>
>> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/ 
>>
>>
>> -- 
>> https://blog.cerowrt.org/post/2024_predictions/
>> Dave Täht CSO, LibreQos
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain


[-- Attachment #1.1.1.2: Type: text/html, Size: 6959 bytes --]

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-26 20:59   ` Jack Haverty
@ 2024-02-27  0:25     ` rjmcmahon
  2024-02-27  2:06       ` Jack Haverty
  0 siblings, 1 reply; 15+ messages in thread
From: rjmcmahon @ 2024-02-27  0:25 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 5223 bytes --]

On top of all that, the latency responses tend to be non-parametric
and may need full PDFs/CDFs along with non-parametric statistical
process controls. Attached is an example from many years ago of a
firmware bug that sometimes delayed packet processing, creating a
second mode in the PDF.

Engineers and their algorithms can be this way, it seems.
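
For anyone who wants to see what such a bimodal latency distribution
looks like, here is a rough Python sketch with synthetic delays (not
the data in the attached screenshot): about 95% of packets take ~2 ms
and ~5% take ~25 ms. The second mode shows up clearly in the empirical
PDF/CDF, while a single mean or median would hide it.

import random
from collections import Counter

random.seed(2)

# Synthetic one-way delays (ms): ~95% near 2 ms, ~5% near 25 ms,
# mimicking a firmware path that occasionally delays packet processing.
delays = ([random.gauss(2.0, 0.3) for _ in range(9500)] +
          [random.gauss(25.0, 1.0) for _ in range(500)])

# Empirical PDF (1 ms histogram bins) and CDF.
bins = Counter(int(d) for d in delays)
total = len(delays)
cum = 0
print(" ms     pdf     cdf")
for b in sorted(bins):
    cum += bins[b]
    print(f"{b:3d}  {bins[b]/total:6.3f}  {cum/total:6.3f}")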

Bob
> I didn't study the whole report, but I didn't notice any metrics
> associated with *variance* of latency or bandwidth.  It's common for
> vendors to play games ("Lies, damn lies, and statistics!") to make
> their metrics look good.   A metric of latency that says something
> like "99% less than N milliseconds" doesn't necessarily translate into
> an acceptable user performance.
> 
> It's also important to look at the specific techniques used for taking
> measurements.  For example, if a measurement is performed every
> fifteen minutes, extrapolating the metric as representative of all the
> time between measurements can also lead to a metric judgement which
> doesn't reflect the reality of what the user actually experiences.
> 
> In addition, there's a lot of mechanism between the ISPs' handling of
> datagrams and the end-user.   The users' experience is affected by how
> all of that mechanism interacts as underlying network behavior
> changes.  When a TCP running in some host decides it needs to
> retransmit, or an interactive audio/video session discards datagrams
> because they arrive too late to be useful, the user sees unacceptable
> performance even though the network operators may think everything is
> running fine.   Measurements from the end-users' perspective might
> indicate performance is quite different from what measurements at the
> ISP level suggest.
> 
> Gamers are especially sensitive to variance, but it will also apply to
> interactive uses such as might occur in telemedicine or remote
> operations.  A few years ago I helped a friend do some tests for a
> gaming situation and we discovered that the average latency was
> reasonably low, but occasionally, perhaps a few times per hour,
> latency would increase to 10s of seconds.
> 
> In a game, that often means the player loses.  In a remote surgery it
> may mean horrendous outcomes.  As more functionality is performed "in
> the cloud" such situations will become increasingly common.
> 
> Jack Haverty
> 
> On 2/26/24 12:02, rjmcmahon via Nnagain wrote:
> 
>> Thanks for sharing this. I'm trying to find out what are the key
>> metrics that will be used for this monitoring. I want to make sure
>> iperf 2 can cover the technical, traffic related ones that make
>> sense to a skilled network operator, including a WiFi BSS manager. I
>> didn't read all 327 pages though, from what I did read, I didn't see
>> anything obvious. I assume these types of KPIs may be in reference
>> docs or something.
>> 
>> Thanks in advance for any help on this.
>> Bob
>> 
>>> And...
>>> 
>>> Our bufferbloat.net submittal was cited multiple times! Thank you
>>> all
>>> for participating in that process!
>>> 
>>> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
>>> 
>>> It is a long read, and does still start off on the wrong feet
>>> (IMHO),
>>> in particular not understanding the difference between idle and
>>> working latency.
>>> 
>>> It is my hope that by widening awareness of more of the real
>>> problems
>>> with latency under load to policymakers and other submitters
>>> downstream from this new FCC document, and more reading what we
>>> had to
>>> say, that we will begin to make serious progress towards finally
>>> fixing bufferbloat in the USA.
>>> 
>>> I do keep hoping that somewhere along the way in the future, the
>>> costs
>>> of IPv4 address exhaustion and the IPv6 transition, will also get
>>> raised to the national level. [1]
>>> 
>>> We are still collecting signatures for what the bufferbloat
>>> project
>>> members wrote, and have 1200 bucks in the kitty for further
>>> articles
>>> and/or publicity. Thoughts appreciated as to where we can go next
>>> with
>>> shifting the national debate about bandwidth in a better
>>> direction!
>>> Next up would be trying to get a meeting, and to do an ex-parte
>>> filing, I think, and I wish we could do a live demonstration on
>>> television about it as good as feynman did here:
>>> 
>>> https://www.youtube.com/watch?v=raMmRKGkGD4
>>> 
>>> Our original posting is here:
>>> 
>> 
> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit
>>> 
>>> 
>>> Larry's wonderful post is here:
>>> https://circleid.com/posts/20231211-its-the-latency-fcc
>>> 
>>> [1] How can we get more talking about IPv4 and IPv6, too? Will we
>>> have
>>> to wait another year?
>>> 
>>> 
>> 
> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/
>>> 
>>> 
>>> --
>>> https://blog.cerowrt.org/post/2024_predictions/
>>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

[-- Attachment #2: Screenshot Capture - 2024-02-26 - 16-20-43.png --]
[-- Type: image/png, Size: 116007 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-27  0:25     ` rjmcmahon
@ 2024-02-27  2:06       ` Jack Haverty
  2024-02-27 17:48         ` rjmcmahon
  0 siblings, 1 reply; 15+ messages in thread
From: Jack Haverty @ 2024-02-27  2:06 UTC (permalink / raw)
  To: rjmcmahon,
	Network Neutrality is back! Let´s make the technical
	aspects heard this time!


[-- Attachment #1.1.1.1: Type: text/plain, Size: 7055 bytes --]

Yes, latency is complicated....  Back when I was involved in the early 
Internet (early 1980s), we knew that latency was an issue requiring much 
further research, but we figured that meanwhile problems could be
avoided by keeping traffic loads well below capacity until the
appropriate algorithms could be discovered by the engineers (I was 
one...).  Forty years later, it seems like it's still a research topic.

Years later in the 90s I was involved in operating an international 
corporate intranet.  We quickly learned that keeping the human users 
happy required looking at more than the routers and circuits between 
them.  With much of the "reliability mechanisms" of TCP et al now 
located in the users' computers rather than the network switches, 
evaluating users' experience with "the net" required measurements from 
the users' perspective.

To do that, we created a policy whereby every LAN attached to the 
long-haul backbone had to have a computer on that LAN to which we had 
remote access.   That enabled us to perform "ping" tests and also 
collect data about TCP behavior (duplicates, retransmissions, etc.) 
using SNMP, etherwatch, et al.   It was not unusual for the users' data 
to indicate that "the net", as they saw it, was misbehaving while the 
network data, as seen by the operators, indicated that all the routers 
and circuits were working just fine.
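
A minimal modern equivalent of one of those LAN probes might look
something like the Python sketch below; the target address is a
placeholder and it assumes a Unix-like ping. The point is only that
the measurement runs from the user's side of the network, and that its
output can be kept and compared against a baseline from a "normal"
week.

import re
import statistics
import subprocess

TARGET = "192.0.2.1"   # placeholder: the far end the users care about
COUNT = 20

# Run a standard Unix ping and pull the per-packet RTTs out of its output.
out = subprocess.run(["ping", "-c", str(COUNT), TARGET],
                     capture_output=True, text=True).stdout
rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]

if rtts:
    loss = (COUNT - len(rtts)) / COUNT
    print(f"sent {COUNT}, received {len(rtts)} (loss {loss:.0%})")
    print(f"min/avg/max RTT = {min(rtts):.1f}/"
          f"{statistics.mean(rtts):.1f}/{max(rtts):.1f} ms")
else:
    print("no replies; the user-side view of 'the net' is broken")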

If the government regulators want to keep the users happy, IMHO they 
need to understand this.

Jack Haverty


On 2/26/24 16:25, rjmcmahon wrote:
> On top of all that, the latency responses tend to be non parametric 
> and may need full pdfs/cdfs along with non-parametric statistical 
> process controls. Attached is an example from many years ago which was 
> a firmware bug that sometimes delayed packet processing, creating a 
> second node in the pdf.
>
> Engineers and their algorithms can be this way it seems.
>
> Bob
>> I didn't study the whole report, but I didn't notice any metrics
>> associated with *variance* of latency or bandwidth.  It's common for
>> vendors to play games ("Lies, damn lies, and statistics!") to make
>> their metrics look good.   A metric of latency that says something
>> like "99% less than N milliseconds" doesn't necessarily translate into
>> an acceptable user performance.
>>
>> It's also important to look at the specific techniques used for taking
>> measurements.  For example, if a measurement is performed every
>> fifteen minutes, extrapolating the metric as representative of all the
>> time between measurements can also lead to a metric judgement which
>> doesn't reflect the reality of what the user actually experiences.
>>
>> In addition, there's a lot of mechanism between the ISPs' handling of
>> datagrams and the end-user.   The users' experience is affected by how
>> all of that mechanism interacts as underlying network behavior
>> changes.  When a TCP running in some host decides it needs to
>> retransmit, or an interactive audio/video session discards datagrams
>> because they arrive too late to be useful, the user sees unacceptable
>> performance even though the network operators may think everything is
>> running fine.   Measurements from the end-users' perspective might
>> indicate performance is quite different from what measurements at the
>> ISP level suggest.
>>
>> Gamers are especially sensitive to variance, but it will also apply to
>> interactive uses such as might occur in telemedicine or remote
>> operations.  A few years ago I helped a friend do some tests for a
>> gaming situation and we discovered that the average latency was
>> reasonably low, but occasionally, perhaps a few times per hour,
>> latency would increase to 10s of seconds.
>>
>> In a game, that often means the player loses.  In a remote surgery it
>> may mean horrendous outcomes.  As more functionality is performed "in
>> the cloud" such situations will become increasingly common.
>>
>> Jack Haverty
>>
>> On 2/26/24 12:02, rjmcmahon via Nnagain wrote:
>>
>>> Thanks for sharing this. I'm trying to find out what are the key
>>> metrics that will be used for this monitoring. I want to make sure
>>> iperf 2 can cover the technical, traffic related ones that make
>>> sense to a skilled network operator, including a WiFi BSS manager. I
>>> didn't read all 327 pages though, from what I did read, I didn't see
>>> anything obvious. I assume these types of KPIs may be in reference
>>> docs or something.
>>>
>>> Thanks in advance for any help on this.
>>> Bob
>>>
>>>> And...
>>>>
>>>> Our bufferbloat.net submittal was cited multiple times! Thank you
>>>> all
>>>> for participating in that process!
>>>>
>>>> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
>>>>
>>>> It is a long read, and does still start off on the wrong feet
>>>> (IMHO),
>>>> in particular not understanding the difference between idle and
>>>> working latency.
>>>>
>>>> It is my hope that by widening awareness of more of the real
>>>> problems
>>>> with latency under load to policymakers and other submitters
>>>> downstream from this new FCC document, and more reading what we
>>>> had to
>>>> say, that we will begin to make serious progress towards finally
>>>> fixing bufferbloat in the USA.
>>>>
>>>> I do keep hoping that somewhere along the way in the future, the
>>>> costs
>>>> of IPv4 address exhaustion and the IPv6 transition, will also get
>>>> raised to the national level. [1]
>>>>
>>>> We are still collecting signatures for what the bufferbloat
>>>> project
>>>> members wrote, and have 1200 bucks in the kitty for further
>>>> articles
>>>> and/or publicity. Thoughts appreciated as to where we can go next
>>>> with
>>>> shifting the national debate about bandwidth in a better
>>>> direction!
>>>> Next up would be trying to get a meeting, and to do an ex-parte
>>>> filing, I think, and I wish we could do a live demonstration on
>>>> television about it as good as feynman did here:
>>>>
>>>> https://www.youtube.com/watch?v=raMmRKGkGD4
>>>>
>>>> Our original posting is here:
>>>>
>>>
>> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit 
>>
>>>>
>>>>
>>>> Larry's wonderful post is here:
>>>> https://circleid.com/posts/20231211-its-the-latency-fcc
>>>>
>>>> [1] How can we get more talking about IPv4 and IPv6, too? Will we
>>>> have
>>>> to wait another year?
>>>>
>>>>
>>>
>> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/ 
>>
>>>>
>>>>
>>>> -- 
>>>> https://blog.cerowrt.org/post/2024_predictions/
>>>> Dave Täht CSO, LibreQos
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain


[-- Attachment #1.1.1.2: Type: text/html, Size: 11658 bytes --]

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-27  2:06       ` Jack Haverty
@ 2024-02-27 17:48         ` rjmcmahon
  2024-02-27 20:11           ` Jack Haverty
  0 siblings, 1 reply; 15+ messages in thread
From: rjmcmahon @ 2024-02-27 17:48 UTC (permalink / raw)
  To: Jack Haverty
  Cc: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

Hi Jack,

On LAN probes & monitors: I've been told that 90% of users' devices
are now wirelessly connected, so the concept of connecting to a common
waveguide to measure or observe user information & flow state isn't
viable. A WiFi AP could provide its own end state, but wireless
channels' states are non-trivial and APs prioritize packet forwarding
at L2 over state collection. I suspect a fully capable AP that could
record per-quintuple flow state and RF channel state would be too
expensive. This is part of the reason why our industry and policy
makers need to define the key performance metrics well.
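
To be clear about what "per quintuple" means here, below is a toy
Python sketch of flow accounting keyed on the classic 5-tuple (the
packet records are invented). A real AP would have to do this at line
rate in the forwarding path, which is exactly the cost concern above.

from collections import defaultdict

# The classic 5-tuple: (src IP, dst IP, src port, dst port, protocol).
flows = defaultdict(lambda: {"pkts": 0, "bytes": 0})

# Hypothetical packet records as (5-tuple, length-in-bytes) pairs.
packets = [
    (("10.0.0.2", "203.0.113.7", 51514, 443, "TCP"), 1460),
    (("10.0.0.2", "203.0.113.7", 51514, 443, "TCP"), 1460),
    (("10.0.0.5", "198.51.100.9", 40000, 53, "UDP"), 80),
]

for five_tuple, length in packets:
    flows[five_tuple]["pkts"] += 1
    flows[five_tuple]["bytes"] += length

for ft, counters in flows.items():
    print(ft, counters)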

Bob
> Yes, latency is complicated....  Back when I was involved in the
> early Internet (early 1980s), we knew that latency was an issue
> requiring much further research, but we figured that meanwhile
> problems could be avoided by keeping traffic loads well below capacity
> while the appropriate algorithms could be discovered by the engineers
> (I was one...).  Forty years later, it seems like it's still a
> research topic.
> 
> Years later in the 90s I was involved in operating an international
> corporate intranet.  We quickly learned that keeping the human users
> happy required looking at more than the routers and circuits between
> them.  With much of the "reliability mechanisms" of TCP et al now
> located in the users' computers rather than the network switches,
> evaluating users' experience with "the net" required measurements from
> the users' perspective.
> 
> To do that, we created a policy whereby every LAN attached to the
> long-haul backbone had to have a computer on that LAN to which we had
> remote access.   That enabled us to perform "ping" tests and also
> collect data about TCP behavior (duplicates, retransmissions, etc.)
> using SNMP, etherwatch, et al.   It was not unusual for the users'
> data to indicate that "the net", as they saw it, was misbehaving while
> the network data, as seen by the operators, indicated that all the
> routers and circuits were working just fine.
> 
> If the government regulators want to keep the users happy, IMHO they
> need to understand this.
> 
> Jack Haverty
> 
> On 2/26/24 16:25, rjmcmahon wrote:
> 
>> On top of all that, the latency responses tend to be non parametric
>> and may need full pdfs/cdfs along with non-parametric statistical
>> process controls. Attached is an example from many years ago which
>> was a firmware bug that sometimes delayed packet processing,
>> creating a second node in the pdf.
>> 
>> Engineers and their algorithms can be this way it seems.
>> 
>> Bob
>> I didn't study the whole report, but I didn't notice any metrics
>> associated with *variance* of latency or bandwidth.  It's common for
>> 
>> vendors to play games ("Lies, damn lies, and statistics!") to make
>> their metrics look good.   A metric of latency that says something
>> like "99% less than N milliseconds" doesn't necessarily translate
>> into
>> an acceptable user performance.
>> 
>> It's also important to look at the specific techniques used for
>> taking
>> measurements.  For example, if a measurement is performed every
>> fifteen minutes, extrapolating the metric as representative of all
>> the
>> time between measurements can also lead to a metric judgement which
>> doesn't reflect the reality of what the user actually experiences.
>> 
>> In addition, there's a lot of mechanism between the ISPs' handling
>> of
>> datagrams and the end-user.   The users' experience is affected by
>> how
>> all of that mechanism interacts as underlying network behavior
>> changes.  When a TCP running in some host decides it needs to
>> retransmit, or an interactive audio/video session discards datagrams
>> 
>> because they arrive too late to be useful, the user sees
>> unacceptable
>> performance even though the network operators may think everything
>> is
>> running fine.   Measurements from the end-users' perspective might
>> indicate performance is quite different from what measurements at
>> the
>> ISP level suggest.
>> 
>> Gamers are especially sensitive to variance, but it will also apply
>> to
>> interactive uses such as might occur in telemedicine or remote
>> operations.  A few years ago I helped a friend do some tests for a
>> gaming situation and we discovered that the average latency was
>> reasonably low, but occasionally, perhaps a few times per hour,
>> latency would increase to 10s of seconds.
>> 
>> In a game, that often means the player loses.  In a remote surgery
>> it
>> may mean horrendous outcomes.  As more functionality is performed
>> "in
>> the cloud" such situations will become increasingly common.
>> 
>> Jack Haverty
>> 
>> On 2/26/24 12:02, rjmcmahon via Nnagain wrote:
>> 
>> Thanks for sharing this. I'm trying to find out what are the key
>> metrics that will be used for this monitoring. I want to make sure
>> iperf 2 can cover the technical, traffic related ones that make
>> sense to a skilled network operator, including a WiFi BSS manager. I
>> 
>> didn't read all 327 pages though, from what I did read, I didn't see
>> 
>> anything obvious. I assume these types of KPIs may be in reference
>> docs or something.
>> 
>> Thanks in advance for any help on this.
>> Bob
>> 
>> And...
>> 
>> Our bufferbloat.net submittal was cited multiple times! Thank you
>> all
>> for participating in that process!
>> 
>> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
>> 
>> It is a long read, and does still start off on the wrong feet
>> (IMHO),
>> in particular not understanding the difference between idle and
>> working latency.
>> 
>> It is my hope that by widening awareness of more of the real
>> problems
>> with latency under load to policymakers and other submitters
>> downstream from this new FCC document, and more reading what we
>> had to
>> say, that we will begin to make serious progress towards finally
>> fixing bufferbloat in the USA.
>> 
>> I do keep hoping that somewhere along the way in the future, the
>> costs
>> of IPv4 address exhaustion and the IPv6 transition, will also get
>> raised to the national level. [1]
>> 
>> We are still collecting signatures for what the bufferbloat
>> project
>> members wrote, and have 1200 bucks in the kitty for further
>> articles
>> and/or publicity. Thoughts appreciated as to where we can go next
>> with
>> shifting the national debate about bandwidth in a better
>> direction!
>> Next up would be trying to get a meeting, and to do an ex-parte
>> filing, I think, and I wish we could do a live demonstration on
>> television about it as good as feynman did here:
>> 
>> https://www.youtube.com/watch?v=raMmRKGkGD4
>> 
>> Our original posting is here:
> 
> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit
> 
> 
>>> Larry's wonderful post is here:
>>> https://circleid.com/posts/20231211-its-the-latency-fcc
>>> 
>>> [1] How can we get more talking about IPv4 and IPv6, too? Will we
>>> have
>>> to wait another year?
>  
> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/
> 
> 
>>> --
>>> https://blog.cerowrt.org/post/2024_predictions/
>>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>  _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-27 17:48         ` rjmcmahon
@ 2024-02-27 20:11           ` Jack Haverty
       [not found]             ` <223D4AB0-DBA9-4DF2-AEEE-876C7B994E89@gmx.de>
  2024-02-27 21:29             ` rjmcmahon
  0 siblings, 2 replies; 15+ messages in thread
From: Jack Haverty @ 2024-02-27 20:11 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Network Neutrality is back! Let´s make the technical
	aspects heard this time!


[-- Attachment #1.1.1.1: Type: text/plain, Size: 11096 bytes --]

Hi Bob,

Measuring and monitoring WiFi behavior is neither necessary nor
sufficient.  Same with Starlink or whatever else comes along in the
future.

The architecture of the Internet places mechanisms that in past times
were contained in the switching equipment at many different places
along a data path.  Much of the mechanism is even in 
the users' devices themselves, which make all sorts of decisions about 
datagram size, acknowledgements, retransmission, discarding duplicates, 
et al.  Those mechanisms interact with the decisions being made in 
network equipment such as switches.   The overall behavior dictates what 
the end users see as behavior and reliability of "the net" as they 
experience it.   The performance of the overall system is influenced by 
the interaction of its many pieces.

My point was that to manage network service ("network" being defined by 
the users), you have to monitor and measure performance as seen by the 
users, as close to the keyboard/mouse/screen/whatever as you can get.  
That's why we decided to require a computer of some kind on each user's 
LAN environment, so we could experience and measure what they were 
likely experiencing, and use our measurements of switches, circuits, 
etc. to analyze and fix problems.   It was also helpful to have a 
database of the metrics captured during previous "normal" network 
activity, to use as comparisons.

As one example, I remember one event when a momentary glitch on a 
transpacific circuit would cause a flurry of activity as TCPs in the 
users' computers compensated, and would settle back to a steady state 
after a few minutes.  But users complained that their file transfers 
were now taking much longer than usual.  After our poking and prodding, 
using those remote computers as tools to see what the users were 
experiencing, we discovered that everything was operating as expected, 
except that every datagram was being transmitted twice and the 
duplicates discarded at the destination.   The TCP retransmission 
mechanisms had settled into a new stable state.

To the network switches, the datagrams all seemed OK, but there was 
significantly more traffic than usual.  No one was monitoring all those 
user devices out on the LANs so no one except the users noticed anything 
wrong.   Eventually another glitch on the circuit would cause another 
flurry of activity and perhaps settle back into the desired state where 
datagrams only got sent once.

We monitored whatever we could using SNMP to the routers and computers 
that had implemented such things, and we used our remote computers to 
also collect data from the users' perspective.   Often we could tell a 
LAN manager that some particular device at his/her site was having 
problems, by looking for behavior that differed from the "normal" 
historical behavior from a week or so earlier.

It would be interesting, for example, to collect metrics from switches
about "buffer occupancy" and "transit time" (I don't recall whether
any SNMP MIB had such metrics), and to correlate them with TCP metrics
such as retransmission behavior and duplicate detection.
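
As a very rough starting point for the host side of that correlation,
the Python sketch below reads Linux's cumulative TCP RetransSegs
counter from /proc/net/snmp in five-second intervals and correlates
the per-interval deltas with a queue-depth series; the switch side is
left as a stub precisely because I don't know which MIB, if any,
exposes buffer occupancy.

import time

def tcp_retrans_segs():
    # Cumulative TCP RetransSegs counter from Linux /proc/net/snmp.
    with open("/proc/net/snmp") as f:
        tcp_lines = [line.split() for line in f if line.startswith("Tcp:")]
    header, values = tcp_lines[0], tcp_lines[1]
    return int(values[header.index("RetransSegs")])

def queue_depth_sample():
    # Placeholder: a real deployment would query the switch here
    # (SNMP, streaming telemetry, ...) for buffer occupancy.
    return 0.0

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else float("nan")

retrans_deltas, depths = [], []
prev = tcp_retrans_segs()
for _ in range(10):                  # ten 5-second intervals
    time.sleep(5)
    cur = tcp_retrans_segs()
    retrans_deltas.append(cur - prev)
    depths.append(queue_depth_sample())
    prev = cur

print("retransmissions per interval:", retrans_deltas)
print("correlation with queue depth:", pearson(retrans_deltas, depths))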

Jack



On 2/27/24 09:48, rjmcmahon wrote:
> Hi Jack,
>
> On LAN probes & monitors; I've been told that 90% of users devices are 
> now wirelessly connected so the concept of connecting to a common wave 
> guide to measure or observe user information & flow state isn't 
> viable. A WiFi AP could provide its end state but wireless channels' 
> states are non-trivial and the APs prioritize packet forwarding at L2 
> over state collection. I suspect a fully capable AP that could record 
> per quintuple and RF channels' states would be too expensive. This is 
> part of the reason why our industry and policy makers need to define 
> the key performance metrics well.
>
> Bob
>> Yes, latency is complicated....  Back when I was involved in the
>> early Internet (early 1980s), we knew that latency was an issue
>> requiring much further research, but we figured that meanwhile
>> problems could be avoided by keeping traffic loads well below capacity
>> while the appropriate algorithms could be discovered by the engineers
>> (I was one...).  Forty years later, it seems like it's still a
>> research topic.
>>
>> Years later in the 90s I was involved in operating an international
>> corporate intranet.  We quickly learned that keeping the human users
>> happy required looking at more than the routers and circuits between
>> them.  With much of the "reliability mechanisms" of TCP et al now
>> located in the users' computers rather than the network switches,
>> evaluating users' experience with "the net" required measurements from
>> the users' perspective.
>>
>> To do that, we created a policy whereby every LAN attached to the
>> long-haul backbone had to have a computer on that LAN to which we had
>> remote access.   That enabled us to perform "ping" tests and also
>> collect data about TCP behavior (duplicates, retransmissions, etc.)
>> using SNMP, etherwatch, et al.   It was not unusual for the users'
>> data to indicate that "the net", as they saw it, was misbehaving while
>> the network data, as seen by the operators, indicated that all the
>> routers and circuits were working just fine.
>>
>> If the government regulators want to keep the users happy, IMHO they
>> need to understand this.
>>
>> Jack Haverty
>>
>> On 2/26/24 16:25, rjmcmahon wrote:
>>
>>> On top of all that, the latency responses tend to be non parametric
>>> and may need full pdfs/cdfs along with non-parametric statistical
>>> process controls. Attached is an example from many years ago which
>>> was a firmware bug that sometimes delayed packet processing,
>>> creating a second node in the pdf.
>>>
>>> Engineers and their algorithms can be this way it seems.
>>>
>>> Bob
>>> I didn't study the whole report, but I didn't notice any metrics
>>> associated with *variance* of latency or bandwidth.  It's common for
>>>
>>> vendors to play games ("Lies, damn lies, and statistics!") to make
>>> their metrics look good.   A metric of latency that says something
>>> like "99% less than N milliseconds" doesn't necessarily translate
>>> into
>>> an acceptable user performance.
>>>
>>> It's also important to look at the specific techniques used for
>>> taking
>>> measurements.  For example, if a measurement is performed every
>>> fifteen minutes, extrapolating the metric as representative of all
>>> the
>>> time between measurements can also lead to a metric judgement which
>>> doesn't reflect the reality of what the user actually experiences.
>>>
>>> In addition, there's a lot of mechanism between the ISPs' handling
>>> of
>>> datagrams and the end-user.   The users' experience is affected by
>>> how
>>> all of that mechanism interacts as underlying network behavior
>>> changes.  When a TCP running in some host decides it needs to
>>> retransmit, or an interactive audio/video session discards datagrams
>>>
>>> because they arrive too late to be useful, the user sees
>>> unacceptable
>>> performance even though the network operators may think everything
>>> is
>>> running fine.   Measurements from the end-users' perspective might
>>> indicate performance is quite different from what measurements at
>>> the
>>> ISP level suggest.
>>>
>>> Gamers are especially sensitive to variance, but it will also apply
>>> to
>>> interactive uses such as might occur in telemedicine or remote
>>> operations.  A few years ago I helped a friend do some tests for a
>>> gaming situation and we discovered that the average latency was
>>> reasonably low, but occasionally, perhaps a few times per hour,
>>> latency would increase to 10s of seconds.
>>>
>>> In a game, that often means the player loses.  In a remote surgery
>>> it
>>> may mean horrendous outcomes.  As more functionality is performed
>>> "in
>>> the cloud" such situations will become increasingly common.
>>>
>>> Jack Haverty
>>>
>>> On 2/26/24 12:02, rjmcmahon via Nnagain wrote:
>>>
>>> Thanks for sharing this. I'm trying to find out what are the key
>>> metrics that will be used for this monitoring. I want to make sure
>>> iperf 2 can cover the technical, traffic related ones that make
>>> sense to a skilled network operator, including a WiFi BSS manager. I
>>>
>>> didn't read all 327 pages though, from what I did read, I didn't see
>>>
>>> anything obvious. I assume these types of KPIs may be in reference
>>> docs or something.
>>>
>>> Thanks in advance for any help on this.
>>> Bob
>>>
>>> And...
>>>
>>> Our bufferbloat.net submittal was cited multiple times! Thank you
>>> all
>>> for participating in that process!
>>>
>>> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
>>>
>>> It is a long read, and does still start off on the wrong feet
>>> (IMHO),
>>> in particular not understanding the difference between idle and
>>> working latency.
>>>
>>> It is my hope that by widening awareness of more of the real
>>> problems
>>> with latency under load to policymakers and other submitters
>>> downstream from this new FCC document, and more reading what we
>>> had to
>>> say, that we will begin to make serious progress towards finally
>>> fixing bufferbloat in the USA.
>>>
>>> I do keep hoping that somewhere along the way in the future, the
>>> costs
>>> of IPv4 address exhaustion and the IPv6 transition, will also get
>>> raised to the national level. [1]
>>>
>>> We are still collecting signatures for what the bufferbloat
>>> project
>>> members wrote, and have 1200 bucks in the kitty for further
>>> articles
>>> and/or publicity. Thoughts appreciated as to where we can go next
>>> with
>>> shifting the national debate about bandwidth in a better
>>> direction!
>>> Next up would be trying to get a meeting, and to do an ex-parte
>>> filing, I think, and I wish we could do a live demonstration on
>>> television about it as good as feynman did here:
>>>
>>> https://www.youtube.com/watch?v=raMmRKGkGD4
>>>
>>> Our original posting is here:
>>
>> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit 
>>
>>
>>
>>>> Larry's wonderful post is here:
>>>> https://circleid.com/posts/20231211-its-the-latency-fcc
>>>>
>>>> [1] How can we get more talking about IPv4 and IPv6, too? Will we
>>>> have
>>>> to wait another year?
>>
>> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/ 
>>
>>
>>
>>>> -- 
>>>> https://blog.cerowrt.org/post/2024_predictions/
>>>> Dave Täht CSO, LibreQos
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>>  _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain


[-- Attachment #1.1.1.2: Type: text/html, Size: 16970 bytes --]

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
       [not found]             ` <223D4AB0-DBA9-4DF2-AEEE-876C7B994E89@gmx.de>
@ 2024-02-27 21:16               ` Jack Haverty
  0 siblings, 0 replies; 15+ messages in thread
From: Jack Haverty @ 2024-02-27 21:16 UTC (permalink / raw)
  To: Sebastian Moeller,
	Network Neutrality is back! Let´s make the technical
	aspects heard this time!


[-- Attachment #1.1.1.1: Type: text/plain, Size: 13087 bytes --]

Hi Sebastian,

Actually, "passing information on to endpoints" is an idea much older 
than 2019.  It was in the original TCP V4 specification in 1981.   See 
RFC792 ( https://www.rfc-editor.org/rfc/rfc792 ) pages 10-11.

I remember the discussions when the "Source Quench" (SQ) mechanism was 
being defined in the Internet Meetings around 1980.   Personally I never 
thought SQ was a viable feedback technique, but it was OK as a 
placeholder for some future protocol that switches could use to exert 
back-pressure on the users' traffic, once a more effective mechanism was 
invented.

Instead of "slowing down", some implementors of TCP circa 1981 designed 
their systems to immediately retransmit a datagram that had caused an SQ 
to come back, figuring that the SQ meant that their original datagram 
had been discarded.

Jack

On 2/27/24 12:45, Sebastian Moeller wrote:
> Hi Jack,
>
>> On 27. Feb 2024, at 21:11, Jack Haverty via Nnagain<nnagain@lists.bufferbloat.net>  wrote:
>>
>> Hi Bob,
>>
>> Measuring and monitoring Wifi behavior isn't necessary or sufficient.  Same with Starlink or whatever else comes along in the future.
>>
>> The architecture of the Internet places different mechanisms, that in past times were contained in the switching equipment, now at many different places along a data path.  Much of the mechanism is even in the users' devices themselves, which make all sorts of decisions about datagram size, acknowledgements, retransmission, discarding duplicates, et al.  Those mechanisms interact with the decisions being made in network equipment such as switches.   The overall behavior dictates what the end users see as behavior and reliability of "the net" as they experience it.   The performance of the overall system is influenced by the interaction of its many pieces.
>>
>> My point was that to manage network service ("network" being defined by the users), you have to monitor and measure performance as seen by the users, as close to the keyboard/mouse/screen/whatever as you can get.  That's why we decided to require a computer of some kind on each users' LAN environment, so we could experience and measure what they were likely experiencing, and use our measurements of switches, circuits, etc. to analyze and fix problems.   It was also helpful to have a database of the metrics captured during previous "normal" network activity, to use as comparisons.
>>
>> As one example, I remember one event when a momentary glitch on a transpacific circuit would cause a flurry of activity as TCPs in the users' computers compensated, and would settle back to a steady state after a few minutes.  But users complained that their file transfers were now taking much longer than usual.  After our poking and prodding, using those remote computers as tools to see what the users were experiencing, we discovered that everything was operating as expected, except that every datagram was being transmitted twice and the duplicates discarded at the destination.   The TCP retransmission mechanisms had settled into a new stable state.
>>
>> To the network switches, the datagrams all seeemed OK, but there was significantly more traffic than usual.  No one was monitoring all those user devices out on the LANs so no one except the users noticed anything wrong.   Eventually another glitch on the circuit would cause another flurry of activity and perhaps settle back into the desired state where datagrams only got sent once.
>>
>> We monitored whatever we could using SNMP to the routers and computers that had implemented such things, and we used our remote computers to also collect data from the users' perspective.   Often we could tell a LAN manager that some particular deviceat his/her site was having problems, by looking for behavior that differed from the "normal" historical behavior from a week or so earlier.
>>
>> It would be interesting for example to collect metrics from switches about "buffer occupancy" and "transit time" (I don't recall if any MIB in SNMP had such metrics), and correlate that with TCP metrics such as retransmission behavior and duplicate detection.
> 	[SM] We could do even better, pass this information on to endpoints to actually allow them to not only react to overload but also to imminent overload... say by not collecting the absolute buffer occupancy, but rather max(current hop buffer occupancy, buffer occupancy already recorded in packet). (As much as I would wish, this is not my idea, but Arslan and McKeown's, in “Switches Know the Exact Amount of Congestion.” In Proceedings of the 2019 Workshop on Buffer Sizing, 1–6, 2019.)
>
> Regards
> 	Sebastian
>
>> Jack
>>
>>
>>
>> On 2/27/24 09:48, rjmcmahon wrote:
>>> Hi Jack,
>>>
>>> On LAN probes & monitors; I've been told that 90% of users devices are now wirelessly connected so the concept of connecting to a common wave guide to measure or observe user information & flow state isn't viable. A WiFi AP could provide its end state but wireless channels' states are non-trivial and the APs prioritize packet forwarding at L2 over state collection. I suspect a fully capable AP that could record per quintuple and RF channels' states would be too expensive. This is part of the reason why our industry and policy makers need to define the key performance metrics well.
>>>
>>> Bob
>>>> Yes, latency is complicated....  Back when I was involved in the
>>>> early Internet (early 1980s), we knew that latency was an issue
>>>> requiring much further research, but we figured that meanwhile
>>>> problems could be avoided by keeping traffic loads well below capacity
>>>> while the appropriate algorithms could be discovered by the engineers
>>>> (I was one...).  Forty years later, it seems like it's still a
>>>> research topic.
>>>>
>>>> Years later in the 90s I was involved in operating an international
>>>> corporate intranet.  We quickly learned that keeping the human users
>>>> happy required looking at more than the routers and circuits between
>>>> them.  With much of the "reliability mechanisms" of TCP et al now
>>>> located in the users' computers rather than the network switches,
>>>> evaluating users' experience with "the net" required measurements from
>>>> the users' perspective.
>>>>
>>>> To do that, we created a policy whereby every LAN attached to the
>>>> long-haul backbone had to have a computer on that LAN to which we had
>>>> remote access.   That enabled us to perform "ping" tests and also
>>>> collect data about TCP behavior (duplicates, retransmissions, etc.)
>>>> using SNMP, etherwatch, et al.   It was not unusual for the users'
>>>> data to indicate that "the net", as they saw it, was misbehaving while
>>>> the network data, as seen by the operators, indicated that all the
>>>> routers and circuits were working just fine.
>>>>
>>>> If the government regulators want to keep the users happy, IMHO they
>>>> need to understand this.
>>>>
>>>> Jack Haverty
>>>>
>>>> On 2/26/24 16:25, rjmcmahon wrote:
>>>>
>>>>> On top of all that, the latency responses tend to be non parametric
>>>>> and may need full pdfs/cdfs along with non-parametric statistical
>>>>> process controls. Attached is an example from many years ago which
>>>>> was a firmware bug that sometimes delayed packet processing,
>>>>> creating a second node in the pdf.
>>>>>
>>>>> Engineers and their algorithms can be this way it seems.
>>>>>
>>>>> Bob
>>>>> I didn't study the whole report, but I didn't notice any metrics
>>>>> associated with *variance* of latency or bandwidth.  It's common for
>>>>>
>>>>> vendors to play games ("Lies, damn lies, and statistics!") to make
>>>>> their metrics look good.   A metric of latency that says something
>>>>> like "99% less than N milliseconds" doesn't necessarily translate
>>>>> into
>>>>> an acceptable user performance.
>>>>>
>>>>> It's also important to look at the specific techniques used for
>>>>> taking
>>>>> measurements.  For example, if a measurement is performed every
>>>>> fifteen minutes, extrapolating the metric as representative of all
>>>>> the
>>>>> time between measurements can also lead to a metric judgement which
>>>>> doesn't reflect the reality of what the user actually experiences.
>>>>>
>>>>> In addition, there's a lot of mechanism between the ISPs' handling
>>>>> of
>>>>> datagrams and the end-user.   The users' experience is affected by
>>>>> how
>>>>> all of that mechanism interacts as underlying network behavior
>>>>> changes.  When a TCP running in some host decides it needs to
>>>>> retransmit, or an interactive audio/video session discards datagrams
>>>>>
>>>>> because they arrive too late to be useful, the user sees
>>>>> unacceptable
>>>>> performance even though the network operators may think everything
>>>>> is
>>>>> running fine.   Measurements from the end-users' perspective might
>>>>> indicate performance is quite different from what measurements at
>>>>> the
>>>>> ISP level suggest.
>>>>>
>>>>> Gamers are especially sensitive to variance, but it will also apply
>>>>> to
>>>>> interactive uses such as might occur in telemedicine or remote
>>>>> operations.  A few years ago I helped a friend do some tests for a
>>>>> gaming situation and we discovered that the average latency was
>>>>> reasonably low, but occasionally, perhaps a few times per hour,
>>>>> latency would increase to 10s of seconds.
>>>>>
>>>>> In a game, that often means the player loses.  In a remote surgery
>>>>> it
>>>>> may mean horrendous outcomes.  As more functionality is performed
>>>>> "in
>>>>> the cloud" such situations will become increasingly common.
>>>>>
>>>>> Jack Haverty
>>>>>
>>>>> On 2/26/24 12:02, rjmcmahon via Nnagain wrote:
>>>>>
>>>>> Thanks for sharing this. I'm trying to find out what are the key
>>>>> metrics that will be used for this monitoring. I want to make sure
>>>>> iperf 2 can cover the technical, traffic related ones that make
>>>>> sense to a skilled network operator, including a WiFi BSS manager. I
>>>>>
>>>>> didn't read all 327 pages though, from what I did read, I didn't see
>>>>>
>>>>> anything obvious. I assume these types of KPIs may be in reference
>>>>> docs or something.
>>>>>
>>>>> Thanks in advance for any help on this.
>>>>> Bob
>>>>>
>>>>> And...
>>>>>
>>>>> Our bufferbloat.net submittal was cited multiple times! Thank you
>>>>> all
>>>>> for participating in that process!
>>>>>
>>>>> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf  
>>>>>
>>>>> It is a long read, and does still start off on the wrong feet
>>>>> (IMHO),
>>>>> in particular not understanding the difference between idle and
>>>>> working latency.
>>>>>
>>>>> It is my hope that by widening awareness of more of the real
>>>>> problems
>>>>> with latency under load to policymakers and other submitters
>>>>> downstream from this new FCC document, and more reading what we
>>>>> had to
>>>>> say, that we will begin to make serious progress towards finally
>>>>> fixing bufferbloat in the USA.
>>>>>
>>>>> I do keep hoping that somewhere along the way in the future, the
>>>>> costs
>>>>> of IPv4 address exhaustion and the IPv6 transition, will also get
>>>>> raised to the national level. [1]
>>>>>
>>>>> We are still collecting signatures for what the bufferbloat
>>>>> project
>>>>> members wrote, and have 1200 bucks in the kitty for further
>>>>> articles
>>>>> and/or publicity. Thoughts appreciated as to where we can go next
>>>>> with
>>>>> shifting the national debate about bandwidth in a better
>>>>> direction!
>>>>> Next up would be trying to get a meeting, and to do an ex-parte
>>>>> filing, I think, and I wish we could do a live demonstration on
>>>>> television about it as good as feynman did here:
>>>>>
>>>>> https://www.youtube.com/watch?v=raMmRKGkGD4  
>>>>>
>>>>> Our original posting is here:
>>>> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit  
>>>>
>>>>
>>>>>> Larry's wonderful post is here:
>>>>>> https://circleid.com/posts/20231211-its-the-latency-fcc  
>>>>>>
>>>>>> [1] How can we get more talking about IPv4 and IPv6, too? Will we
>>>>>> have
>>>>>> to wait another year?
>>>>    
>>>> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/  
>>>>
>>>>
>>>>>> -- 
>>>>>> https://blog.cerowrt.org/post/2024_predictions/  
>>>>>> Dave Täht CSO, LibreQos
>>>>> _______________________________________________
>>>>> Nnagain mailing list
>>>>> Nnagain@lists.bufferbloat.net  
>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>   _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net  
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>> <OpenPGP_0x746CC322403B8E50.asc>_______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain


[-- Attachment #1.1.1.2: Type: text/html, Size: 15296 bytes --]

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-27 20:11           ` Jack Haverty
       [not found]             ` <223D4AB0-DBA9-4DF2-AEEE-876C7B994E89@gmx.de>
@ 2024-02-27 21:29             ` rjmcmahon
  1 sibling, 0 replies; 15+ messages in thread
From: rjmcmahon @ 2024-02-27 21:29 UTC (permalink / raw)
  To: Jack Haverty
  Cc: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

Hi Jack,

Thanks for this well written communique.

I think it's a nuanced point that measuring networks has at least two 
components: measuring things like packets, done by networking equipment 
using network tools, and, equally important, measuring at the 
application level, e.g. the reads and writes an application makes to 
the underlying operating system.

A shameless plug - and sorry for the indulgence - iperf 2 is an 
application tool and not so much a network tool. Its primary interface 
is BSD sockets. There are some network stats too, via things like the 
tcp_info struct, but fundamentally it's an application-level tool. We 
find this necessary for our WiFi testing because the socket interface 
is what directly correlates to user experience. The network and its 
packets are merely a means, not an end, at least not for most 
WiFi-connected devices. (Disclaimer: QUIC isn't considered here.)
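
To make the "application-level" point concrete, here is a minimal sketch
of measuring latency at the BSD-socket boundary, i.e. the time between an
application's write and the matching read. It is plain Python rather than
iperf 2, and the echo host and port below are placeholders, not a real
service:

# Sketch: latency as seen at the BSD-socket boundary, i.e. the elapsed time
# between an application's write and the read that completes the exchange.
# Assumes some echo service is reachable at HOST:PORT (placeholder values).
import socket
import statistics
import time

HOST, PORT = "192.0.2.10", 7      # placeholder echo server address
SAMPLES = 200
PAYLOAD = b"x" * 64

def app_level_rtt(host, port, n):
    rtts = []
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(n):
            t0 = time.monotonic()
            s.sendall(PAYLOAD)            # what the application writes ...
            buf = b""
            while len(buf) < len(PAYLOAD):
                buf += s.recv(4096)       # ... and what it waits to read back
            rtts.append((time.monotonic() - t0) * 1000.0)
    return rtts

r = sorted(app_level_rtt(HOST, PORT, SAMPLES))
print(f"min {r[0]:.2f} ms  median {statistics.median(r):.2f} ms  "
      f"p99 {r[99 * len(r) // 100 - 1]:.2f} ms")   # nearest-rank p99

The distribution of those numbers, not per-packet counters inside the
network, is what tracks the user experience.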

Bob
> Hi Bob,
> 
> Measuring and monitoring WiFi behavior isn't necessary or sufficient.
> Same with Starlink or whatever else comes along in the future.
> 
> The architecture of the Internet places different mechanisms, that in
> past times were contained in the switching equipment, now at many
> different places along a data path.  Much of the mechanism is even in
> the users' devices themselves, which make all sorts of decisions about
> datagram size, acknowledgements, retransmission, discarding
> duplicates, et al.  Those mechanisms interact with the decisions being
> made in network equipment such as switches.   The overall behavior
> dictates what the end users see as behavior and reliability of "the
> net" as they experience it.   The performance of the overall system is
> influenced by the interaction of its many pieces.
> 
> My point was that to manage network service ("network" being defined
> by the users), you have to monitor and measure performance as seen by
> the users, as close to the keyboard/mouse/screen/whatever as you can
> get.  That's why we decided to require a computer of some kind on each
> users' LAN environment, so we could experience and measure what they
> were likely experiencing, and use our measurements of switches,
> circuits, etc. to analyze and fix problems.   It was also helpful to
> have a database of the metrics captured during previous "normal"
> network activity, to use as comparisons.
> 
> As one example, I remember one event when a momentary glitch on a
> transpacific circuit would cause a flurry of activity as TCPs in the
> users' computers compensated, and would settle back to a steady state
> after a few minutes.  But users complained that their file transfers
> were now taking much longer than usual.  After our poking and
> prodding, using those remote computers as tools to see what the users
> were experiencing, we discovered that everything was operating as
> expected, except that every datagram was being transmitted twice and
> the duplicates discarded at the destination.   The TCP retransmission
> mechanisms had settled into a new stable state.
> 
> To the network switches, the datagrams all seemed OK, but there was
> significantly more traffic than usual.  No one was monitoring all
> those user devices out on the LANs so no one except the users noticed
> anything wrong.   Eventually another glitch on the circuit would cause
> another flurry of activity and perhaps settle back into the desired
> state where datagrams only got sent once.
> 
> We monitored whatever we could using SNMP to the routers and computers
> that had implemented such things, and we used our remote computers to
> also collect data from the users' perspective.   Often we could tell a
> LAN manager that some particular device at his/her site was having
> problems, by looking for behavior that differed from the "normal"
> historical behavior from a week or so earlier.
> 
> It would be interesting for example to collect metrics from switches
> about "buffer occupancy" and "transit time" (I don't recall if any MIB
> in SNMP had such metrics), and correlate that with TCP metrics such as
> retransmission behavior and duplicate detection.
> 
> Jack
> 
> On 2/27/24 09:48, rjmcmahon wrote:
> 
>> Hi Jack,
>> 
>> On LAN probes & monitors; I've been told that 90% of users' devices
>> are now wirelessly connected so the concept of connecting to a
>> common wave guide to measure or observe user information & flow
>> state isn't viable. A WiFi AP could provide its end state but
>> wireless channels' states are non-trivial and the APs prioritize
>> packet forwarding at L2 over state collection. I suspect a fully
>> capable AP that could record per quintuple and RF channels' states
>> would be too expensive. This is part of the reason why our industry
>> and policy makers need to define the key performance metrics well.
>> 
>> Bob
>> Yes, latency is complicated....  Back when I was involved in the
>> early Internet (early 1980s), we knew that latency was an issue
>> requiring much further research, but we figured that meanwhile
>> problems could be avoided by keeping traffic loads well below
>> capacity
>> while the appropriate algorithms could be discovered by the
>> engineers
>> (I was one...).  Forty years later, it seems like it's still a
>> research topic.
>> 
>> Years later in the 90s I was involved in operating an international
>> corporate intranet.  We quickly learned that keeping the human users
>> 
>> happy required looking at more than the routers and circuits between
>> 
>> them.  With much of the "reliability mechanisms" of TCP et al now
>> located in the users' computers rather than the network switches,
>> evaluating users' experience with "the net" required measurements
>> from
>> the users' perspective.
>> 
>> To do that, we created a policy whereby every LAN attached to the
>> long-haul backbone had to have a computer on that LAN to which we
>> had
>> remote access.   That enabled us to perform "ping" tests and also
>> collect data about TCP behavior (duplicates, retransmissions, etc.)
>> using SNMP, etherwatch, et al.   It was not unusual for the users'
>> data to indicate that "the net", as they saw it, was misbehaving
>> while
>> the network data, as seen by the operators, indicated that all the
>> routers and circuits were working just fine.
>> 
>> If the government regulators want to keep the users happy, IMHO they
>> 
>> need to understand this.
>> 
>> Jack Haverty
>> 
>> On 2/26/24 16:25, rjmcmahon wrote:
>> 
>> On top of all that, the latency responses tend to be non-parametric
>> and may need full pdfs/cdfs along with non-parametric statistical
>> process controls. Attached is an example from many years ago which
>> was a firmware bug that sometimes delayed packet processing,
>> creating a second mode in the pdf,
>> 
>> Engineers and their algorithms can be this way it seems.
>> 
>> Bob
>> I didn't study the whole report, but I didn't notice any metrics
>> associated with *variance* of latency or bandwidth.  It's common for
>> 
>> 
>> vendors to play games ("Lies, damn lies, and statistics!") to make
>> their metrics look good.   A metric of latency that says something
>> like "99% less than N milliseconds" doesn't necessarily translate
>> into
>> an acceptable user performance.
>> 
>> It's also important to look at the specific techniques used for
>> taking
>> measurements.  For example, if a measurement is performed every
>> fifteen minutes, extrapolating the metric as representative of all
>> the
>> time between measurements can also lead to a metric judgement which
>> doesn't reflect the reality of what the user actually experiences.
>> 
>> In addition, there's a lot of mechanism between the ISPs' handling
>> of
>> datagrams and the end-user.   The users' experience is affected by
>> how
>> all of that mechanism interacts as underlying network behavior
>> changes.  When a TCP running in some host decides it needs to
>> retransmit, or an interactive audio/video session discards datagrams
>> 
>> 
>> because they arrive too late to be useful, the user sees
>> unacceptable
>> performance even though the network operators may think everything
>> is
>> running fine.   Measurements from the end-users' perspective might
>> indicate performance is quite different from what measurements at
>> the
>> ISP level suggest.
>> 
>> Gamers are especially sensitive to variance, but it will also apply
>> to
>> interactive uses such as might occur in telemedicine or remote
>> operations.  A few years ago I helped a friend do some tests for a
>> gaming situation and we discovered that the average latency was
>> reasonably low, but occasionally, perhaps a few times per hour,
>> latency would increase to 10s of seconds.
>> 
>> In a game, that often means the player loses.  In a remote surgery
>> it
>> may mean horrendous outcomes.  As more functionality is performed
>> "in
>> the cloud" such situations will become increasingly common.
>> 
>> Jack Haverty
>> 
>> On 2/26/24 12:02, rjmcmahon via Nnagain wrote:
>> 
>> Thanks for sharing this. I'm trying to find out what are the key
>> metrics that will be used for this monitoring. I want to make sure
>> iperf 2 can cover the technical, traffic related ones that make
>> sense to a skilled network operator, including a WiFi BSS manager. I
>> 
>> 
>> didn't read all 327 pages though, from what I did read, I didn't see
>> 
>> 
>> anything obvious. I assume these types of KPIs may be in reference
>> docs or something.
>> 
>> Thanks in advance for any help on this.
>> Bob
>> 
>> And...
>> 
>> Our bufferbloat.net submittal was cited multiple times! Thank you
>> all
>> for participating in that process!
>> 
>> https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
>> 
>> It is a long read, and does still start off on the wrong feet
>> (IMHO),
>> in particular not understanding the difference between idle and
>> working latency.
>> 
>> It is my hope that by widening awareness of more of the real
>> problems
>> with latency under load to policymakers and other submitters
>> downstream from this new FCC document, and more reading what we
>> had to
>> say, that we will begin to make serious progress towards finally
>> fixing bufferbloat in the USA.
>> 
>> I do keep hoping that somewhere along the way in the future, the
>> costs
>> of IPv4 address exhaustion and the IPv6 transition, will also get
>> raised to the national level. [1]
>> 
>> We are still collecting signatures for what the bufferbloat
>> project
>> members wrote, and have 1200 bucks in the kitty for further
>> articles
>> and/or publicity. Thoughts appreciated as to where we can go next
>> with
>> shifting the national debate about bandwidth in a better
>> direction!
>> Next up would be trying to get a meeting, and to do an ex-parte
>> filing, I think, and I wish we could do a live demonstration on
>> television about it as good as feynman did here:
>> 
>> https://www.youtube.com/watch?v=raMmRKGkGD4
>> 
>> Our original posting is here:
>> 
>> 
> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit
>> 
>> 
>> Larry's wonderful post is here:
>> https://circleid.com/posts/20231211-its-the-latency-fcc
>> 
>> [1] How can we get more talking about IPv4 and IPv6, too? Will we
>> have
>> to wait another year?
> 
> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/
> 
> 
>>> --
>>> https://blog.cerowrt.org/post/2024_predictions/
>>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>   _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] [M-Lab-Discuss] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-26 20:02 ` [NNagain] [M-Lab-Discuss] " rjmcmahon
  2024-02-26 20:59   ` Jack Haverty
@ 2024-02-28 19:11   ` Fenwick Mckelvey
  1 sibling, 0 replies; 15+ messages in thread
From: Fenwick Mckelvey @ 2024-02-28 19:11 UTC (permalink / raw)
  To: rjmcmahon
  Cc: Dave Taht,
	Network Neutrality is back! Let´s make the technical
	aspects heard this time!,
	Dave Taht via Starlink, Rpm, discuss,
	National Broadband Mapping Coalition, Reza Rajabiun

[-- Attachment #1: Type: text/plain, Size: 3822 bytes --]

Hello from Canada,
I noticed some discussion about the FCC and latency again (here and on Hacker
News: https://news.ycombinator.com/item?id=39533800). A few years ago, Reza
and I put considerable work in at our national regulator, the CRTC,
establishing a latency and packet loss threshold for minimum-service
broadband. We used M-Lab data to do so, and I have always hoped to see more
work on latency as a measure, especially because you can calculate the
minimum theoretical latency from an off-net IXP to a home.
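
As a back-of-the-envelope illustration of that floor (the route length and
per-hop allowance below are invented inputs, not CRTC figures):

# Back-of-the-envelope minimum RTT from an off-net IXP to a home.
# Light in fibre propagates at roughly 2/3 c, about 200 km per millisecond,
# so the route length sets a hard floor that capacity upgrades cannot remove;
# queueing and access-technology delay only ever add to it.
FIBRE_KM_PER_MS = 200.0

def min_rtt_ms(route_km, hops=10, per_hop_ms=0.05):
    propagation = 2.0 * route_km / FIBRE_KM_PER_MS   # out and back
    forwarding = 2.0 * hops * per_hop_ms             # small per-hop allowance
    return propagation + forwarding

# Example: a 600 km fibre route with ten forwarding hops each way.
print(f"theoretical floor: {min_rtt_ms(600):.1f} ms")   # about 7 ms

Comparing measured idle latency against a floor like that is a simple way
to separate geography from queueing delay and routing detours.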

You can see some of our work here:
https://www.tandfonline.com/doi/pdf/10.1080/01972243.2019.1574533 &
https://crtc.gc.ca/public/cisc/nt/NTRE061.pdf

The final decision: https://crtc.gc.ca/eng/archive/2020/2020-408.htm

Happy to offer any advice here and share some experiences if that helps.

Be good,
Fenwick

On Tue, 27 Feb 2024 at 13:32, 'rjmcmahon' via discuss <
discuss@measurementlab.net> wrote:

> Thanks for sharing this. I'm trying to find out what are the key metrics
> that will be used for this monitoring. I want to make sure iperf 2 can
> cover the technical, traffic related ones that make sense to a skilled
> network operator, including a WiFi BSS manager. I didn't read all 327
> pages though, from what I did read, I didn't see anything obvious. I
> assume these types of KPIs may be in reference docs or something.
>
> Thanks in advance for any help on this.
> Bob
> > And...
> >
> > Our bufferbloat.net submittal was cited multiple times! Thank you all
> > for participating in that process!
> >
> > https://docs.fcc.gov/public/attachments/DOC-400675A1.pdf
> >
> > It is a long read, and does still start off on the wrong feet (IMHO),
> > in particular not understanding the difference between idle and
> > working latency.
> >
> > It is my hope that by widening awareness of more of the real problems
> > with latency under load to policymakers and other submitters
> > downstream from this new FCC document, and more reading what we had to
> > say, that we will begin to make serious progress towards finally
> > fixing bufferbloat in the USA.
> >
> > I do keep hoping that somewhere along the way in the future, the costs
> > of IPv4 address exhaustion and the IPv6 transition, will also get
> > raised to the national level. [1]
> >
> > We are still collecting signatures for what the bufferbloat project
> > members wrote, and have 1200 bucks in the kitty for further articles
> > and/or publicity. Thoughts appreciated as to where we can go next with
> > shifting the national debate about bandwidth in a better direction!
> > Next up would be trying to get a meeting, and to do an ex-parte
> > filing, I think, and I wish we could do a live demonstration on
> > television about it as good as feynman did here:
> >
> > https://www.youtube.com/watch?v=raMmRKGkGD4
> >
> > Our original posting is here:
> >
> https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit
> >
> > Larry's wonderful post is here:
> > https://circleid.com/posts/20231211-its-the-latency-fcc
> >
> > [1] How can we get more talking about IPv4 and IPv6, too? Will we have
> > to wait another year?
> >
> >
> https://hackaday.com/2024/02/14/floss-weekly-episode-769-10-more-internet/
> >
> > --
> > https://blog.cerowrt.org/post/2024_predictions/
> > Dave Täht CSO, LibreQos
>
> --
> You received this message because you are subscribed to the Google Groups
> "discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to discuss+unsubscribe@measurementlab.net.
> To view this discussion on the web visit
> https://groups.google.com/a/measurementlab.net/d/msgid/discuss/3d808d9df1a6929ecfba495e75b4fc1b%40rjmcmahon.com
> .
>


-- 
Be good,
Fen

[-- Attachment #2: Type: text/html, Size: 5871 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-27 23:17   ` Jack Haverty
@ 2024-02-27 23:41     ` Jeremy Austin
  0 siblings, 0 replies; 15+ messages in thread
From: Jeremy Austin @ 2024-02-27 23:41 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 4052 bytes --]

On Tue, Feb 27, 2024 at 2:17 PM Jack Haverty via Nnagain <
nnagain@lists.bufferbloat.net> wrote:

> Has any ISP or regulatory body set a standard for latency necessary to
> support interactive uses?
>

I think we can safely say no. Unfortunately, no.

>
> It seems to me that a 2+ second delay is way too high, and even if it
> happens only occasionally, users may set up their systems to assume it may
> happen and compensate for it by ading their own buffering at the endpoints
> and thereby reduce embarassing glitches.  Maybe this explains those long
> awkward pauses you commonly see when TV interviewers are trying to have a
> conversation with someone at a remote site via Zoom, Skype, et al.
>

Two-second delays happen more often than you'd think on "untreated"
connections. I have seen fiber connections with 15 seconds of induced
latency (latency due to buffering, not end-to-end distance), and cable
connections with 5 or 6 seconds of latency under *normal load*. All of
this was within the last year.
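
Those numbers are easy to sanity-check, since induced latency is just the
standing queue divided by the drain rate. A quick sketch with invented but
plausible buffer sizes (not measurements of any particular vendor):

# Induced (queueing) latency = bytes sitting in the buffer / link drain rate.
# Buffer sizes and rates below are illustrative, not vendor measurements.
def induced_latency_s(buffer_bytes, link_bits_per_s):
    return buffer_bytes * 8 / link_bits_per_s

examples = [
    ("2 MB buffer draining at 10 Mb/s", 2e6, 10e6),
    ("48 MB buffer draining at 25 Mb/s", 48e6, 25e6),
]
for label, buf, rate in examples:
    print(f"{label}: {induced_latency_s(buf, rate):.1f} s of added delay when full")

The arithmetic is the same at every hop where a fast link feeds a slower
one.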


>
> In the early Internet days we assumed there would be a need for multiple
> types of service, such as a "bulk transfer" and "interactive", similar to
> analogs in the non-electronic transport systems (e.g., Air Freight versus
> Container Ship).   The "Type Of Service" field was put in the IP header as
> a placeholder for such mechanisms to be added to networks in the future.
>

In many other cases, high latency is a result of buffering at *every*
change in link speed. As Preseem and LibreQoS have validated, even dynamic
home and last-mile RF environments benefit significantly from flow
isolation, better drops and packet pacing, no matter the ToS field.
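
For readers who have not seen it spelled out, the flow-isolation idea can
be sketched in a few lines: hash each 5-tuple to its own queue and serve
the queues round-robin, so a bulk flow can no longer starve a sparse one.
This toy is only the core intuition, not fq_codel or CAKE (no DRR quantum
accounting, no CoDel drop schedule, no sparse-flow boost):

# Toy flow isolation: hash each flow's 5-tuple to its own FIFO and serve the
# FIFOs round-robin, so one bulk flow can no longer starve a sparse one.
from collections import OrderedDict, deque

class ToyFQ:
    def __init__(self):
        self.queues = OrderedDict()          # flow hash -> deque of packets

    def enqueue(self, five_tuple, packet):
        self.queues.setdefault(hash(five_tuple), deque()).append(packet)

    def dequeue(self):
        # Pop one packet from the first non-empty flow queue, then rotate
        # that flow to the back so every flow gets a turn.
        while self.queues:
            key, q = next(iter(self.queues.items()))
            self.queues.move_to_end(key)
            if q:
                return q.popleft()
            del self.queues[key]             # drop queues that have drained
        return None

fq = ToyFQ()
fq.enqueue(("10.0.0.2", 443, "10.0.0.9", 50001, "tcp"), "bulk-1")
fq.enqueue(("10.0.0.2", 443, "10.0.0.9", 50001, "tcp"), "bulk-2")
fq.enqueue(("10.0.0.3", 3478, "10.0.0.9", 40000, "udp"), "voip-1")
print([fq.dequeue() for _ in range(3)])      # ['bulk-1', 'voip-1', 'bulk-2']

Real implementations add a deficit counter and an AQM inside each queue,
but the isolation property alone is what keeps a backup from wrecking a
video call.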



>
> Of course if network capacity is truly unlimited there would be no need
> now to provide different types of service.   But these latency numbers
> suggest that users' traffic demands are still sometimes exceeding network
> capacities.  Some of the network traffic is associated with interactive
> uses, and other traffic is doing tasks such as backups to some cloud.
> Treating them uniformly seems like bad engineering as well as bad policy.
>

It's not quite as simple as "traffic demands… exceeding network capacities"
when you take into account dynamic link rates. Packets are either on the
wire or they are not, and "capacity" is an emergent phenomenon rather than
guaranteed end-to-end. Microbursts guarantee that packet rates will
occasionally exceed link rates on even a high-capacity end-user connection
fed by even faster core and interchange links. Treating types of traffic
non-uniformly (when obeying other, voluntary, traffic- or offnet-generated
signals) is susceptible to the tragedy of the commons. So far we have
decent compromises, such as treating traffic according to its behavior. If
it walks and quacks like a duck…


>
> I'm still not sure whether or not "network neutrality" regulations would
> preclude offering different types of service, if the technical mechanisms
> even implement such functionality.
>
>
Theoretically L4S could be a "paid add-on", so to speak, but at this point,
the overall market is the primary differentiator — as an end user, I will
happily spend my dollars with an ISP that serves smaller plans that have
better-managed latency-under-load ("Working Latency" per Stuart Cheshire, I
believe), than one that gives me gigabit or multi-gigabit that falls on its
face when under load. It will take a long time before everyone has an
option to choose — and to your original question, better standardized
metrics are needed.

Our customers so far have not pressed us to productize
good-latency-as-a-service; they regard it as essential to customer
satisfaction and retention.

Jack
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>

These are largely my opinions, not necessarily my employer's.
-- 
--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks

[-- Attachment #2: Type: text/html, Size: 5877 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-27 22:00 ` rjmcmahon
@ 2024-02-27 23:17   ` Jack Haverty
  2024-02-27 23:41     ` Jeremy Austin
  0 siblings, 1 reply; 15+ messages in thread
From: Jack Haverty @ 2024-02-27 23:17 UTC (permalink / raw)
  To: nnagain


[-- Attachment #1.1.1.1: Type: text/plain, Size: 3327 bytes --]

Has any ISP or regulatory body set a standard for latency necessary to 
support interactive uses?

It seems to me that a 2+ second delay is way too high, and even if it 
happens only occasionally, users may set up their systems to assume it 
may happen and compensate for it by adding their own buffering at the 
endpoints and thereby reduce embarrassing glitches. Maybe this explains 
those long awkward pauses you commonly see when TV interviewers are 
trying to have a conversation with someone at a remote site via Zoom, 
Skype, et al.
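
That compensation is essentially de-jitter buffering: size the playout
buffer for the worst delay you expect, and every session then pays that
delay up front. A small sketch of the arithmetic, with invented sample
values:

# De-jitter buffering: an endpoint that wants glitch-free playout sizes its
# buffer for a high percentile of observed delay, which means every packet
# is held long enough for the slow ones to arrive. Delay samples invented.
delays_ms = [38, 41, 40, 39, 42, 45, 40, 39, 2100, 41, 40, 43, 39, 40]

floor = min(delays_ms)
worst_expected = sorted(delays_ms)[int(0.95 * len(delays_ms))]  # naive high percentile
buffer_ms = worst_expected - floor
print(f"playout buffer to ride out the spike: {buffer_ms} ms added to every session")

One rare two-second excursion is enough to make everyone sit behind a
two-second playout buffer, which is exactly the long-awkward-pause effect.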

In the early Internet days we assumed there would be a need for multiple 
types of service, such as a "bulk transfer" and "interactive", similar 
to analogs in the non-electronic transport systems (e.g., Air Freight 
versus Container Ship).   The "Type Of Service" field was put in the IP 
header as a placeholder for such mechanisms to be added to networks in 
the future.
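
For what it's worth, the field is still there and applications can still
set it, whether or not any network along the path honours it. A minimal
sketch on a platform that exposes IP_TOS (the destination address below is
a placeholder):

# Mark a socket's traffic with a DSCP codepoint via the old ToS byte.
# DSCP EF (46) is the conventional "expedited forwarding" marking.
import socket

DSCP_EF = 46
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The 6-bit DSCP value occupies the upper bits of the old ToS byte.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
s.sendto(b"marked probe", ("192.0.2.10", 9))   # placeholder host, discard port
s.close()

Whether anything along the path still looks at those bits is, of course,
the policy question.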

Of course if network capacity is truly unlimited there would be no need 
now to provide different types of service.   But these latency numbers 
suggest that users' traffic demands are still sometimes exceeding 
network capacities.  Some of the network traffic is associated with 
interactive uses, and other traffic is doing tasks such as backups to 
some cloud.  Treating them uniformly seems like bad engineering as well 
as bad policy.

I'm still not sure whether or not "network neutrality" regulations would 
preclude offering different types of service, if the technical 
mechanisms even implement such functionality.

Jack

On 2/27/24 14:00, rjmcmahon via Nnagain wrote:
>> Interesting blog post on the latency part at
>> https://broadbandbreakfast.com/untitled-12/.
>>
>> Looking at the FCC draft report, page 73, Figure 24 – I find it sort
>> of ridiculous that the table describes things as “Low Latency
>> Service” available or not. That is because they seem to really
>> misunderstand the notion of working latency. The table instead seems
>> to classify any network with idle latency <100 ms to be low latency
>> – which as Dave and others close to bufferbloat know is silly. Lots
>> of these networks that are in this report classified as low latency
>> would in fact have working latencies of 100s to 1,000s of milliseconds
>> – far from low latency.
>>
>> I looked at FCC MBA platform data from the last 6 months and here are
>> the latency under load stats, 99th percentile for a selection of ten
>> ISPs:
>> ISP A  2470 ms
>>
>> ISP B  2296 ms
>>
>> ISP C 2281 ms
>>
>> ISP D 2203 ms
>>
>> ISP E  2070 ms
>>
>> ISP F  1716 ms
>>
>> ISP G 1468 ms
>>
>> ISP H 965 ms
>>
>> ISP I   909 ms
>>
>> ISP J   896 ms
>>
>> Jason
>
> It does seem like there is a lot of confusion around idle latency vs 
> working latency. Another common error is to treat a round trip time 
> as simply two equal "one way delays." OWD & RTT are different metrics 
> and both have utility. (All of this, including working loads, is supported in iperf 
> 2 - https://iperf2.sourceforge.io/iperf-manpage.html - so there is 
> free tooling out there that can help.)
>
> Bob
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain


[-- Attachment #1.1.1.2: Type: text/html, Size: 5035 bytes --]

[-- Attachment #1.1.2: OpenPGP public key --]
[-- Type: application/pgp-keys, Size: 2469 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
  2024-02-27 21:06 [NNagain] " Livingood, Jason
@ 2024-02-27 22:00 ` rjmcmahon
  2024-02-27 23:17   ` Jack Haverty
  0 siblings, 1 reply; 15+ messages in thread
From: rjmcmahon @ 2024-02-27 22:00 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

> Interesting blog post on the latency part at
> https://broadbandbreakfast.com/untitled-12/.
> 
> Looking at the FCC draft report, page 73, Figure 24 – I find it sort
> of ridiculous that the table describes things as “Low Latency
> Service” available or not. That is because they seem to really
> misunderstand the notion of working latency. The table instead seems
> to classify any network with idle latency <100 ms to be low latency
> – which as Dave and others close to bufferbloat know is silly. Lots
> of these networks that are in this report classified as low latency
> would in fact have working latencies of 100s to 1,000s of milliseconds
> – far from low latency.
> 
> I looked at FCC MBA platform data from the last 6 months and here are
> the latency under load stats, 99th percentile for a selection of ten
> ISPs:
> ISP A  2470 ms
> 
> ISP B  2296 ms
> 
> ISP C 2281 ms
> 
> ISP D 2203 ms
> 
> ISP E  2070 ms
> 
> ISP F  1716 ms
> 
> ISP G 1468 ms
> 
> ISP H 965 ms
> 
> ISP I   909 ms
> 
> ISP J   896 ms
> 
> Jason

It does seem like there is a lot of confusion around idle latency vs 
working latency. Another common error is to treat a round trip time as 
simply two equal "one way delays." OWD & RTT are different metrics and 
both have utility. (All of this, including working loads, is supported in 
iperf 2 - https://iperf2.sourceforge.io/iperf-manpage.html - so there is 
free tooling out there that can help.)
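
To make the idle-versus-working distinction concrete, a rough sketch of
the measurement is below. The echo and sink endpoints are placeholders,
and iperf 2 does all of this far more carefully; the point is only the
shape of the test: probe a quiet connection, then probe again while a
bulk transfer keeps the bottleneck queue full.

# Sketch: compare idle latency with latency under working load on the same
# path. ECHO and SINK below are hypothetical placeholder endpoints.
import socket
import statistics
import threading
import time

ECHO = ("192.0.2.10", 7)      # placeholder TCP echo service
SINK = ("192.0.2.10", 5001)   # placeholder bulk-traffic sink

def rtt_samples(n=50):
    out = []
    with socket.create_connection(ECHO, timeout=5.0) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(n):
            t0 = time.monotonic()
            s.sendall(b"ping")
            s.recv(64)
            out.append((time.monotonic() - t0) * 1000.0)
            time.sleep(0.05)
    return out

def bulk_load(stop, blob=b"\0" * 65536):
    # Keep the queue at the bottleneck as full as the stack allows.
    with socket.create_connection(SINK, timeout=5.0) as s:
        while not stop.is_set():
            s.sendall(blob)

idle = rtt_samples()
stop = threading.Event()
threading.Thread(target=bulk_load, args=(stop,), daemon=True).start()
time.sleep(1.0)                    # give the queue time to build
working = rtt_samples()
stop.set()

print(f"idle    median {statistics.median(idle):.1f} ms")
print(f"working median {statistics.median(working):.1f} ms")
# The gap between those two numbers is the working latency a user feels.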

Bob

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out!
@ 2024-02-27 21:06 Livingood, Jason
  2024-02-27 22:00 ` rjmcmahon
  0 siblings, 1 reply; 15+ messages in thread
From: Livingood, Jason @ 2024-02-27 21:06 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 978 bytes --]

Interesting blog post on the latency part at https://broadbandbreakfast.com/untitled-12/.

Looking at the FCC draft report, page 73, Figure 24 – I find it sort of ridiculous that the table describes things as “Low Latency Service” available or not. That is because they seem to really misunderstand the notion of working latency. The table instead seems to classify any network with idle latency <100 ms to be low latency – which as Dave and others close to bufferbloat know is silly. Lots of these networks that are in this report classified as low latency would in fact have working latencies of 100s to 1,000s of milliseconds – far from low latency.

I looked at FCC MBA platform data from the last 6 months and here are the latency under load stats, 99th percentile for a selection of ten ISPs:
ISP A  2470 ms
ISP B  2296 ms
ISP C 2281 ms
ISP D 2203 ms
ISP E  2070 ms
ISP F  1716 ms
ISP G 1468 ms
ISP H 965 ms
ISP I   909 ms
ISP J   896 ms

Jason

[-- Attachment #2: Type: text/html, Size: 3632 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2024-02-28 19:12 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-26 15:06 [NNagain] The FCC 2024 Section 706 Report, GN Docket No. 22-270 is out! Dave Taht
2024-02-26 19:24 ` Jack Haverty
2024-02-26 20:02 ` [NNagain] [M-Lab-Discuss] " rjmcmahon
2024-02-26 20:59   ` Jack Haverty
2024-02-27  0:25     ` rjmcmahon
2024-02-27  2:06       ` Jack Haverty
2024-02-27 17:48         ` rjmcmahon
2024-02-27 20:11           ` Jack Haverty
     [not found]             ` <223D4AB0-DBA9-4DF2-AEEE-876C7B994E89@gmx.de>
2024-02-27 21:16               ` Jack Haverty
2024-02-27 21:29             ` rjmcmahon
2024-02-28 19:11   ` Fenwick Mckelvey
2024-02-27 21:06 [NNagain] " Livingood, Jason
2024-02-27 22:00 ` rjmcmahon
2024-02-27 23:17   ` Jack Haverty
2024-02-27 23:41     ` Jeremy Austin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox