* [LibreQoS] The New Zealand broadband report
@ 2023-11-09 14:56 Dave Taht
  2023-11-20  0:34 ` Jim Forster
  0 siblings, 1 reply; 4+ messages in thread
From: Dave Taht @ 2023-11-09 14:56 UTC (permalink / raw)
  To: Network Neutrality is back! Let's make the technical
	aspects heard this time!
  Cc: Nicholas Weaver, libreqos

https://comcom.govt.nz/__data/assets/pdf_file/0025/329515/MBNZ-Winter-Report-28-September-2023.pdf

While this is evolving into one of the best government reports I know
of, leveraging SamKnows rather than Ookla, it has a few glaring
omissions that I wish someone would address.

It does not measure web page load time. This is probably a relative
constant across all access technologies, limited mainly by the latency
to the web serving site(s).

It does not break out non-LTE, non-5G fixed wireless. I am in touch
with multiple companies within NZ that use unlicensed or licensed
spectrum, such as uber.nz, that are quite proud of how competitive they
are with fiber. Much of the 5G problem in this report's case is
actually backhaul routing. Rural links can also have more latency
simply because of how far they are from the central servers in the
first place; it would be good if future reports did a bit more
geo-location to determine how much latency was unavoidable due to the
laws of physics.
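That physics floor is easy to estimate. A back-of-envelope sketch (the
path length and refractive index here are my own illustrative
assumptions, not figures from the report):

```python
# Sketch: the physics-imposed floor on RTT for a given fiber path.
# Light in fiber travels at roughly c divided by the refractive index
# of glass (~1.468), i.e. about two-thirds of its vacuum speed.

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468       # typical refractive index of optical fiber

def min_rtt_ms(path_km: float) -> float:
    """Minimum possible round-trip time over `path_km` of fiber, in ms."""
    one_way_s = path_km * FIBER_INDEX / C_VACUUM_KM_S
    return 2 * one_way_s * 1000

# A rural user 500 km (by fiber route, not as the crow flies) from the
# test server cannot see an RTT below ~5 ms, no matter the technology.
print(round(min_rtt_ms(500), 1))
```

Anything above that floor is routing, serialization, or queueing, and
those are the parts a report can hold ISPs accountable for.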

My second biggest kvetch is about figure 16. The differences in latency
under load are *directly* correlated with a fixed and overlarge buffer
size across all these technologies running at different bandwidths.
More speed, at the same buffering, equals less delay. The NetAlyzer
research showed this back in 2011, so if they re-plotted this data in
the way described below, they would derive the same result. Sadly the
NetAlyzer project died due to lack of funding, and the closest I have
to a historical record of the dslreports variant of the same test is
via the Internet Archive.
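The underlying relationship is just the drain time of the buffer at the
link rate. A sketch with an illustrative 256 KB buffer (my assumption,
not a measured CPE value):

```python
# Sketch of the NetAlyzer-style relationship: a FIFO of fixed byte size
# adds worst-case delay equal to its drain time at the link rate, so the
# same buffer hurts slow links far more than fast ones.

def induced_delay_ms(buffer_bytes: int, link_mbps: float) -> float:
    """Worst-case queueing delay of a full FIFO, in milliseconds."""
    return buffer_bytes * 8 / (link_mbps * 1e6) * 1000

BUFFER = 256 * 1024  # same (assumed) buffer across all link speeds

for mbps in (5, 20, 100):
    print(f"{mbps:>4} Mbit: {induced_delay_ms(BUFFER, mbps):7.1f} ms")
```

Re-plotting figure 16 as delay versus bandwidth would, I believe,
collapse the technologies onto curves of this shape.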

https://web.archive.org/web/20230000000000*/http://www.dslreports.com/speedtest/results/bufferbloat?up=1

To pick one of those datasets and try to explain it:

https://web.archive.org/web/20180323183459/http://www.dslreports.com/speedtest/results/bufferbloat?up=1

The big blue blobs were the default buffer sizes for cable in 2018 at
those upload speeds. DSL was similar. Fiber historically had saner
values for buffering in the first place, but I am seeing bad clusters
of 100+ ms of extra delay at 100Mbit speeds there.

dslreports has also been dying, so anything much past 2020 is suspect.
Even before then, the site was heavily used by people tuning their
SQM, fq_codel, or cake implementations, so it is not representative of
the real world, which is worse. The test also cuts off at 4 seconds.
This test, and most speedtests we have today, do not report tests that
fail to complete, which is probably the most important indicator of
genuine problems.

My biggest kvetch (for decades now) is that none of the tests test up
+ down + voip or videoconferencing simultaneously, just sequentially.
This is the elephant in the room: the screenshare or upload moment
when a home internet connection gets glitchy, your videoconference
freezes or distorts, or your kids scream in frustration at missing
their shot in their game. One second of induced latency on the upload
link makes a web page like slashdot, normally taking 10s, take... wait
for it... 4 minutes. This is very easily demonstrable to anyone who
might disbelieve it.
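The arithmetic behind that is simple. A back-of-envelope sketch (the
round-trip count and base RTT are my own illustrative assumptions: a
complex page can need on the order of a couple hundred sequential
round trips for DNS, TCP and TLS handshakes, and dependent fetches):

```python
# Sketch of why induced upload latency wrecks page load times: every
# dependent round trip pays the full queueing delay, and they add up.

SEQUENTIAL_RTTS = 240   # assumed dependent round trips for the page
BASE_RTT_S = 0.04       # assumed ~40 ms unloaded RTT

def load_time_s(induced_latency_s: float) -> float:
    """Total page load time when each RTT is inflated by queueing delay."""
    return SEQUENTIAL_RTTS * (BASE_RTT_S + induced_latency_s)

print(f"unloaded: {load_time_s(0.0):6.1f} s")
print(f"bloated:  {load_time_s(1.0):6.1f} s")
```

With these assumed numbers, a ~10 s page load balloons to roughly four
minutes under one second of induced latency.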

Despite my advocacy of fancy algorithms like SFQ, fq_codel, or cake,
the mere adoption, across the routers along the edge, of a correct
FIFO buffer size for the configured bandwidth would help enormously,
especially for uploads.  We are talking about setting *one* number
here correctly for the configured bandwidth. We are not talking about
recompiling firmware, either. Just one number, set right. I typically
see 1256-packet buffers where, at 10Mbit, not much more than 50 packet
buffers are needed. Ideally that gets set in bytes... or the FIFO is
replaced with, at the very least, SFQ, which has been in Linux since
2002.
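A sketch of that "one number" calculation: size the FIFO so it drains
within a delay budget at the configured rate. The 60 ms budget and
1514-byte MTU here are my assumptions, not a standard:

```python
# Sketch: size a FIFO to hold no more than a target worst-case delay
# at the configured link rate, counted in full-size packets.

TARGET_DELAY_S = 0.060   # assumed worst-case queueing delay budget
MTU_BYTES = 1514         # full-size Ethernet frame

def fifo_limit_packets(link_mbps: float) -> int:
    """FIFO depth, in full-size packets, that drains within the budget."""
    bytes_in_flight = link_mbps * 1e6 * TARGET_DELAY_S / 8
    return max(1, round(bytes_in_flight / MTU_BYTES))

for mbps in (10, 100, 1000):
    print(f"{mbps:>5} Mbit: {fifo_limit_packets(mbps)} packets")
```

At 10 Mbit this yields about 50 packets, roughly the figure above;
sizing in bytes rather than packets avoids penalizing small packets.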


-- 
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos


* Re: [LibreQoS] The New Zealand broadband report
  2023-11-09 14:56 [LibreQoS] The New Zealand broadband report Dave Taht
@ 2023-11-20  0:34 ` Jim Forster
  2023-11-20  8:14   ` Brewer, Jonathan
  0 siblings, 1 reply; 4+ messages in thread
From: Jim Forster @ 2023-11-20  0:34 UTC (permalink / raw)
  To: Jonathan Brewer; +Cc: Nicholas Weaver, libreqos, Dave Taht

Jon,

I couldn’t resist asking you for comment. :-)

Jim

> On Nov 9, 2023, at 4:56 PM, Dave Taht via LibreQoS <libreqos@lists.bufferbloat.net> wrote:
> 
> https://comcom.govt.nz/__data/assets/pdf_file/0025/329515/MBNZ-Winter-Report-28-September-2023.pdf



* Re: [LibreQoS] The New Zealand broadband report
  2023-11-20  0:34 ` Jim Forster
@ 2023-11-20  8:14   ` Brewer, Jonathan
  2023-11-20 11:12     ` Dave Taht
  0 siblings, 1 reply; 4+ messages in thread
From: Brewer, Jonathan @ 2023-11-20  8:14 UTC (permalink / raw)
  To: Jim Forster; +Cc: Nicholas Weaver, libreqos, Dave Taht



Hi All,

The WISP association in NZ claims coverage of 70k households, but this
includes many situations where WISPs purchase wholesale 4G, fibre, or even
VDSL from the larger operators and resell where it's available. Then there
are a few WISPs who operate 4G/LTE networks based on Telrad and Cambium
cnRanger too. Our Commerce Commission still doesn't understand how it all
works and continues to use "WiMAX" to describe the mainly 5 GHz Ubiquiti
and Cambium FWA networks used by Uber (mentioned below) and many others.

So the reason the report doesn't break out non-LTE, non-5G FWA is that it's
probably not more than a few tens of thousands of subscribers, and not
statistically significant. These providers do great work and more than 30
of them are customers of my radio engineering practice, but few have
significant numbers of non-LTE, non-5G FWA.

I've attached a different report from NZ's ComCom that might be of interest
as it looks at the industry makeup, not just performance.

Cheers,

Jon

On Mon, 20 Nov 2023 at 13:35, Jim Forster <jim@connectivitycap.com> wrote:

> Jon,
>
> I couldn’t resist asking you for comment. :-)
>
> Jim

-- 
Web: https://jon.brewer.nz/
Mobile +64 27 502 8230
DDI: +64 4 913 8123


[-- Attachment #2: 2022-Annual-Telecommunications-Monitoring-Report-15-June-2023.pdf --]
[-- Type: application/pdf, Size: 3967148 bytes --]


* Re: [LibreQoS] The New Zealand broadband report
  2023-11-20  8:14   ` Brewer, Jonathan
@ 2023-11-20 11:12     ` Dave Taht
  0 siblings, 0 replies; 4+ messages in thread
From: Dave Taht @ 2023-11-20 11:12 UTC (permalink / raw)
  To: Brewer, Jonathan; +Cc: Jim Forster, Nicholas Weaver, libreqos

My larger point is that the relationship between overbuffering and
latency under load is VERY statistically significant, and independent
of the underlying physical transport, and if it were called out more,
we would more rapidly see an improvement here, across all the
technologies you survey. As an example, in the USA both COX and
Comcast have deployed DOCSIS-PIE (RFC 8033) in the past few years,
with 3-4x improvements! From looking at your data, that seems not to
have happened yet there, and knowledge of the NetAlyzer work and all
the bufferbloat research that followed is still spreading all too
slowly.

The RFC 8290 technology the WISPs have been deploying is pretty great,
but of course I would say that, being one of the authors! I hope that
these algorithms become common across all network infrastructure
types some day.  Fiber benefits too!

Deploying it just involves rigorously setting a correct buffering
number on the CPE, or putting in a better algorithm; nearly no new
physical infrastructure is needed.

I liked the dslreports style of report, which tried to call out
winning and losing ISPs on its metrics. It would spur competition. Thx
for engaging a bit! I otherwise like this reporting series a lot!


On Mon, Nov 20, 2023 at 3:14 AM Brewer, Jonathan <jon@brewer.nz> wrote:
>
> Hi All,
>
> The WISP association in NZ claims coverage of 70k households, but this includes many situations where WISPs purchase wholesale 4G, fibre, or even VDSL from the larger operators and resell where it's available. Then there are a few WISPs who operate 4G/LTE networks based on Telrad and Cambium cnRanger too. Our Commerce Commission still doesn't understand how it all works and continues to use "WiMAX" to describe the mainly 5 GHz Ubiquiti and Cambium FWA networks used by Uber (mentioned below) and many others.
>
> So the reason the report doesn't break out non-lte, non-5g FWA is that it's probably not more than a few tens of thousands of subscribers, and not statistically significant. These providers do great work and more than 30 of them are customers of my radio engineering practice, but few have significant numbers of non-LTE, non-5G FWA.
>
> I've attached a different report from NZ's ComCom that might be of interest as it looks at the industry makeup, not just performance.
>
> Cheers,
>
> Jon


-- 
:( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
Dave Täht CSO, LibreQos

