[NNagain] The New Zealand broadband report

Dave Taht dave.taht at gmail.com
Thu Nov 9 09:56:21 EST 2023


https://comcom.govt.nz/__data/assets/pdf_file/0025/329515/MBNZ-Winter-Report-28-September-2023.pdf

While this is evolving to be one of the best government reports I
know of, leveraging samknows rather than ookla, it has a few glaring
omissions that I wish someone would address.

It does not measure web page load time. This is probably a relative
constant across all access technologies, limited mostly by the
latency (closeness) to the web serving site(s).
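
A quick back-of-the-envelope (my illustrative numbers, not the
report's) for why page load time is roughly constant across access
technologies once you are past a few tens of Mbits:

    # Rough page-load model: sequential round trips (DNS, TCP, TLS,
    # dependent fetches) plus raw transfer time. Illustrative numbers.
    PAGE_BYTES = 3e6        # ~3 MB of assets
    SEQ_ROUND_TRIPS = 40    # sequential RTTs before the page is usable
    RTT = 0.030             # 30 ms to the serving site

    for mbit in (25, 100, 1000):
        transfer = PAGE_BYTES * 8 / (mbit * 1e6)
        plt = SEQ_ROUND_TRIPS * RTT + transfer
        print(f"{mbit:4} Mbit: transfer {transfer:.2f}s, est. load {plt:.2f}s")
    # ~2.2s at 25 Mbit, ~1.4s at 100 Mbit, ~1.2s at 1000 Mbit:
    # the round trips, not the access speed, dominate.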

It does not break out non-LTE, non-5G fixed wireless. I am in touch
with multiple companies within NZ that use unlicensed or licensed
spectrum, such as uber.nz, that are quite proud of how competitive
they are with fiber. Much of the 5G problem in this report's case is
actually backhaul routing. Rural links can also have more latency
simply because they are farther from the central servers in the
first place; it would be good if future reports did a bit more
geo-location to determine how much latency was unavoidable due to
the laws of physics.

My second biggest kvetch is about figure 16. The differences in
latency under load are *directly* correlated with a fixed and
overlarge buffer size across all these technologies running at
different bandwidths. More speed, at the same buffering = less
delay. The NetAlyzer research showed this back in 2011 - so if they
re-plotted this data in the way described below, they would derive
the same result. Sadly the NetAlyzer project died due to lack of
funding, and the closest I can get to a historical record of the
dslreports variant of the same test is via the Internet Archive.
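
The arithmetic behind "more speed, same buffer = less delay" is just
queue depth divided by drain rate. A small python sketch (the 1 MB
buffer is an illustrative guess, not a value from figure 16):

    # Worst-case queueing delay of a drop-tail FIFO = bytes buffered / rate.
    BUFFER_BYTES = 1_000_000    # assume ~1 MB of fixed buffering

    for mbit in (10, 50, 100, 500, 1000):
        delay_ms = BUFFER_BYTES * 8 / (mbit * 1e6) * 1000
        print(f"{mbit:5} Mbit/s -> up to {delay_ms:6.1f} ms latency under load")
    # 10 Mbit: 800 ms, 100 Mbit: 80 ms, 1000 Mbit: 8 ms.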

https://web.archive.org/web/20230000000000*/http://www.dslreports.com/speedtest/results/bufferbloat?up=1

To pick one of those datasets and try to explain it -

https://web.archive.org/web/20180323183459/http://www.dslreports.com/speedtest/results/bufferbloat?up=1

The big blue blobs were the default buffer sizes in cable in 2018 at
those upload speeds. DSL was similar. Fiber historically had saner
values for buffering in the first place - but I am seeing bad
clusters of 100+ ms of extra latency at 100Mbit speeds there.
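
Working backwards from that plot, the implied standing queue (my
arithmetic, the same formula inverted) is:

    # bytes queued = extra delay * link rate
    extra_delay_s = 0.100      # the ~100 ms clusters in the plot
    link_bps = 100e6           # 100 Mbit/s
    queued = extra_delay_s * link_bps / 8
    print(f"~{queued/1e6:.2f} MB ({queued/1500:.0f} full-size packets) queued")
    # -> ~1.25 MB, roughly 833 packets of buffering on a "fast" link.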

dslreports has also been dying, so anything much past 2020 is
suspect; even before then, the site was heavily used by people
tuning their SQM/fq_codel/cake implementations, so it was not
representative of the real world, which is worse. The test also cuts
off at 4 seconds. Neither it nor most speedtests we have today
report tests that fail to complete - which is probably the most
important indicator of genuine problems.

My biggest kvetch (for decades now) is that none of the tests run up
+ down + voip or videoconferencing simultaneously, only
sequentially. This is the elephant in the room: the screenshare or
upload moment when a home internet connection gets glitchy, your
videoconference freezes or distorts, or your kids scream in
frustration at missing their shot in their game. 1 second of induced
latency on the upload link makes a web page like slashdot, normally
taking 10s, take... wait for it... 4 minutes. This is very easily
demonstrable to anyone who might disbelieve it.

Despite my advocacy of fancy algorithms like SFQ, fq_codel or cake,
the mere adoption, across the routers along the edge, of a correct
FIFO buffer size for the configured bandwidth would help enormously,
especially for uploads. We are talking about setting *one* number
here correctly for the configured bandwidth. We are not talking
about recompiling firmware, either. Just one number, set right. I
typically see 1256-packet buffers, where at 10Mbit not much more
than 50 packets are needed. Ideally that gets set in bytes... or the
FIFO is replaced with, at the very least, SFQ, which has been in
linux since 2002.
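
One way to compute that one number, sized so the FIFO can never hold
more than a target amount of delay (the 60 ms target and 1500-byte
MTU below are my assumptions, not anyone's standard):

    MTU = 1500
    TARGET_DELAY = 0.060   # seconds of queueing we will tolerate

    def fifo_limit(mbit):
        """Packet limit for a drop-tail FIFO at the configured rate."""
        bytes_at_target = mbit * 1e6 / 8 * TARGET_DELAY
        return max(10, round(bytes_at_target / MTU))  # keep a small floor

    for mbit in (10, 25, 100, 1000):
        print(f"{mbit:4} Mbit uplink -> ~{fifo_limit(mbit)} packet FIFO")
    # 10 Mbit -> 50 packets (vs the 1256 I typically see),
    # 100 Mbit -> 500, 1000 Mbit -> 5000.

On linux that one number is, for example, the txqueuelen you can set
with "ip link set dev <iface> txqueuelen 50" on a 10Mbit uplink.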


-- 
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos

