From: Jim Forster <jim@connectivitycap.com>
To: Jonathan Brewer <jon@brewer.nz>
Cc: Nicholas Weaver <nweaver@icsi.berkeley.edu>,
libreqos <libreqos@lists.bufferbloat.net>,
Dave Taht <dave.taht@gmail.com>
Subject: Re: [LibreQoS] The New Zealand broadband report
Date: Mon, 20 Nov 2023 02:34:20 +0200
Message-ID: <77361236-700F-462F-A8AA-74F592D3DE84@connectivitycap.com>
In-Reply-To: <CAA93jw4oT-oZYXHCCoTmPm7WuD2T6rXMXo-F7LgL-72g2G9Vsg@mail.gmail.com>
Jon,
I couldn’t resist asking you for comment. :-)
Jim
> On Nov 9, 2023, at 4:56 PM, Dave Taht via LibreQoS <libreqos@lists.bufferbloat.net> wrote:
>
> https://comcom.govt.nz/__data/assets/pdf_file/0025/329515/MBNZ-Winter-Report-28-September-2023.pdf
>
> While this is evolving into one of the best government reports I know
> of, leveraging SamKnows rather than Ookla, it has a few glaring
> omissions that I wish someone would address.
>
> It does not measure web page load time. This is probably roughly
> constant across all access technologies, limited mainly by the latency
> to the web serving site(s).
>
> It does not break out non-LTE, non-5G fixed wireless. I am in touch
> with multiple companies within NZ that use unlicensed or licensed
> spectrum, such as uber.nz, that are quite proud of how competitive
> they are with fiber. Much of the 5G problem in this report's case is
> actually backhaul routing. Rural links can also have more latency
> simply because they are farther from the test servers in the first
> place; it would be good if future reports did a bit more geo-location
> to determine how much latency was unavoidable due to the laws of
> physics.
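>
> As a rough illustration of that physics floor (the 1,000 km path below
> is an assumed example, not a measured one): light in fibre travels at
> roughly two thirds of c, so propagation alone sets a hard lower bound
> on RTT.
>
>     # Minimum RTT from propagation delay alone; the path length is an
>     # assumed example, not measured data.
>     C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
>     FIBRE_FACTOR = 0.67          # light in glass travels at ~2/3 c
>
>     def min_rtt_ms(path_km: float) -> float:
>         """Lower bound on round-trip time over a fibre path, in ms."""
>         one_way_s = path_km / (C_VACUUM_KM_S * FIBRE_FACTOR)
>         return 2 * one_way_s * 1000
>
>     print(f"{min_rtt_ms(1000):.1f} ms")  # ~10 ms floor for 1,000 km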
>
> My second biggest kvetch is about figure 16. The differences in
> latency under load are *directly* correlated with a fixed, overlarge
> buffer size across all these technologies running at different
> bandwidths. More speed, with the same buffering, means less delay. The
> NetAlyzer research showed this back in 2011, so if they re-plotted
> this data in the way described below they would reach the same result.
> Sadly the NetAlyzer project died for lack of funding, and the closest
> I have to a historical record of the dslreports variant of the same
> test is the Internet Archive:
>
> https://web.archive.org/web/20230000000000*/http://www.dslreports.com/speedtest/results/bufferbloat?up=1
>
> To pick one of those datasets and try to explain it:
>
> https://web.archive.org/web/20180323183459/http://www.dslreports.com/speedtest/results/bufferbloat?up=1
>
> The big blue blobs were the default buffer sizes in cable in 2018 at
> those upload speeds. DSL was similar. Fiber historically had saner
> buffering values in the first place, but even there I am seeing bad
> clusters with 100+ ms of extra latency at 100 Mbit speeds.
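>
> The arithmetic behind "more speed, at the same buffering = less delay"
> is just buffer size divided by drain rate. A minimal sketch (the
> fixed 256 KB buffer is an assumed example, not an actual CPE value):
>
>     # Worst-case queueing delay added by a full, fixed-size FIFO.
>     # The 256 KB figure is an assumed example; real CPE values vary.
>     BUFFER_BYTES = 256 * 1024    # same buffer regardless of link speed
>
>     def drain_delay_ms(link_mbit: float) -> float:
>         """Time to drain a full buffer at the link rate, in ms."""
>         return BUFFER_BYTES * 8 / (link_mbit * 1e6) * 1000
>
>     for mbit in (5, 10, 50, 100):
>         print(f"{mbit:>3} Mbit/s -> {drain_delay_ms(mbit):4.0f} ms")
>     # ~419 ms at 5 Mbit/s, ~210 ms at 10, ~42 ms at 50, ~21 ms at 100.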
>
> dslreports has also been dying, so anything much past 2020 is suspect;
> even before then, the site was heavily used by people tuning their
> SQM/fq_codel/cake implementations, so it is not representative of the
> real world, which is worse. The test also cuts off at 4 seconds. This
> test, like most speedtests we have today, does not report tests that
> never complete, which is probably the most important indicator of
> genuine problems.
>
> My biggest kvetch (for decades now) is that none of the tests exercise
> upload + download + voip or videoconferencing simultaneously, only
> sequentially. This is the elephant in the room: the screenshare or
> upload moment when a home internet connection gets glitchy, your
> videoconference freezes or distorts, or your kids scream in
> frustration at missing their shot in their game. 1 second of induced
> latency on the upload link makes a web page like slashdot, normally
> taking 10s, take... wait for it... 4 minutes. This is very easily
> demonstrable to anyone who might disbelieve it.
>
> Despite my advocacy of fancy algorithms like SFQ, fq_codel or cake,
> merely adopting a correct FIFO buffer size for the configured
> bandwidth across the routers at the edge would help enormously,
> especially for uploads. We are talking about setting *one* number
> correctly for the configured bandwidth. We are not talking about
> recompiling firmware, either. Just one number, set right. I typically
> see 1256-packet buffers, where at 10 Mbit not much more than 50
> packets of buffering is needed. Ideally that gets set in bytes... or
> the FIFO gets replaced with, at the very least, SFQ, which has been in
> Linux since 2002.
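>
> For illustration only (the interface name and the 60 ms target below
> are assumptions of mine, not values from the report), the sizing rule
> and the resulting one-liners look roughly like this:
>
>     # Size a FIFO so that a full queue drains within a target delay
>     # at the configured rate. The 60 ms target is an assumed value.
>     MTU_BYTES = 1500
>     TARGET_DELAY_S = 0.060
>
>     def fifo_limit_packets(link_mbit: float) -> int:
>         """Packets of queue that drain within the target delay."""
>         bytes_per_target = link_mbit * 1e6 / 8 * TARGET_DELAY_S
>         return max(8, round(bytes_per_target / MTU_BYTES))
>
>     print(fifo_limit_packets(10))   # -> 50 packets at 10 Mbit/s
>     # Equivalent one-liners on a Linux CPE (eth0 is just an example):
>     #   tc qdisc replace dev eth0 root pfifo limit 50
>     #   tc qdisc replace dev eth0 root sfq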
>
>
> --
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos