Regarding the ping during the test: although it isn't ideal, it appears
to be enough to identify problems and to give a consistent grade, to
within one grade level anyway.
An "A" or "A+" is not going to drop to a "C", and a "D" or "F" is never
going to get a "B", no matter how many times the test is re-run.
(Regarding the transition between idle and downloading: the
downloads are phased in rather than all started at once, so any
conclusion about transition response has to take that into account as well.)
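
As a rough illustration of that phase-in (the stream count, URL, and gap below are assumptions made for the sketch, not the test's actual values), staggering the download streams could look something like this in the browser:

```ts
// Illustrative only: start download streams one at a time instead of all at
// once, so the idle-to-loaded transition in the latency trace is gradual by design.
async function startStaggeredDownloads(streamCount = 6, gapMs = 250): Promise<void> {
  for (let i = 0; i < streamCount; i++) {
    // Fire-and-forget each stream; consume the body so data actually flows.
    fetch(`/download?stream=${i}`, { cache: "no-store" })
      .then((r) => r.arrayBuffer())
      .catch(() => { /* ignore individual stream failures in this sketch */ });
    await new Promise((resolve) => setTimeout(resolve, gapMs)); // phase-in gap
  }
}
```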
I can increase the ping frequency when the connection is seen as
fast, but 100 Hz or even 10 Hz would have issues. For one, there is no
visibility into whether a browser is using TCP push; for another,
doing 10 or 100 pings a second, if they each turn into packets, takes
a lot of capacity out of the upload channel. If they coalesce instead,
the measurements just add noise to the result.
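
To put a rough number on that, here is a back-of-envelope sketch, assuming each ping becomes its own small packet with typical WebSocket/TCP/IP/Ethernet overheads (the byte counts are assumptions for illustration, not measurements from the test):

```ts
// Rough estimate, not the test's actual accounting: if every ping becomes its
// own packet, upstream overhead scales linearly with the ping rate.
// Assumed sizes: 8-byte payload, 6-byte masked WebSocket header, 20-byte TCP,
// 20-byte IPv4, ~38 bytes of Ethernet framing, preamble, and inter-frame gap.
const BYTES_PER_PING_PACKET = 8 + 6 + 20 + 20 + 38; // ~92 bytes on the wire

function pingUpstreamKbps(pingsPerSecond: number): number {
  return (pingsPerSecond * BYTES_PER_PING_PACKET * 8) / 1000;
}

console.log(pingUpstreamKbps(10));  // ~7 kbit/s
console.log(pingUpstreamKbps(100)); // ~74 kbit/s, noticeable on a slow upload
```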
My hope is that the test evolves but stays balanced, giving pointers to
problems that may require other, more specialised tests to fully
explore. It has taken almost half a million tests to mostly avoid buggy
browser versions and platforms and to get a repeatable, largely
correct speed measurement. On a clean lab network, that phase would
have been over and done with after a dozen tests.
I hope there can be a user-settable option for getting a finer view
of latency under load, or another tool designed just for that.
I don't see any issue with a solid desktop PC running a current
browser, connected to a dedicated listening server, emitting a
10-100 Hz WebSocket ping while also doing a bunch of
downloads, if that were the entire purpose of the exercise.
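
A minimal sketch of what that dedicated setup could look like on the browser side, assuming a simple echo server; the endpoint, ping rate, and payload format here are illustrative assumptions, not anything the current test does:

```ts
// Fixed-rate WebSocket echo ping, measured while downloads run in the background.
function measureLatencyUnderLoad(url = "wss://example.test/echo", hz = 50): void {
  const ws = new WebSocket(url);
  const samples: number[] = [];

  ws.onmessage = (ev) => {
    const sentAt = Number(ev.data);            // server echoes the timestamp back
    samples.push(performance.now() - sentAt);  // one RTT sample per echo
  };

  ws.onopen = () => {
    const timer = setInterval(() => ws.send(String(performance.now())), 1000 / hz);
    // Stop after 30 s and print a simple summary; a real tool would do percentiles.
    setTimeout(() => {
      clearInterval(timer);
      ws.close();
      samples.sort((a, b) => a - b);
      console.log("median RTT ms:", samples[Math.floor(samples.length / 2)]);
      console.log("worst RTT ms:", samples[samples.length - 1]);
    }, 30_000);
  };
}
```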
In the meantime I'd like to add a way for a user to easily
tag the equipment they are using, because at the moment we're
getting all this useful grade information without any context. We don't
even know which home users have made an attempt to ameliorate the
problems.
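
Something as simple as an optional free-form field on the submitted result might do; the field names and endpoint below are purely hypothetical, just to show the shape:

```ts
// Hypothetical result payload with a user-supplied equipment tag attached.
interface TaggedResult {
  grade: string;           // e.g. "A", "C"
  downloadMbps: number;
  uploadMbps: number;
  equipmentTag?: string;   // e.g. "ISP modem + fq_codel router", free-form
}

async function submitResult(result: TaggedResult): Promise<void> {
  await fetch("/api/results", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(result),
  });
}
```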