[Bloat] DSLReports Speed Test has latency measurement built-in

jb justin at dslr.net
Sun Apr 19 19:21:59 EDT 2015


I just woke up.
I'm sure I've missed some questions, but I'll put some information and the
plans in here. If I missed anything, please reply directly or to the list.

1. The latency measurement is done with WebSocket pings, which are
lightweight. They are sent to dslreports.com, not to any of the
participating servers in a speed test. I could change this to pick a
participating server, but unless there is a really good reason to, it is
easier to worry about one WebSocket server than 21+ of them.
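For the curious, timing an application-level WebSocket ping from the browser
only takes a few lines. This is just a sketch: the endpoint URL is a
placeholder and the echo-back behaviour is assumed, not the real
dslreports.com setup.

    // Sketch only: times a small application-level "ping" over a WebSocket.
    // Assumes a hypothetical endpoint that echoes each message straight back.
    function measureRtt(ws: WebSocket): Promise<number> {
      return new Promise((resolve) => {
        const sentAt = performance.now();
        ws.addEventListener("message", () => {
          resolve(performance.now() - sentAt);   // echo received: one RTT sample
        }, { once: true });
        ws.send("p");                            // tiny payload keeps it lightweight
      });
    }

    const ws = new WebSocket("wss://dslreports.example/ping");  // placeholder URL
    ws.onopen = async () => {
      for (let i = 0; i < 10; i++) {
        const rtt = await measureRtt(ws);
        console.log(`ping ${i}: ${rtt.toFixed(1)} ms`);
        await new Promise((r) => setTimeout(r, 500));  // space the samples out
      }
    };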

2. The test does not do latency pinging on 3G and GPRS
because of a concern that on slower lines (a lot of 3G results are less
than half a megabit) the pings would make the measured speed unreliable,
and/or that a slow Android phone would be asked to do too much. I'll do
some tests with and without pinging on a 56 kbit shaped line and see if
there is a difference. If there is not, I can enable it.
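As a rough sanity check on the numbers (the 60-byte per-ping wire size below
is an assumed figure, not a measurement): one small ping per second is well
under 1% of even a 56 kbit/s line, so any effect would more likely come from
a slow phone doing the work than from the bytes themselves.

    // Rough overhead estimate: periodic small pings vs. a slow link.
    // The 60-byte per-ping wire size is an assumed figure for illustration.
    function pingOverheadFraction(pingBytes: number, pingsPerSec: number,
                                  linkKbit: number): number {
      const pingBitsPerSec = pingBytes * 8 * pingsPerSec;
      return pingBitsPerSec / (linkKbit * 1000);
    }

    console.log(pingOverheadFraction(60, 1, 56));  // ~0.009, i.e. under 1% of 56 kbit/s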

3. The post-test latency graph uses a log X-axis at the moment,
because one spike can render the whole graph almost useless once the axis
scales to it. What I might do is X-axis breaking, and see if that looks OK.
Alternatively, a two-panel graph, one with a 0-200 ms axis, the other
showing the full range. Or just live with the spike. Or scale to the
95th-percentile value and let the other 5% be cropped; there are tooltips,
after all, to show the actual numbers.
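If the 95th-percentile approach wins, the axis cap is nearly a one-liner.
A rough sketch, with invented sample numbers:

    // Sketch: cap the latency axis at the 95th-percentile sample so a single
    // spike doesn't flatten the plot; tooltips still carry the real values.
    function percentile(samples: number[], p: number): number {
      const sorted = [...samples].sort((a, b) => a - b);
      return sorted[Math.floor(p * (sorted.length - 1))];
    }

    // 20 samples in ms, one big spike
    const latencies = [22, 24, 25, 23, 26, 24, 27, 25, 23, 26,
                       24, 25, 22, 27, 26, 23, 24, 480, 25, 26];
    const axisMax = percentile(latencies, 0.95);  // 27: the 480 ms spike is cropped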

4. The selection of speed test locations to use
I have been spending most of my time getting it right for USA users, and
all the servers are mine, not donated ISP servers, so the testing network
is only at an early stage. Even so, I think having a server in your city,
or at your ISP, is no longer as critical as it would be with a single TCP
stream. But if you're on fiber and not in the USA, it might take a bit
longer before this is solved. And I'm betting that in some cases ISPs will
just not have every inbound route capable of driving their fastest products
to the maximum, so locations may have to be probed by speed, not latency...
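A rough sketch of what probing by speed rather than latency could look like;
the probeThroughput helper here is hypothetical, not an existing API:

    // Sketch: pick test servers by a short throughput probe instead of lowest RTT.
    // probeThroughput() is a hypothetical helper that downloads a small object
    // from each candidate and returns the rate in Mbit/s.
    async function pickServers(candidates: string[],
                               probeThroughput: (host: string) => Promise<number>,
                               count: number): Promise<string[]> {
      const results = await Promise.all(
        candidates.map(async (host) => ({ host, mbps: await probeThroughput(host) }))
      );
      return results
        .sort((a, b) => b.mbps - a.mbps)   // fastest first, regardless of RTT
        .slice(0, count)
        .map((r) => r.host);
    }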

5. Displaying latencies while the test is running
could be as simple as displaying some numbers, e.g., 95% confidence and
peak, in three categories: idle, up and down. Or it could be a calculation
showing the number of packets in flight (determined from current speed and
current latency; see the sketch below). Or it could be a bunch of small
coloured boxes that light up in a grid, indicating a queue, one per
1500-byte packet. Or it could be a colour-coded gauge that goes to yellow
and red as the bandwidth-delay product rises. Probably there is a simple
way to start, and a better way to do it after some consideration.
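The packets-in-flight calculation is just the bandwidth-delay product
divided by an assumed 1500-byte packet size; a sketch, with example numbers
of my own:

    // Sketch: rough in-flight packet count from current speed and latency,
    // i.e. bandwidth-delay product over an assumed 1500-byte packet size.
    function packetsInFlight(speedMbps: number, rttMs: number): number {
      const bytesPerSec = (speedMbps * 1_000_000) / 8;
      const bdpBytes = bytesPerSec * (rttMs / 1000);   // bandwidth-delay product
      return Math.round(bdpBytes / 1500);
    }

    console.log(packetsInFlight(50, 40));   // 50 Mbit/s at 40 ms -> ~167 packets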

6. The congestion window, re-transmit, and bandwidth-per-stream tables are
at a very early stage.
I don't think I'll be showing a giant table for too much longer. Instead,
the numbers have to be summarised by location (see the sketch below) and
only shown with another click. Typically, if an ISP has a great connection
to location X and a poor one to location Y, then all streams from X show
high bandwidth, low RTT, low RTT variability and an appropriate congestion
window, whereas all streams from Y show high re-transmits, high RTT, and a
big congestion window and/or slow speeds. The difference between these
stats for a Google Fiber user in Kansas City and a poorer-quality ISP is
startling, to say the least.
I also need to do something with PMTU/MSS and TCP option information for
cases where there is weird stuff there. Finally, on the server, tcptrace
can be run and report back much more detailed statistics, but if you've
used tcptrace you know there is a huge amount of data, including xplot
files, that just one transfer will generate. But if necessary we can go there.
I try to keep in mind that 99.9% of the people using the test just want an
upload and download number, and perhaps a statement that says that compared
to their peers, things are great. Everything else provokes confusion.
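For the summarised-by-location view, grouping the per-stream stats and
averaging is probably enough. A sketch; the field names are invented for
illustration, not the actual table columns:

    // Sketch: summarise per-stream stats by location; field names are invented.
    interface StreamStats { location: string; mbps: number; rttMs: number; retransmits: number; }

    function summariseByLocation(streams: StreamStats[]) {
      const groups = new Map<string, StreamStats[]>();
      for (const s of streams) {
        const list = groups.get(s.location) ?? [];
        list.push(s);
        groups.set(s.location, list);
      }
      return [...groups.entries()].map(([location, ss]) => ({
        location,
        totalMbps: ss.reduce((a, s) => a + s.mbps, 0),        // aggregate throughput
        avgRttMs: ss.reduce((a, s) => a + s.rttMs, 0) / ss.length,
        retransmits: ss.reduce((a, s) => a + s.retransmits, 0),
      }));
    }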

That's all I can think of right now.

On Mon, Apr 20, 2015 at 7:57 AM, Rich Brown <richb.hanover at gmail.com> wrote:

> Hi folks,
>
> A couple comments re: the DSLReports Speed Test.
>
> 1) It's just becoming daylight for Justin, so he hasn't had a chance to
> respond to all these notes. :-)
>
> 2) In a private note earlier this week, he mentioned that he uses
> "websocket pings" which he believes are pretty speedy/low latency.
>
> 3) He does have plans to incorporate stats from the server end's TCP stack
> (cwnd, packet loss, retransmissions, etc.) in a future version of the speed
> test. I imagine it would help him to know what you'd like to see...
>
> Best,
>
> Rich
>
> On Apr 19, 2015, at 1:15 PM, Mikael Abrahamsson <swmike at swm.pp.se> wrote:
>
> > On Sun, 19 Apr 2015, Toke Høiland-Jørgensen wrote:
> >
> >> The upload latency figures are definitely iffy, but the download ones
> seem to match roughly what I've measured myself on this link.
> >
> > Also, I don't trust parallel latency measures done by for instance ICMP
> ping. Yes, they indicate something, but what?
> >
> > We need insight into the TCP stack. So how can an application like
> dslresports that runs in a browser, get meaningful performance metrics on
> its measurement TCP-sessions from the OS TCP stack? This is a multi-layer
> problem and I don't see any meaningful progress in this area...
> >
> > --
> > Mikael Abrahamsson    email: swmike at swm.pp.se
> > _______________________________________________
> > Bloat mailing list
> > Bloat at lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Bloat mailing list
> Bloat at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>