From: Dave Taht
To: Rich Brown, jb
Cc: bloat
Date: Thu, 23 Apr 2015 15:22:01 -0700
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in

On Thu, Apr 23, 2015 at 2:44 PM, Rich Brown wrote:
> Hi Justin,
>
> The newest Speed Test is great! It is more convincing than I even thought it would be. These comments are focused on the "theater" of the measurements, so that they are unambiguous, and that people can figure out what's happening.
>
> I posted a video to Youtube at: http://youtu.be/EMkhKrXbjxQ to illustrate my points. NB: I turned fq_codel off for this demo, so that the results would be more extreme.
>
> 1) It would be great to label the gauge as "Latency (msec)". I love the term "bufferbloat" as much as the next guy, but the Speed Test page should call the measurement what it really is. (The help page can explain that the latency is almost certainly caused by bufferbloat, but that should be the place it's mentioned.)

I would prefer that it say "bufferbloat (lag in msec)" there, rather than make people look up another word buried in the doc. Sending people to the right thing, at the get-go, is important - looking for "lag" on the internet takes you to a lot of wrong places, misinformation, and snake oil.

So perhaps the doc page could have an explanation of lag as it relates to bufferbloat, and of other possible causes of these behaviors.

I also do not see the gauge in my linux firefox that you are showing on youtube. Am I using the wrong link? I LOVE this gauge, however.

Lastly, the static radar plot of pings occupies center stage yet does not do anything later in the test. Either animating it to show the bloat, or moving it off of center stage and the new bloat gauge to center stage after it sounds out the net, would be good.

bufferbloat as a single word is quite googlable to good resources, and there is some activity on fixing up wikipedia going on that I like a lot.

> 2) I can't explain why the latency gauge starts at 1-3 msec. I am guessing that it's showing incremental latency above the nominal value measured during the initial setup. I recommend that the gauge always show actual latency. Thus the gauge could start at 45 msec (0:11 in the video) then change during the measurements.
>
> 3) I was a bit confused by the behavior of the gauge before/after the test. I'd like it to change only when something else is moving in the window. Here are some suggestions for what would make it clearer:
>   - The gauge should not change until the graph starts moving. I found it confusing to see the latency jump up at 0:13 just before the blue download chart started, or at 0:28 before the upload chart started at 0:31.
>   - Between the download and upload tests, the gauge should drop back to the nominal measured values. I think it does.
>   - After the test, the gauge should also drop back to the nominal measured value. It seems stuck at 4928 msec (0:55).

We had/have a lot of this problem in netperf-wrapper - a lot of data tends to accumulate at the end of the test(s) and pollute the last few data points in bloated scenarios. You have to wait for the queues to drain to get a "clean" test - although this begins to show what actually happens when the link is buried in both directions.

Is there any chance to add a simultaneous up+down+ping test at the conclusion?
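Roughly what I am imagining for that, as a browser-side sketch only - the /blob, /sink and /ping URLs and the function name are placeholders I made up, not the actual test endpoints:

// Hypothetical concluding phase: saturate both directions while sampling RTT.
// /blob, /sink and /ping are placeholder URLs, not the real test endpoints.
async function simultaneousPhase(seconds: number): Promise<number[]> {
  const rtts: number[] = [];
  const stop = Date.now() + seconds * 1000;

  // Keep a download running for the whole phase (read the body to
  // completion so the pipe actually stays full).
  const down = (async () => {
    while (Date.now() < stop) {
      const r = await fetch("/blob?bytes=25000000", { cache: "no-store" });
      await r.arrayBuffer();
    }
  })();

  // Keep an upload running at the same time.
  const up = (async () => {
    const payload = new Uint8Array(4 * 1024 * 1024); // 4 MB of zeros
    while (Date.now() < stop) {
      await fetch("/sink", { method: "POST", body: payload });
    }
  })();

  // Sample latency once a second while both transfers are in flight.
  while (Date.now() < stop) {
    const t0 = performance.now();
    await fetch("/ping", { cache: "no-store" });
    rtts.push(performance.now() - t0);
    await new Promise(resolve => setTimeout(resolve, 1000));
  }

  await Promise.all([down, up]);
  return rtts;
}

Plotting those RTT samples next to the up+down rates would show the worst case that the separate phases miss.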
> 4) I like the way the latency gauge changes color during the test. It's OK for it to use the color to indicate an "opinion". Are you happy with the thresholds for yellow & red colors? It is not clear to me what they are.
>
> 5) The gauge makes it appear that moderate latency - 765 msec (0:29) - is the same as when the value goes to 1768 msec (0:31), and also when it goes to 4,447 msec (0:35), etc. It might make more sense to have the chart's full-scale at something like 10 seconds during the test. The scale could be logarithmic, so that "normal" values occupy up to a third or half of the scale, and bad values get pretty close to the top end. Horrible latency - greater than 10 sec, say - should peg the indicator at full scale.

I am generally resistant to log scales as misleading to an untrained eye. In this case I can certainly see the gauge behaving almost as described above, except that I would nearly flatline the gauge at > 250ms, and add indicators for higher rates at the outer edges of the graph.

I can see staying below 30ms induced latency as "green", below 100ms as "blue", below 250ms as "yellow", and > 250ms as "red", and a line marking "ridiculous" at > 1sec and "insane" at 2sec would be good.
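Concretely, the kind of band mapping I am picturing - the cut-offs are the ones above; the function name and the labels short of "ridiculous" and "insane" are just for illustration:

// Map induced latency (ms over the idle baseline) to a gauge band.
// Cut-offs are the ones proposed above; the milder labels are made up.
function bloatBand(inducedMs: number): { color: string; label: string } {
  if (inducedMs < 30)   return { color: "green",  label: "no bloat" };
  if (inducedMs < 100)  return { color: "blue",   label: "mild" };
  if (inducedMs < 250)  return { color: "yellow", label: "noticeable" };
  if (inducedMs < 1000) return { color: "red",    label: "bloated" };
  if (inducedMs < 2000) return { color: "red",    label: "ridiculous" };
  return { color: "red", label: "insane" };
}

Everything past 250ms can then share the nearly flat part of the arc, with the 1sec and 2sec lines drawn out at the edge.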
Other pithy markings at the end of the tach would be fun. For example, gogo in flight has the interplanetary record for bufferbloat, something like 760 seconds the last time we tried it, so a 3rd line on the tach for earth-mars distances would be amusing.

In the long term, somehow detecting if FQ is in play would be good, but I have no idea how to do that in a browser.

> 6) On the Results page (1:20), I like the red background behind the latency values. I don't understand why the grey bars at the right end of the chart are so high. Is the latency still decreasing as the queue drains? Perhaps the ping tests should run longer until it gets closer to the nominal value.

I would suspect the queues are still draining, filled with missing acknowledgements, etc., etc. Waiting until the ping returned closer to normal before starting the next phase of the test would help.

> This is such a great tool. Thanks!

I am very, very, very delighted also. I hope that with tools like these in more users' hands, AND the data collected from them, we can make a logarithmic jump in the number of users, devices, and ISPs that have good bandwidth and low latency in the near future.

Thank you very, very much for the work.

As a side note, Justin, have you fixed your own bloat at home?

>
> Rich
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat

--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67