From: jb <justinbeech@gmail.com>
Date: Mon, 20 Apr 2015 17:00:16 +1000
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
List-Id: General list for discussing Bufferbloat
IPv6 is now available as an option; you just select it in the preferences pane.

Unfortunately, only one of the test servers (in Michigan) is natively dual-stack, so the test is then fixed to that location. In addition, the latency pinging during the test stays IPv4 traffic until I set up a WebSocket server on the IPv6 server.

Amazon, Google, and the other cloud servers do not support IPv6 natively. They do support it as an edge-network feature, such as a load-balancing front end, but the test needs custom server software and custom code, and using a cloud proxy that must then talk to an IPv4 test server inside the cloud is rather useless. It should be native all the way. So until I get more native IPv6 servers, one location it is.

Nevertheless, as a proof of concept it works. Using the Hurricane Electric IPv6 tunnel from my Australian non-IPv6 ISP, I get about 80% of the speed that the local Sydney IPv4 test server would give.

On Mon, Apr 20, 2015 at 1:15 PM, Aaron Wood wrote:
> Toke,
>
> I actually tend to see a bit higher latency with ICMP at the higher
> percentiles.
>
> http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html
> http://burntchrome.blogspot.com/2014/05/measured-bufferbloat-on-orangefr-dsl.html
>
> Although the biggest "boost" I've seen ICMP given was on Free.fr's network:
>
> http://burntchrome.blogspot.com/2014/01/bufferbloat-or-lack-thereof-on-freefr.html
>
> -Aaron
>
> On Sun, Apr 19, 2015 at 11:30 AM, Toke Høiland-Jørgensen wrote:
>
>> Jonathan Morton writes:
>>
>> >> Why not? They can be a quite useful measure of how competing traffic
>> >> performs when bulk flows congest the link. Which for many
>> >> applications is more important than the latency experienced by the
>> >> bulk flow itself.
>> >
>> > One clear objection is that ICMP is often prioritised when UDP is not.
>> > So measuring with UDP gives a better indication in those cases.
>> > Measuring with a separate TCP flow, such as HTTPing, is better still
>> > by some measures, but most truly latency-sensitive traffic does use
>> > UDP.
>>
>> Sure, well I tend to do both. Can't recall ever actually seeing any
>> performance difference between the UDP and ICMP latency measurements,
>> though...
>>
>> -Toke
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
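[Since the thread compares ICMP, UDP, and WebSocket (TCP) latency probes, here is a minimal, self-contained Python sketch of the UDP approach Jonathan describes. It is a sketch under loud assumptions: the loopback echo server below stands in for a real test server, a real probe would target the remote host, and ICMP would instead need raw sockets and privileges. The function names are invented for illustration, not from any DSLReports code.]

```python
import socket
import threading
import time

def udp_echo_server(sock):
    """Echo each datagram back to its sender (stand-in for a remote probe server)."""
    while True:
        data, addr = sock.recvfrom(2048)
        if data == b"stop":
            break
        sock.sendto(data, addr)

def measure_udp_rtt(server_addr, samples=10):
    """Send small UDP probes and return round-trip times in milliseconds."""
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        for i in range(samples):
            payload = str(i).encode()
            start = time.monotonic()
            s.sendto(payload, server_addr)
            s.recvfrom(2048)  # wait for the echo
            rtts.append((time.monotonic() - start) * 1000.0)
    return rtts

# Loopback demo: bind an ephemeral port and echo from a background thread.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
addr = srv.getsockname()
threading.Thread(target=udp_echo_server, args=(srv,), daemon=True).start()

rtts = measure_udp_rtt(addr)
print("samples:", len(rtts), "median ms: %.3f" % sorted(rtts)[len(rtts) // 2])
srv.sendto(b"stop", addr)
```

[The point of using UDP (or a WebSocket over TCP) rather than ICMP is that the probes then share the same queues and any prioritisation as real application traffic, which is what makes the comparison in this thread meaningful.]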