[Bloat] DSLReports Speed Test has latency measurement built-in
jb
justin at dslr.net
Mon Apr 20 01:11:44 PDT 2015
Whoops, I'd better set that z-index correctly, thanks.
It is interesting you mentioned gaming, because 10 of the servers are from a
place that rents clan servers. Those have to be in top-quality data centres
and not congest anything, because otherwise their customers abandon them
immediately, and they're always watching ping time and packet loss.
Always happy for long-term commitments to a server. I just need permanent
root on an Ubuntu 14.10 virtual or real box, or CentOS I guess. IPv6
dual-stack would be good. Memory, CPU and disk are unimportant. A 1 GigE
port, but average usage can be capped to stay below whatever they prefer.
I'm going to do some kind of donor recognition thing, so if a donated
server is used it'll show something like a company name and URL; I just
haven't gotten to that yet.
thanks
-Justin
On Mon, Apr 20, 2015 at 5:28 PM, Pedro Tumusok <pedro.tumusok at gmail.com>
wrote:
> I noticed on my tests that the label "Ping time during test" was displayed
> on top of the tooltips, which meant I only had the y-axis to look at and
> had to guesstimate my ping time.
>
> Another step to help cure bufferbloat might be drawing a few vertical
> threshold lines through the ping times, visualizing that ping over x ms
> will make VoIP work badly, gaming will suffer, etc.
> At least VoIP has some hard numbers we can use; gaming is more dependent
> on the network code of the game and its client-side prediction, I guess.
> But still, anything over y ms in an FPS game means you're dead before you
> even see your opponent.
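The threshold-line idea could be sketched like this. The cutoff values below are illustrative assumptions on my part (roughly the ITU-T G.114 ~150 ms guideline for voice, and a common FPS rule of thumb), not numbers from this thread:

```python
# Classify ping samples against illustrative latency bands, the kind of
# bands a chart could mark with threshold lines. The cutoffs here are
# assumptions for demonstration, not figures from the discussion.
VOIP_LIMIT_MS = 150    # ITU-T G.114 suggests ~150 ms one-way as a ceiling for voice
GAMING_LIMIT_MS = 100  # rough FPS playability threshold; highly game-dependent

def classify_ping(ms: float) -> str:
    """Return a quality label for a single ping sample."""
    if ms <= GAMING_LIMIT_MS:
        return "ok"
    if ms <= VOIP_LIMIT_MS:
        return "gaming suffers"
    return "voip degraded"

samples = [35, 80, 120, 400]
print([classify_ping(s) for s in samples])
# → ['ok', 'ok', 'gaming suffers', 'voip degraded']
```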
>
> Are you looking for places to deploy servers? I know a couple of people
> here in Norway and Sweden I can reach out to and ask about that. If yes,
> what requirements do you have?
>
> Pedro
>
> On Mon, Apr 20, 2015 at 9:00 AM, jb <justin at dslr.net> wrote:
>
>> IPv6 is now available as an option; you just select it in the preferences
>> pane.
>>
>> Unfortunately only one of the test servers (in Michigan) is native dual
>> stack, so the test is then fixed to that location. In addition, the latency
>> pinging during the test stays as IPv4 traffic until I set up a WebSocket
>> server on the IPv6 server.
>>
>> None of the Amazon, Google and other cloud servers support native IPv6.
>> They do support it as an edge-network feature, such as a load-balancing
>> front end; however, the test needs custom server software and custom code,
>> and using a cloud proxy that must then talk to an IPv4 test server inside
>> the cloud is rather useless. It should be native all the way. So until I
>> get more native IPv6 servers, one location it is.
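A "native all the way" dual-stack listener can be sketched in a few lines. This is a generic illustration of serving IPv4 and IPv6 clients from one socket, an assumption about one way to do it, not the actual dslreports server code:

```python
# Minimal dual-stack TCP listener: one IPv6 socket that also accepts
# IPv4 clients as v4-mapped addresses (::ffff:a.b.c.d). Sketch only,
# assuming an OS (e.g. Linux) that honours IPV6_V6ONLY = 0.
import socket

def make_dual_stack_listener(port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # 0 = also accept IPv4 connections on this same socket
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))  # wildcard address, both families
    s.listen(5)
    return s
```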
>>
>> Nevertheless, as a proof of concept it works. Using the Hurricane Electric
>> IPv6 tunnel from my Australian non-IPv6 ISP, I get about 80% of the speed
>> that the local Sydney IPv4 test server would give.
>>
>>
>> On Mon, Apr 20, 2015 at 1:15 PM, Aaron Wood <woody77 at gmail.com> wrote:
>>
>>> Toke,
>>>
>>> I actually tend to see a bit higher latency with ICMP at the higher
>>> percentiles.
>>>
>>>
>>> http://burntchrome.blogspot.com/2014/05/fixing-bufferbloat-on-comcasts-blast.html
>>>
>>> http://burntchrome.blogspot.com/2014/05/measured-bufferbloat-on-orangefr-dsl.html
>>>
>>> Although the biggest "boost" I've seen ICMP given was on Free.fr's
>>> network:
>>>
>>> http://burntchrome.blogspot.com/2014/01/bufferbloat-or-lack-thereof-on-freefr.html
>>>
>>> -Aaron
>>>
>>> On Sun, Apr 19, 2015 at 11:30 AM, Toke Høiland-Jørgensen <toke at toke.dk>
>>> wrote:
>>>
>>>> Jonathan Morton <chromatix99 at gmail.com> writes:
>>>>
>>>> >> Why not? They can be a quite useful measure of how competing traffic
>>>> >> performs when bulk flows congest the link. Which for many
>>>> >> applications is more important than the latency experienced by the
>>>> >> bulk flow itself.
>>>> >
>>>> > One clear objection is that ICMP is often prioritised when UDP is not.
>>>> > So measuring with UDP gives a better indication in those cases.
>>>> > Measuring with a separate TCP flow, such as HTTPing, is better still
>>>> > by some measures, but most truly latency-sensitive traffic does use
>>>> > UDP.
>>>>
>>>> Sure, well I tend to do both. Can't recall ever actually seeing any
>>>> performance difference between the UDP and ICMP latency measurements,
>>>> though...
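For comparison, the kind of UDP round-trip measurement discussed above can be sketched as follows. The echo endpoint is an assumption for illustration; this is not any specific tool mentioned in the thread:

```python
# Measure UDP round-trip time against an assumed echo service, as an
# alternative to ICMP ping (which some networks prioritise differently).
import socket
import time

def udp_rtt_ms(host: str, port: int, timeout: float = 1.0) -> float:
    """Send one UDP probe and time the echoed reply, in milliseconds."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.perf_counter()
        s.sendto(b"ping", (host, port))
        s.recvfrom(64)  # raises socket.timeout if no echo arrives
        return (time.perf_counter() - start) * 1000.0
```

Repeating the probe and keeping the per-sample values (rather than just an average) is what lets the higher percentiles, where the ICMP/UDP differences tend to show up, be compared at all.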
>>>>
>>>> -Toke
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat at lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Best regards / Mvh
> Jan Pedro Tumusok
>
>