[Bloat] DSLReports Speed Test has latency measurement built-in

Simon Barber simon at superduper.net
Fri Apr 24 22:24:10 EDT 2015


Perhaps where the green zone sits should depend on the customer's access
type. For instance, someone on fiber should have a much better ping than
someone on 3G. But I agree this should be a fixed scale, not dependent on
idle ping time. And although VoIP might be fine up to 100 ms, gamers would
want lower values.
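
Roughly what I have in mind, as an illustrative Python sketch (the access
types and all numbers here are placeholders, not anything the test
actually uses):

# Where green sits depends on the expected idle ping for the access
# class; per-application budgets stay fixed regardless of access type.
EXPECTED_IDLE_MS = {"fiber": 10, "cable": 25, "dsl": 40, "3g": 100}
APP_BUDGET_MS = {"gaming": 50, "voip": 100}  # fixed ceilings, not ratios

def green_zone(access_type: str) -> tuple[int, int]:
    """Green band: zero up to the expected idle ping for this link class."""
    return (0, EXPECTED_IDLE_MS[access_type])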

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com


On April 24, 2015 1:19:08 AM Sebastian Moeller <moeller0 at gmx.de> wrote:

> Hi jb,
>
> this looks great!
>
> On Apr 23, 2015, at 12:08, jb <justin at dslr.net> wrote:
>
> > This is how I've changed the graph of latency under load, per input
> > from you guys.
> >
> > Taken away the log axis.
> >
> > Put in two bands. Yellow starts at double the idle latency and goes to
> > 4x the idle latency; red starts there and goes to the top. No red shows
> > if no bars reach into it, and no yellow band shows if no bars get into
> > that zone.
> >
> > Is it more descriptive?
>
> 	Mmmh, so the delay we see consists of the delay caused by the distance
> to the server and the delay of the access technology, meaning the
> unloaded latency can range from a few milliseconds to several hundred
> milliseconds (for the poor sods behind a satellite link…). Any further
> latency developing under load should be independent of distance and
> access technology, as those are already factored into the base latency.
> In both extreme cases multiples of the base latency do not seem to be
> relevant measures of bloat, so I would like to argue that the yellow and
> red zones should be based on fixed increments and not on a ratio of the
> base latency. This matters because people on a slow, high-access-latency
> link have a much smaller tolerance for additional latency than people on
> a fast link if certain latency guarantees need to be met, and thresholds
> as a function of base latency do not reflect this. (For example, a
> satellite link idling at 600 ms has no VoIP headroom left even at 1.5x
> its base latency, while a fiber link tripling from 5 ms to 15 ms would
> be flagged although 15 ms is still excellent.)
> 	Now ideally the colors should not be based on the base latency at all
> but on fixed total values: say 200 to 300 ms in yellow for VoIP
> (according to ITU-T G.114, a one-way delay of <= 150 ms is recommended
> for VoIP), and 400 to 600 ms in orange, with 400 ms the upper bound for
> good VoIP and 600 ms for decent VoIP (according to ITU-T G.114, users
> are very satisfied up to 200 ms one-way delay and satisfied up to
> roughly 300 ms), so anything above 600 ms in deep red?
> 	I know this is not perfect and the numbers will probably require severe
> "bike-shedding" (and I am not sure that ITU-T G.114 really is a good
> source for the thresholds), but to get a discussion started, here are
> the numbers again:
>    0 to  100 ms     no color
>  101 to  200 ms     green
>  201 to  400 ms     yellow
>  401 to  600 ms     orange
>  601 to 1000 ms     red
> 1001 to infinity    purple (or better, marine red?)
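>
> Or, to make the mapping explicit, a small sketch (Python, purely
> illustrative; the thresholds are just the numbers above):
>
> def bloat_color(latency_ms: float) -> str:
>     """Map latency under load to a fixed color band (thresholds in ms)."""
>     if latency_ms <= 100:
>         return "none"
>     if latency_ms <= 200:
>         return "green"
>     if latency_ms <= 400:
>         return "yellow"
>     if latency_ms <= 600:
>         return "orange"
>     if latency_ms <= 1000:
>         return "red"
>     return "purple"  # or marine red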
>
> Best Regards
> 	Sebastian
>
>
> >
> > (Sorry to the list moderator: Gmail keeps sending under the wrong email
> > address and I get a moderator message.)
> >
> > On Thu, Apr 23, 2015 at 8:05 PM, jb <justinbeech at gmail.com> wrote:
> >
> >
> > On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet <eric.dumazet at gmail.com> wrote:
> > Wait, this is a 15-year-old experiment using Reno and a single test
> > bed, in the ns simulator.
> >
> > Naive TCP pacing implementations were tried, and probably failed.
> >
> > Pacing individual packets is quite bad; this is the first lesson one
> > learns when implementing TCP pacing, especially if you try to drive a
> > 40Gbps NIC.
> >
> > https://lwn.net/Articles/564978/
> >
> > Also note we use usec-based RTT samples, and nanosecond high-resolution
> > timers in fq. I suspect the ns simulator experiment had sync issues
> > because of using low-resolution timers or simulation artifacts, without
> > any jitter source.
> >
> > Billions of flows are now 'paced', but keep in mind most packets are not
> > paced. We do not pace in slow start, and we do not pace when TCP is ACK
> > clocked.
> >
> > Only when someone sets SO_MAX_PACING_RATE below the TCP rate can we
> > eventually have all packets paced, using TSO 'clusters' for TCP.
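> >
> > For reference, forcing that from user space looks roughly like this
> > (a Python sketch; Linux-only, and 47 is the value of SO_MAX_PACING_RATE
> > in asm-generic/socket.h, used as a fallback where the Python build does
> > not expose the constant):
> >
> > import socket
> >
> > # Linux-specific socket option (kernel 3.13+); 47 is its value in
> > # asm-generic/socket.h.
> > SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)
> >
> > sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
> > # Cap the socket at 1 MB/s (the value is in bytes per second). Set
> > # below the TCP rate, fq ends up pacing essentially every packet of
> > # this flow.
> > sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, 1_000_000)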
> >
> >
> >
> > On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > > One reference with the PDF publicly available. On the website there
> > > are various papers on this topic. Others might be more relevant, but
> > > I did not check all of them.
> >
> > > Understanding the Performance of TCP Pacing,
> > > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > > IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000, pages 1157-1165.
> > >
> > > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
> >
> >
