[Cake] Simple metrics
Dave Taht
dave at taht.net
Tue Nov 28 13:15:35 EST 2017
Changing the title of the thread.
Pete Heist <peteheist at gmail.com> writes:
> On Nov 27, 2017, at 7:28 PM, Jonathan Morton <chromatix99 at gmail.com> wrote:
>
> An important factor when designing the test is the difference between
> intra-flow and inter-flow induced latencies, as well as the baseline
> latency.
>
> In general, AQM by itself controls intra-flow induced latency, while flow
> isolation (commonly FQ) controls inter-flow induced latency. I consider the
> latter to be more important to measure.
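A rough sketch of how those two could be sampled on a Linux endpoint
(the helper names are mine, and it assumes ping and iproute2's ss are
installed; ss output details vary a bit between versions):

    import re
    import subprocess

    def sparse_flow_rtts_ms(host: str, count: int = 10) -> list[float]:
        """Inter-flow view: RTTs a sparse flow (here ICMP) sees while
        competing with whatever load is running."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

    def bulk_flow_rtts_ms() -> list[float]:
        """Intra-flow view: the smoothed RTT each established TCP flow
        itself experiences, scraped from `ss -ti`."""
        out = subprocess.run(["ss", "-ti"],
                             capture_output=True, text=True).stdout
        return [float(m) for m in re.findall(r"\brtt:([\d.]+)/", out)]

Run both while a bulk transfer is active: with working flow isolation the
sparse flow stays near its baseline even when the bulk flow's own rtt
balloons; behind a single FIFO the two converge.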
>
> Intra-flow induced latency should also be important for web page load time and
> websockets, for example. Maybe not as important as inter-flow, since inter-flow
> latency determines how voice, videoconferencing and other interactive apps hold
> up alongside other traffic, which is what affects people the most when it
> doesn't work.
>
> I don’t think it’s too much to include one public metric for each. People are
> used to “upload” and “download”; maybe they’d one day get used to “reactivity”
> and “interactivity”, or some more accessible terms.
Well, what I proposed was using a pfifo as the reference standard, with
"FQ" as one metric name: the same test run under pfifo limit 1000 and
under the new stuff, with the results compared. That normalizes any test
we come up with.
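One way to turn that into a concrete number (this ratio definition is my
reading of the proposal, not anything settled, and the figures in the
example are made up):

    # Run the identical load + latency test twice, e.g. after
    #   tc qdisc replace dev eth0 root pfifo limit 1000
    # and again after
    #   tc qdisc replace dev eth0 root cake
    # then normalize the candidate against the dumb-FIFO reference.

    def fq_metric(pfifo_induced_ms: float, candidate_induced_ms: float) -> float:
        """How many times less latency the candidate induces than a
        1000-packet pfifo under the same load; 1.0 means no better."""
        return pfifo_induced_ms / candidate_induced_ms

    print(fq_metric(800.0, 5.0))  # 160.0: two and a bit orders of magnitude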
>
> Baseline latency is a function of the underlying network topology, and is
> the type of latency most often measured. It should be measured in the
> no-load condition, but the choice of remote endpoint is critical. Large ISPs
> could gain an unfair advantage if they can provide a qualifying endpoint
> within their network, closer to the last mile links than most realistic
> Internet services. Conversely, ISPs are unlikely to endorse a measurement
> scheme which places the endpoints too far away from them.
>
> One reasonable possibility is to use DNS lookups to randomly-selected gTLDs
> as the benchmark. There are gTLD DNS servers well-placed in essentially all
> regions of interest, and effective DNS caching is a legitimate means for an
> ISP to improve their customers' internet performance. Random lookups
> (especially of domains known not to exist) should defeat the
> effects of such caching.
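A minimal sketch of that probe using the dnspython library (error
handling beyond NXDOMAIN, e.g. timeouts and SERVFAIL, is omitted for
brevity):

    import random
    import string
    import time
    import dns.resolver  # dnspython

    def random_bogus_name(tld: str = "com") -> str:
        # a random label that almost certainly doesn't exist,
        # so no resolver along the path can answer from cache
        label = "".join(random.choices(string.ascii_lowercase, k=16))
        return f"{label}.{tld}"

    def lookup_rtt_ms(resolver: dns.resolver.Resolver, name: str) -> float:
        t0 = time.perf_counter()
        try:
            resolver.resolve(name, "A", lifetime=2.0)
        except dns.resolver.NXDOMAIN:
            pass  # expected: we only care how long the round trip took
        return (time.perf_counter() - t0) * 1000.0

    res = dns.resolver.Resolver()  # uses the system's configured resolver
    rtts = sorted(lookup_rtt_ms(res, random_bogus_name()) for _ in range(20))
    print(f"median lookup: {rtts[len(rtts) // 2]:.1f} ms")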
>
> Induced latency can then be measured by applying a load and comparing the
> new latency measurement to the baseline. This load can simultaneously be
> used to measure available throughput. The tests on dslreports offer a decent
> example of how to do this, but it would be necessary to standardise the
> load.
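Given latency samples from a probe like the DNS one above, the
comparison itself is simple arithmetic; the particular summary
statistics here (median shift plus a 95th percentile) are my choice,
not part of any standard:

    import statistics

    def induced_latency_ms(baseline: list[float], loaded: list[float]) -> dict:
        """Median shift is robust to outliers; the 95th percentile under
        load captures the spikes interactive traffic actually feels."""
        return {
            "median_induced_ms": statistics.median(loaded) - statistics.median(baseline),
            "p95_under_load_ms": statistics.quantiles(loaded, n=20)[18],
        }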
>
> It would be good to know what a realistic worst-case load is on a typical
> household Internet connection and standardize on that. Windows updates, for
> example, can be pretty bad (many flows).
My mental reference has always been a family of four (sketched as test
parameters below):
Mom in a videoconference
Dad surfing the web
Son playing a game
Daughter uploading to YouTube
(pick your gender-neutral roles at will)
+ torrenting or Dropbox or Windows Update or Steam or ...
A larger-scale reference might be a company of 30 people.
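Pinned down as parameters for a traffic generator, that scenario might
look something like this (every role, rate and flow count is an
illustrative guess, not a proposed standard):

    # (role, direction, flows, character) -- all values are assumptions
    HOUSEHOLD_LOAD = [
        ("videoconference", "bidirectional",  1, "~2 Mbit/s CBR, latency-sensitive"),
        ("web browsing",    "download",       8, "short bursty TCP fetches"),
        ("game",            "bidirectional",  1, "small frequent UDP packets"),
        ("video upload",    "upload",         1, "bulk TCP, saturating"),
        ("background bulk", "both",          50, "torrent/updates, many TCP flows"),
    ]

For the saturating part, flent's rrul test (four bulk TCP flows in each
direction with latency probes running alongside) already approximates this.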
>
> DNS is an interesting possibility. On the one hand all you get is RTT, but on
> the other hand your server infrastructure is already available. I use the
> dslreports speedtest pretty routinely as it’s decent, although results can vary
> significantly between runs. If they’re using DNS to measure latency, I hadn’t
> realized it.