[Cake] Recommended HW to run cake and fq_codel?

Pete Heist peteheist at gmail.com
Mon Nov 27 16:49:45 EST 2017


> On Nov 27, 2017, at 7:28 PM, Jonathan Morton <chromatix99 at gmail.com> wrote:
> An important factor when designing the test is the difference between intra-flow and inter-flow induced latencies, as well as the baseline latency.
> 
> In general, AQM by itself controls intra-flow induced latency, while flow isolation (commonly FQ) controls inter-flow induced latency.  I consider the latter to be more important to measure.
> 
Intra-flow induced latency should also be important, for web page load time and WebSockets, for example. Maybe not as important as inter-flow, since inter-flow latency determines how voice, videoconferencing and other interactive apps coexist with bulk traffic, and that’s what people notice most when it doesn’t work.
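To make the distinction concrete, here’s a rough Linux-only sketch (Python) of sampling both at once while a bulk TCP flow is already running: the kernel’s smoothed RTT for the flow itself (via “ss -ti”) tracks intra-flow induced latency, while a concurrent ping through the same bottleneck tracks inter-flow induced latency. The endpoint address and the choice of ss/ping are just assumptions for illustration:

import re
import subprocess
import time

DST = "192.0.2.1"  # assumed remote endpoint of an already-running bulk TCP flow

def intra_flow_rtt_ms(dst):
    # Kernel's smoothed RTT for the bulk flow itself, from the
    # "rtt:<srtt>/<rttvar>" field of "ss -ti": intra-flow induced latency.
    out = subprocess.run(["ss", "-ti", "dst", dst],
                         capture_output=True, text=True).stdout
    m = re.search(r"\brtt:([\d.]+)/", out)
    return float(m.group(1)) if m else None

def inter_flow_rtt_ms(dst):
    # RTT seen by a competing sparse flow (one ICMP echo) through the
    # same bottleneck: inter-flow induced latency.
    out = subprocess.run(["ping", "-c", "1", "-W", "2", dst],
                         capture_output=True, text=True).stdout
    m = re.search(r"time=([\d.]+) ms", out)
    return float(m.group(1)) if m else None

if __name__ == "__main__":
    for _ in range(10):  # one sample of each per second
        print("intra:", intra_flow_rtt_ms(DST), "ms  inter:", inter_flow_rtt_ms(DST), "ms")
        time.sleep(1)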

I don’t think it’s too much to include one public metric for each. People are used to “upload” and “download”; maybe one day they’d get used to “reactivity” and “interactivity”, or some more accessible terms.
> Baseline latency is a factor of the underlying network topology, and is the type of latency most often measured.  It should be measured in the no-load condition, but the choice of remote endpoint is critical.  Large ISPs could gain an unfair advantage if they can provide a qualifying endpoint within their network, closer to the last mile links than most realistic Internet services.  Conversely, ISPs are unlikely to endorse a measurement scheme which places the endpoints too far away from them.
> 
> One reasonable possibility is to use DNS lookups to randomly-selected gTLDs as the benchmark.  There are gTLD DNS servers well-placed in essentially all regions of interest, and effective DNS caching is a legitimate means for an ISP to improve their customers' internet performance.  Random lookups (especially of domains which are known to not exist) should defeat the effects of such caching.
> 
> Induced latency can then be measured by applying a load and comparing the new latency measurement to the baseline.  This load can simultaneously be used to measure available throughput.  The tests on dslreports offer a decent example of how to do this, but it would be necessary to standardise the load.
> 
It would be good to characterize a typical worst-case load on a household Internet connection and standardize on that. Windows updates, for example, can be pretty bad (many flows).
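For example, a minimal sketch of the load-then-compare procedure: take a no-load ping baseline, start several parallel bulk uploads (roughly the many-flows Windows-update case), then ping again and report the induced latency. The hosts, port and flow count are placeholders, and it assumes a traffic sink you control (say “nc -lk 9000 > /dev/null” on the far end):

import re
import socket
import statistics
import subprocess
import threading

PING_HOST = "192.0.2.1"      # assumption: latency reference endpoint
SINK = ("192.0.2.1", 9000)   # assumption: bulk-traffic sink ("nc -lk 9000")
N_FLOWS = 8                  # "many flows", as in the Windows-update case
STOP = threading.Event()

def median_rtt_ms(count=10):
    # Median of several ICMP RTT samples to smooth out run-to-run variance.
    out = subprocess.run(["ping", "-c", str(count), PING_HOST],
                         capture_output=True, text=True).stdout
    return statistics.median(float(x) for x in re.findall(r"time=([\d.]+)", out))

def upload():
    # One bulk flow: saturate the uplink until told to stop.
    buf = b"\0" * 65536
    with socket.create_connection(SINK) as s:
        while not STOP.is_set():
            s.sendall(buf)

if __name__ == "__main__":
    base = median_rtt_ms()
    workers = [threading.Thread(target=upload, daemon=True) for _ in range(N_FLOWS)]
    for w in workers:
        w.start()
    loaded = median_rtt_ms()
    STOP.set()
    print("baseline %.1f ms, under load %.1f ms, induced %.1f ms"
          % (base, loaded, loaded - base))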

DNS is an interesting possibility. On the one hand all you get is RTT, but on the other hand the server infrastructure is already deployed worldwide. I use the dslreports speedtest pretty routinely as it’s decent, although results can vary significantly between runs. If they’re using DNS to measure latency, I hadn’t realized it.
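A minimal stdlib-only sketch of the random-nonexistent-name idea: time a query for a random label that is effectively guaranteed not to exist, so caches can’t answer and the RTT reflects the path to the server. Querying a.gtld-servers.net (one of the .com gTLD servers) is my assumption, and any reply, including NXDOMAIN, counts as an RTT sample:

import random
import socket
import string
import struct
import time

GTLD_SERVER = "192.5.6.30"   # a.gtld-servers.net (assumed reachable)

def dns_query(name):
    # Minimal DNS query packet: header, QNAME labels, QTYPE=A, QCLASS=IN.
    header = struct.pack(">HHHHHH", random.randrange(65536), 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\0"
    return header + qname + struct.pack(">HH", 1, 1)

def lookup_rtt_ms():
    # Random 20-letter label: almost surely NXDOMAIN, so never cached.
    label = "".join(random.choices(string.ascii_lowercase, k=20))
    pkt = dns_query(label + ".com")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        t0 = time.monotonic()
        s.sendto(pkt, (GTLD_SERVER, 53))
        s.recv(512)  # any reply (including NXDOMAIN) completes the sample
        return (time.monotonic() - t0) * 1000

if __name__ == "__main__":
    print("DNS RTT: %.1f ms" % lookup_rtt_ms())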