From: Dave Taht
To: Pete Heist
Cc: Jonathan Morton, Cake List
Date: Tue, 28 Nov 2017 10:15:35 -0800
Subject: [Cake] Simple metrics

Changing the title of the thread.

Pete Heist writes:

> On Nov 27, 2017, at 7:28 PM, Jonathan Morton wrote:
>
> > An important factor when designing the test is the difference between
> > intra-flow and inter-flow induced latencies, as well as the baseline
> > latency.
> >
> > In general, AQM by itself controls intra-flow induced latency, while
> > flow isolation (commonly FQ) controls inter-flow induced latency. I
> > consider the latter to be more important to measure.
>
> Intra-flow induced latency should also be important for web page load
> time and websockets, for example. Maybe not as important as inter-flow,
> because there you're talking about how voice, videoconferencing and other
> interactive apps work together with other traffic, which is what affects
> people the most when it doesn't work.
>
> I don't think it's too much to include one public metric for each. People
> are used to "upload" and "download"; maybe they'd one day get used to
> "reactivity" and "interactivity", or some more accessible terms.

Well, what I proposed was using a pfifo as the reference standard, and
"FQ" as one metric name for the new stuff measured against pfifo 1000.
That normalizes any test we come up with. (A rough sketch of what that
could look like follows below.)

> > Baseline latency is a factor of the underlying network topology, and is
> > the type of latency most often measured. It should be measured in the
> > no-load condition, but the choice of remote endpoint is critical. Large
> > ISPs could gain an unfair advantage if they can provide a qualifying
> > endpoint within their network, closer to the last-mile links than most
> > realistic Internet services. Conversely, ISPs are unlikely to endorse a
> > measurement scheme which places the endpoints too far away from them.
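Coming back to the pfifo-as-reference idea above, here is a minimal sketch
of what such a normalization could look like. To be clear, this is only one
possible reading: the ratio form, the clamp, and the function names are my
own assumptions, not anything agreed in this thread; the only given is
pfifo (limit 1000) as the reference qdisc.

# Illustrative only: one way an "FQ" number might be derived by normalizing
# a candidate qdisc's result against a pfifo (limit 1000) reference run.
# The ratio form and the names below are assumptions, not thread consensus.

from statistics import median

def induced_latency_ms(loaded_rtts_ms, unloaded_rtts_ms):
    """Latency added by the load, relative to the no-load baseline (medians)."""
    return median(loaded_rtts_ms) - median(unloaded_rtts_ms)

def fq_score(candidate_induced_ms, pfifo_induced_ms):
    """Normalize against the pfifo reference: 1.0 means no better than pfifo,
    10.0 means one tenth of the latency pfifo induced under the same load."""
    return pfifo_induced_ms / max(candidate_induced_ms, 0.1)  # clamp avoids /0

# Made-up numbers: pfifo adds ~800 ms under the standard load, cake adds ~5 ms.
print(round(fq_score(5.0, 800.0), 1))   # 160.0

Whether a bigger-is-better ratio like this or raw milliseconds side by side
is the right public presentation is exactly the open question here.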
> > One reasonable possibility is to use DNS lookups to randomly-selected
> > gTLDs as the benchmark. There are gTLD DNS servers well-placed in
> > essentially all regions of interest, and effective DNS caching is a
> > legitimate means for an ISP to improve their customers' internet
> > performance. Random lookups (especially of domains which are known to
> > not exist) should defeat the effects of such caching.
> >
> > Induced latency can then be measured by applying a load and comparing
> > the new latency measurement to the baseline. This load can
> > simultaneously be used to measure available throughput. The tests on
> > dslreports offer a decent example of how to do this, but it would be
> > necessary to standardise the load.
>
> It would be good to know what an average worst-case heavy load is on a
> typical household Internet connection and standardize on that. Windows
> updates, for example, can be pretty bad (many flows).

My mental reference has always been a family of four -

Mom in a videoconference
Dad surfing the web
Son playing a game
Daughter uploading to youtube

(pick your gender-neutral roles at will)

+ Torrenting or dropbox or windows update or steam or ...

A larger-scale reference might be a company of 30 people.

> DNS is an interesting possibility. On the one hand all you get is RTT,
> but on the other hand your server infrastructure is already available. I
> use the dslreports speedtest pretty routinely as it's decent, although
> results can vary significantly between runs. If they're using DNS to
> measure latency, I hadn't realized it.
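For what it's worth, here is a minimal sketch of the random-NXDOMAIN probe
being discussed, just to make it concrete. Everything specific in it - the
label length, the handful of gTLDs, the sample count, and going through the
system resolver via getaddrinfo rather than querying gTLD servers directly -
is my own assumption, not something specified above.

#!/usr/bin/env python3
"""Rough sketch: time lookups of random names that are known not to exist,
so no cache can answer them (the assumptions are noted in the covering text)."""
import random
import socket
import string
import time

def random_bogus_name():
    # A long random label is effectively guaranteed not to exist, which is
    # exactly what defeats resolver caching.
    label = "".join(random.choices(string.ascii_lowercase, k=24))
    tld = random.choice(["com", "net", "org", "info"])  # arbitrary gTLD sample
    return f"{label}.{tld}"

def probe_ms():
    """Time one lookup; NXDOMAIN is the expected (and timed) outcome."""
    name = random_bogus_name()
    start = time.monotonic()
    try:
        socket.getaddrinfo(name, None)
    except socket.gaierror:
        pass
    return (time.monotonic() - start) * 1000.0

def median_lookup_ms(samples=10):
    rtts = sorted(probe_ms() for _ in range(samples))
    return rtts[len(rtts) // 2]

if __name__ == "__main__":
    baseline = median_lookup_ms()          # run with the link idle...
    print(f"baseline lookup time: {baseline:.1f} ms")
    # ...then start the standardized load (family-of-four style) and run it
    # again; the difference from the baseline is the induced latency.

The induced-latency number Jonathan describes would then just be the same
measurement repeated while the standardized load is running, minus the idle
baseline.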