From: Jonathan Foulkes
Date: Wed, 6 Oct 2021 16:36:15 -0400
To: Rich Brown, rpm@lists.bufferbloat.net
Subject: Re: [Rpm] Alternate definitions of "working condition" - unnecessary?

Let me add another tool, as it's the one I use to ensure I measure full line capacity during our tuning tests: Netperf.

Netperf supports both TCP and UDP streams, and since one usually needs many streams, you can mix those in any proportion to generate load.

Note: I manage the load on my Netperf servers in a way that guarantees I can measure up to a full gigabit of capacity on any single test. An often overlooked aspect of 'load' is whether the remote end can actually meet a given capacity/latency goal. I can tell you, that matters.

As I mentioned in one of the IAB breakout sessions, even a CMTS with an AQM on upload can be driven into bloated conditions, but it takes substantially more streams than would be expected to fully load it to the point of bloat.

I have such a line, a DOCSIS 3.0 300/25 connection that I use for testing. After a particularly brutal couple of hours of testing while I tuned some algorithms, the ISP engaged an AQM on upload (automatically or manually, I don't know, but a similar line on the same CMTS does NOT have the AQM). It produces test results that look good, but capacity tops out at 17Mbps with a 'normal' number of streams (12). When hammered with 30 upload streams, though, we see the full 25Mbps and some 300ms of bloat (or worse).

I'll also note that whatever load pattern the Waveform test uses seems to generate some bloat within my own MacBook Pro: if I run a concurrent PingPlotter session on the MBP that is also running the Waveform test in a browser, PP logs high (800+ms) pings during the test. I recall checking with PP running on another device, and the plots looked like what one would expect with Cake running on the router.

So whatever 'load' the device running the test has going on concurrently can skew the results (at least insofar as determining whether the problem is the line or the device).

But for research, I totally agree Flent is a great tool. Just wish it were easier to tweak parameters; maybe I just need to use it more ;-)
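For illustration, here is a rough sketch of the kind of invocation I mean; the server hostname and stream counts are placeholders rather than my actual setup, and the Flent line assumes the tcp_nup test and its upload_streams parameter:

  # Placeholder - point this at your own netperf server
  SERVER=netperf.example.com

  # Mix 8 TCP upload streams with 4 UDP streams for 60 seconds;
  # adjust the loop counts to change the TCP/UDP proportions.
  for i in $(seq 1 8); do
    netperf -P 0 -H "$SERVER" -t TCP_STREAM -l 60 &
  done
  for i in $(seq 1 4); do
    netperf -P 0 -H "$SERVER" -t UDP_STREAM -l 60 -- -m 1400 &
  done
  wait

  # Roughly the same 'hammer' via Flent: 30 upload streams for 60 seconds
  flent tcp_nup -H "$SERVER" -l 60 --test-parameter upload_streams=30 \
        -t "30-stream-upload-hammer"

Flent also saves the raw samples to a .flent.gz file, so latency under load can be re-plotted afterwards.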
> What new information would another "working conditions" test expose that doesn't already come from Flent/RPM Tool? Thanks.

While I'm happy with what I get from Netperf and Flent, I'm one who would like to see a long-running (>60 sec) test suite with a real-world set of traffic patterns, combining a mix of use cases (VoIP, video conferencing, streaming, Dropbox uploads, web surfing, gaming, etc.) and ranking the performance results for each category.

To be effective at testing a router, it would ideally be a distributed test run from multiple devices with varying networking stacks. Maybe an array of RPi4s with VMs running various OSes?

So to Rich's question: ranking results in each category might show some QoS approaches being more effective for some use cases than others, even if all of them are reasonably effective by the usual bloat metrics.

Even using the same QoS (Cake on OpenWrt), two different sets of settings will both give A's on the DSLReports test, yet have very different results in a mixed-load scenario.

Cheers,

Jonathan

> On Oct 6, 2021, at 3:11 PM, Rich Brown via Rpm wrote:
> 
> A portion of yesterday's RPM call encouraged people to come up with new definitions of "working conditions".
> 
> This feels like a red herring.
> 
> We already have two worst-case definitions - with implementations - of tools that "stuff up" a network. Flent and Apple's RPM Tool drive a network into worst-case behavior for long (> 60 seconds) and medium (~20 seconds) terms.
> 
> What new information would another "working conditions" test expose that doesn't already come from Flent/RPM Tool? Thanks.
> 
> Rich
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm