[Rpm] Alternate definitions of "working condition" - unnecessary?
Jonathan Foulkes
jf at jonathanfoulkes.com
Wed Oct 6 16:36:15 EDT 2021
Let me add another tool, as it's the one I use to ensure I measure full line capacity during our tuning tests: Netperf.
Netperf supports both TCP and UDP streams, and since one usually needs many streams, they can be mixed in any combination of proportions to generate load.
Note: I manage the load on my Netperf servers in a way that guarantees I can measure up to a gig worth of capacity on any single test. An often-overlooked aspect of 'load' is whether the remote end can actually meet a given capacity/latency goal. I can tell you, that matters.
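For anyone who wants to replicate that kind of load, a minimal sketch of what I mean (the server name is a placeholder and the stream mix is just an example; adjust the proportions to taste):

    # Launch 8 TCP upload streams and 2 UDP streams in parallel for 60s
    for i in $(seq 1 8); do
      netperf -H netperf.example.com -t TCP_STREAM -l 60 &
    done
    for i in $(seq 1 2); do
      netperf -H netperf.example.com -t UDP_STREAM -l 60 -- -m 1472 &
    done
    wait    # let all streams finish

Keep in mind that UDP_STREAM sends as fast as it can, so use it sparingly unless flooding is the point.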
As I mentioned in one of the IAB breakout sessions, even a CMTS with an AQM on upload can be driven into bloated conditions, but it takes substantially more streams than one would expect to load it to the point of bloat.
I have such a line, a DOCSIS 3.0 300/25 that I use for testing. After a particularly brutal couple of hours of testing while I tuned some algorithms, the ISP engaged an AQM on upload (automatically or manually, I don't know, but a similar line on the same CMTS does NOT have the AQM). It produces test results that look good, but a limited capacity of 17Mbps, with a 'normal' (12) number of streams. But when hammered with 30 upload streams, we see the full 25Mbps and some 300ms of bloat (or worse).
I'll also note that whatever load pattern the Waveform test uses seems to generate some bloat within my own MacBook Pro: if I run a concurrent PingPlotter session on the same MBP that is running the Waveform test in a browser, PP logs high (800+ms) pings during the test.
I recall checking with PP running on another device and the plots looked like what one would expect with Cake running on the router.
So whatever concurrent 'load' the device running the test has going on can skew the results (at least insofar as determining whether the problem is the line vs. the device).
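A cheap way to separate the two is to ping from a second, idle device on the same LAN while the test runs on the first (interval, count, and target are just examples):

    # 60 seconds of pings at 5/sec from an otherwise idle machine
    ping -i 0.2 -c 300 1.1.1.1

If the idle device sees low latency while the device under load logs 800+ms, the bloat is in the testing device, not the line.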
But for research, I totally agree Flent is a great tool. I just wish it were easier to tweak parameters; maybe I just need to use it more ;-)
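For what it's worth, the 12- vs. 30-stream comparison above can be scripted in Flent along these lines (syntax from memory, and the server name is a placeholder, so double-check against the docs):

    # Upload-only tests with different stream counts, 60 seconds each
    flent tcp_nup --test-parameter upload_streams=12 -l 60 -H netperf.example.com -t 12-streams
    flent tcp_nup --test-parameter upload_streams=30 -l 60 -H netperf.example.com -t 30-streams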
> What new information would another "working conditions" test expose that doesn't already come from Flent/RPM Tool? Thanks.
While I'm happy with what I get from Netperf and Flent, I'm one who would like to see a long-running (>60 sec) test suite that exercises a real-world set of traffic patterns, combining a mix of use cases (VoIP, video conferencing, streaming, Dropbox uploads, web surfing, gaming, etc.), and ranks the performance results for each category.
To be effective at testing a router, it would ideally be a distributed test run from multiple devices with varying networking stacks. Maybe an array of RPi4s with VMs running various OSes?
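Until something like that exists, the closest single-box approximation I know of is a long RRUL run, which at least mixes bulk flows in both directions with latency probes (again, the server name is a placeholder):

    # 5-minute RRUL: 4 TCP up + 4 TCP down, plus UDP/ICMP latency probes
    flent rrul -l 300 -H netperf.example.com -p all_scaled -o rrul.png

But that still doesn't rank per-use-case performance the way I'd like.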
So to Rich's question: ranking results in each category might show some QoS approaches being more effective at some use cases than others, even if all of the QoS implementations are reasonably effective at the usual bloat metrics.
Even using the same QoS (Cake on OpenWrt), two different sets of settings will both give A's on the DSLreports test, but have very different results in a mixed-load scenario.
Cheers,
Jonathan
> On Oct 6, 2021, at 3:11 PM, Rich Brown via Rpm <rpm at lists.bufferbloat.net> wrote:
>
> A portion of yesterday's RPM call encouraged people to come up with new definitions of "working conditions".
>
> This feels like a red herring.
>
> We already have two worst-case definitions - with implementations - of tools that "stuff up" a network. Flent and Apple's RPM Tool drive a network into worst-case behavior for long (> 60 seconds) and medium (~20 seconds) terms.
>
> What new information would another "working conditions" test expose that doesn't already come from Flent/RPM Tool? Thanks.
>
> Rich
> _______________________________________________
> Rpm mailing list
> Rpm at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm