[Rpm] in case anyone here digs podcasts

Tim Chown Tim.Chown at jisc.ac.uk
Mon Feb 20 10:38:46 EST 2023


Hi,

> On 19 Feb 2023, at 23:49, Dave Taht via Rpm <rpm at lists.bufferbloat.net> wrote:
> 
> https://packetpushers.net/podcast/heavy-networking-666-improving-quality-of-experience-with-libreqos/

I’m a bit lurgy-ridden today so had a listen as it’s nice passive content.  I found it good and informative, though somewhat in the weeds (for me) after about halfway through, but I looked up a few things that were brought up and learnt a few useful details, so overall well worth the time, thanks.

> came out yesterday. You'd have to put up with about 8 minutes of my
> usual rants before we get into where we are today with the project and
> the problems we are facing. (trying to scale past 80Gbit now) We have
> at this point validated the behavior of several benchmarks, and are
> moving towards more fully emulating various RTTs. See
> https://payne.taht.net and click on run bandwidth test to see how we
> are moving along. It is so nice to see sawtooths in real time!

I tried the link and clicked the start test button.  I feel I should be able to click a stop test button too, but again interesting to see :)

> Bufferbloat is indeed, the number of the beast.

I’m in a different world from the residential ISP one that was the focus of what you presented, specifically the R&E networks where most users are connected via local Ethernet campus networks.  But there will be a lot of WiFi of course.

It would be interesting to gauge to what extent bufferbloat is a problem for typical campus users vs typical residential network users.  Is there data on that?  We’re very interested in the new rpm (well, rps!) draft and the iperf2 implementation, which we’ve run from both home network and campus systems to an iperf2 server on our NREN backbone.  I think my next question on the iperf2 tool would be the methodology to ramp up the testing to see at what point bufferbloat is experienced (noting some of your pertinent comments in the podcast).
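
Purely as a sketch of what I mean (the server name is made up, plain ping stands in for the rpm/bounceback probe, and only iperf2's standard -c/-P/-t flags are assumed), something like the following loop would step up the offered load and show where the loaded RTT starts to inflate over the unloaded baseline:

  #!/usr/bin/env python3
  # Sketch: ramp offered load with iperf2 and watch the ping RTT inflate.
  import re
  import subprocess

  SERVER = "iperf.example.net"   # hypothetical iperf2 server on the backbone
  DURATION = 10                  # seconds of load per step

  def ping_rtt(host: str, count: int = 10) -> float:
      """Return the average ping RTT to host in milliseconds."""
      out = subprocess.run(["ping", "-c", str(count), host],
                           capture_output=True, text=True, check=True).stdout
      # summary line looks like: "rtt min/avg/max/mdev = 0.1/0.2/0.3/0.05 ms"
      return float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))

  baseline = ping_rtt(SERVER)
  print(f"unloaded RTT: {baseline:.1f} ms")

  for streams in (1, 2, 4, 8, 16):
      # Start the load, then probe latency while it is running.
      load = subprocess.Popen(["iperf", "-c", SERVER,
                               "-P", str(streams), "-t", str(DURATION)],
                              stdout=subprocess.DEVNULL)
      loaded = ping_rtt(SERVER, count=DURATION - 2)
      load.wait()
      print(f"{streams:2d} streams: loaded RTT {loaded:.1f} ms "
            f"(+{loaded - baseline:.1f} ms over baseline)")

In practice I’d want to substitute iperf2’s own responsiveness output for the ping probe, but the shape of the ramp would be the same.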

Regarding the speeds, we are interested in high speed large scale file transfers, e.g. for the CERN community, so might (say) typically see an iperf3 test up to 20-25Gbps single flow, or iperf2 (which is much better multi-flow) filling a high RTT 100G link with around half a dozen flows.  In practice though the CERN transfers are hundreds or thousands of flows, each of a few hundred Mbps or a small number of Gbps, and the site access networks for the larger facilities are 100G-400G.

On the longest prefix match topic, are there people looking at that with white box platforms, open NOSes and P4 type solutions?

Tim

