[Codel] RFC: Realtime Response Under Load (rrul) test specification

Rick Jones rick.jones2 at hp.com
Tue Nov 6 13:14:53 EST 2012


On 11/06/2012 04:42 AM, Dave Taht wrote:
> I have been working on developing a specification for testing networks
> more effectively for various side effects of bufferbloat, notably
> gaming and voip performance, and especially web performance.... as
> well as a few other things that concerned me, such as IPv6 behavior,
> and the effects of packet classification.
>
> A key goal is to be able to measure the quality of the user experience
> while a network is otherwise busy, with complex stuff going on in the
> background, but with a simple presentation of the results in the end,
> in under 60 seconds.

Would you like fries with that?

Snark aside, I think that being able to capture the state of the user 
experience in only 60 seconds is daunting at best.  Especially if this 
testing is going to run over the Big Bad Internet (tm) rather than in a 
controlled test lab.

> While it's not done yet, it escaped into the wild today, and I might
> as well solicit wider opinions on it, sooo... get the spec at:
>
> https://github.com/dtaht/deBloat/blob/master/spec/rrule.doc?raw=true

Github is serving that up as a plain text file, which leads Firefox 
to try opening it with gedit, and gedit does not seem at all happy 
with it.  It was necessary to download the file and open it 
"manually" in LibreOffice.

> MUST run long enough to defeat bursty bandwidth optimizations such as
> PowerBoost and discard data from that interval.

I'll willingly display my ignorance, but for how long do PowerBoost 
and its cousins boost bandwidth?

I wasn't looking for PowerBoost, and given the thing being examined I 
wasn't seeing it, but recently, when I was evaluating the network 
performance of something "out there" in the cloud (not my home cloud, 
as it were), I noticed performance spikes repeating at intervals 
which would require > 60 seconds to "defeat".
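
For what it's worth, here is the sort of thing I imagine "discard 
data from that interval" might look like in practice - a rough Python 
sketch, where the 1.2x threshold, the use of the last third of the 
samples as "steady state", and the sample data are all my own 
inventions, not anything from the spec:

from statistics import median

def trim_boost(samples_mbit, factor=1.2):
    """Drop a leading burst above steady state from per-second samples."""
    # Use the median of the final third as the steady-state estimate.
    steady = median(samples_mbit[-(len(samples_mbit) // 3):])
    start = 0
    while start < len(samples_mbit) and samples_mbit[start] > factor * steady:
        start += 1
    return samples_mbit[start:]

# e.g. 20 Mbit/s of "boost" for the first 10 seconds, 8 Mbit/s after
samples = [20.0] * 10 + [8.0] * 50
trimmed = trim_boost(samples)
print(len(samples) - len(trimmed), "seconds discarded;",
      sum(trimmed) / len(trimmed), "Mbit/s steady-state average")

Of course, if the boost lasts a large fraction of the test, or recurs 
the way those spikes I saw did, that sort of trimming leaves you with 
very little data, or the wrong data - hence the question about 
duration.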

> MUST track and sum bi-directional throughput, using estimates for ACK
> sizes of ipv4, ipv6, and encapsulated ipv6 packets, udp and tcp_rr
> packets, etc.

Estimating the bandwidth consumed by ACKs and/or protocol headers, 
using code operating in user space, is going to be guessing.  
Particularly portable user-space code.  While those things may indeed 
affect the user's experience, the user doesn't particularly care about 
ACKs or header sizes.  She cares how well the page loads or the call 
sounds.
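
To make the "guessing" concrete, here is roughly the arithmetic user 
space could do - and every constant in it (the MSS, the delayed-ACK 
ratio, the per-ACK overhead) is an assumption that the stack, the 
path, or an encapsulation can invalidate:

def estimate_ack_bytes(bytes_received, mss=1448, acks_per_segment=0.5,
                       ack_size=66):
    """Guess the ACK-direction bytes generated by receiving bytes_received.

    ack_size=66 assumes Ethernet (14) + IPv4 (20) + TCP with timestamps
    (32); an IPv6 ACK would be ~86, and a tunneled one larger still.
    acks_per_segment=0.5 assumes classic delayed ACKs.
    """
    segments = bytes_received / mss
    return segments * acks_per_segment * ack_size

print(estimate_ack_bytes(100 * 1024 * 1024) / 1e6, "MB of ACKs (guess)")

Whether being off by a factor of two on that guess changes how the 
page loads or the call sounds is another question.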

> MUST have the test server(s) within 80ms of the testing client

Why?  Perhaps there is something stating that some number of nines worth 
of things being accessed are within 80ms of the user.  If there is, that 
should be given in support of the requirement.

> This portion of the test will take your favorite website as a target
> and show you how much it will slow down, under load.

Under load on the website itself, or under load on one's link?  I 
ass-u-me the latter, but that should be made clear.  And while the 
additional load this testing places on a web site is likely epsilon, 
there is still the matter of its "optics" if you will - how it looks.  
Particularly if there is going to be something distributed with a 
default website coded into it.

Further, websites are not going to remain static, so there will be the 
matter of being able to compare results over time.  Perhaps that can be 
finessed with the "unloaded" (again I assume relative to the link of 
interest/test) measurement.
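
Assuming that finesse, the comparison I would expect looks something 
like the sketch below - the URL, the repeat count, and the business 
of generating the background load are all placeholders of mine:

import time
import urllib.request
from statistics import median

def fetch_seconds(url, repeats=5):
    """Median wall-clock seconds to fetch url."""
    times = []
    for _ in range(repeats):
        start = time.monotonic()
        urllib.request.urlopen(url).read()
        times.append(time.monotonic() - start)
    return median(times)

idle = fetch_seconds("http://example.com/")    # link otherwise idle
loaded = fetch_seconds("http://example.com/")  # while the rrul load runs
print("slowdown under load: %.2fx" % (loaded / idle))

If both numbers come from the same run, day-to-day changes in the 
site itself mostly wash out of the ratio, though not changes that 
happen between the idle and loaded halves of the run.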

rick jones


