[Codel] RFC: Realtime Response Under Load (rrul) test specification
Rick Jones
rick.jones2 at hp.com
Fri Nov 9 12:57:58 EST 2012
>> Github is serving that up as a plain text file, which then has Firefox
>> looking to use gedit to look at the file, and gedit does not seem at all
>> happy with it. It was necessary to download the file and open it "manually"
>> in LibreOffice.
>
> Sorry. The original was in emacs org mode. Shall I put that up instead?
Just make sure the file has the correct MIME type (?) associated with it
and I think it will be fine.
>> Estimating the bandwidth consumed by ACKs and/or protocol headers, using
>> code operating in user space, is going to be guessing. Particularly
>> portable user-space code. While those things may indeed affect the user's
>> experience, the user doesn't particularly care about ACKs or header sizes.
>> She cares how well the page loads or the call sounds.
>
> I feel an "optimum" ACK overhead should be calculated, vs the actual
> (which is impossible to know)
Well, keep in mind that there will be cases where the two will be rather
divergent. Consider a request/response sort of exchange. For excessive
simplicity assume a netperf TCP_RR test. Presumably, for the single-byte
case there will be no standalone ACKs - they will all be piggy-backed on
the segments carrying the requests and responses. But now suppose there
is a little think time in there - say to do a disc I/O or a back-end
query or whatnot. That may or may not cause the response to the request,
or the next request after a response, to arrive after the stack's
standalone-ACK timer has fired, and that timer's value is not something
we know up in user space.
Now make the responses longer and cross the MSS threshold - say
something like 8KB. We might ass-u-me an ACK-every-two-MSS policy, and
we can get the MSS from user space (at least under *nix), but from up
at user space we will not know whether GRO is present, enabled, or even
effective. And if GRO is working, rather than sending something like 5
or 6 ACKs for that 8KB, the stack will have sent just one.
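To make the guesswork concrete, here is the sort of "optimum" ACK
estimate a user-space tool could compute (a minimal sketch in Python,
not anything from the rrul spec): read the MSS with getsockopt
(TCP_MAXSEG) and ass-u-me one ACK per two MSS of response data. The
ACK-every-two-MSS policy and the example host/port are assumptions; as
above, delayed-ACK timers and GRO mean the real count can be quite
different.

    import math
    import socket

    def estimated_acks(sock, response_bytes):
        # Naive "optimum" estimate: one ACK per two MSS of response data.
        # The MSS is visible from user space (Linux) via TCP_MAXSEG, but
        # whether GRO is present/effective, or a delayed-ACK timer fires
        # during think time, is not - so this remains a guess.
        mss = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
        return math.ceil(response_bytes / (2 * mss))

    # Hypothetical usage against an already-connected test socket:
    # sock = socket.create_connection(("netperf.example.net", 12865))
    # print(estimated_acks(sock, 8 * 1024))  # ~3 with a 1448-byte MSS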
>>> MUST have the test server(s) within 80ms of the testing client
>>
>>
>> Why? Perhaps there is something stating that some number of nines worth of
>> things being accessed are within 80ms of the user. If there is, that should
>> be given in support of the requirement.
>
> Con-US distance. Despite me pushing the test to 200ms, I have a great
> deal more confidence it will work consistently at 80ms.
>
> Can make this a "SHOULD" if you like.
MUST or SHOULD, either way you should... include the reason for the
requirement/request.
>
>>> This portion of the test will take your favorite website as a target
>>> and show you how much it will slow down, under load.
>>
>>
>> Under load on the website itself, or under load on one's link? I ass-u-me
>> the latter, but that should be made clear. And while the additional load
>> placed on a web site by this testing is likely epsilon, there is still the
>> matter of its "optics" if you will - how it looks. Particularly if there
>> is going to be something distributed with a default website coded into it.
>>
>> Further, websites are not going to remain static, so there will be the
>> matter of being able to compare results over time. Perhaps that can be
>> finessed with the "unloaded" (again I assume relative to the link of
>> interest/test) measurement.
>
> A core portion of the test really is comparing unloaded vs loaded
> performance of the same place, in the same test, over the course of
> about a minute.
>
> And as these two baseline figures are kept, those can be compared for
> any given website from any given location, over history, and changes
> in the underlying network.
Adding further clarity on specifically *what* is presumed to be
unloaded/loaded, and calling out the assumption that the web server being
accessed will itself have uniform loading for the duration of the test,
would be goodness.
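For instance (purely a sketch of the bookkeeping, with the load
generation itself left out and the target URL hypothetical), the two
baseline figures might be recorded like so:

    import time
    import urllib.request

    def fetch_ms(url):
        # Time a single full fetch of the target page, in milliseconds.
        start = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return (time.monotonic() - start) * 1000.0

    url = "http://foo.com/"           # hypothetical target website
    unloaded_ms = fetch_ms(url)       # before the bulk flows start
    # ... start the bulk flows on the link under test ...
    loaded_ms = fetch_ms(url)         # while the link is loaded
    print(unloaded_ms, loaded_ms)     # keep both figures for history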
David Collier-Brown mentioned "stretch factor" - the ratio of the loaded
to the unloaded delay (assuming I've interpreted what he wrote
correctly). Comparing stretch factors (as one is tweaking things) still
calls for a rather consistent-over-time baseline, doesn't it? (what
David referred to as the "normal service time") If I target webserver
foo.com on Monday, and on Monday I see an unloaded-network latency to it
of 100 ms and a loaded latency of 200 ms, that would be a stretch factor
of 2, yes? If I then look again on Tuesday, having made some change to
my network under test so that load now adds only 75 ms, but unloaded
access to webserver foo.com is for some reason 50 ms, I will have a
loaded latency of 125 ms and a stretch factor of 2.5. That is something
which will need to be kept in mind.
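Put as a trivial bit of arithmetic (a sketch of the definition assumed
above, loaded delay divided by unloaded delay):

    def stretch_factor(unloaded_ms, loaded_ms):
        # David Collier-Brown's "stretch factor": loaded / unloaded delay.
        return loaded_ms / unloaded_ms

    print(stretch_factor(100, 200))      # Monday:  2.0
    print(stretch_factor(50, 50 + 75))   # Tuesday: 125/50 = 2.5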
rick