[Codel] plots
Rick Jones
rick.jones2 at hp.com
Mon Nov 26 14:12:39 EST 2012
On 11/23/2012 01:50 AM, Toke Høiland-Jørgensen wrote:
> Dave Taht <dave.taht at gmail.com> writes:
>
>
>> The *.svg of high bandwidths is nice, the *.ps of low is not so much.
>
> Yeah, I see your point.
>
>> 1) I think the way to fix the upload plot is to use a larger sample
>> interval, like 1 second, when dealing with low bandwidths.
>
> Well, the reason for missing data is that netperf "misses its deadlines"
> so to speak. I.e. we tell it to output intermediate results every 0.1
> seconds, and it only does so every 0.5 seconds. As I understood Rick's
> explanation of how netperf's demo mode works, the problem with missing
> deadlines is that netperf basically tries to guess how much data it will
> send in the requested interval, and then after having sent that much
> data it checks the time to see if it's time to output an intermediate
> result. So if the data transfer rate is slower than expected, the
> deadline will be missed.
That is correct. That behaviour goes back to the days and systems where
a gettimeofday() call was not "cheap."  Even today, that can
have a measurable effect on the likes of a TCP_RR test.
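
To make that failure mode concrete, here is a minimal sketch in Python
(illustrative only, not netperf's actual C code; the names, rates and
buffer size are made up) of the units-based demo logic Toke describes:
estimate how many sends fit in the requested interval, do that many
sends, and only then check the clock.

import time

def demo_loop(send, duration=10.0, interval=0.1,
              expected_rate=1e6, send_size=1448):
    """send() transmits one send_size-byte buffer and returns when done."""
    units_per_interval = max(1, int(expected_rate * interval // send_size))
    start = time.monotonic()
    next_report = start + interval
    while time.monotonic() - start < duration:
        for _ in range(units_per_interval):
            send()                        # may take far longer than planned
        now = time.monotonic()
        if now >= next_report:            # the clock is only checked here
            print("interim result at t=%.2fs" % (now - start))
            next_report = now + interval

# e.g. demo_loop(lambda: time.sleep(0.05)) -- a "send" slower than assumed,
# so each batch overshoots the 0.1 s interval and every report comes late.

If the link is slower than expected_rate assumed, the interim results
come out late, which is exactly the 0.1-vs-0.5-second gap described above.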
> Using negative numbers for the interval makes it check the time every
> time it sends something, but it still needs to send a bit of data
> between every check, and if that takes too long the holes will appear.
What I do when I go to shove netperf interim results into an RRD is make
the heartbeat at least as long as the longest interval in the stream of
interim results. I also use a minimum step size of one second. So, if I
ran a netperf command with -D -0.25 but saw some intervals which were,
oh, say, 1.5 seconds long, I would set the heartbeat of the RRD to 2
seconds to avoid any unknowns in the data.
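
As a rough sketch of that rule (the function name here is made up, not
from rrdtool or netperf): round the longest observed interim interval up
to whole seconds and use that as the heartbeat, with a step of at least
one second.

import math

def rrd_step_and_heartbeat(interval_lengths, min_step=1):
    """interval_lengths: seconds between successive netperf interim results."""
    heartbeat = max(min_step, math.ceil(max(interval_lengths)))
    return min_step, heartbeat

step, heartbeat = rrd_step_and_heartbeat([0.25, 0.3, 1.5, 0.4])
print(step, heartbeat)   # -> 1 2
# which would then go into something like:
#   rrdtool create tput.rrd --step 1 DS:bps:GAUGE:2:0:U RRA:AVERAGE:0.5:1:600

That way an interim result that arrives late never leaves an unknown in
the RRD.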
happy benchmarking,
rick
> Using a longer sampling interval for the entire plot might alleviate
> this, I suppose, but since we're measuring bytes sent per second, doing
> so amounts to averaging subsequent points; so is there any reason why we
> can't just do the averaging over the data we already have (i.e. just
> increase the interpolation interval)?
>
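That equivalence is easy to sketch: since each sample is already bytes
per second over its own interval, averaging N adjacent samples gives the
same value that a single N-times-longer sampling interval would have
produced (assuming equal-length intervals; unequal ones would need
time-weighting). For instance:

def rebin(samples, factor):
    """samples: per-interval throughput values; factor: how many to merge."""
    return [sum(samples[i:i + factor]) / len(samples[i:i + factor])
            for i in range(0, len(samples), factor)]

print(rebin([1.0, 3.0, 2.0, 4.0, 5.0], 2))   # -> [2.0, 3.0, 5.0]
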
> If we do want to increase the actual sampling interval, do you propose
> to do so only for low-bandwidth tests? Because in this case we would
> either need to detect when the bandwidth is low and adjust parameters,
> or there would have to be two versions of the test: a low-bandwidth and a
> high-bandwidth version.
>
>> 2) I am really hating losing the outliers entirely. In particular,
>> getting a second ping plot that used a CDF would be more accurate and
>> revealing.
>
> I agree that this is not optimal, and I've been thinking that I would
> like to decouple the data gathering part from the plotting part a bit.
> I.e. making it possible to specify a test that gathers a lot of data
> (could add in tc stats for instance) and saves it, and then specify
> several sets of plots to do on that data. A CDF would be one of them;
> another could be a simpler plot that shows the average (or total)
> upload and download with just the ping, similar to the simple plots we
> did with just two streams. And I'm sure we can come up with more. Export
> would still be there, of course; in fact, I was planning to switch to
> using JSON as the native storage format.
>
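A rough sketch of that decoupling idea, with made-up field names (this
is not netperf-wrapper's actual data format): the test run dumps all raw
series into one JSON file, and separate plot functions, such as a ping
CDF that keeps the outliers visible, consume it afterwards.

import json

def save_results(path, results):
    with open(path, "w") as fp:
        json.dump(results, fp)            # raw series, tc stats, metadata, ...

def ping_cdf(path):
    """Empirical CDF of the ping samples stored in the JSON file."""
    with open(path) as fp:
        pings = sorted(json.load(fp)["ping_ms"])
    n = len(pings)
    return [(rtt, (i + 1) / n) for i, rtt in enumerate(pings)]

save_results("run.json", {"ping_ms": [12.1, 13.0, 11.8, 95.2, 12.4]})
for rtt, frac in ping_cdf("run.json"):
    print("%6.1f ms  %.2f" % (rtt, frac))   # the 95 ms outlier stays in the tail
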
> I still have a way to go on the writing for my project before I get to
> doing the tests for myself, but I can try to interleave some work on
> netperf-wrapper to get the above changes in there? :)
>
>
> -Toke