Rick Jones writes:

Hi Rick

Thanks for your feedback.

> The script looks reasonable. Certainly cleaner than any Python I've
> yet written :) I might be a little worried about skew error though
> (assuming I've not mis-read the script and example ini file). That is
> why I use the "demo mode" of netperf in
> http://www.netperf.org/svn/netperf2/trunk/doc/examples/bloat.sh though
> it does make the post-processing rather more involved.

Ah, I see. I'll look into that. As far as I can tell from that script,
you're basically running it with demo mode enabled and graphing the
results with each line as a data point?

There's a comment about using negative values for -D to increase
accuracy at a cost in performance. Is this cost significant? And if it
is, would there be any reason why it wouldn't be possible to just use
positive values and then correct for the error by interpolating the
samples onto fixed intervals when graphing?

> I see you are running the TCP_RR test for less time than the
> TCP_STREAM/TCP_MAERTS test. What do you then do to show the latency
> without the bulk transfer load?

I ran the TCP_RR test by itself to get a baseline result. The idea with
the different lengths is to start the streams, wait two seconds, and
then run the roundtrip test so that it finishes two seconds before the
streams do (i.e. the roundtrip test only runs while the streams are).
This is for my test setup, which is just a few computers connected by
ethernet, so no sudden roundtrip variances should occur.

I can see how it would be useful to get something that can be graphed
over time; I'll try to look into getting that working.

> I was thinking of trying to write a version of bloat.sh in python but
> before I did I wanted to know if python was sufficiently available in
> most folks bufferbloat testing environments. I figure in
> "full-featured" *nix systems that isn't an issue, but what about in
> the routers?
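For concreteness, the interpolation I have in mind would be something
like this pure-Python sketch, assuming each demo-mode output line has
already been parsed into a (timestamp, value) pair (untested against
real netperf output, of course):

```python
def resample(samples, interval):
    """Linearly interpolate irregularly spaced (time, value) samples
    onto a fixed-interval grid spanning the sampled time range."""
    samples = sorted(samples)
    if len(samples) < 2:
        return list(samples)
    start, end = samples[0][0], samples[-1][0]
    result = []
    t, i = start, 0
    while t <= end:
        # Advance to the segment [t0, t1] that contains t.
        while samples[i + 1][0] < t:
            i += 1
        (t0, v0), (t1, v1) = samples[i], samples[i + 1]
        frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
        result.append((t, v0 + frac * (v1 - v0)))
        t += interval
    return result
```

That would at least keep the graphs aligned even if the intervals
reported by netperf drift a bit.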
Is there any reason why it wouldn't be possible to run the python script
on a full-featured machine and run the netperf instances via an ssh
tunnel on the router (on a different interface than the one being
tested, of course)?

And since Dave replied while I was writing this, I'll just continue
straight on:

> That said, there is a start at lua wrappers as well as various
> post-processing scripts for things like gnuplot in my deBloat repo on
> github. The last time I got serious about getting something more
> conventionally publishable is in
> https://github.com/dtaht/deBloat/tree/master/test/results

I'll go digging in there for ideas for tests to add. And probably just
have a general look around.

> But: It became obvious fast that long RTT tests were needed, which
> I've been trying to establish the infrastructure to do

I assume that by "infrastructure" you mean "(netperf) servers far away"?
What would be needed for a test server in terms of resources (bandwidth
and otherwise)? I could try to persuade my university to let me set up a
test server on their network (which is in Denmark)...

> I'm tickled Toke is also dumping stuff into org-mode, which makes
> publishing content much easier from emacs.

Well, I'm keeping my notes in org mode, so it seemed like the thing to
do. My plan was (is) to add other output formatters if I need to output
to something different.

> Coherently tracking certain other variables (such as the actual tc
> qdisc line and sampling tc -s statistics across an interval) across
> tests is also needed.

This is on the router, I assume? Is that just running the tc command and
parsing the output? And if so, any ideas for a good way to timestamp it?
`date && tc -s` or?

-Toke

-- 
Toke Høiland-Jørgensen
toke@toke.dk
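P.S. For the tc sampling, I guess I'd do something like the following
from Python rather than shelling out to date. Just a rough sketch; the
interface name and the one-second interval are placeholders:

```python
import subprocess
import sys
import time

def sample(cmd, interval, count, out=sys.stdout):
    """Run `cmd` every `interval` seconds, writing a Unix timestamp
    before each snapshot so the samples can be aligned afterwards."""
    for _ in range(count):
        out.write("%.3f\n" % time.time())
        out.write(subprocess.run(cmd, capture_output=True,
                                 text=True).stdout)
        time.sleep(interval)

if __name__ == "__main__":
    # "eth0" is just a placeholder for the router's test interface.
    sample(["tc", "-s", "qdisc", "show", "dev", "eth0"], 1.0, 10)
```

The timestamps could then be matched up against the netperf demo-mode
output in post-processing.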