But in your test, a reduced TCP RTT should result in a higher throughput, no?
Theoretically, but I think the difference may be only marginal at modern bandwidths. For example, during my 20 Mbit/s, 10-second “one vs one” iperf test, 23770 segments are sent and only 9 are dropped. ECN saves those 9 segments, but that’s only 13626 bytes (0.038%), plus whatever side effects there may be. My current test with iperf isn’t sensitive enough to measure a corresponding difference in throughput.
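As a quick back-of-the-envelope check of those figures (the 1514-byte full Ethernet frame size is my assumption, not something read out of the capture):

```python
# Sanity check of the drop figures above. Assumes full-sized
# Ethernet frames of 1514 bytes (my assumption, not measured).
SEGMENT_BYTES = 1514
sent, dropped = 23770, 9

saved_bytes = dropped * SEGMENT_BYTES  # 13626 bytes ECN would save
fraction = dropped / sent              # fraction of segments dropped

print(f"bytes saved: {saved_bytes}")
print(f"dropped fraction: {fraction:.3%}")  # ~0.038%
```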
Instead, I’ll compare packet captures with ECN on and off to look for an improvement in the TCP RTT spikes typically associated with drops.
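For that comparison, here is a minimal sketch of pulling per-segment RTT samples out of a capture with scapy. Everything in it is an assumption on my part: a single iperf flow with data toward port 5001, no sequence wraparound, and retransmissions simply inherit the original send time (which inflates those samples).

```python
# Rough per-segment RTT samples from a single-flow capture.
# Sketch only: one flow, data toward data_port, no seq wraparound,
# no SACK handling; retransmitted data keeps its first send time.
from scapy.all import rdpcap, IP, TCP

def rtt_samples(pcap_path, data_port=5001):
    in_flight = {}  # seq just past a segment's last byte -> send time
    samples = []
    for pkt in rdpcap(pcap_path):
        if IP not in pkt or TCP not in pkt:
            continue
        ip, tcp = pkt[IP], pkt[TCP]
        # Payload length from the IP header, so Ethernet padding on
        # small pure ACKs doesn't get miscounted as data.
        payload = ip.len - ip.ihl * 4 - tcp.dataofs * 4
        if tcp.dport == data_port and payload > 0:
            # Data segment: keep the first (original) send time.
            in_flight.setdefault(tcp.seq + payload, pkt.time)
        elif tcp.sport == data_port:
            # ACK from the receiver covers every outstanding segment
            # whose end falls at or below the ack number.
            for end in sorted(k for k in in_flight if k <= tcp.ack):
                samples.append(float(pkt.time - in_flight.pop(end)))
    return samples

samples = rtt_samples("one_vs_one_ecn.pcap")  # hypothetical file name
```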
Well, the big danger of dropping packets is that you might stall a flow (say, by dropping enough consecutive packets to drive the flow into RTO), something much less likely with SACK (at least that is my understanding of one of SACK’s promises).
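A toy illustration of that stall scenario, entirely my own sketch (classic three-dupack fast retransmit, no SACK modeled): if the drops land at the tail of what’s in flight, too few later segments arrive to generate dupacks, and recovery has to wait for RTO.

```python
# Toy model: does a burst of consecutive drops leave enough later
# segments to trigger fast retransmit (3 dupacks), or does the
# flow stall until RTO? My own sketch, not real TCP.
def recovers_without_rto(in_flight, first_drop, n_dropped,
                         dupack_threshold=3):
    # Each segment arriving after the dropped burst produces one dupack.
    survivors_after_gap = in_flight - (first_drop + n_dropped)
    return survivors_after_gap >= dupack_threshold

# Dropping 2 of 20 segments mid-window: plenty of dupacks follow.
print(recovers_without_rto(in_flight=20, first_drop=10, n_dropped=2))  # True
# Dropping the last 4 segments: no dupacks at all, RTO stall.
print(recovers_without_rto(in_flight=20, first_drop=16, n_dropped=4))  # False
```

SACK itself isn’t modeled here; its main promise is that several losses within one window can be repaired without falling back to RTO, since the SACK blocks tell the sender exactly which segments are missing.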
That may be, but my simple simulation doesn’t reproduce that case. I’ve updated it and made some TCP RTT graphs, which do show a clearer difference with ECN. All the files and pcaps are here:
Comparing these two one-vs-one RTT graphs, the difference with ECN enabled can be seen:
Similar for one-vs-pulses:
The TCP window graphs are also more appealing in the ECN case, at least.
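For completeness, a minimal matplotlib sketch of how two RTT series, such as those from the rtt_samples() sketch above, could be graphed side by side (names and layout are mine, not from the original test scripts):

```python
# Plot two RTT series one above the other for visual comparison,
# e.g. the output of rtt_samples() with ECN off and on.
import matplotlib.pyplot as plt

def plot_rtt(samples_noecn, samples_ecn):
    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, sharey=True)
    ax1.plot(samples_noecn, ".", markersize=2)
    ax1.set_title("one vs one, ECN off")
    ax1.set_ylabel("RTT (s)")
    ax2.plot(samples_ecn, ".", markersize=2)
    ax2.set_title("one vs one, ECN on")
    ax2.set_xlabel("ACKed segment")
    ax2.set_ylabel("RTT (s)")
    fig.tight_layout()
    plt.show()
```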
Now, re-reading Dave’s posts about why the ECN-sane project was started, there appear to be some pathological cases. This simple test doesn’t get to those. For now I just wanted to get a feel for the basics. :)