[Ecn-sane] results of two simple ECN tests

Sebastian Moeller moeller0 at gmx.de
Sun Feb 17 16:07:42 EST 2019


Hi Pete,


> On Feb 17, 2019, at 21:57, Pete Heist <pete at heistp.net> wrote:
> 
> Yes, it's enabled by default. I think I'm just measuring the wrong thing. ECN seems to be about reducing TCP RTT and jitter, not increasing throughput per se.

	But in your test, a reduced TCP RTT should result in a higher throughput, no?

> I'll rather compare packet captures with it on and off to look for an improvement in the TCP RTT spikes typically associated with drops.

Well, the big danger of dropping packets is that you might stall a flow (say, by dropping enough consecutive packets to drive the flow into RTO), something much less likely with SACK (at least that is my understanding of one of SACK's promises). For post-bottleneck shaping there is also the issue that an ECN-marked incoming packet, unlike a dropped one, at least has not wasted the "transmit slot" it already consumed. But given that TCP was designed to interpret lost packets as a sign of having exceeded capacity, I am not that amazed that it still does a decent job doing so ;)

I believe there is an argument for giving ECN-capable flows a lower marking probability than the drop probability non-ECN flows would get, but since that is easily gamed (end-points negotiating ECN but simply not slowing down on receiving marks) it is not an option for the wide internet, and hence ECN should not give much improvement in throughput (although it will reduce the number of retransmitted packets). I wonder how this would change if you reconfigured the shaper to half the bandwidth in the middle of the test?
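To make the asymmetric-treatment idea concrete, here is a toy RED-style sketch (hypothetical names and probabilities, not the code of any real qdisc): at a given congestion level, ECT packets get CE-marked with a reduced probability while non-ECT packets are dropped at the full probability.

```python
import random

def aqm_decision(ect_capable, congestion_prob, ecn_factor=0.5, rng=random.random):
    """Toy RED-style AQM decision (illustrative only).

    ect_capable:     packet carries an ECT codepoint (ECN negotiated)
    congestion_prob: current drop probability for non-ECT traffic, in [0, 1]
    ecn_factor:      hypothetical reduction applied to ECT traffic's
                     marking probability (the "lower marking probability"
                     argument above)
    Returns "forward", "mark", or "drop".
    """
    if ect_capable:
        # ECT packets are marked, never dropped, and at a reduced probability.
        return "mark" if rng() < congestion_prob * ecn_factor else "forward"
    # Non-ECT packets see the full drop probability.
    return "drop" if rng() < congestion_prob else "forward"

# At congestion_prob=0.4 with ecn_factor=0.5, an ECT packet is only marked
# when the random draw falls below 0.2, while a non-ECT packet is dropped
# below 0.4 -- exactly the asymmetry that a mark-but-don't-slow-down
# endpoint could exploit.
```

The gaming problem is visible directly in the sketch: nothing forces the endpoint that negotiated ECT to actually reduce its rate on receiving a mark, which is why this asymmetry is unsafe on the open internet.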


> 
> On 17 Feb 2019, at 14:02, Sebastian Moeller <moeller0 at gmx.de> wrote:
> 
>> Did you use SACK?
>> 
>> On February 17, 2019 12:26:51 PM GMT+01:00, Pete Heist <pete at heistp.net> wrote:
>> Attached are some scripts that run two simple tests of ECN with veth devices, with and without ECN. The topology is:
>> 
>> client - middlebox (20Mbit htb+fq_codel egress both ways) - net (40ms netem delay both ways, i.e. 80ms RTT) - server
>> 
>> Here are some results from the APU2 with Debian 9 / kernel 4.9.0-8:
>> 
>> Test 1 (“One vs one”, two clients uploads competing, one flow each for 60 seconds, measure total data transferred):
>> 
>> 	No ECN, 63.2 + 63.5 transferred = 126.7MB
>> 	ECN, 63.2 + 61.5 transferred = 124.7MB
>> 
>> Test 2 (“One vs pulses”, client #1: upload for 60 seconds, client #2: 40x 1M uploads sequentially (iperf -n 1M), measure client #1 data transferred):
>> 
>> 	No ECN, 63.2 MB transferred
>> 	ECN, 65.0 MB transferred
>> 
>> Can anyone suggest changes to this test or a better test that would more clearly show the benefit of ECN? I guess we’d want more congestion and the cost of each lost packet to be higher, meaning higher RTTs and more clients?
>> 
>> Pete
>> 
>> -- 
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
