[Codel] happy 4th!
Eric Dumazet
eric.dumazet at gmail.com
Tue Jul 9 08:56:33 EDT 2013
On Tue, 2013-07-09 at 09:57 +0200, Toke Høiland-Jørgensen wrote:
> Mikael Abrahamsson <swmike at swm.pp.se> writes:
>
> > For me, it shows that FQ_CODEL indeed affects TCP performance
> > negatively for long links, however it looks like the impact is only
> > about 20-30%.
>
> As far as I can tell, fq_codel's throughput is about 10% lower on
> 100mbit in one direction, while being higher in the other. For 10mbit
> fq_codel shows higher throughput throughout?
What do you mean? This makes little sense to me.
>
> > What's stranger is that latency only goes up to around 230ms from its
> > 200ms "floor" with FIFO, I had expected a bigger increase in buffering
> > with FIFO. Have you done any TCP tuning?
>
> Not apart from what's in mainline (3.9.9 kernel). The latency-inducing
> box is after the bottleneck, though, so perhaps it has something to do
> with that? Some interaction between netem and the ethernet link?
I did not receive a copy of your setup, so it's hard to tell. But using
netem correctly is tricky.
My current testbed uses the following script, meant to exercise TCP
flows with a random RTT between 49.9 and 50.1 ms, to check how the TCP
stack reacts to reorders. (The answer is: pretty badly.)
Note that using this setup forced me to send two netem patches,
currently in net-next, or else netem used too many CPU cycles on its
own.
http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=aec0a40a6f78843c0ce73f7398230ee5184f896d
Followed by a fix:
http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=36b7bfe09b6deb71bf387852465245783c9a6208
The script:
# netem based setup, installed at receiver side only
ETH=eth4
IFB=ifb0
EST="est 1sec 4sec" # Optional rate estimator
modprobe ifb
ip link set dev $IFB up
tc qdisc add dev $ETH ingress 2>/dev/null
tc filter add dev $ETH parent ffff: \
protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress \
redirect dev $IFB
ethtool -K $ETH gro off lro off
tc qdisc del dev $IFB root 2>/dev/null
# Use netem at ingress to delay packets by 25 ms +/- 100us (to get reorders)
tc qdisc add dev $IFB root $EST netem limit 100000 delay 25ms 100us # loss 0.1
tc qdisc del dev $ETH root 2>/dev/null
# Use netem at egress to delay packets by 25 ms (no reorders)
tc qd add dev $ETH root $EST netem delay 25ms limit 100000
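For completeness, a teardown sketch that undoes the setup above. This is not part of the original script; it assumes the same $ETH/$IFB variables, and the `ethtool` line assumes GRO/LRO were enabled before the test, which may not match your NIC's prior state:

```shell
# Teardown (assumption: reverses the setup script above)
ETH=eth4
IFB=ifb0
# Remove the egress netem on the physical interface
tc qdisc del dev $ETH root 2>/dev/null
# Remove the ingress qdisc (this also removes the mirred filter)
tc qdisc del dev $ETH ingress 2>/dev/null
# Remove the ingress-side netem on the IFB device and bring it down
tc qdisc del dev $IFB root 2>/dev/null
ip link set dev $IFB down
# Re-enable offloads (assumption: they were on before the test)
ethtool -K $ETH gro on lro on
```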
And the results for a single TCP flow:
lpq84:~# ./netperf -H 10.7.7.83 -l 10
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.7.7.83 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.20       37.60
lpq84:~# ./netperf -H 10.7.7.83 -l 10
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.7.7.83 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.06      116.94
Check the rates at the receiver side (and verify no packets were
dropped because of too-low qdisc limits):
lpq83:~# tc -s -d qd
qdisc netem 800e: dev eth4 root refcnt 257 limit 100000 delay 25.0ms
Sent 10791616 bytes 115916 pkt (dropped 0, overlimits 0 requeues 0)
rate 7701Kbit 10318pps backlog 47470b 509p requeues 0
qdisc ingress ffff: dev eth4 parent ffff:fff1 ----------------
Sent 8867475174 bytes 5914081 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc netem 800d: dev ifb0 root refcnt 2 limit 100000 delay 25.0ms 99us
Sent 176209244 bytes 116430 pkt (dropped 0, overlimits 0 requeues 0)
rate 123481Kbit 10198pps backlog 0b 0p requeues 0