[Bloat] sfqred and RTT testing at 4Mbit
Dave Taht
dave.taht at gmail.com
Thu Jan 12 17:40:24 EST 2012
In the little remaining time I have in France, I have been testing out
Eric Dumazet's SFQRED implementation against two boxes running
his newer SFQ. I need to take time out to do this work
in isolation, but I'm out of time for the next several weeks to do
so.
Baseline ping numbers:

* one-hop router:
64 bytes from 172.30.50.1: icmp_req=2 ttl=64 time=0.245 ms

* two-hop router:
64 bytes from 172.30.47.1: icmp_req=1 ttl=63 time=0.610 ms
During this test I keep a 10ms ping stream running (my attempt
at simulating voip to some extent), which stays at or below

172.30.50.1 : [999], 248 bytes, 2.56 ms (2.55 avg, 0% loss)

for 100% of the samples.
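The sample line above looks like fping loop-mode output; something along
these lines would produce such a 10ms stream (the exact tool and flags
here are my assumption, not taken from the post):

```shell
# Loop-mode ping every 10 ms with ~240 bytes of payload (tool/flags assumed)
# -l: loop forever, -p: interval between pings in ms, -b: payload size in bytes
fping -l -p 10 -b 240 172.30.50.1
```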
And 10 iperfs going to the closest router do:
[ 12] 0.0-66.3 sec 2.75 MBytes 348 Kbits/sec
[ 13] 0.0-66.3 sec 2.75 MBytes 348 Kbits/sec
[ 8] 0.0-66.3 sec 2.75 MBytes 348 Kbits/sec
[ 11] 0.0-66.4 sec 2.75 MBytes 348 Kbits/sec
[ 9] 0.0-66.4 sec 2.75 MBytes 348 Kbits/sec
[ 10] 0.0-66.4 sec 2.75 MBytes 348 Kbits/sec
[ 6] 0.0-66.4 sec 2.75 MBytes 347 Kbits/sec
[ 5] 0.0-67.2 sec 2.88 MBytes 359 Kbits/sec
[ 7] 0.0-67.2 sec 2.88 MBytes 359 Kbits/sec
[ 4] 0.0-67.2 sec 2.88 MBytes 359 Kbits/sec
[SUM] 0.0-67.2 sec 27.9 MBytes 3.48 Mbits/sec
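The competing load could be reproduced roughly like this (addresses are
from the test above; the flags are standard iperf2, but how the streams
were actually launched is my assumption):

```shell
# 10 parallel TCP streams to the one-hop router for 60 seconds (iperf2)
iperf -c 172.30.50.1 -P 10 -t 60 &

# a single latecomer stream to the two-hop router, started shortly
# after the others and run for 55 seconds so it finishes first
sleep 2 && iperf -c 172.30.47.1 -t 55
```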
vs a single latecomer RTT stream, started
after those and ending before them (because
I told it to run for 55 seconds, not 60), to the
further router:
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.7 sec 2.62 MBytes 363 Kbits/sec
You'll note that the iperf tests tend to run longer than 60
seconds for some reason, but the net result is statistically
right in the ballpark.
$TC -s qdisc show dev eth0
qdisc htb 1: root refcnt 2 r2q 10 default 1 direct_packets_stat 0
Sent 130783919 bytes 93092 pkt (dropped 1, overlimits 177024 requeues 0)
rate 1744bit 2pps backlog 0b 0p requeues 0
qdisc sfq 10: parent 1:1 limit 120p quantum 1514b depth 16 headdrop
divisor 16384
ewma 6 min 1500b max 18000b probability 0.12 ecn
prob_mark 0 prob_mark_head 3223 prob_drop 0
forced_mark 0 forced_mark_head 0 forced_drop 0
Sent 130783919 bytes 93092 pkt (dropped 1, overlimits 3223 requeues 0)
rate 1728bit 2pps backlog 0b 0p requeues 0
To set this up:

I am not going to claim this is a correct setting for SFQRED, but it is
certainly the simplest qdisc setup for a shaper I've yet seen. Many of
these params may well be optional...
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: est 1sec 8sec htb default 1
tc class add dev eth0 parent 1: classid 1:1 est 1sec 8sec \
    htb rate 4Mbit mtu 1500 quantum 4500
tc qdisc add dev eth0 parent 1:1 handle 10: est 1sec 4sec sfq \
    limit 120 headdrop flows 64 divisor 16384 \
    redflowlimit 40000 min 1500 max 18000 \
    depth 16 probability 0.12 ecn
For a single stream at this setting, I get:
[14] 0.0-60.6 sec 27.5 MBytes 3.81 Mbits/sec
which, allowing roughly 5% for protocol overhead, lands right at the
shaped rate (the decimal megabits keep mentally messing with me):

3.81 * 1.05 ≈ 4.00
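That back-of-the-envelope factor can be checked a bit more precisely:
iperf reports TCP payload (goodput), while the HTB shaper meters whole IP
packets, so each 1500-byte packet carries about 1448 bytes of payload once
IP and TCP headers plus TCP timestamps are counted. A quick sketch of the
arithmetic (the 1448-byte payload per packet is an assumption for a typical
Linux TCP connection with timestamps enabled):

```python
# Convert the reported iperf goodput back to an estimated IP-level rate.
goodput_mbit = 3.81            # what iperf reported for the single stream
mtu = 1500                     # IP packet size on the link
payload = mtu - 20 - 20 - 12   # minus IP header, TCP header, TCP timestamps

ip_rate = goodput_mbit * mtu / payload
print(f"{ip_rate:.2f} Mbit/s")  # ~3.95 Mbit/s, just under the 4 Mbit HTB rate
```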
The packet captures are lovely, and it's nice to see ECN working.
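One way to spot ECN in the captures is a BPF filter on the ECN bits of
the IP TOS byte (the interface name here is an assumption):

```shell
# The ECN field is the low two bits of the IP TOS byte (ip[1]):
# 01/10 = ECT(1)/ECT(0), 11 = Congestion Experienced (CE)
tcpdump -n -i eth0 'ip[1] & 0x3 != 0'    # any ECN-capable traffic
tcpdump -n -i eth0 'ip[1] & 0x3 == 0x3'  # CE-marked packets only
```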
I also ran a 24-hour wireless stress test against both the latest
CeroWrt and two laptops without ever dropping a connection...
with similar results.
Now I pack for Britain!
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
More information about the Bloat mailing list