[Bloat] some (very good) preliminary results from fiddling with byte queue limits on 100Mbit ethernet

Dave Taht dave.taht at gmail.com
Sat Nov 19 12:33:49 PST 2011


Dear Tom (author of the byte queue limits patch)

I have finally got out of 'embedded computing' mode and more into a
place where I can hack on the kernel.
(Not that I'm any good at it)

So I extracted the current set of BQL related patches from the
debloat-testing kernel and applied them to a recent linus-head
(3.2-rc2 + a little)
(they are at: http://www.teklibre.com/~d/tnq )

Now, the behavior I had hoped for was that the tx rate would be
closely tied to the completion rate, and that the device driver's
buffers would fill far less often.

(it's an e1000e in my case - tx ring set to 256 by default, and only
reducible to 64 via ethtool)
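
(For reference, you can check what your driver defaults to with
ethtool's ring query - assuming your interface is named eth0:)

# show maximum and currently configured rx/tx ring sizes
ethtool -g eth0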

This is my standard latency-under-load test for a gigE ethernet card
stepped down to 100Mbit.

ethtool -s eth0 advertise 0x008 # force the device to 100Mbit
ethtool -G eth0 tx 64 # Knock the ring buffer down as far as it can go
# Plug the device in (which runs the attached script to configure
# the interface to ~'good' values)

netperf -l 60 -H some_other_server
# (in my case cerowrt - which currently has a 4 buffer tx ring and an
# 8 buffer txqueuelen - far too low)

and ping some_other_server in another window.
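
(That ping run can be as simple as the following - same placeholder
host name as above; the rtt min/avg/max/mdev summary it prints at the
end is a quick way to eyeball a run.)

ping -c 60 some_other_server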

AND - YES! YES! YES!

SUB-10ms inter-stream latencies - ranging from 1.3ms to about 6ms,
with a median around 4ms.

I haven't seen latencies under load this good on 100Mbit ethernet
since the DECchip tulip 21140!

This is within the range you'd expect from SFQ's 'typical' bunching of
packets. And only a tiny fraction of TCP throughput is lost in the
general case - it's so close to what I'd get without the script as to
be statistically insignificant. CPU load: hardly measurable...

Now. Look at the script.

When a link speed of < 101 Mbit is detected:

With BQL, I set the byte queue limit to 3*MTU (go lower and latencies
get mildly lower, but also more unstable).

Without BQL, I tried to use CBQ to set a bandwidth limit at 92Mbit
and added SFQ on top of that (the documentation for which is now
wrong, by the way - there's no way to set a packet limit).
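
(The BQL knob is just a per-tx-queue sysfs file, so the BQL branch of
the script below boils down to roughly this - a sketch assuming eth0
and a 1500 byte MTU:)

mtu=1500
for q in /sys/class/net/eth0/queues/tx-*/byte_queue_limits
do
        # cap the bytes the stack will queue to this tx queue at 3*MTU
        echo $((mtu * 3)) > $q/limit_max
done
tc qdisc add dev eth0 handle 1 root sfq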

Without byte queue limits, latency under load goes to 130ms and stays
there - i.e. the default buffering in the ethernet driver entirely
defeats my attempt at controlling bandwidth with CBQ + SFQ.
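
(A handy way to check where the standing queue actually lives is to
watch the qdisc backlog counters while the test runs; if the qdisc
backlog stays small while ping times balloon, the excess is sitting
in the driver/ring below it. Assuming eth0 again:)

# per-qdisc backlog, packet and drop counters
tc -s qdisc show dev eth0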

With byte queue limits alone and the default pfifo_fast qdisc...
... at mtu*3, we still end up with 130ms latency under load. :(

With byte queue limits at mtu*3 + the SFQ qdisc, latency under load
can be hammered down below 6ms when running at a 100Mbit line rate.
No CBQ needed.

When doing a reverse test (data mostly flowing in the other direction)
- with cerowrt set to the insanely low values above - I see similar
response times.

netperf -l 60 -H 172.30.42.1 -t TCP_MAERTS

Anyway, the script could use improvement, and I'm busily patching BQL
into the ag71xx driver as I write.

Sorry it's taken me so long to get to this since your bufferbloat
talks at Linux Plumbers. APPLAUSE.
It's looking like BQL + SFQ is an effective means of improving
fairness and reducing latency on drivers that can support it - even
when the hardware demands large tx rings.

More testing on more stuff is needed of course... I'd like to convince
QFQ to work...
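
(For anyone who wants to beat me to it: as far as I can tell QFQ,
unlike SFQ, is classful and won't move traffic until at least one
class and a filter steering packets into it exist. A minimal,
untested sketch of that - eth0, the weight, maxpkt and the catch-all
filter are all just placeholder assumptions:)

tc qdisc add dev eth0 root handle 1: qfq
tc class add dev eth0 parent 1: classid 1:1 qfq weight 1 maxpkt 1514
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dst 0.0.0.0/0 flowid 1:1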

#!/bin/sh
# Starving the beast on ethernet v.000001
# Place this file in /etc/network/if-up.d; NetworkManager will call it
# for you automagically when the interface is brought up.

# Today's ethernet device drivers are over-optimized for 1000Mbit.
# If you are unfortunate enough to run at less than that,
# you are going to lose on latency. As one example, you will
# have over 130ms latency under load with the default settings in the e1000e
# driver - common to many laptops.

# To force your network device to 100Mbit
# (so you can test and then bitch about bloat in your driver)
# ethtool -s your_device advertise 0x008
# It will then stay stuck at 100Mbit until you change it back.
# It also helps to lower your ring buffer as far as it will go
# ethtool -G your_device tx 64 # or lower if you can
# And after doing all that you will be lucky to get 120ms latency under load.

# So I have also built byte queue limits into my kernels at
# http://www.teklibre.com/~d/tnq

# Adding in the below without byte queue limits enabled (using cbq) gets you to
# around 12ms. With byte queue limits, I can get to ~4-6 ms latency under load.
# However (less often of late), I sometimes still end up at 130ms.
# My hope is that with some more tuning (QFQ? a better SFQ setup?)
# this can get below 1 ms.

debloat_ethernet() {
percent=92
txqueuelen=100
bytelimit=64000

speed=`cat /sys/class/net/$IFACE/speed`
# field 5 of 'ip -o link show' output is the MTU value
mtu=`ip -o link show dev $IFACE | awk '{print $5;}'`
# override the default above: cap BQL at 3*MTU
bytelimit=`expr $mtu '*' 3`

[ $speed -lt 1001 ] && { percent=94; txqueuelen=100; }
[ $speed -lt 101 ] && { percent=92; txqueuelen=50; }

#[ $speed -lt 11 ] && { percent=90; txqueuelen=20; }

newspeed=`expr $speed \* $percent / 100`

modprobe sch_cbq
modprobe sch_sfq
modprobe sch_qfq # I can't get QFQ to work

# Doing this twice kicks the driver harder. Sometimes it gets stuck otherwise

ifconfig $IFACE txqueuelen $txqueuelen
tc qdisc del dev $IFACE root
ifconfig $IFACE txqueuelen $txqueuelen

tc qdisc del dev $IFACE root
#tc qdisc add dev $IFACE root handle 1 cbq bandwidth ${newspeed}mbit avpkt 1524
#tc qdisc add dev $IFACE parent 1: handle 10 sfq

if [ -e /sys/class/net/$IFACE/queues/tx-0/byte_queue_limits ]
then
	for i in /sys/class/net/$IFACE/queues/tx-*/byte_queue_limits
	do
		echo $bytelimit > $i/limit_max
	done

	tc qdisc add dev $IFACE handle 1 root sfq
else
	tc qdisc add dev $IFACE root handle 1 cbq bandwidth ${newspeed}mbit avpkt 1524
	tc qdisc add dev $IFACE parent 1: handle 10 sfq
fi

}

debloat_wireless() {

# HAH. Like any of this helps wireless.
# Bail out here; everything below is effectively disabled for now.

exit

percent=92
txqueuelen=100
speed=`cat /sys/class/net/$IFACE/speed`
[ $speed -lt 1001 ] && { percent=94; txqueuelen=100; }
[ $speed -lt 101 ] && { percent=93; txqueuelen=50; }
[ $speed -lt 11 ] && { percent=90; txqueuelen=20; }

newspeed=`expr $speed \* $percent / 100`

#echo $newspeed

modprobe sch_cbq
modprobe sch_sfq
modprobe sch_qfq

# Just this much would help, if wireless had a 'speed'.
ifconfig $IFACE txqueuelen $txqueuelen

}

if [ -h /sys/class/net/$IFACE/phy80211 ]
then
	debloat_wireless
else
	debloat_ethernet
fi
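
If you'd rather exercise the script by hand than wait for if-up.d to
fire, something like this should do it (an assumption on my part - the
script only relies on $IFACE being set, and 'debloat' is whatever you
named the file):

# as root:
IFACE=eth0 sh ./debloat
tc qdisc show dev eth0    # confirm sfq (or cbq+sfq) is the root qdisc
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max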


-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net

