CoDel AQM discussions
* [Codel] GRO (generic receive offload) is not helpful with codel and fq_codel
@ 2012-11-22  7:22 Dave Taht
  2012-11-22  7:57 ` Eric Dumazet
  0 siblings, 1 reply; 2+ messages in thread
From: Dave Taht @ 2012-11-22  7:22 UTC (permalink / raw)
  To: codel, David Woodhouse

I am fiddling with one of David Woodhouse's machines (which is on a
16Mbit down, 0.75Mbit up link).

Pouring packets through it, I noticed significant jitter while using
HTB to rate limit, and saw maxpacket consistently hitting 14546 rather
than staying at the MTU.

Normally when I see this it is due to the TSO/GSO offloads, which I
had already turned off on this machine's egress network card. Since it
was happening while I was forwarding packets, I thought that perhaps
it was GRO. But I had turned that off too on this network card, and
hard-limited BQL to my usual 3k, and I still hit it....
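
For reference, capping BQL like that is just a write to the per-queue
sysfs knob - roughly this, assuming eth1 has a single tx queue:

echo 3000 > /sys/class/net/eth1/queues/tx-0/byte_queue_limits/limit_max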

tc -s qdisc show dev eth1
... snip, snip...
qdisc fq_codel 120: parent 1:12 limit 10240p flows 16000 quantum 500
target 20.0ms interval 100.0ms
 Sent 2213780 bytes 4843 pkt (dropped 48, overlimits 0 requeues 0)
 backlog 12160b 8p requeues 0
  maxpacket 14546 drop_overlimit 0 new_flow_count 1237 ecn_mark 0
                     *******
  new_flows_len 1 old_flows_len 2

Then! (after sleeping on it) I realized that the overlarge packets
were coming in on the *main* ethernet interface and then getting
pumped through the egress interface. Turning off GRO on THAT interface
(eth0) got maxpacket down to the sane size of 1514. And latency and
jitter - and, for that matter, inter-flow fairness - markedly
improved.
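
In this case that amounted to something like:

ethtool -K eth0 gro off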

I went from induced latency under load (pre-fq_codel) of 350ms on
this link, to 0-90ms with GRO on, to 0-30ms with it off. I will try to
get around to doing a CDF plot of the overall differences today,
because the median looked closer to 70ms with GRO on and closer to
20ms with GRO off.

This was 3.6.6-1.fc16.i686 with a Realtek 8169. I don't remember if
this driver is BQLed or not (it doesn't matter in this test case, as I
am rate limiting with HTB).

[   16.589197] r8169 eth1: RTL8169sc/8110sc
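
One rough way to check whether a driver is actually feeding BQL is to
watch its inflight counter while traffic is moving - if it stays at
zero, the driver probably isn't reporting completions to BQL:

cat /sys/class/net/eth1/queues/tx-0/byte_queue_limits/inflight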

So those of you who are testing codel and fq_codel, etc., might want
to see what maxpacket is hitting on your setup. That seems to explain
some of the fq_codel jitter and codel performance issues showing up in
the testing people are doing at low bandwidths.
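
A quick way to check is to grep it out of the qdisc stats for whatever
device you are shaping on, e.g.:

tc -s qdisc show dev eth1 | grep maxpacket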

You can turn off offloads via

ethtool -K eth1 tso off gso off gro off

and verify with:

ethtool -k eth1

Offload parameters for eth1:
rx-checksumming: on
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: off

Lastly, I note that given the very slow uplink I was also fiddling
with the codel target value. What effect that actually has is unknown
- more testing today - but I think perhaps I was debugging the wrong
problem, and it was GRO on one of the interfaces causing the oddities.
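
For anyone who wants to fiddle with that too: the target can be
changed on a live fq_codel instance, something along these lines (the
parent comes from the tc output above, the 40ms value is just an
example):

tc qdisc change dev eth1 parent 1:12 fq_codel target 40ms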

My thanks to David Woodhouse for letting me hack on his home system...

-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

