[Bloat] Recommendations for fq_codel and tso/gso in 2017
Dave Täht
dave at taht.net
Fri Jan 27 02:55:40 EST 2017
On 1/26/17 11:21 PM, Hans-Kristian Bakke wrote:
> Hi
>
> After having had some issues with inconsistent tso/gso configuration
> causing performance issues for sch_fq with pacing on one of my systems,
> I wonder whether it is still recommended to disable gso/tso for
> interfaces used with fq_codel qdiscs and shaping with HTB etc.
At lower bandwidths GRO can do terrible things. Say you have a 1Mbit
uplink and IW10: a single IW10 burst from one flow injects roughly 130ms
of latency. (At least one driver, mvneta, will synthesise 64k GRO
super-packets, which is even worse.)
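To spell out the arithmetic (rough numbers, assuming ~1500-byte packets
and ignoring link-layer framing):

    # serialization delay of a burst at a given link rate
    def burst_delay_ms(packets, pkt_bytes, rate_mbit):
        bits = packets * pkt_bytes * 8
        return bits / (rate_mbit * 1e6) * 1e3

    print(burst_delay_ms(10, 1500, 1.0))      # IW10 at 1Mbit: ~120ms, call it 130ms with headers
    print(burst_delay_ms(1, 64 * 1024, 1.0))  # one 64k GRO super-packet: ~524ms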
>
> If there is a trade-off, at which bandwidth does it generally make more
> sense to enable tso/gso than to have it disabled when doing HTB-shaped
> fq_codel qdiscs?
I stopped caring about tuning params in the > 40Mbit, < 10Gbit range; or
rather, I stopped trying to get below 200usec of jitter/latency there.
(Others care.)
And: My expectation was generally that people would ignore our
recommendations on disabling offloads!
Yes, we should revise the sample sqm code and recommendations for a
post-gigabit era so that they no longer bother with changing network
offloads. Were you modifying the old debloat script?
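For reference, the old advice boiled down to something like this per
interface (a sketch, not the actual script; the interface name and the
exact feature list are just examples):

    import subprocess

    def set_offloads(iface, state):
        # ethtool -K <iface> tso <state> gso <state> gro <state>
        args = ["ethtool", "-K", iface]
        for feature in ("tso", "gso", "gro"):
            args += [feature, state]
        subprocess.run(args, check=True)

    set_offloads("eth0", "off")  # the old sub-gigabit recommendation
    # post-gigabit: just leave the offloads at their driver defaults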
TBF & sch_cake peel gro/tso/gso super-packets back into individual
packets and then interleave their scheduling, so GRO is both helpful
(packets transit the stack faster) and harmless, at all bandwidths.
HTB doesn't peel. We also just ripped hfsc out of sqm-scripts (too
buggy), leaving the tbf + fq_codel, htb + fq_codel, and cake models
there.
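Roughly, those models reduce to tc setups like the following (a sketch
only; the device, rate, and tbf/htb parameters are placeholders, and the
real sqm-scripts do quite a bit more):

    import subprocess

    def tc(*args):
        subprocess.run(["tc", *args], check=True)

    dev, rate = "eth0", "100mbit"  # placeholders

    # htb + fq_codel: HTB does the shaping, fq_codel manages the queue
    tc("qdisc", "replace", "dev", dev, "root", "handle", "1:", "htb", "default", "10")
    tc("class", "add", "dev", dev, "parent", "1:", "classid", "1:10", "htb", "rate", rate)
    tc("qdisc", "add", "dev", dev, "parent", "1:10", "fq_codel")

    # tbf + fq_codel: same idea, with a token bucket instead of HTB
    # tc("qdisc", "replace", "dev", dev, "root", "handle", "1:", "tbf",
    #    "rate", rate, "burst", "3000", "latency", "300ms")
    # tc("qdisc", "add", "dev", dev, "parent", "1:", "fq_codel")

    # cake: shaping, flow isolation and peeling rolled into one qdisc
    # tc("qdisc", "replace", "dev", dev, "root", "cake", "bandwidth", rate)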
...
Cake is coming along nicely. I'd love a test in your 2Gbit bonding
scenario, particularly in a per-host fairness test, at line or shaped
rates. We recently got cake working well with nat.
http://blog.cerowrt.org/flent/steam/down_working.svg (ignore the latency
figure; the 6 flows were to spots all over the world)
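If you do run that per-host fairness test shaped, the nat and per-host
bits are just cake keywords; a minimal sketch for the download side
(device and rate are placeholders; dual-srchost is the usual choice on
the upload side):

    import subprocess

    # cake: shape, look through NAT via conntrack ("nat"), and share
    # bandwidth fairly per destination host, then per flow ("dual-dsthost")
    subprocess.run(["tc", "qdisc", "replace", "dev", "eth0", "root", "cake",
                    "bandwidth", "100mbit", "nat", "dual-dsthost"], check=True)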
> Regards,
> Hans-Kristian
>
>