[Cake] FIB lookup versus GRO

Dave Taht dave.taht at gmail.com
Sun Apr 12 18:08:22 EDT 2015


On the Linksys WRT1900AC I was able to achieve full line rate (920
Mbit/s) using its mvneta driver (which lacks BQL, AND is hardware
multiqueue - we need to add BQL at least).
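
For reference, adding BQL to a driver is mostly a matter of calling the
standard netdev_tx_* helpers from the transmit and completion paths. A
minimal sketch, with made-up function names standing in for the real
mvneta paths (this is not the actual driver code):

#include <linux/netdevice.h>

/* Hypothetical xmit path: once the skb's descriptors are handed to the
 * hardware, tell BQL how many bytes are now in flight on this queue. */
static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq;
	unsigned int len = skb->len;

	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* ... fill and kick the hardware descriptor ring here ... */

	netdev_tx_sent_queue(txq, len);
	return NETDEV_TX_OK;
}

/* Hypothetical TX completion path: report what was freed so BQL can keep
 * the amount of data sitting in the ring as small as possible. */
static void example_tx_complete(struct net_device *dev, int queue,
				unsigned int pkts, unsigned int bytes)
{
	netdev_tx_completed_queue(netdev_get_tx_queue(dev, queue),
				  pkts, bytes);
}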

Flooding it in both directions with rrul, I got about 600/500 Mbit/s.

With offloads off, that was halved. mvneta has an interesting property
in that it is actually doing software GRO - and I assume (profiling
needed) that much of the savings here comes from not having to do a
FIB lookup on every packet. FIB lookups are vastly improved in Linux
4.0 and later.
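
As a back-of-the-envelope illustration of why the aggregation matters
(arithmetic only, not a measurement - the 1500 and 64k sizes are the
assumptions here):

#include <stdio.h>

/* How many per-packet route lookups a forwarder does at 920 Mbit/s if
 * it looks up once per skb: MTU-sized packets vs 64 KB GRO aggregates.
 * Headers, ACKs and framing are ignored for simplicity. */
int main(void)
{
	const double bytes_per_sec = 920e6 / 8.0;	/* 920 Mbit/s */
	const double mtu = 1500.0;			/* no aggregation */
	const double gro = 65536.0;			/* 64k GRO super-packets */

	printf("lookups/sec at MTU 1500:  %.0f\n", bytes_per_sec / mtu);
	printf("lookups/sec with 64k GRO: %.0f\n", bytes_per_sec / gro);
	printf("reduction:                %.1fx\n", gro / mtu);
	return 0;
}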

(Turning off offloads universally in order to shape is not a good idea,
which is also why I suggested that GRO "peeling" be explored.)
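
By "peeling" I mean re-segmenting a GRO/GSO super-packet back into
MTU-sized skbs at the shaper instead of disabling the offload globally.
A rough sketch of what that could look like in a qdisc-style enqueue
path, built on the existing skb_gso_segment() helper (the enqueue
function and the flow-queueing call are hypothetical placeholders, not
working qdisc code):

#include <linux/err.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>

static void example_queue_segment(struct sk_buff *skb);	/* placeholder */

/* Hypothetical shaper enqueue: if the skb is a GSO/GRO aggregate, peel
 * it into individual segments so the shaper meters MTU-sized packets
 * rather than releasing a whole TCP window as one line-rate burst. */
static int example_enqueue(struct sk_buff *skb)
{
	if (skb_is_gso(skb)) {
		netdev_features_t features = netif_skb_features(skb);
		struct sk_buff *segs, *next;

		/* Ask the stack to software-segment the aggregate. */
		segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
		if (IS_ERR_OR_NULL(segs))
			return -1;	/* a real qdisc would drop here */

		for (; segs; segs = next) {
			next = segs->next;
			segs->next = NULL;
			example_queue_segment(segs);
		}
		consume_skb(skb);	/* replaced by its segments */
		return 0;
	}

	example_queue_segment(skb);
	return 0;
}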

A) Having the hash handy, we could associate the next hop on rx and
steer from there (RPS), caching the next hop in a structure keyed by
that hash and invalidating it whenever the FIB table changes (a rough
sketch follows below).

B) mvneta had GRO of up to 64k! To me this means that entire TCP
windows are being buffered up at line rate and sent as a single burst,
and these MUST be broken up at low rates - so GRO "peeling" is a
necessity when shaping.
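
For A), what I have in mind is essentially a tiny cache keyed by the rx
hash: each slot holds the resolved next hop plus a generation number,
and the FIB-change path just bumps the generation so every stale entry
is ignored. A purely conceptual sketch (plain C, all names invented; a
real version would have to handle RCU, per-CPU placement and so on):

#include <stdint.h>
#include <stddef.h>

#define NH_CACHE_SLOTS 1024

struct nexthop;				/* opaque: whatever the FIB resolved */

struct nh_cache_entry {
	uint32_t hash;			/* rx hash this entry was filled from */
	uint64_t fib_gen;		/* FIB generation at fill time */
	struct nexthop *nh;		/* cached next hop, NULL if empty */
};

static struct nh_cache_entry nh_cache[NH_CACHE_SLOTS];
static uint64_t fib_generation = 1;

/* Called when the FIB changes: one counter bump invalidates every
 * cached entry without walking the table. */
static void nh_cache_flush(void)
{
	fib_generation++;
}

/* On rx: reuse the cached next hop if it was filled for this hash under
 * the current FIB generation; otherwise the caller does the full FIB
 * lookup and stores the result. */
static struct nexthop *nh_cache_lookup(uint32_t rxhash)
{
	struct nh_cache_entry *e = &nh_cache[rxhash % NH_CACHE_SLOTS];

	if (e->nh && e->hash == rxhash && e->fib_gen == fib_generation)
		return e->nh;
	return NULL;
}

static void nh_cache_store(uint32_t rxhash, struct nexthop *nh)
{
	struct nh_cache_entry *e = &nh_cache[rxhash % NH_CACHE_SLOTS];

	e->hash = rxhash;
	e->fib_gen = fib_generation;
	e->nh = nh;
}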


-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


