[Cake] cake separate qos for lan

Jonathan Morton chromatix99 at gmail.com
Mon Mar 28 15:20:58 EDT 2016

> On 28 Mar, 2016, at 15:25, Allan Pinto <allan316 at gmail.com> wrote:
> I should have made this more clear; please see the topology below with added comments. The customers connecting to the Linux router can range from 100 to 2000, so shaping on the switch is not really an option. I am right now testing on an i3 machine, but for actual live testing I plan to use an i7 or a Xeon.
>                   Cache-Server [ connected to internet gateway , traffic can be sent to it via wccp or policy based routing ]
>                            |
>   internet---->internet Gateway —> L2 switch [ MEN network on fiber ]   --> Linux router with cake [ includes a pppoe server which authenticates with radius ] - - [ pppoe connection over a fiber MEN network ]  --> customer [ customers can be 100 to 2000 ]

I see - so you are doing something like FTTP to a block of flats, where each flat is potentially a separate customer.  That also means you have lots of virtual interfaces (each carrying one PPPoE session) over one physical interface.

> I get an "illegal filter id" error for these two commands:
> >tc filter replace dev ppp0 protocol ip prio 1 handle 11 u32 match ip src $CACHE_IP/32
> >tc filter replace dev ppp0 protocol ip prio 2 handle 12 u32 action mirred egress redirect dev ifb0

Hmm.  Apparently filters need to be attached to a classful qdisc as a parent, which cake is not - I had hoped that the root class would act as a surrogate.  This filter system has a lot of under-documented and counter-intuitive behaviour.

Did you try the IMQ alternative?  I don’t see any reason why it shouldn’t work, as long as you can build a kernel with IMQ support.
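For reference, the IMQ variant would look roughly like this - a sketch only, since IMQ needs the out-of-tree kernel and iptables patches, and the device number and rate here are placeholders:

```shell
# Sketch of the IMQ alternative (assumes the out-of-tree IMQ kernel and
# iptables patches are installed; device number and rate are placeholders).
ip link set imq0 up

# Divert traffic heading out over the PPPoE sessions into imq0 before
# encapsulation, so filters and cake still see plain IP.
iptables -t mangle -A POSTROUTING -o ppp+ -j IMQ --todev 0

# One cake instance shapes the aggregate on the IMQ device.
tc qdisc replace dev imq0 root cake bandwidth 100Mbit besteffort triple-isolate
```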

But since you have lots of customers per router, there may still be a sensible way to do it without IMQ.  Here we step back from the idea of attaching cake instances directly to the virtual interfaces, and instead construct a complete shaping system using a classful structure on the physical device.  Each customer gets a class (and thus a cake instance which you can set the bandwidth of individually), and the cache gets its own separate class (and a cake instance).  This reduces the number of cake instances required considerably.

I believe cake has been tested as capable of handling 10Gb/sec traffic on commodity hardware.  Certainly I have personally had a very modest machine (an AMD E-450) shaping accurately at essentially GigE line rate, that being the fastest network hardware I own.  I don’t think this will be materially affected by the number of cake instances present, but as always, hard data from real experiments trumps theory.

However, by the time traffic hits the physical egress interface, it has already been encapsulated in PPPoE, making it much harder to write a filter to identify which customer’s traffic it is, or whether it came from the cache.  So we need to do the shaping on an IFB device, with the traffic redirected from the internal ingress device, where it has not yet been encapsulated.  For sanity’s sake, we should also put a basic cake instance on the physical egress port.

The following is thus a combination of ingress redirection and a hashed u32 filter, handling up to 1022 customers with consecutive IPv4 addresses.  Since cake itself is classless, a DRR qdisc on the IFB device supplies the classful structure for the filters to attach to, with one cake instance as the leaf of each class.  Adjust as required.  Beware of the leopard.

tc qdisc replace dev $EGRESS root handle 1: cake bandwidth $LOTS besteffort flows

ip link set ifb0 up

tc qdisc replace dev ifb0 root handle 2: drr

tc qdisc replace dev $INGRESS handle ffff: ingress

tc filter replace dev $INGRESS parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0

tc filter replace dev ifb0 parent 2:0 prio 5 protocol ip u32

tc class replace dev ifb0 parent 2: classid 2:3ff drr

tc filter add dev ifb0 parent 2:0 prio 4 protocol ip u32 match ip src $CACHEIP/32 flowid 2:3ff

tc qdisc replace dev ifb0 parent 2:3ff cake bandwidth $PLENTY besteffort triple-isolate

tc filter add dev ifb0 parent 2:0 prio 5 handle 4: protocol ip u32 divisor 1024

for each customer, with $HEXID in [1..3fe]:

	tc class replace dev ifb0 parent 2: classid 2:${HEXID} drr

	tc filter add dev ifb0 protocol ip parent 2:0 prio 5 u32 ht 4:${HEXID}: match ip dst ${CUSTIP}/32 flowid 2:${HEXID}

	tc qdisc replace dev ifb0 parent 2:${HEXID} cake bandwidth $CUSTRATE triple-isolate

tc filter add dev ifb0 protocol ip parent 2:0 prio 5 u32 ht 800:: match ip dst ${CUSTNET}/22 hashkey mask 0x000003ff at 16 link 4:
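To make the per-customer loop concrete, it could be expanded into real shell along these lines.  This is only a sketch: the base network and the per-customer rate are placeholders, and it assumes customers occupy consecutive addresses in a /22 so that the low 10 bits of each address give $HEXID.  It prints the tc commands rather than running them, so the output can be reviewed and then piped into sh:

```shell
#!/bin/sh
# Sketch: expand the "for each customer" pseudocode into a concrete loop.
# Assumes consecutive customer addresses in a /22 (placeholder 10.0.0.0/22),
# so the low 10 bits of each address select the class minor and hash bucket.
BASE=10.0            # first two octets of the customer /22 (placeholder)
CUSTRATE=20Mbit      # placeholder per-customer rate

i=1
while [ $i -le 1022 ]; do                          # 1 .. 0x3fe
	HEXID=$(printf '%x' $i)                    # class minor in hex
	CUSTIP=$BASE.$(( i / 256 )).$(( i % 256 )) # consecutive addressing
	echo "tc class replace dev ifb0 parent 2: classid 2:$HEXID drr"
	echo "tc filter add dev ifb0 protocol ip parent 2:0 prio 5 u32 ht 4:$HEXID: match ip dst $CUSTIP/32 flowid 2:$HEXID"
	echo "tc qdisc replace dev ifb0 parent 2:$HEXID cake bandwidth $CUSTRATE triple-isolate"
	i=$(( i + 1 ))
done
```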

Honestly, I think the IMQ version is simpler and easier to understand, as well as catching all customer traffic.  The above mess doesn’t even handle IPv6...

 - Jonathan Morton
