> tc qdisc replace dev imq0 root handle 2: cake raw bandwidth $NONCACHE_RATE flows

Here imq0 should be replaced by ifb0, right? I will be testing this tonight and reply with the results. (A sketch of the full command set as I plan to test it is appended after the quoted message.)

On Mon, Mar 28, 2016 at 4:01 PM, Jonathan Morton wrote:
>
> > On 27 Mar, 2016, at 11:20, moeller0 wrote:
> >
> > it might be more future-proof to just use IFBs from the get-go
>
> For this particular use-case, it seems to be more complicated to use IFB
> than IMQ, largely because there is no iptables rule to divert packets
> through an IFB device, and unlike iptables, the CBQ filter mechanism
> doesn't directly support negative matches of any kind.
>
> However, I think this would work - though it's completely untested:
>
> ip link set ifb0 up
>
> tc qdisc replace dev ppp0 root handle 1: cake pppoe-vcmux bandwidth
> $FULL_RATE triple-isolate
>
> tc qdisc replace dev imq0 root handle 2: cake raw bandwidth $NONCACHE_RATE
> flows
>
> tc filter replace dev ppp0 protocol ip prio 1 handle 11 u32 match ip src
> $CACHE_IP/32
>
> tc filter replace dev ppp0 protocol ip prio 2 handle 12 u32 action mirred
> egress redirect dev ifb0
>
> The logic of the above is that a positive match is made on the cache
> traffic, but no action is taken. This terminates filter processing for
> that traffic. The remaining traffic is redirected unconditionally to the
> IFB device by the second filter rule.
>
> One thing I'm not entirely certain of is whether traffic that has been
> through an IFB device is then requeued in the normal way on the original
> device. I'd appreciate feedback on whether this system does in fact work.
>
> > I would respectfully recommend to avoid the symbolic overhead parameters
>
> Even if I change their underlying behaviour in the future, it'll be in a
> way that retains backwards compatibility with all the examples I've given
> for the current scheme. I mostly wanted to raise awareness that the
> overhead compensation system exists for use on encapsulated links.
>
> - Jonathan Morton

--
Thanx and regd's.
Allan.
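P.S. For reference, here is the command set written out with ifb0 substituted for imq0, roughly as I plan to test it tonight. It is still completely untested; the explicit "parent 1:" on the filters and the match-all expression ("match u32 0 0") on the redirect rule are my own additions (plain u32 filters normally want at least one match expression before an action), and the filter handles from the quoted message are omitted for simplicity.

    # Bring up the IFB device (assumes the ifb module is already loaded,
    # e.g. "modprobe ifb numifbs=1").
    ip link set ifb0 up

    # Shaper for the whole link on the PPPoE interface.
    tc qdisc replace dev ppp0 root handle 1: cake pppoe-vcmux \
        bandwidth $FULL_RATE triple-isolate

    # Shaper for the non-cache traffic only, on ifb0 rather than imq0.
    tc qdisc replace dev ifb0 root handle 2: cake raw \
        bandwidth $NONCACHE_RATE flows

    # Positive match on cache traffic with no action: per the explanation
    # above, this is intended to terminate filter processing so the cache
    # traffic is shaped only by ppp0's root qdisc.
    tc filter replace dev ppp0 parent 1: protocol ip prio 1 u32 \
        match ip src $CACHE_IP/32

    # Everything else is redirected through ifb0's shaper.
    tc filter replace dev ppp0 parent 1: protocol ip prio 2 u32 \
        match u32 0 0 \
        action mirred egress redirect dev ifb0

I will also watch whether the redirected traffic is requeued normally on ppp0 afterwards, as Jonathan asked, and report back.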