[Cake] flow isolation with ipip
Cong Xu
davidxu06 at gmail.com
Mon Jun 12 15:13:53 EDT 2017
Hi,
I wonder whether Cake's flow isolation works across an ipip tunnel. I want
to guarantee a fair share of network bandwidth among containers/VMs on the
same host, so I attached sfq/fq to tc classes created in advance, to
provide both shaping and scheduling. The scripts look roughly like this
(assume two containers hosting iperf clients run on the same host; one
container sends 100 parallel streams via -P 100 to an iperf server running
on another host, the other sends 10 parallel streams with -P 10):
tc qdisc add dev $NIC root handle 1: htb default 2
tc class add dev $NIC parent 1: classid 1:1 htb rate ${NIC_RATE}mbit burst 1m cburst 1m
tc class add dev $NIC parent 1:1 classid 1:2 htb rate ${RATE1}mbit ceil ${NIC_RATE}mbit burst 1m cburst 1m
tc class add dev $NIC parent 1:1 classid 1:3 htb rate ${RATE2}mbit ceil ${NIC_RATE}mbit burst 1m cburst 1m
tc qdisc add dev $NIC parent 1:2 handle 2 sfq perturb 10
tc qdisc add dev $NIC parent 1:3 handle 3 sfq perturb 10
tc filter add ...
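The elided filter lines just steer each container's traffic into its own
class; for illustration only (the container addresses 172.17.0.2 and
172.17.0.3 here are hypothetical), they look roughly like:

tc filter add dev $NIC parent 1: protocol ip prio 1 u32 match ip src 172.17.0.2/32 flowid 1:2
tc filter add dev $NIC parent 1: protocol ip prio 1 u32 match ip src 172.17.0.3/32 flowid 1:3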
This works well: each container running iperf gets almost the same
bandwidth regardless of its number of flows. (Without sfq, the container
sending 100 streams achieves much higher bandwidth than the one sending 10.)
-------------- simultaneous 2 unlimited (100 conns vs 10 conns) -------------
job "big-unlimited-client" created
job "small-unlimited-client" created
-------------- unlimited server <-- unlimited client (100 conns) -------------
[SUM] 0.00-50.01 sec 24.9 GBytes 4.22 Gbits/sec 16874 sender
[SUM] 0.00-50.01 sec 24.8 GBytes 4.21 Gbits/sec receiver
-------------- unlimited server <-- unlimited client (10 conns) -------------
[SUM] 0.00-50.00 sec 24.4 GBytes 4.19 Gbits/sec 13802 sender
[SUM] 0.00-50.00 sec 24.4 GBytes 4.19 Gbits/sec receiver
However, once ipip is enabled, sfq no longer works:
-------------- simultaneous 2 unlimited (100 conns vs 10 conns) -------------
job "big-unlimited-client" created
job "small-unlimited-client" created
-------------- unlimited server <-- unlimited client (100 conns) -------------
[SUM] 0.00-50.00 sec 27.2 GBytes 4.67 Gbits/sec 391278 sender
[SUM] 0.00-50.00 sec 27.1 GBytes 4.65 Gbits/sec receiver
-------------- unlimited server <-- unlimited client (10 conns) -------------
[SUM] 0.00-50.00 sec 6.85 GBytes 1.18 Gbits/sec 64153 sender
[SUM] 0.00-50.00 sec 6.82 GBytes 1.17 Gbits/sec receiver
The reason is that with the ipip tunnel, the src/dst IP addresses in the
outer header are the same for all flows: they are the src/dst IPs of the
host NICs rather than the veth IPs of the individual containers/VMs, and
the outer header of an ipip packet carries no port numbers. I verified
this by capturing the traffic on the NIC and analyzing it with Wireshark.
The real src/dst IPs of the containers/VMs are only visible on the tunnel
device (e.g. tunl0).
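For anyone reproducing this, a quick way to see the collapsed outer header
without Wireshark, assuming tcpdump is available (IP protocol 4 is IPIP):

tcpdump -ni $NIC -c 20 'ip proto 4'   # outer header: host IPs only, no ports
tcpdump -ni tunl0 -c 20 tcp           # inner header: per-container IPs and ports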
In theory, this could be fixed by setting up the tc classes and sfq on
tunl0 instead of on the host NIC. I tried that, but unfortunately it did
not work either. fq fails for the same reason, since both sfq and fq use
the same flow classifier (src/dst IPs and ports). So I wonder whether Cake
works with an ipip tunnel or not.
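For concreteness, what I would hope to be able to run is roughly the line
below (a sketch only: bandwidth and flows are documented Cake parameters,
but whether Cake's flow dissector looks inside the ipip encapsulation is
exactly my question):

# sketch: replace the htb+sfq hierarchy with a single Cake instance on the host NIC
tc qdisc replace dev $NIC root cake bandwidth ${NIC_RATE}mbit flows

If per-container rather than per-flow fairness is wanted, the dual-srchost
and triple-isolate modes look relevant, assuming they can see the inner
addresses at all.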
I would appreciate any help you can provide based on your expertise. Thanks.
Regards,
Cong