<p dir="ltr">Cake makes use of Linux's "packet dissecting" infrastructure. If the latter knows about the tunnelling protocol in use, Cake should naturally see the IP addresses and port numbers of the inner payload rather than those of the outer tunnel.</p>
<p dir="ltr">I don't know, however, precisely which tunnels are supported. At minimum, don't ever expect encrypted tunnels to behave this way!</p>
<p dir="ltr"> - Jonathan Morton<br>
</p>
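For reference, a minimal sketch of swapping Cake in for the sfq leaves in the htb setup quoted below (this is an illustration, not a tested recipe; it assumes the sch_cake module is available and that $NIC and the 1:2/1:3 class handles match the quoted script):

```shell
# Replace the per-class sfq leaves with cake.
# "flows" selects pure per-flow isolation, closest to sfq's behaviour;
# "triple-isolate" (cake's default) additionally balances between hosts.
tc qdisc replace dev $NIC parent 1:2 handle 2: cake flows
tc qdisc replace dev $NIC parent 1:3 handle 3: cake flows
```

Whether the inner flows are actually distinguished then depends on the kernel's flow dissector recognising the tunnel encapsulation, as noted above.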
<div class="gmail_extra"><br><div class="gmail_quote">On 18 Jun 2017 21:13, "Cong Xu" <<a href="mailto:davidxu06@gmail.com">davidxu06@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Hi,<br><br></div>I wonder if cake's flow isolation works with the ipip tunnel? I hope to guarantee fair sharing of network bandwidth among containers/VMs on the same host. To that end, I attached sfq/fq to tc classes created in advance, to provide both shaping and scheduling. The script looks roughly like this (assume two containers hosting iperf clients run on the same host: one sends 100 parallel streams via -P 100 to an iperf server running on another host, the other sends 10 parallel streams with -P 10):<br><br>tc qdisc add dev $NIC root handle 1: htb default 2<br>tc class add dev $NIC parent 1: classid 1:1 htb rate ${NIC_RATE}mbit burst 1m cburst 1m<br>tc class add dev $NIC parent 1:1 classid 1:2 htb rate ${RATE1}mbit ceil ${NIC_RATE}mbit burst 1m cburst 1m<br>tc class add dev $NIC parent 1:1 classid 1:3 htb rate ${RATE2}mbit ceil ${NIC_RATE}mbit burst 1m cburst 1m<br>tc qdisc add dev $NIC parent 1:2 handle 2 sfq perturb 10<br>tc qdisc add dev $NIC parent 1:3 handle 3 sfq perturb 10<br>tc filter add ...<br><br>It works well: each container running iperf gets almost the same bandwidth regardless of the number of flows. (Without sfq, the container sending 100 streams achieves much higher bandwidth than the one sending 10.)<br><br>-------------- simultaneous 2 unlimited (100 conns vs 10 conns) -------------<br>job "big-unlimited-client" created<br>job "small-unlimited-client" created<br>-------------- unlimited server <-- unlimited client (100 conns) -------------<br>[SUM] 0.00-50.01 sec 24.9 GBytes 4.22 Gbits/sec 16874 sender<br>[SUM] 0.00-50.01 sec 24.8 GBytes 4.21 Gbits/sec receiver<br><br>-------------- unlimited server <-- unlimited client (10 conns) -------------<br>[SUM] 0.00-50.00 sec 24.4 GBytes 4.19 Gbits/sec 13802 sender<br>[SUM] 0.00-50.00 sec 24.4 GBytes 4.19 Gbits/sec receiver<br><br>However, if ipip is enabled, sfq does not work anymore.<br><br>-------------- simultaneous 2 unlimited (100 conns vs 10 conns) -------------<br>job "big-unlimited-client" created<br>job "small-unlimited-client" created<br>-------------- unlimited server <-- unlimited client (100 conns) -------------<br>[SUM] 0.00-50.00 sec 27.2 GBytes 4.67 Gbits/sec 391278 sender<br>[SUM] 0.00-50.00 sec 27.1 GBytes 4.65 Gbits/sec receiver<br><br>-------------- unlimited server <-- unlimited client (10 conns) -------------<br>[SUM] 0.00-50.00 sec 6.85 GBytes 1.18 Gbits/sec 64153 sender<br>[SUM] 0.00-50.00 sec 6.82 GBytes 1.17 Gbits/sec receiver<br><br>The reason is that with the ipip tunnel the src/dst IP addresses are the same for all flows (they are the src/dst IPs of the host NICs instead of the veth IPs of each container/VM), and the outer header of an ipip packet has no port numbers. 
I verified this by capturing the traffic on the NIC and analyzing it with Wireshark. The real src/dst IPs of the containers/VMs are visible on the tunnel device (e.g. tunl0). In theory, this issue could be solved by setting up the tc classes and sfq on tunl0 instead of the host NIC. I tried that; unfortunately, it did not work either. fq fails for the same reason, because both sfq and fq use the same flow classifier (src/dst IPs and ports). So, I just wonder whether cake works with an ipip tunnel or not. <br><br>I would appreciate any help you can provide based on your expertise. Thanks.<br><br>Regards,<br>Cong<br></div>
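The collision effect described above can be illustrated with a toy model of a 5-tuple flow hash. This is not sfq's actual hash function, and all addresses, ports, and bucket counts below are made up for illustration; the point is only that an ipip outer header presents one constant tuple for every inner flow:

```python
# Toy model of per-flow hashing on the OUTER header, as sfq/fq do.
# All addresses and ports are hypothetical.

BUCKETS = 1024

def flow_hash(src_ip, dst_ip, proto, src_port=0, dst_port=0):
    """Map an (outer) 5-tuple to one of BUCKETS queues."""
    return hash((src_ip, dst_ip, proto, src_port, dst_port)) % BUCKETS

# Plain TCP: each connection has a distinct source port, so the
# 110 flows spread across many buckets.
plain = {flow_hash("10.0.0.1", "10.0.1.1", "tcp", p, 5201)
         for p in range(40000, 40110)}

# The same flows wrapped in ipip: the outer header is always
# (host_src, host_dst, proto=ipip) with no ports at all, so every
# flow hashes into the same single bucket.
ipip = {flow_hash("192.168.1.10", "192.168.1.20", "ipip")
        for _ in range(110)}

print(len(plain))  # many distinct buckets
print(len(ipip))   # 1 -- all tunnelled flows share one queue
```

With every tunnelled packet landing in one queue, sfq's fairness degenerates to a single FIFO shared by both containers, which matches the skewed iperf results above.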
<br>_______________________________________________<br>
Cake mailing list<br>
<a href="mailto:Cake@lists.bufferbloat.net">Cake@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/cake" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/cake</a><br>
<br></blockquote></div></div>