[Cake] flow isolation with ipip
Cong Xu
davidxu06 at gmail.com
Thu Aug 17 15:27:02 EDT 2017
Hi Pete,
Thanks for your reply; it was helpful. I have fixed my problem with IPIP. I
checked the sfq code in the kernel, and it can recognize IPIP encapsulation.
The issue I ran into was caused by something else, not IPIP.
Regards,
Cong
2017-08-17 12:38 GMT-05:00 Pete Heist <peteheist at gmail.com>:
> I don’t know if this helps, but I think this should work. :) I used IPIP
> tunnels (with and without FOU encapsulation) over a point-to-point WiFi
> bridge as a way of testing Cake over WiFi without traffic being prioritized
> by the Linux WiFi stack or WMM, for example. The WiFi stack “sees” the
> outer IPIP packet, and treats it with whatever diffserv marking is on the
> outer packet, rather than what’s on the inner packet that’s encapsulated. I
> applied Cake to the tunnel device, which seemed to see the packets before
> encapsulation, and it worked well. I think it should also work for flow
> isolation.
>
> I can go through my setup scripts and get more specific if need be, to
> make sure I’m not leading anyone astray. I think the important part is that
> Cake be applied to the tunnel device and not just a regular device that’s
> carrying IPIP traffic...
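>
> For illustration, a minimal sketch of that kind of setup (the device names,
> addresses and bandwidth figure are placeholders, not taken from my actual
> scripts):
>
>     # create an IPIP tunnel between two hosts (example addresses)
>     ip tunnel add tun0 mode ipip local 192.0.2.1 remote 192.0.2.2
>     ip addr add 10.0.0.1/30 dev tun0
>     ip link set tun0 up
>
>     # attach Cake to the tunnel device, where packets are not yet encapsulated
>     tc qdisc add dev tun0 root cake bandwidth 100mbit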
>
> > Message: 1
> > Date: Thu, 17 Aug 2017 02:55:17 +0300
> > From: Jonathan Morton <chromatix99 at gmail.com>
> > To: Cong Xu <davidxu06 at gmail.com>
> > Cc: Cake List <cake at lists.bufferbloat.net>
> > Subject: Re: [Cake] flow isolation with ipip
> > Message-ID:
> > <CAJq5cE0qSNrbzUufzaup3sZyeKaN=R=JAfqREojbyK6pFAyzDw at mail.gmail.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Cake makes use of Linux' "packet dissecting" infrastructure. If the latter
> > knows about the tunnelling protocol, Cake should naturally see the IP and
> > port numbers of the inner payload rather than the outer tunnel.
> >
> > I don't know, however, precisely what tunnels are supported. At minimum,
> > don't ever expect encrypted tunnels to behave this way!
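> >
> > One rough way to check a given setup (the device name and rate here are
> > just examples) is to attach Cake, push traffic through the tunnel, and
> > watch the flow counters in its qdisc statistics:
> >
> >     tc qdisc replace dev eth0 root cake bandwidth 1gbit
> >     # if the stats only ever show a single bulk flow, the dissector is
> >     # probably seeing just the outer tunnel header
> >     tc -s qdisc show dev eth0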
> >
> > - Jonathan Morton
> >
> > On 18 Jun 2017 21:13, "Cong Xu" <davidxu06 at gmail.com> wrote:
> >
> >> Hi,
> >>
> >> I wonder if cake's flow isolation works with the ipip tunnel? I hope to
> >> guarantee a fair share of the network among containers/VMs on the same
> >> host. Thus, I used sfq/fq attached to tc classes created in advance to
> >> provide both shaping and scheduling. The scripts roughly look like this
> >> (assume 2 containers hosting iperf clients run on the same host; one
> >> container sends 100 parallel streams via -P 100 to an iperf server
> >> running on another host, and the other sends 10 parallel streams with
> >> -P 10):
> >>
> >> tc qdisc add dev $NIC root handle 1: htb default 2
> >> tc class add dev $NIC parent 1: classid 1:1 htb rate ${NIC_RATE}mbit burst 1m cburst 1m
> >> tc class add dev $NIC parent 1:1 classid 1:2 htb rate ${RATE1}mbit ceil ${NIC_RATE}mbit burst 1m cburst 1m
> >> tc class add dev $NIC parent 1:1 classid 1:3 htb rate ${RATE2}mbit ceil ${NIC_RATE}mbit burst 1m cburst 1m
> >> tc qdisc add dev $NIC parent 1:2 handle 2 sfq perturb 10
> >> tc qdisc add dev $NIC parent 1:3 handle 3 sfq perturb 10
> >> tc filter add ...
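> >>
> >> (The filter rules are elided above; purely as an illustration, a
> >> hypothetical u32 rule steering one container's traffic into class 1:2
> >> could look like the following, with the address being a placeholder:)
> >>
> >> tc filter add dev $NIC parent 1: protocol ip prio 1 u32 \
> >>     match ip src 172.17.0.2/32 flowid 1:2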
> >>
> >> It works well: each container running iperf gets almost the same
> >> bandwidth regardless of the number of flows. (Without sfq, the container
> >> sending 100 streams achieves much higher bandwidth than the one sending
> >> 10 streams.)
> >>
> >> -------------- simultaneous 2 unlimited (100 conns vs 10 conns) -------------
> >> job "big-unlimited-client" created
> >> job "small-unlimited-client" created
> >> -------------- unlimited server <-- unlimited client (100 conns) -------------
> >> [SUM]  0.00-50.01 sec  24.9 GBytes  4.22 Gbits/sec  16874  sender
> >> [SUM]  0.00-50.01 sec  24.8 GBytes  4.21 Gbits/sec         receiver
> >>
> >> -------------- unlimited server <-- unlimited client (10 conns) -------------
> >> [SUM]  0.00-50.00 sec  24.4 GBytes  4.19 Gbits/sec  13802  sender
> >> [SUM]  0.00-50.00 sec  24.4 GBytes  4.19 Gbits/sec         receiver
> >>
> >> However, when ipip is enabled, sfq does not work anymore.
> >>
> >> -------------- simultaneous 2 unlimited (100 conns vs 10 conns) -------------
> >> job "big-unlimited-client" created
> >> job "small-unlimited-client" created
> >> -------------- unlimited server <-- unlimited client (100 conns) -------------
> >> [SUM]  0.00-50.00 sec  27.2 GBytes  4.67 Gbits/sec  391278  sender
> >> [SUM]  0.00-50.00 sec  27.1 GBytes  4.65 Gbits/sec          receiver
> >>
> >> -------------- unlimited server <-- unlimited client (10 conns) -------------
> >> [SUM]  0.00-50.00 sec  6.85 GBytes  1.18 Gbits/sec  64153   sender
> >> [SUM]  0.00-50.00 sec  6.82 GBytes  1.17 Gbits/sec          receiver
> >>
> >> The reason is that with the ipip tunnel the src/dst IP addresses are the
> >> same for all flows: they are the src/dst IPs of the host NICs instead of
> >> the veth IP of each container/VM, and there are no port numbers in the
> >> outer header of an ipip packet. I verified this by capturing the traffic
> >> on the NIC and analyzing it with wireshark. The real src/dst IPs of the
> >> containers/VMs are visible on the tunnel device (e.g. tunl0).
> >> Theoretically, this issue could be solved by setting up the tc classes
> >> and sfq on tunl0 instead of the host NIC. I tried that; unfortunately,
> >> it did not work either. fq does not work for the same reason, because
> >> both sfq and fq use the same flow classifier (src/dst IPs and ports).
> >> So, I just wonder whether cake works with an ipip tunnel or not.
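> >>
> >> If cake does see through the ipip header, then something like this on
> >> the host NIC (just a sketch; dual-srchost is my guess at the right
> >> isolation mode for per-container fairness) might replace the whole
> >> htb/sfq tree:
> >>
> >> tc qdisc add dev $NIC root cake bandwidth ${NIC_RATE}mbit dual-srchost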
> >>
> >> I would appreciate any help you can provide based on your expertise.
> >> Thanks.
> >>
> >> Regards,
> >> Cong
> >>
> >> _______________________________________________
> >> Cake mailing list
> >> Cake at lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/cake
>
>