Cake - FQ_codel the next generation
* Re: [Cake] flow isolation with ipip
       [not found] <mailman.1.1502985601.3893.cake@lists.bufferbloat.net>
@ 2017-08-17 17:38 ` Pete Heist
  2017-08-17 19:27   ` Cong Xu
  0 siblings, 1 reply; 4+ messages in thread
From: Pete Heist @ 2017-08-17 17:38 UTC (permalink / raw)
  To: davidxu06; +Cc: cake

I don’t know if this helps, but I think this should work. :) I used IPIP tunnels (with and without FOU encapsulation) over a point-to-point WiFi bridge as a way of testing Cake over WiFi without traffic being prioritized by the Linux WiFi stack or WMM, for example. The WiFi stack “sees” the outer IPIP packet, and treats it with whatever diffserv marking is on the outer packet, rather than what’s on the inner packet that’s encapsulated. I applied Cake to the tunnel device, which seemed to see the packets before encapsulation, and it worked well. I think it should also work for flow isolation.

I can go through my setup scripts and get more specific if need be, to make sure I’m not leading anyone astray. I think the important part is that Cake be applied to the tunnel device and not just a regular device that’s carrying IPIP traffic...
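
A minimal sketch of the kind of setup I mean, with an illustrative tunnel
name, addresses and rate rather than my exact scripts (the cake qdisc needs
to be available, of course):

# an IPIP tunnel to the far end (example addresses)
ip tunnel add ipip0 mode ipip local 192.0.2.1 remote 192.0.2.2
ip link set ipip0 up

# attach Cake to the tunnel device, where packets are not yet encapsulated,
# so it can hash on the inner IPs/ports for flow isolation
tc qdisc replace dev ipip0 root cake bandwidth 100Mbit

# attaching Cake only to the underlying NIC would instead see the outer
# IPIP header: one host pair, no ports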

> Message: 1
> Date: Thu, 17 Aug 2017 02:55:17 +0300
> From: Jonathan Morton <chromatix99@gmail.com>
> To: Cong Xu <davidxu06@gmail.com>
> Cc: Cake List <cake@lists.bufferbloat.net>
> Subject: Re: [Cake] flow isolation with ipip
> Message-ID:
> 	<CAJq5cE0qSNrbzUufzaup3sZyeKaN=R=JAfqREojbyK6pFAyzDw@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Cake makes use of Linux' "packet dissecting" infrastructure.  If the latter
> knows about the tunnelling protocol, Cake should naturally see the IP and
> port numbers of the inner payload rather than the outer tunnel.
> 
> I don't know, however, precisely what tunnels are supported. At minimum,
> don't ever expect encrypted tunnels to behave this way!
> 
> - Jonathan Morton
> 
> On 18 Jun 2017 21:13, "Cong Xu" <davidxu06@gmail.com> wrote:
> 
>> Hi,
>> 
>> I wonder whether cake's flow isolation works with an ipip tunnel. I want
>> to guarantee a fair share of the network among containers/VMs on the same
>> host. To do that, I attached sfq/fq to tc classes created in advance, so
>> as to get both shaping and scheduling. The script roughly looks like this
>> (assume two containers hosting iperf clients run on the same host; one
>> container sends 100 parallel streams via -P 100 to an iperf server running
>> on another host, the other sends 10 parallel streams with -P 10):
>> 
>> tc qdisc add dev $NIC root handle 1: htb default 2
>> tc class add dev $NIC parent 1: classid 1:1 htb rate ${NIC_RATE}mbit
>> burst 1m cburst 1m
>> tc class add dev $NIC parent 1:1 classid 1:2 htb rate ${RATE1}mbit ceil
>> ${NIC_RATE}mbit burst 1m cburst 1m
>> tc class add dev $NIC parent 1:1 classid 1:3 htb rate ${RATE2}mbit ceil
>> ${NIC_RATE}mbit burst 1m cburst 1m
>> tc qdisc add dev $NIC parent 1:2 handle 2 sfq perturb 10
>> tc qdisc add dev $NIC parent 1:3 handle 3 sfq perturb 10
>> tc filter add ...
>> 
>> It works well: each container running iperf gets almost the same bandwidth
>> regardless of the number of flows. (Without sfq, the container sending 100
>> streams achieves much higher bandwidth than the one sending 10.)
>> 
>> -------------- simultaneous 2 unlimited (100 conns vs 10 conns)
>> -------------
>> job "big-unlimited-client" created
>> job "small-unlimited-client" created
>> -------------- unlimited server <-- unlimited client (100 conns)
>> -------------
>> [SUM]   0.00-50.01  sec  24.9 GBytes  4.22 Gbits/sec  16874
>> sender
>> [SUM]   0.00-50.01  sec  24.8 GBytes  4.21 Gbits/sec
>> receiver
>> 
>> -------------- unlimited server <-- unlimited client (10 conns)
>> -------------
>> [SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec  13802
>> sender
>> [SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec
>> receiver
>> 
>> However, if the ipip tunnel is enabled, sfq does not work anymore.
>> 
>> -------------- simultaneous 2 unlimited (100 conns vs 10 conns)
>> -------------
>> job "big-unlimited-client" created
>> job "small-unlimited-client" created
>> -------------- unlimited server <-- unlimited client (100 conns)
>> -------------
>> [SUM]   0.00-50.00  sec  27.2 GBytes  4.67 Gbits/sec  391278
>> sender
>> [SUM]   0.00-50.00  sec  27.1 GBytes  4.65 Gbits/sec
>> receiver
>> 
>> -------------- unlimited server <-- unlimited client (10 conns)
>> -------------
>> [SUM]   0.00-50.00  sec  6.85 GBytes  1.18 Gbits/sec  64153
>> sender
>> [SUM]   0.00-50.00  sec  6.82 GBytes  1.17 Gbits/sec
>> receiver
>> 
>> The reason is that with the ipip tunnel the src/dst IP addresses are the
>> same for all flows (they are the src/dst IPs of the host NICs rather than
>> the veth IPs of the individual containers/VMs), and the outer header of an
>> ipip packet carries no port numbers. I verified this by capturing traffic
>> on the NIC and analyzing it with Wireshark. The real src/dst IPs of the
>> containers/VMs are visible on the tunnel device (e.g. tunl0). In theory
>> this could be solved by setting up the tc classes and sfq on tunl0 instead
>> of the host NIC, but I tried that and unfortunately it did not work either.
>> fq fails for the same reason, because both sfq and fq use the same flow
>> classifier (src/dst IPs and ports). So I just wonder whether cake works
>> with an ipip tunnel or not.
>> 
>> I would appreciate any help you can provide based on your expertise. Thanks.
>> 
>> Regards,
>> Cong
>> 
>> _______________________________________________
>> Cake mailing list
>> Cake@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cake



* Re: [Cake] flow isolation with ipip
  2017-08-17 17:38 ` [Cake] flow isolation with ipip Pete Heist
@ 2017-08-17 19:27   ` Cong Xu
  0 siblings, 0 replies; 4+ messages in thread
From: Cong Xu @ 2017-08-17 19:27 UTC (permalink / raw)
  To: Pete Heist; +Cc: cake


Hi Pete,

Thanks for your reply, it is helpful. I have fixed my problem with IPIP. I
checked the code path sfq uses in the kernel; it can recognize IPIP
encapsulation. The issue I met was caused by something else, not IPIP.
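
(For reference, the relevant handling is in the kernel's flow dissector,
which sfq's hashing relies on; on a kernel source tree it can be located
with, for example:

grep -n IPPROTO_IPIP net/core/flow_dissector.c

The dissector restarts at the inner IPv4 header when it sees IPIP, so the
inner addresses and ports end up in the flow hash.)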

Regards,
Cong

2017-08-17 12:38 GMT-05:00 Pete Heist <peteheist@gmail.com>:

> I don’t know if this helps, but I think this should work. :) I used IPIP
> tunnels (with and without FOU encapsulation) over a point-to-point WiFi
> bridge as a way of testing Cake over WiFi without traffic being prioritized
> by the Linux WiFi stack or WMM, for example. The WiFi stack “sees” the
> outer IPIP packet, and treats it with whatever diffserv marking is on the
> outer packet, rather than what’s on the inner packet that’s encapsulated. I
> applied Cake to the tunnel device, which seemed to see the packets before
> encapsulation, and it worked well. I think it should also work for flow
> isolation.
>
> I can go through my setup scripts and get more specific if need be, to
> make sure I’m not leading anyone astray. I think the important part is that
> Cake be applied to the tunnel device and not just a regular device that’s
> carrying IPIP traffic...
>
> > Message: 1
> > Date: Thu, 17 Aug 2017 02:55:17 +0300
> > From: Jonathan Morton <chromatix99@gmail.com>
> > To: Cong Xu <davidxu06@gmail.com>
> > Cc: Cake List <cake@lists.bufferbloat.net>
> > Subject: Re: [Cake] flow isolation with ipip
> > Message-ID:
> >       <CAJq5cE0qSNrbzUufzaup3sZyeKaN=R=JAfqREojbyK6pFAyzDw@mail.gmail.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Cake makes use of Linux' "packet dissecting" infrastructure.  If the latter
> > knows about the tunnelling protocol, Cake should naturally see the IP and
> > port numbers of the inner payload rather than the outer tunnel.
> >
> > I don't know, however, precisely what tunnels are supported. At minimum,
> > don't ever expect encrypted tunnels to behave this way!
> >
> > - Jonathan Morton
> >
> > On 18 Jun 2017 21:13, "Cong Xu" <davidxu06@gmail.com> wrote:
> >
> >> Hi,
> >>
> >> I wonder whether cake's flow isolation works with an ipip tunnel. I want
> >> to guarantee a fair share of the network among containers/VMs on the same
> >> host. To do that, I attached sfq/fq to tc classes created in advance, so
> >> as to get both shaping and scheduling. The script roughly looks like this
> >> (assume two containers hosting iperf clients run on the same host; one
> >> container sends 100 parallel streams via -P 100 to an iperf server running
> >> on another host, the other sends 10 parallel streams with -P 10):
> >>
> >> tc qdisc add dev $NIC root handle 1: htb default 2
> >> tc class add dev $NIC parent 1: classid 1:1 htb rate ${NIC_RATE}mbit
> >> burst 1m cburst 1m
> >> tc class add dev $NIC parent 1:1 classid 1:2 htb rate ${RATE1}mbit ceil
> >> ${NIC_RATE}mbit burst 1m cburst 1m
> >> tc class add dev $NIC parent 1:1 classid 1:3 htb rate ${RATE2}mbit ceil
> >> ${NIC_RATE}mbit burst 1m cburst 1m
> >> tc qdisc add dev $NIC parent 1:2 handle 2 sfq perturb 10
> >> tc qdisc add dev $NIC parent 1:3 handle 3 sfq perturb 10
> >> tc filter add ...
> >>
> >> It works well: each container running iperf gets almost the same
> >> bandwidth regardless of the number of flows. (Without sfq, the container
> >> sending 100 streams achieves much higher bandwidth than the one sending 10.)
> >>
> >> -------------- simultaneous 2 unlimited (100 conns vs 10 conns)
> >> -------------
> >> job "big-unlimited-client" created
> >> job "small-unlimited-client" created
> >> -------------- unlimited server <-- unlimited client (100 conns)
> >> -------------
> >> [SUM]   0.00-50.01  sec  24.9 GBytes  4.22 Gbits/sec  16874
> >> sender
> >> [SUM]   0.00-50.01  sec  24.8 GBytes  4.21 Gbits/sec
> >> receiver
> >>
> >> -------------- unlimited server <-- unlimited client (10 conns)
> >> -------------
> >> [SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec  13802
> >> sender
> >> [SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec
> >> receiver
> >>
> >> However, if the ipip tunnel is enabled, sfq does not work anymore.
> >>
> >> -------------- simultaneous 2 unlimited (100 conns vs 10 conns)
> >> -------------
> >> job "big-unlimited-client" created
> >> job "small-unlimited-client" created
> >> -------------- unlimited server <-- unlimited client (100 conns)
> >> -------------
> >> [SUM]   0.00-50.00  sec  27.2 GBytes  4.67 Gbits/sec  391278
> >> sender
> >> [SUM]   0.00-50.00  sec  27.1 GBytes  4.65 Gbits/sec
> >> receiver
> >>
> >> -------------- unlimited server <-- unlimited client (10 conns)
> >> -------------
> >> [SUM]   0.00-50.00  sec  6.85 GBytes  1.18 Gbits/sec  64153
> >> sender
> >> [SUM]   0.00-50.00  sec  6.82 GBytes  1.17 Gbits/sec
> >> receiver
> >>
> >> The reason is that with the ipip tunnel the src/dst IP addresses are the
> >> same for all flows (they are the src/dst IPs of the host NICs rather than
> >> the veth IPs of the individual containers/VMs), and the outer header of an
> >> ipip packet carries no port numbers. I verified this by capturing traffic
> >> on the NIC and analyzing it with Wireshark. The real src/dst IPs of the
> >> containers/VMs are visible on the tunnel device (e.g. tunl0). In theory
> >> this could be solved by setting up the tc classes and sfq on tunl0 instead
> >> of the host NIC, but I tried that and unfortunately it did not work either.
> >> fq fails for the same reason, because both sfq and fq use the same flow
> >> classifier (src/dst IPs and ports). So I just wonder whether cake works
> >> with an ipip tunnel or not.
> >>
> >> I would appreciate any help you can provide based on your expertise. Thanks.
> >>
> >> Regards,
> >> Cong
> >>
> >> _______________________________________________
> >> Cake mailing list
> >> Cake@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/cake
>
>



* Re: [Cake] flow isolation with ipip
  2017-06-12 19:13 Cong Xu
@ 2017-08-16 23:55 ` Jonathan Morton
  0 siblings, 0 replies; 4+ messages in thread
From: Jonathan Morton @ 2017-08-16 23:55 UTC (permalink / raw)
  To: Cong Xu; +Cc: Cake List


Cake makes use of Linux' "packet dissecting" infrastructure.  If the latter
knows about the tunnelling protocol, Cake should naturally see the IP and
port numbers of the inner payload rather than the outer tunnel.

I don't know, however, precisely what tunnels are supported. At minimum,
don't ever expect encrypted tunnels to behave this way!
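
A quick way to check on a given kernel (device name and rate here are
illustrative, and the cake qdisc needs to be available): attach cake to the
interface carrying the tunnelled traffic and see whether competing tunnelled
flows are isolated from each other.

tc qdisc replace dev eth0 root cake bandwidth 1Gbit flows
# run two competing transfers through the tunnel, then:
tc -s qdisc show dev eth0
# if the dissector understands the encapsulation, cake's statistics show
# multiple flows and the transfers share the link fairly; if not, all the
# tunnelled traffic hashes into a single flow.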

- Jonathan Morton

On 18 Jun 2017 21:13, "Cong Xu" <davidxu06@gmail.com> wrote:

> Hi,
>
> I wonder whether cake's flow isolation works with an ipip tunnel. I want to
> guarantee a fair share of the network among containers/VMs on the same host.
> To do that, I attached sfq/fq to tc classes created in advance, so as to get
> both shaping and scheduling. The script roughly looks like this (assume two
> containers hosting iperf clients run on the same host; one container sends
> 100 parallel streams via -P 100 to an iperf server running on another host,
> the other sends 10 parallel streams with -P 10):
>
> tc qdisc add dev $NIC root handle 1: htb default 2
> tc class add dev $NIC parent 1: classid 1:1 htb rate ${NIC_RATE}mbit
> burst 1m cburst 1m
> tc class add dev $NIC parent 1:1 classid 1:2 htb rate ${RATE1}mbit ceil
> ${NIC_RATE}mbit burst 1m cburst 1m
> tc class add dev $NIC parent 1:1 classid 1:3 htb rate ${RATE2}mbit ceil
> ${NIC_RATE}mbit burst 1m cburst 1m
> tc qdisc add dev $NIC parent 1:2 handle 2 sfq perturb 10
> tc qdisc add dev $NIC parent 1:3 handle 3 sfq perturb 10
> tc filter add ...
>
> It works well: each container running iperf gets almost the same bandwidth
> regardless of the number of flows. (Without sfq, the container sending 100
> streams achieves much higher bandwidth than the one sending 10.)
>
> -------------- simultaneous 2 unlimited (100 conns vs 10 conns)
> -------------
> job "big-unlimited-client" created
> job "small-unlimited-client" created
> -------------- unlimited server <-- unlimited client (100 conns)
> -------------
> [SUM]   0.00-50.01  sec  24.9 GBytes  4.22 Gbits/sec  16874
> sender
> [SUM]   0.00-50.01  sec  24.8 GBytes  4.21 Gbits/sec
> receiver
>
> -------------- unlimited server <-- unlimited client (10 conns)
> -------------
> [SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec  13802
> sender
> [SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec
> receiver
>
> However, if the ipip tunnel is enabled, sfq does not work anymore.
>
> -------------- simultaneous 2 unlimited (100 conns vs 10 conns)
> -------------
> job "big-unlimited-client" created
> job "small-unlimited-client" created
> -------------- unlimited server <-- unlimited client (100 conns)
> -------------
> [SUM]   0.00-50.00  sec  27.2 GBytes  4.67 Gbits/sec  391278
> sender
> [SUM]   0.00-50.00  sec  27.1 GBytes  4.65 Gbits/sec
> receiver
>
> -------------- unlimited server <-- unlimited client (10 conns)
> -------------
> [SUM]   0.00-50.00  sec  6.85 GBytes  1.18 Gbits/sec  64153
> sender
> [SUM]   0.00-50.00  sec  6.82 GBytes  1.17 Gbits/sec
> receiver
>
> The reason is that with the ipip tunnel the src/dst IP addresses are the
> same for all flows (they are the src/dst IPs of the host NICs rather than
> the veth IPs of the individual containers/VMs), and the outer header of an
> ipip packet carries no port numbers. I verified this by capturing traffic
> on the NIC and analyzing it with Wireshark. The real src/dst IPs of the
> containers/VMs are visible on the tunnel device (e.g. tunl0). In theory
> this could be solved by setting up the tc classes and sfq on tunl0 instead
> of the host NIC, but I tried that and unfortunately it did not work either.
> fq fails for the same reason, because both sfq and fq use the same flow
> classifier (src/dst IPs and ports). So I just wonder whether cake works
> with an ipip tunnel or not.
>
> I would appreciate any help you can provide based on your expertise. Thanks.
>
> Regards,
> Cong
>
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
>
>



* [Cake] flow isolation with ipip
@ 2017-06-12 19:13 Cong Xu
  2017-08-16 23:55 ` Jonathan Morton
  0 siblings, 1 reply; 4+ messages in thread
From: Cong Xu @ 2017-06-12 19:13 UTC (permalink / raw)
  To: cake


Hi,

I wonder whether cake's flow isolation works with an ipip tunnel. I want to
guarantee a fair share of the network among containers/VMs on the same host.
To do that, I attached sfq/fq to tc classes created in advance, so as to get
both shaping and scheduling. The script roughly looks like this (assume two
containers hosting iperf clients run on the same host; one container sends
100 parallel streams via -P 100 to an iperf server running on another host,
the other sends 10 parallel streams with -P 10):

tc qdisc add dev $NIC root handle 1: htb default 2
tc class add dev $NIC parent 1: classid 1:1 htb rate ${NIC_RATE}mbit burst
1m cburst 1m
tc class add dev $NIC parent 1:1 classid 1:2 htb rate ${RATE1}mbit ceil
${NIC_RATE}mbit burst 1m cburst 1m
tc class add dev $NIC parent 1:1 classid 1:3 htb rate ${RATE2}mbit ceil
${NIC_RATE}mbit burst 1m cburst 1m
tc qdisc add dev $NIC parent 1:2 handle 2 sfq perturb 10
tc qdisc add dev $NIC parent 1:3 handle 3 sfq perturb 10
tc filter add ...

It works well: each container running iperf gets almost the same bandwidth
regardless of the number of flows. (Without sfq, the container sending 100
streams achieves much higher bandwidth than the one sending 10.)

-------------- simultaneous 2 unlimited (100 conns vs 10 conns)
-------------
job "big-unlimited-client" created
job "small-unlimited-client" created
-------------- unlimited server <-- unlimited client (100 conns)
-------------
[SUM]   0.00-50.01  sec  24.9 GBytes  4.22 Gbits/sec  16874
sender
[SUM]   0.00-50.01  sec  24.8 GBytes  4.21 Gbits/sec
receiver

-------------- unlimited server <-- unlimited client (10 conns)
-------------
[SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec  13802
sender
[SUM]   0.00-50.00  sec  24.4 GBytes  4.19 Gbits/sec
receiver

However, if the ipip tunnel is enabled, sfq does not work anymore.

-------------- simultaneous 2 unlimited (100 conns vs 10 conns)
-------------
job "big-unlimited-client" created
job "small-unlimited-client" created
-------------- unlimited server <-- unlimited client (100 conns)
-------------
[SUM]   0.00-50.00  sec  27.2 GBytes  4.67 Gbits/sec  391278
sender
[SUM]   0.00-50.00  sec  27.1 GBytes  4.65 Gbits/sec
receiver

-------------- unlimited server <-- unlimited client (10 conns)
-------------
[SUM]   0.00-50.00  sec  6.85 GBytes  1.18 Gbits/sec  64153
sender
[SUM]   0.00-50.00  sec  6.82 GBytes  1.17 Gbits/sec
receiver

The reason is that with the ipip tunnel the src/dst IP addresses are the
same for all flows (they are the src/dst IPs of the host NICs rather than
the veth IPs of the individual containers/VMs), and the outer header of an
ipip packet carries no port numbers. I verified this by capturing traffic
on the NIC and analyzing it with Wireshark. The real src/dst IPs of the
containers/VMs are visible on the tunnel device (e.g. tunl0). In theory
this could be solved by setting up the tc classes and sfq on tunl0 instead
of the host NIC, but I tried that and unfortunately it did not work either.
fq fails for the same reason, because both sfq and fq use the same flow
classifier (src/dst IPs and ports). So I just wonder whether cake works
with an ipip tunnel or not.
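
For concreteness, the cake setup I have in mind would simply replace the sfq
leaves under the same HTB classes (untested on my side, hence the question):

tc qdisc add dev $NIC parent 1:2 handle 2 cake
tc qdisc add dev $NIC parent 1:3 handle 3 cake

(Cake also has host-isolation modes such as dual-srchost, which might make
the per-container HTB classes unnecessary, but that is a separate question.)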

I would appreciate any help you can provide based on your expertise. Thanks.

Regards,
Cong



end of thread

Thread overview: 4+ messages
     [not found] <mailman.1.1502985601.3893.cake@lists.bufferbloat.net>
2017-08-17 17:38 ` [Cake] flow isolation with ipip Pete Heist
2017-08-17 19:27   ` Cong Xu
2017-06-12 19:13 Cong Xu
2017-08-16 23:55 ` Jonathan Morton
