Cake - FQ_codel the next generation
* [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
@ 2019-01-01 23:04 Pete Heist
  2019-01-03  3:57 ` Georgios Amanakis
  0 siblings, 1 reply; 29+ messages in thread
From: Pete Heist @ 2019-01-01 23:04 UTC (permalink / raw)
  To: Cake List

[-- Attachment #1: Type: text/plain, Size: 2730 bytes --]

In my one-armed router setup I’m seeing host fairness work perfectly with srchost or dsthost, but with dual-srchost or dual-dsthost, host fairness deviates from the ideal, _only_ when there's bi-directional traffic. The deviation is then dependent on the number of flows. Is this expected?

I had thought that dual-src/dsthost worked the same as src/dsthost (fairness between hosts) with the exception that there is also fairness of flows within each host.

Here are some results (all rates aggregate throughput in Mbit):

IP1=8 up / 1 down   IP2=1 up / 8 down (post-test tc stats attached):
	srchost/dsthost, upload only: IP1=48.1, IP2=47.9  (OK)
	srchost/dsthost, download only: IP1=47.8, IP2=47.8  (OK)
	srchost/dsthost, bi-directional: IP1=47.5 up / 43.9 down, IP2=44.7 up / 46.7 down  (OK)

	dual-srchost/dual-dsthost, upload only: IP1=48.1, IP2=48.0  (OK)
	dual-srchost/dual-dsthost, download only: IP1=47.9, IP2=47.9  (OK)
	dual-srchost/dual-dsthost, bi-directional: IP1=83.0 up / 10.7 down, IP2=10.6 up / 83.0 down (*** asymmetric ***)

Dual-srchost/dual-dsthost, bi-directional tests with different flow counts:

IP1=4 up / 1 down   IP2=1 up / 4 down:
	IP1=74.8 up / 18.8 down, IP2=18.8 up / 74.8 down

IP1=2 up / 1 down   IP2=1 up / 2 down:
	IP1=62.4 up / 31.3 down, IP2=31.3 up / 62.4 down

IP1=4 up / 1 down   IP2=1 up / 8 down:
	IP1=81.8 up / 11.5 down, IP2=17.4 up / 76.3 down

IP1=2 up / 1 down   IP2=1 up / 8 down:
	IP1=79.9 up / 13.5 down, IP2=25.7 up / 68.1 down

The setup:

	apu2a (kernel 4.9)  <— default VLAN —>  apu1a (kernel 3.16.7)  <— VLAN 3300 —>  apu2b (kernel 4.9)

- apu1a is the router, and has cake only on egress of both eth0 and eth0.3300, rate limited to 100mbit for both
- it has no trouble shaping at 100mbit up and down simultaneously, so that should not be a problem
- the same problem occurs at 25mbit or 50mbit
- since apu2a is the client [dual-]dsthost is used on eth0 and [dual-]srchost is used on eth0.3300
- the fairness test setup seems correct, at least based on the results of most of the tests
- note in the qdisc stats attached there is a prio qdisc on eth0 for filtering out VLAN traffic so it isn’t shaped twice (see the sketch after this list)
- I also get the exact same results with an htb or hfsc hierarchy on eth0 instead of adding a qdisc to eth0.3300
- printk’s in sch_cake.c show values of flow_mode, srchost_hash and dsthost_hash as expected
- I also see it going into allocate_src and allocate_dst as expected, and later ending up in found_src and found_dst
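
For reference, the shaper setup on apu1a is roughly the following (a sketch; the prio filter that steers only untagged traffic into the cake child, and any minor option differences, are omitted here):

	tc qdisc add dev eth0 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
	tc qdisc add dev eth0 parent 1:1 handle 10: cake bandwidth 100mbit besteffort dual-dsthost
	tc qdisc add dev eth0.3300 root cake bandwidth 100mbit besteffort dual-srchost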

I’m stumped. I know I’ve tested fairness of dual-src/dsthost before, but that was from the egress of client and server, and it was on a recent kernel. Time to sleep on it...


[-- Attachment #2: qdisc_stats.txt --]
[-- Type: text/plain, Size: 2195 bytes --]

qdisc prio 1: dev eth0 root refcnt 2 bands 2 priomap  1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 Sent 1502062182 bytes 1525226 pkt (dropped 204373, overlimits 0 requeues 0) 
 backlog 0b 4294550547p requeues 0
qdisc cake 10: dev eth0 parent 1:1 bandwidth 100Mbit besteffort dual-dsthost nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0 
 Sent 751856301 bytes 762993 pkt (dropped 8512, overlimits 2414269 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 361600b of 5000000b
 capacity estimate: 100Mbit
 min/max network layer size:           42 /    1514
 min/max overhead-adjusted size:       42 /    1514
 average network hdr offset:           14

                  Tin 0
  thresh        100Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        3.9ms
  av_delay        2.4ms
  sp_delay         60us
  backlog            0b
  pkts           771505
  bytes       764743469
  way_inds            6
  way_miss           26
  way_cols            0
  drops            8512
  marks               0
  ack_drop            0
  sp_flows            9
  bk_flows            4
  un_flows            0
  max_len         18168
  quantum          1514

qdisc cake 8060: dev eth0.3300 root refcnt 2 bandwidth 100Mbit besteffort dual-srchost nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0 
 Sent 750205881 bytes 762233 pkt (dropped 8542, overlimits 904221 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 279744b of 5000000b
 capacity estimate: 100Mbit
 min/max network layer size:           42 /    1514
 min/max overhead-adjusted size:       42 /    1514
 average network hdr offset:           14

                  Tin 0
  thresh        100Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        157us
  av_delay         83us
  sp_delay          1us
  backlog            0b
  pkts           770775
  bytes       763138469
  way_inds        31166
  way_miss           23
  way_cols            0
  drops            8542
  marks               0
  ack_drop            0
  sp_flows           17
  bk_flows            1
  un_flows            0
  max_len         30280
  quantum          1514


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-01 23:04 [Cake] dual-src/dsthost unfairness, only with bi-directional traffic Pete Heist
@ 2019-01-03  3:57 ` Georgios Amanakis
  2019-01-03  4:15   ` Georgios Amanakis
  0 siblings, 1 reply; 29+ messages in thread
From: Georgios Amanakis @ 2019-01-03  3:57 UTC (permalink / raw)
  To: Pete Heist, Cake List

I can reproduce this too, to my surprise.
I tested on my Comcast connection with a WRT1900ACS running OpenWrt
(r8082-95b3f8ec8d, 4.14.70), with two interfaces, br-lan and eth0 (wan).

IP1=1 up / 8 down    IP2=4 up / 4 down
	src/dst, bidir: IP1=0.88 /  8.44, IP2=0.66 / 7.75 (ok)
	dualsrc/dualdst, bidir: IP1=0.27 / 10.56, IP2=1.41 / 6.42 (unfair)

No VLANs, no other schedulers on eth0 and br-lan apart from cake.



On Wed, 2019-01-02 at 00:04 +0100, Pete Heist wrote:
> In my one-armed router setup I’m seeing host fairness work perfectly
> with srchost or dsthost, but with dual-srchost or dual-dsthost, host
> fairness deviates from the ideal, _only_ when there's bi-directional
> traffic. The deviation is then dependent on the number of flows. Is
> this expected?
> 
> I had thought that dual-src/dsthost worked the same as src/dsthost
> (fairness between hosts) with the exception that there is also
> fairness of flows within each host.
> 
> Here are some results (all rates aggregate throughput in Mbit):
> 
> IP1=8 up / 1 down   IP2=1 up / 8 down (post-test tc stats attached):
> 	srchost/dsthost, upload only: IP1=48.1, IP2=47.9  (OK)
> 	srchost/dsthost, download only: IP1=47.8, IP2=47.8  (OK)
> 	srchost/dsthost, bi-directional: IP1=47.5 up / 43.9 down,
> IP2=44.7 up / 46.7 down  (OK)
> 
> 	dual-srchost/dual-dsthost, upload only: IP1=48.1,
> IP2=48.0  (OK)
> 	dual-srchost/dual-dsthost, download only: IP1=47.9,
> IP2=47.9  (OK)
> 	dual-srchost/dual-dsthost, bi-directional: IP1=83.0 up / 10.7
> down, IP2=10.6 up / 83.0 down (*** asymmetric ***)
> 
> Dual-srchost/dual-dsthost, bi-directional tests with different flow
> counts:
> 
> IP1=4 up / 1 down   IP2=1 up / 4 down:
> 	IP1=74.8 up / 18.8 down, IP2=18.8 up / 74.8 down
> 
> IP1=2 up / 1 down   IP2=1 up / 2 down:
> 	IP1=62.4 up / 31.3 down, IP2=31.3 up / 62.4 down
> 
> IP1=4 up / 1 down   IP2=1 up / 8 down:
> 	IP1=81.8 up / 11.5 down, IP2=17.4 up / 76.3 down
> 
> IP1=2 up / 1 down   IP2=1 up / 8 down:
> 	IP1=79.9 up / 13.5 down, IP2=25.7 up / 68.1 down
> 
> The setup:
> 
> 	apu2a (kernel 4.9)  <— default VLAN —>  apu1a (kernel
> 3.16.7)  <— VLAN 3300 —>  apu2b (kernel 4.9)
> 
> - apu1a is the router, and has cake only on egress of both eth0 and
> eth0.3300, rate limited to 100mbit for both
> - it has no trouble shaping at 100mbit up and down simultaneously, so
> that should not be a problem
> - the same problem occurs at 25mbit or 50mbit)
> - since apu2a is the client [dual-]dsthost is used on eth0 and [dual-
> ]srchost is used on eth0.3300
> - the fairness test setup seems correct, based on the results of most
> of the tests, at least.
> - note in the qdisc stats attached there is a prio qdisc on eth0 for
> filtering out VLAN traffic so it isn’t shaped twice
> - I also get the exact same results with an htb or hfsc hierarchy on
> eth0 instead of adding a qdisc to eth0.3300
> - printk’s in sch_cake.c shows values of flow_mode, srchost_hash and
> dsthost_hash as expected
> - I also see it going into allocate_src and allocate_dst as expected,
> and later ending up in found_src and found_dst
> 
> I’m stumped. I know I’ve tested fairness of dual-src/dsthost before,
> but that was from the egress of client and server, and it was on a
> recent kernel. Time to sleep on it...
> 
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake



* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03  3:57 ` Georgios Amanakis
@ 2019-01-03  4:15   ` Georgios Amanakis
  2019-01-03  5:18     ` Jonathan Morton
  0 siblings, 1 reply; 29+ messages in thread
From: Georgios Amanakis @ 2019-01-03  4:15 UTC (permalink / raw)
  To: Pete Heist, Cake List

It seems that if both clients have bidirectional traffic,
dual-{dst,src}host has the same effect on their bandwidth as
triple-isolate (on both lan and wan interfaces).
This shouldn't happen though, or am I wrong?


On Wed, 2019-01-02 at 22:57 -0500, Georgios Amanakis wrote:
> I can reproduce this one to my surprise, too. 
> I tested on my Comcast connection, with a WRT1900ACS, running openwrt
> (r8082-95b3f8ec8d, 4.14.70), with two interfaces br-lan and
> eth0(wan).
> 
> IP1=1 up / 8 down    IP2=4 up / 4 down
> 	src/dst, bidir: IP1=0.88 /  8.44, IP2=0.66 / 7.75 (ok)
> dualsrc/dualdst, bidir: IP1=0.27 / 10.56, IP2=1.41 / 6.42 (unfair)
> 
> No VLANs, no other schedulers on eth0 and br-lan apart from cake.
> 
> 
> 
> On Wed, 2019-01-02 at 00:04 +0100, Pete Heist wrote:
> > In my one-armed router setup I’m seeing host fairness work
> > perfectly
> > with srchost or dsthost, but with dual-srchost or dual-dsthost,
> > host
> > fairness deviates from the ideal, _only_ when there's bi-
> > directional
> > traffic. The deviation is then dependent on the number of flows. Is
> > this expected?
> > 
> > I had thought that dual-src/dsthost worked the same as src/dsthost
> > (fairness between hosts) with the exception that there is also
> > fairness of flows within each host.
> > 
> > Here are some results (all rates aggregate throughput in Mbit):
> > 
> > IP1=8 up / 1 down   IP2=1 up / 8 down (post-test tc stats
> > attached):
> > 	srchost/dsthost, upload only: IP1=48.1, IP2=47.9  (OK)
> > 	srchost/dsthost, download only: IP1=47.8, IP2=47.8  (OK)
> > 	srchost/dsthost, bi-directional: IP1=47.5 up / 43.9 down,
> > IP2=44.7 up / 46.7 down  (OK)
> > 
> > 	dual-srchost/dual-dsthost, upload only: IP1=48.1,
> > IP2=48.0  (OK)
> > 	dual-srchost/dual-dsthost, download only: IP1=47.9,
> > IP2=47.9  (OK)
> > 	dual-srchost/dual-dsthost, bi-directional: IP1=83.0 up / 10.7
> > down, IP2=10.6 up / 83.0 down (*** asymmetric ***)
> > 
> > Dual-srchost/dual-dsthost, bi-directional tests with different flow
> > counts:
> > 
> > IP1=4 up / 1 down   IP2=1 up / 4 down:
> > 	IP1=74.8 up / 18.8 down, IP2=18.8 up / 74.8 down
> > 
> > IP1=2 up / 1 down   IP2=1 up / 2 down:
> > 	IP1=62.4 up / 31.3 down, IP2=31.3 up / 62.4 down
> > 
> > IP1=4 up / 1 down   IP2=1 up / 8 down:
> > 	IP1=81.8 up / 11.5 down, IP2=17.4 up / 76.3 down
> > 
> > IP1=2 up / 1 down   IP2=1 up / 8 down:
> > 	IP1=79.9 up / 13.5 down, IP2=25.7 up / 68.1 down
> > 
> > The setup:
> > 
> > 	apu2a (kernel 4.9)  <— default VLAN —>  apu1a (kernel
> > 3.16.7)  <— VLAN 3300 —>  apu2b (kernel 4.9)
> > 
> > - apu1a is the router, and has cake only on egress of both eth0 and
> > eth0.3300, rate limited to 100mbit for both
> > - it has no trouble shaping at 100mbit up and down simultaneously,
> > so
> > that should not be a problem
> > - the same problem occurs at 25mbit or 50mbit)
> > - since apu2a is the client [dual-]dsthost is used on eth0 and
> > [dual-
> > ]srchost is used on eth0.3300
> > - the fairness test setup seems correct, based on the results of
> > most
> > of the tests, at least.
> > - note in the qdisc stats attached there is a prio qdisc on eth0
> > for
> > filtering out VLAN traffic so it isn’t shaped twice
> > - I also get the exact same results with an htb or hfsc hierarchy
> > on
> > eth0 instead of adding a qdisc to eth0.3300
> > - printk’s in sch_cake.c shows values of flow_mode, srchost_hash
> > and
> > dsthost_hash as expected
> > - I also see it going into allocate_src and allocate_dst as
> > expected,
> > and later ending up in found_src and found_dst
> > 
> > I’m stumped. I know I’ve tested fairness of dual-src/dsthost
> > before,
> > but that was from the egress of client and server, and it was on a
> > recent kernel. Time to sleep on it...
> > 
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake



* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03  4:15   ` Georgios Amanakis
@ 2019-01-03  5:18     ` Jonathan Morton
  2019-01-03 10:46       ` Pete Heist
  0 siblings, 1 reply; 29+ messages in thread
From: Jonathan Morton @ 2019-01-03  5:18 UTC (permalink / raw)
  To: Georgios Amanakis; +Cc: Pete Heist, Cake List

> On 3 Jan, 2019, at 6:15 am, Georgios Amanakis <gamanakis@gmail.com> wrote:
> 
> It seems if both clients are having bidirectional traffic, dual-
> {dst,src}host has the same effect as triple-isolate (on both lan and
> wan interfaces) on their bandwidth.

> This shouldn't happen though, or am I wrong?

If both clients are communicating with the same single server IP, then there *should* be a difference between triple-isolate and the dual modes.  In that case triple-isolate would behave like plain flow isolation, because it takes the maximum flow-load of the src and dst hosts to determine which dual mode it should behave most like.
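
Roughly, the logic in cake_dequeue() that applies host fairness looks like this (a simplified sketch, not the literal code):

	u16 host_load = 1;

	if (cake_dsrc(q->flow_mode))
		host_load = max(host_load, srchost->srchost_refcnt);

	if (cake_ddst(q->flow_mode))
		host_load = max(host_load, dsthost->dsthost_refcnt);

	/* each flow's quantum is divided by the busiest host it belongs to */
	flow->deficit += (b->flow_quantum * quantum_div[host_load] +
			  (prandom_u32() >> 16)) >> 16;

With triple-isolate both clauses apply, so the larger of the two host counts wins; each dual mode applies only one of them.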

Conversely, if the clients are communicating with a different server IP for each flow, or are each sending all their flows to one server IP that's unique to them, then triple-isolate should behave the same as the appropriate dual modes.  This is the use-case that triple-isolate assumes in its design.

It's also possible for triple-isolation to behave differently from either of the dual modes, if there's a sufficiently complex pattern of traffic flows.  I think those cases would be relatively unusual in practice, but they certainly can occur.

I'm left wondering whether the sense of src and dst has got accidentally reversed at some point, or if the dual modes are being misinterpreted as triple-isolate.  To figure that out, I'd need to look carefully at several related parts of the code.  Can anyone reproduce it from the latest kernels' upstream code, or is it only in the module?  And precisely which version of iproute2 is everyone using?

 - Jonathan Morton



* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03  5:18     ` Jonathan Morton
@ 2019-01-03 10:46       ` Pete Heist
  2019-01-03 11:03         ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 29+ messages in thread
From: Pete Heist @ 2019-01-03 10:46 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: Georgios Amanakis, Cake List


> On Jan 3, 2019, at 6:18 AM, Jonathan Morton <chromatix99@gmail.com> wrote:
> 
>> On 3 Jan, 2019, at 6:15 am, Georgios Amanakis <gamanakis@gmail.com> wrote:
>> 
>> It seems if both clients are having bidirectional traffic, dual-
>> {dst,src}host has the same effect as triple-isolate (on both lan and
>> wan interfaces) on their bandwidth.

Exactly what I’m seeing. Thanks for testing, George...

> I'm left wondering whether the sense of src and dst has got accidentally reversed at some point, or if the dual modes are being misinterpreted as triple-isolate.  To figure that out, I'd need to look carefully at several related parts of the code.  Can anyone reproduce it from the latest kernels' upstream code, or is it only in the module?  And precisely which version of iproute2 is everyone using?

It will be a while before I can try this on 4.19+. As for iproute2, the installed package is iproute2/oldstable,now 3.16.0-2 i386, but I compile tc-adv from HEAD.

Here are more bi-directional tests with 8 up / 1 down on IP1 and 1 up / 8 down on IP2:

dual-srchost/dual-dsthost:
	IP1: 83.1 / 10.9, IP2: 10.7 / 83.0
dual-dsthost/dual-srchost (sense flipped):
	IP1: 83.0 / 10.5, IP2: 10.7 / 82.9
triple-isolate:
	IP1: 83.1 / 10.5, IP2: 10.7 / 82.9
srchost/dsthost (sanity check):
	IP1: 47.6 / 43.8, IP2: 44.2 / 47.4
dsthost/srchost (sanity check, sense flipped):
	IP1: 81.3 / 9.79, IP2: 11.0 / 80.7
flows:
	IP1: 83.0 / 10.4, IP2: 10.5 / 82.9

I also tried testing shaping on eth0.3300 and ingress of eth0.3300 instead of egress of both eth0 and eth0.3300, because that’s more like what I tested before. There was no significant change from the above results.

I managed to compile versions all the way back to July 15, 2018 (1e2473f702cf253f8f5ade4d622c6e4ba661a09d) and still see the same result. I’ll try to go earlier.

As far as the code goes, the easy stuff:
- flow_mode values in cake_hash are 5 for dual-srchost, 6 for dual-dsthost and 7 for triple-isolate (the corresponding constants are noted after this list)
- the values from cake_dsrc(flow_mode) and cake_ddst(flow_mode) are as expected in all three cases
- flow_override and host_override are both 0
- looks correct: !(flow_mode & CAKE_FLOW_FLOWS) == 0
- this looks normal to me (shows reply packets on eth0):
   IP1 ping: dsthost_idx = 450, reduced_hash = 129
   IP1 irtt: dsthost_idx = 450, reduced_hash = 158
   IP2 ping: dsthost_idx = 301, reduced_hash = 78
   IP2 irtt: dsthost_idx = 301, reduced_hash = 399
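
For reference, those values match my reading of the flow mode constants in sch_cake.c (paraphrased from memory, so treat as approximate):

	CAKE_FLOW_DUAL_SRC = 5,	/* dual-srchost */
	CAKE_FLOW_DUAL_DST = 6,	/* dual-dsthost */
	CAKE_FLOW_TRIPLE   = 7,	/* triple-isolate = DUAL_SRC | DUAL_DST */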

Jon, is there anything I can check by instrumenting the code somewhere specific?



* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 10:46       ` Pete Heist
@ 2019-01-03 11:03         ` Toke Høiland-Jørgensen
  2019-01-03 13:02           ` Pete Heist
  0 siblings, 1 reply; 29+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-03 11:03 UTC (permalink / raw)
  To: Pete Heist, Jonathan Morton; +Cc: Cake List

> Jon, is there anything I can check by instrumenting the code somewhere
> specific?

Is there any way you could test with a bulk UDP flow? I'm wondering
whether this is a second-order effect where TCP ACKs are limited in a
way that causes the imbalance. Are you using ACK compression?
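
E.g., a single bulk UDP flow with something like iperf3 (rate and packet length just examples):

	iperf3 -c <server> -u -b 45M -l 1400 -t 60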

-Toke


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 11:03         ` Toke Høiland-Jørgensen
@ 2019-01-03 13:02           ` Pete Heist
  2019-01-03 13:20             ` Toke Høiland-Jørgensen
  2019-01-04 11:34             ` Pete Heist
  0 siblings, 2 replies; 29+ messages in thread
From: Pete Heist @ 2019-01-03 13:02 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, Cake List

[-- Attachment #1: Type: text/plain, Size: 1526 bytes --]


> On Jan 3, 2019, at 12:03 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> 
>> Jon, is there anything I can check by instrumenting the code somewhere
>> specific?
> 
> Is there any way you could test with a bulk UDP flow? I'm wondering
> whether this is a second-order effect where TCP ACKs are limited in a
> way that cause the imbalance? Are you using ACK compression?


Not using ack-filter, if that’s what’s meant by ACK compression. I thought about the TCP ACK traffic, but would be very surprised if that amount of ACK traffic could cause that large of an imbalance, although it’s worth trying to find out.

I tried iperf3 in UDP mode, but cake is treating these flows aggressively. I get the impression that cake heavily penalizes flows that do not respond to congestion control signals. If I pit 8 TCP flows against a single UDP flow at 40mbit, the UDP flow goes into a death spiral with increasing drops over time (iperf3 output attached).

I’m not sure there’d be any way I can test fairness with iperf3 in UDP mode. We’d need something that has some congestion control feedback, right? Otherwise, I don’t think there are any rates I can choose to both reach saturation and not be severely punished. And if it has congestion control feedback, it has the ACK-like traffic we’re trying to avoid for the test. :)

As another test, I took out the one-armed router and just tried from a client to a server, no VLANs. Same result. So, still stumped. Thank you for the help...


[-- Attachment #2: iperf3_spiral.txt --]
[-- Type: text/plain, Size: 5287 bytes --]

-----------------------------------------------------------
Server listening on 5202
-----------------------------------------------------------
Accepted connection from 10.0.0.239, port 48289
[  5] local 10.0.0.231 port 5202 connected to 10.72.0.239 port 38334
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  4.20 MBytes  35.3 Mbits/sec  0.467 ms  21/559 (3.8%)  
[  5]   1.00-2.00   sec  4.27 MBytes  35.8 Mbits/sec  0.555 ms  43/589 (7.3%)  
[  5]   2.00-3.00   sec  4.48 MBytes  37.6 Mbits/sec  0.482 ms  69/642 (11%)  
[  5]   3.00-4.00   sec  3.90 MBytes  32.7 Mbits/sec  0.461 ms  87/586 (15%)  
[  5]   4.00-5.00   sec  3.84 MBytes  32.2 Mbits/sec  0.490 ms  111/603 (18%)  
[  5]   5.00-6.00   sec  3.94 MBytes  33.0 Mbits/sec  0.341 ms  130/634 (21%)  
[  5]   6.00-7.00   sec  3.63 MBytes  30.5 Mbits/sec  0.539 ms  144/609 (24%)  
[  5]   7.00-8.00   sec  3.59 MBytes  30.1 Mbits/sec  0.451 ms  159/618 (26%)  
[  5]   8.00-9.00   sec  3.21 MBytes  26.9 Mbits/sec  0.987 ms  181/592 (31%)  
[  5]   9.00-10.00  sec  3.23 MBytes  27.1 Mbits/sec  0.224 ms  225/639 (35%)  
[  5]  10.00-11.00  sec  3.11 MBytes  26.1 Mbits/sec  0.204 ms  214/612 (35%)  
[  5]  11.00-12.00  sec  2.80 MBytes  23.5 Mbits/sec  0.371 ms  229/587 (39%)  
[  5]  12.00-13.00  sec  2.66 MBytes  22.3 Mbits/sec  0.543 ms  254/594 (43%)  
[  5]  13.00-14.00  sec  2.73 MBytes  22.9 Mbits/sec  0.386 ms  292/642 (45%)  
[  5]  14.00-15.00  sec  2.49 MBytes  20.9 Mbits/sec  0.399 ms  298/617 (48%)  
[  5]  15.00-16.00  sec  2.40 MBytes  20.1 Mbits/sec  0.216 ms  288/595 (48%)  
[  5]  16.00-17.00  sec  2.20 MBytes  18.5 Mbits/sec  0.486 ms  327/609 (54%)  
[  5]  17.00-18.00  sec  2.19 MBytes  18.3 Mbits/sec  0.538 ms  344/624 (55%)  
[  5]  18.00-19.00  sec  2.00 MBytes  16.8 Mbits/sec  0.519 ms  321/577 (56%)  
[  5]  19.00-20.00  sec  1.95 MBytes  16.4 Mbits/sec  0.930 ms  369/619 (60%)  
[  5]  20.00-21.00  sec  1.93 MBytes  16.2 Mbits/sec  0.526 ms  377/624 (60%)  
[  5]  21.00-22.00  sec  1.66 MBytes  13.9 Mbits/sec  0.543 ms  374/586 (64%)  
[  5]  22.00-23.00  sec  1.70 MBytes  14.2 Mbits/sec  0.833 ms  412/629 (66%)  
[  5]  23.00-24.00  sec  1.66 MBytes  13.9 Mbits/sec  0.340 ms  402/614 (65%)  
[  5]  24.00-25.00  sec  1.52 MBytes  12.7 Mbits/sec  0.693 ms  431/625 (69%)  
[  5]  25.00-26.00  sec  1.40 MBytes  11.7 Mbits/sec  0.491 ms  404/583 (69%)  
[  5]  26.00-27.00  sec  1.32 MBytes  11.1 Mbits/sec  1.028 ms  456/625 (73%)  
[  5]  27.00-28.00  sec  1.25 MBytes  10.5 Mbits/sec  0.870 ms  427/587 (73%)  
[  5]  28.00-29.00  sec  1.20 MBytes  10.1 Mbits/sec  0.660 ms  479/633 (76%)  
[  5]  29.00-30.00  sec  1.19 MBytes  9.96 Mbits/sec  0.773 ms  466/618 (75%)  
[  5]  30.00-31.00  sec  1.05 MBytes  8.85 Mbits/sec  1.103 ms  455/590 (77%)  
[  5]  31.00-32.00  sec  1.03 MBytes  8.65 Mbits/sec  0.559 ms  488/620 (79%)  
[  5]  32.00-33.00  sec   888 KBytes  7.27 Mbits/sec  0.415 ms  494/605 (82%)  
[  5]  33.00-34.00  sec   896 KBytes  7.34 Mbits/sec  1.023 ms  489/601 (81%)  
[  5]  34.00-35.00  sec   880 KBytes  7.21 Mbits/sec  0.986 ms  519/629 (83%)  
[  5]  35.00-36.00  sec   776 KBytes  6.36 Mbits/sec  0.414 ms  493/590 (84%)  
[  5]  36.00-37.00  sec   800 KBytes  6.55 Mbits/sec  0.845 ms  506/606 (83%)  
[  5]  37.00-38.00  sec   832 KBytes  6.82 Mbits/sec  1.124 ms  536/640 (84%)  
[  5]  38.00-39.00  sec   768 KBytes  6.29 Mbits/sec  0.577 ms  515/611 (84%)  
[  5]  39.00-40.00  sec   728 KBytes  5.96 Mbits/sec  1.269 ms  496/587 (84%)  
[  5]  40.00-41.00  sec   752 KBytes  6.16 Mbits/sec  0.834 ms  544/638 (85%)  
[  5]  41.00-42.00  sec   528 KBytes  4.32 Mbits/sec  1.533 ms  346/412 (84%)  
[  5]  42.00-43.00  sec   552 KBytes  4.52 Mbits/sec  2.008 ms  722/791 (91%)  
[  5]  43.00-44.00  sec   416 KBytes  3.41 Mbits/sec  2.202 ms  528/580 (91%)  
[  5]  44.00-45.00  sec   408 KBytes  3.34 Mbits/sec  2.075 ms  566/617 (92%)  
[  5]  45.00-46.00  sec   512 KBytes  4.19 Mbits/sec  1.629 ms  517/581 (89%)  
[  5]  46.00-47.00  sec   400 KBytes  3.28 Mbits/sec  1.750 ms  584/634 (92%)  
[  5]  47.00-48.00  sec   408 KBytes  3.34 Mbits/sec  1.587 ms  541/592 (91%)  
[  5]  48.00-49.00  sec   504 KBytes  4.13 Mbits/sec  1.344 ms  587/650 (90%)  
[  5]  49.00-50.00  sec   600 KBytes  4.91 Mbits/sec  1.338 ms  522/597 (87%)  
[  5]  50.00-51.00  sec   384 KBytes  3.15 Mbits/sec  2.033 ms  592/640 (92%)  
[  5]  51.00-52.00  sec   504 KBytes  4.13 Mbits/sec  1.566 ms  529/592 (89%)  
[  5]  52.00-53.00  sec   400 KBytes  3.28 Mbits/sec  1.883 ms  508/558 (91%)  
[  5]  53.00-54.00  sec   424 KBytes  3.47 Mbits/sec  1.833 ms  639/692 (92%)  
[  5]  54.00-55.00  sec   280 KBytes  2.29 Mbits/sec  2.004 ms  539/574 (94%)  
[  5]  55.00-56.00  sec   288 KBytes  2.36 Mbits/sec  2.161 ms  522/558 (94%)  
[  5]  56.00-57.00  sec   288 KBytes  2.36 Mbits/sec  2.693 ms  635/671 (95%)  
[  5]  57.00-58.00  sec   264 KBytes  2.16 Mbits/sec  2.225 ms  554/587 (94%)  
[  5]  58.00-59.00  sec   256 KBytes  2.10 Mbits/sec  2.065 ms  571/603 (95%)  
[  5]  59.00-60.00  sec   696 KBytes  5.70 Mbits/sec  0.154 ms  588/675 (87%)  
[  5]  60.00-60.04  sec  24.0 KBytes  4.98 Mbits/sec  0.136 ms  0/3 (0%)  


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 13:02           ` Pete Heist
@ 2019-01-03 13:20             ` Toke Høiland-Jørgensen
  2019-01-03 16:35               ` Pete Heist
  2019-01-04 11:34             ` Pete Heist
  1 sibling, 1 reply; 29+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-03 13:20 UTC (permalink / raw)
  To: Pete Heist; +Cc: Jonathan Morton, Cake List

Pete Heist <pete@heistp.net> writes:

>> On Jan 3, 2019, at 12:03 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>> 
>>> Jon, is there anything I can check by instrumenting the code somewhere
>>> specific?
>> 
>> Is there any way you could test with a bulk UDP flow? I'm wondering
>> whether this is a second-order effect where TCP ACKs are limited in a
>> way that cause the imbalance? Are you using ACK compression?
>
>
> Not using ack-filter, if that’s what’s meant by ACK compression. I
> thought about the TCP ACK traffic, but would be very surprised if that
> amount of ACK traffic could cause that large of an imbalance, although
> it’s worth trying to find out.
>
> I tried iperf3 in UDP mode, but cake is treating these flows
> aggressively. I get the impression that cake penalizes flows heavily
> that do not respond to congestion control signals. If I pit one 8 TCP
> flows against a single UDP flow at 40mbit, the UDP flow goes into a
> death spiral with increasing drops over time (iperf3 output attached).
>
> I’m not sure there’d be any way I can test fairness with iperf3 in UDP
> mode. We’d need something that has some congestion control feedback,
> right? Otherwise, I don’t think there are any rates I can choose to
> both reach saturation and not be severely punished. And if it has
> congestion control feedback, it has the ACK-like traffic we’re trying
> to avoid for the test. :)

Try setting cake to 'interplanetary' - that should basically turn off
the AQM dropping...
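
E.g., something like this (adjust the device and position in the qdisc hierarchy to your setup):

	tc qdisc change dev eth0.3300 root cake interplanetary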

-Toke


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 13:20             ` Toke Høiland-Jørgensen
@ 2019-01-03 16:35               ` Pete Heist
  2019-01-03 18:24                 ` Georgios Amanakis
  2019-01-03 22:06                 ` Pete Heist
  0 siblings, 2 replies; 29+ messages in thread
From: Pete Heist @ 2019-01-03 16:35 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, Cake List


> On Jan 3, 2019, at 2:20 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> 
> Pete Heist <pete@heistp.net> writes:
> 
>> I’m not sure there’d be any way I can test fairness with iperf3 in UDP
>> mode. We’d need something that has some congestion control feedback,
>> right? Otherwise, I don’t think there are any rates I can choose to
>> both reach saturation and not be severely punished. And if it has
>> congestion control feedback, it has the ACK-like traffic we’re trying
>> to avoid for the test. :)
> 
> Try setting cake to 'interplanetary' - that should basically turn off
> the AQM dropping...

Ok, so long as we know that we’re not testing any possible interactions between AQM and host fairness, but we may learn more from it anyway. I’m using my client to server rig here (two APU2s on kernel 4.9.0-8), not the APU1 one-armed router middle box.

So, basic single client rig tests (OK):

	IP1 8-flow TCP up: 95.8
	IP2 1-flow 48mbit UDP up: 48.0 (0% loss)
	IP1 8-flow x 6mbit/flow = 48mbit UDP down: 48.0 (0% loss)
	IP2 1-flow TCP down: 96.0

Competition up (OK):

	IP1 8-flow TCP up: 59.5
	IP2 1-flow 48mbit UDP up: 36.7 (0% loss)
		Note: I don’t know why the UDP send rate slowed down here. It’s probably not the CPU, as it occurs at lower rates also. I’ll forge on.

Competition down (not OK, high UDP loss):

	IP1 1-flow TCP down: 53.3
	IP2 8-flow x 6mbit/flow 48mbit UDP down: 8.6 (82% loss)
		Note: I have no idea what happened with the UDP loss rate here, so I’ll go back to a single IP1 UDP test.

Back to single client (weird, still seeing loss):

	IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (5.6% loss)

Ok, I know that was working with no loss before. Stop and restart cake, then (loss stops after restart):

	IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (0% loss)

That’s better, now stop and restart cake and try the "competition down" test again (second trial):

	IP1 1-flow TCP down: 55.3
	IP2 8-flow x 6mbit/flow 48mbit UDP down: 5.8 (88% loss)
		Note: I have no idea what happened with the UDP loss rate here, so I’ll go back to a single IP1 UDP test.

Since this rig hasn’t passed the two-host uni-directional test because of the high loss rate on the “competition down” test, I’m not going to go any further. Instead, I’ll go back to my one-armed router rig and send those results in a separate email.

However, I find it strange that I still see UDP loss after the “competition down” test has run and completed, and that it only stops happening after restarting cake. That’s another issue I don’t have time to explore at the moment, unless someone has a good idea of what’s going on there.



* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 16:35               ` Pete Heist
@ 2019-01-03 18:24                 ` Georgios Amanakis
  2019-01-03 22:06                 ` Pete Heist
  1 sibling, 0 replies; 29+ messages in thread
From: Georgios Amanakis @ 2019-01-03 18:24 UTC (permalink / raw)
  To: Pete Heist, Jonathan Morton; +Cc: Toke Høiland-Jørgensen, Cake List

In my previous test the clients communicated with different flent
servers (flent-newark, flent-newark.bufferbloat.net). Iproute2 was
iproute2-ss4.18.0-4-openwrt. I will try to test on the latest 4.20,
though it will take some time.

I have the feeling we have discussed a similar issue in the past
(https://lists.bufferbloat.net/pipermail/cake/2017-November/002985.html).
I understand what Jonathan says. However, I cannot explain why
*without* bidirectional traffic the dual-{src,dst}host modes behave
like src/dst-host, but *with* bidirectional traffic they behave like
triple-isolate.

The cake instances on the two interfaces are separate, right? So what
happens on one interface should not influence the other. Even with
bidirectional traffic the dual-{src,dst}host modes should still behave
like the src/dst-host modes in terms of host fairness, or not? At least
this is what I would intuitively expect.


On Thu, Jan 3, 2019 at 11:35 AM Pete Heist <pete@heistp.net> wrote:
>
>
> > On Jan 3, 2019, at 2:20 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> >
> > Pete Heist <pete@heistp.net> writes:
> >
> >> I’m not sure there’d be any way I can test fairness with iperf3 in UDP
> >> mode. We’d need something that has some congestion control feedback,
> >> right? Otherwise, I don’t think there are any rates I can choose to
> >> both reach saturation and not be severely punished. And if it has
> >> congestion control feedback, it has the ACK-like traffic we’re trying
> >> to avoid for the test. :)
> >
> > Try setting cake to 'interplanetary' - that should basically turn off
> > the AQM dropping...
>
> Ok, so long as we know that we’re not testing any possible interactions between AQM and host fairness, but we may learn more from it anyway. I’m using my client to server rig here (two APU2s on kernel 4.9.0-8), not the APU1 one-armed router middle box.
>
> So, basic single client rig tests (OK):
>
>         IP1 8-flow TCP up: 95.8
>         IP2 1-flow 48mbit UDP up: 48.0 (0% loss)
>         IP1 8-flow x 6mbit/flow = 48mbit UDP down: 48.0 (0% loss)
>         IP2 1-flow TCP down: 96.0
>
> Competition up (OK):
>
>         IP1 8-flow TCP up: 59.5
>         IP2 1-flow 48mbit UDP up: 36.7 (0% loss)
>                 Note: I don’t know why the UDP send rate slowed down here. It’s probably not the CPU, as it occurs at lower rates also. I’ll forge on.
>
> Competition down (not OK, high UDP loss):
>
>         IP1 1-flow TCP down: 53.3
>         IP2 8-flow x 6mbit/flow 48mbit UDP down: 8.6 (82% loss)
>                 Note: I have no idea what happened with the UDP loss rate here, so I’ll go back to a single IP1 UDP test.
>
> Back to single client (weird, still seeing loss):
>
>         IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (5.6% loss)
>
> Ok, I know that was working with no loss before. Stop and restart cake, then (loss stops after restart):
>
>         IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (0% loss)
>
> That’s better, now stop and restart cake and try the "competition down" test again (second trial):
>
>         IP1 1-flow TCP down: 55.3
>         IP2 8-flow x 6mbit/flow 48mbit UDP down: 5.8 (88% loss)
>                 Note: I have no idea what happened with the UDP loss rate here, so I’ll go back to a single IP1 UDP test.
>
> Since this rig hasn’t passed the two-host uni-directional test because of the high loss rate on the “competition down” test, I’m not going to go any further. I’ll rather go back to my one-armed router rig and send those results in a separate email.
>
> However, I consider it strange that I still see UDP loss after the "competition down” test has run and is completed, then it stops happening after restarting cake. That’s another issue I don’t have time to explore at the moment, unless someone has a good idea of what’s going on there.
>
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 16:35               ` Pete Heist
  2019-01-03 18:24                 ` Georgios Amanakis
@ 2019-01-03 22:06                 ` Pete Heist
  2019-01-04  2:08                   ` Georgios Amanakis
  2019-01-04  7:37                   ` Pete Heist
  1 sibling, 2 replies; 29+ messages in thread
From: Pete Heist @ 2019-01-03 22:06 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, Cake List

[-- Attachment #1: Type: text/plain, Size: 3632 bytes --]

I have a simpler setup now to remove some variables; both hosts are APU2s on Debian 9.6, kernel 4.9.0-8:

apu2a (iperf3 client) <— default VLAN —>  apu2b (iperf3 server)

Both have cake at 100mbit only on egress, with dual-srchost on client and dual-dsthost on server. With this setup (and probably previous ones, I just didn’t test it this way), bi-directional fairness with these flow counts works:

	IP1 8-flow TCP up: 46.4
	IP2 1-flow TCP up: 47.3
	IP1 8-flow TCP down: 46.8
	IP2 1-flow TCP down: 46.7

but with the original flow counts reported it’s still similarly imbalanced as before:

	IP1 8-flow TCP up: 82.9
	IP2 1-flow TCP up: 10.9
	IP1 1-flow TCP down: 10.8
	IP2 8-flow TCP down: 83.3

and now with ack-filter on both ends (not much change):

	IP1 8-flow TCP up: 82.8
	IP2 1-flow TCP up: 10.9
	IP1 1-flow TCP down: 10.5
	IP2 8-flow TCP down: 83.2

Before I go further, what I’m seeing with this rig is that when “interplanetary” is used and the number of iperf3 TCP flows goes above the number of CPUs minus one (in my case, 4 cores), the UDP send rate starts dropping. This only happens with interplanetary for some reason, but such as it is, I’ve changed my tests to pit 8 UDP flows against 1 TCP flow instead, giving the UDP senders more CPU, as this seems to work much better. All tests except the last are with “interplanetary”.

UDP upload competition (looks good):

	IP1 1-flow TCP up: 48.6
	IP2 8-flow UDP 48-mbit up: 48.2 (0% loss)

UDP download competition (some imbalance, maybe a difference in how iperf3 reverse mode works?):

	IP1 8-flow UDP 48-mbit down: 43.1 (0% loss)
	IP2 1-flow TCP down: 53.4 (0% loss)

All four at once (looks similar to the previous two tests, i.e. they’re not impacting one another, which is good):

	IP1 1-flow TCP up: 47.7
	IP2 8-flow UDP 48-mbit up: 48.2 (0% loss)
	IP1 8-flow UDP 48-mbit down: 43.3 (0% loss)
	IP2 1-flow TCP down: 52.3

All four at once, up IPs flipped (less fair):

	IP1 8-flow UDP 48-mbit up: 37.7 (0% loss)
	IP2 1-flow TCP up: 57.9
	IP1 8-flow UDP 48-mbit down: 38.9 (0% loss)
	IP2 1-flow TCP down: 56.3

All four at once, interplanetary off again, to double check it, and yes, UDP gets punished in this case:

	IP1 1-flow TCP up: 60.6
	IP2 8-flow UDP 48-mbit up: 6.7 (86% loss)
	IP1 8-flow UDP 48-mbit down: 2.9 (94% loss)
	IP2 1-flow TCP down: 63.1

So have we learned something from this? Yes, fairness is improved when using UDP instead of TCP for the 8-flow clients, but by turning AQM off we’re also testing a very different scenario, one that’s not too realistic. Does this prove the cause of the problem is TCP ack traffic?

Thanks again for the help on this. After a whole day on it, I’ll have to shift gears tomorrow to FreeNet router changes. I’ll show them the progress on Monday so of course I’d like to have a great host fairness story for Cake, as this is one of the main reasons to use it instead of fq_codel, but perhaps this will get sorted out before then. :)

I agree with George that we’ve been through this before, and also with how he explained it in his latest email, but there have been many changes to Cake since we tested in 2017, so this could be a regression. I’m almost sure I tested this exact scenario, and would not have put 8 up / 8 down on one IP and 1 up / 1 down on the other, which works with fairness for some reason.

FWIW, I also reproduced it in flent between the same APU2s used above, to be sure iperf3 wasn’t somehow causing it:

https://www.heistp.net/downloads/fairness_8_1/


[-- Attachment #2: Type: text/html, Size: 7328 bytes --]


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 22:06                 ` Pete Heist
@ 2019-01-04  2:08                   ` Georgios Amanakis
  2019-01-04  8:09                     ` Pete Heist
  2019-01-04  7:37                   ` Pete Heist
  1 sibling, 1 reply; 29+ messages in thread
From: Georgios Amanakis @ 2019-01-04  2:08 UTC (permalink / raw)
  To: Pete Heist, Toke Høiland-Jørgensen; +Cc: Cake List

On Thu, 2019-01-03 at 23:06 +0100, Pete Heist wrote:
> Both have cake at 100mbit only on egress, with dual-srchost on client
> and dual-dsthost on server. With this setup (and probably previous
> ones, I just didn’t test it this way), bi-directional fairness with
> these flow counts works:
> 
> 	IP1 8-flow TCP up: 46.4
> 	IP2 1-flow TCP up: 47.3
> 	IP1 8-flow TCP down: 46.8
> 	IP2 1-flow TCP down: 46.7
> 
> but with the original flow counts reported it’s still similarly
> imbalanced as before:
> 
> 	IP1 8-flow TCP up: 82.9
> 	IP2 1-flow TCP up: 10.9
> 	IP1 1-flow TCP down: 10.8
> 	IP2 8-flow TCP down: 83.3

I just tested on archlinux, latest 4.20 on the router, iproute2 4.19.0,
using flent 1.2.2/netserver in a setup similar to Pete's:

client 1,2 <----> router <----> server

The results are the same as Pete's.






* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 22:06                 ` Pete Heist
  2019-01-04  2:08                   ` Georgios Amanakis
@ 2019-01-04  7:37                   ` Pete Heist
  1 sibling, 0 replies; 29+ messages in thread
From: Pete Heist @ 2019-01-04  7:37 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, Cake List


> On Jan 3, 2019, at 11:06 PM, Pete Heist <pete@heistp.net> wrote:
> 
> I’m almost sure I tested this exact scenario, and would not have put 8 up / 8 down on one IP and 1 up / 1 down on the other, which works with fairness for some reason.

I’m going to dial this statement back. I went back through my old tests and in my main series of a thousand tests or so, I was splitting the two uploads and downloads across four IPs, so that’s different. Then when we were testing fairness in combination with rtt keywords, I was in fact testing 2 up / 2 down on one IP and 8 up / 8 down on the other, which is a scenario that produces the expected results.

So unless I can find some other past tests, or build an old enough version to show that the behavior was different, I can’t be sure I ever tested it this way, and don’t know if it’s a regression or it just works as designed and I never realized it.

On the one hand the IP1=1/8, IP2=8/1 results are “fair” in the sense that one client gets his wish for 8 uploads and the other gets his wish for 8 downloads, like “hey, I’ll let you drown out my 1 download if you let me drown out your 1 upload” :) but on the other hand, when Jon says there should be a difference between the triple-isolate and dual modes, that’s not what we’re seeing here.


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-04  2:08                   ` Georgios Amanakis
@ 2019-01-04  8:09                     ` Pete Heist
  0 siblings, 0 replies; 29+ messages in thread
From: Pete Heist @ 2019-01-04  8:09 UTC (permalink / raw)
  To: Georgios Amanakis; +Cc: Toke Høiland-Jørgensen, Cake List

[-- Attachment #1: Type: text/plain, Size: 1828 bytes --]


> On Jan 4, 2019, at 3:08 AM, Georgios Amanakis <gamanakis@gmail.com> wrote:
> 
> On Thu, 2019-01-03 at 23:06 +0100, Pete Heist wrote:
>> Both have cake at 100mbit only on egress, with dual-srchost on client
>> and dual-dsthost on server. With this setup (and probably previous
>> ones, I just didn’t test it this way), bi-directional fairness with
>> these flow counts works:
>> 
>> 	IP1 8-flow TCP up: 46.4
>> 	IP2 1-flow TCP up: 47.3
>> 	IP1 8-flow TCP down: 46.8
>> 	IP2 1-flow TCP down: 46.7
>> 
>> but with the original flow counts reported it’s still similarly
>> imbalanced as before:
>> 
>> 	IP1 8-flow TCP up: 82.9
>> 	IP2 1-flow TCP up: 10.9
>> 	IP1 1-flow TCP down: 10.8
>> 	IP2 8-flow TCP down: 83.3
> 
> I just tested on archlinux, latest 4.20 on the router, iproute2 4.19.0,
> using flent 1.2.2/netserver in a setup similar to Pete's:
> 
> client 1,2 <----> router <----> server
> 
> The results are the same with Pete's.

One more scenario to add, IP1: 1 up / 1 down, IP2: 1 up / 8 down. In the graphs, IP1 = host1 and IP2 = host2; sorry for the longer labels, and note that the position of the hosts changes.

dual keywords: https://www.heistp.net/downloads/fairness_1_1_1_8/bar_combine_fairness_1_1_1_8.svg

host keywords: https://www.heistp.net/downloads/fairness_1_1_1_8_host/bar_combine_fairness_1_1_1_8_host.svg

Also not what I’d expect, but host 2’s upload does get slowed down, even disproportionately, in response to the extra aggregate download he gets. Up and down are more balanced with the “host” keywords, but without flow fairness there’s higher inter-flow latency.

[-- Attachment #2: Type: text/html, Size: 3221 bytes --]


* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-03 13:02           ` Pete Heist
  2019-01-03 13:20             ` Toke Høiland-Jørgensen
@ 2019-01-04 11:34             ` Pete Heist
  2019-01-15 19:22               ` George Amanakis
  1 sibling, 1 reply; 29+ messages in thread
From: Pete Heist @ 2019-01-04 11:34 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Jonathan Morton, Cake List


> On Jan 3, 2019, at 2:02 PM, Pete Heist <pete@heistp.net> wrote:
> 
> I tried iperf3 in UDP mode, but cake is treating these flows aggressively. I get the impression that cake penalizes flows heavily that do not respond to congestion control signals. If I pit one 8 TCP flows against a single UDP flow at 40mbit, the UDP flow goes into a death spiral with increasing drops over time (iperf3 output attached).

Sigh, this spiraling was partly because iperf3 in UDP mode sends 8k buffers by default. If I use “-l 1472” with the iperf3 client, the send rates are the same, but the packet loss is much lower, without interplanetary. So one more result:

	IP1 1-flow TCP up: 49 - 59.5
	IP2 8-flow UDP 48-mbit up: 48 - 36 (loss 0% - 25%)
	IP1 8-flow UDP 48-mbit down: 47.5 - 35.8 (loss 0% - 25%)
	IP2 1-flow TCP down: 21.8 - 61.5

I do see the rates and loss gradually change over 60 seconds, so numbers are shown at t=0 and t=60 seconds.
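
For reference, the UDP senders here were invoked roughly like this (a sketch; the server address and per-flow rate are placeholders, and the exact options may have differed):

	iperf3 -c <server> -u -b 6M -l 1472 -t 60    # one instance per flow; -R for the download direction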

I’ve read that nuttcp does UDP bulk flows better than iperf3, so one day I may try that.


* [Cake]  dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-04 11:34             ` Pete Heist
@ 2019-01-15 19:22               ` George Amanakis
  2019-01-15 22:42                 ` Georgios Amanakis
  0 siblings, 1 reply; 29+ messages in thread
From: George Amanakis @ 2019-01-15 19:22 UTC (permalink / raw)
  To: cake


I think what is happening here is that if a client has flows such as "a
(bulk upload)" and "b (bulk download)", the incoming ACKs of flow "a"
compete with the incoming bulk traffic of flow "b". By "compete" I mean
in terms of flow selection.

So if we adjust the host_load to be the same as the bulk_flow_count of
*each* host, the problem seems to be resolved.
I drafted a patch below.

Pete's setup, tested with the patch (ingress in mbit/s):
IP1: 8down  49.18mbit/s
IP1: 1up    46.73mbit/s
IP2: 1down  47.39mbit/s
IP2: 8up    49.21mbit/s


---
 sch_cake.c | 34 ++++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/sch_cake.c b/sch_cake.c
index d434ae0..5c0f0e1 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -148,6 +148,7 @@ struct cake_host {
 	u32 dsthost_tag;
 	u16 srchost_refcnt;
 	u16 dsthost_refcnt;
+	u16 bulk_flow_count;
 };
 
 struct cake_heap_entry {
@@ -1897,10 +1898,10 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		q->last_packet_time = now;
 	}
 
+	struct cake_host *srchost = &b->hosts[flow->srchost];
+	struct cake_host *dsthost = &b->hosts[flow->dsthost];
 	/* flowchain */
 	if (!flow->set || flow->set == CAKE_SET_DECAYING) {
-		struct cake_host *srchost = &b->hosts[flow->srchost];
-		struct cake_host *dsthost = &b->hosts[flow->dsthost];
 		u16 host_load = 1;
 
 		if (!flow->set) {
@@ -1927,6 +1928,11 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		flow->set = CAKE_SET_BULK;
 		b->sparse_flow_count--;
 		b->bulk_flow_count++;
+		if (cake_dsrc(q->flow_mode))
+			srchost->bulk_flow_count++;
+
+		if (cake_ddst(q->flow_mode))
+			dsthost->bulk_flow_count++;
 	}
 
 	if (q->buffer_used > q->buffer_max_used)
@@ -2101,7 +2107,7 @@ retry:
 		host_load = max(host_load, srchost->srchost_refcnt);
 
 	if (cake_ddst(q->flow_mode))
-		host_load = max(host_load, dsthost->dsthost_refcnt);
+		host_load = max(host_load, dsthost->bulk_flow_count);
 
 	WARN_ON(host_load > CAKE_QUEUES);
 
@@ -2110,8 +2116,6 @@ retry:
 		/* The shifted prandom_u32() is a way to apply dithering to
 		 * avoid accumulating roundoff errors
 		 */
-		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-				  (prandom_u32() >> 16)) >> 16;
 		list_move_tail(&flow->flowchain, &b->old_flows);
 
 		/* Keep all flows with deficits out of the sparse and decaying
@@ -2122,6 +2126,11 @@ retry:
 			if (flow->head) {
 				b->sparse_flow_count--;
 				b->bulk_flow_count++;
+				if (cake_dsrc(q->flow_mode))
+					srchost->bulk_flow_count++;
+
+				if (cake_ddst(q->flow_mode))
+					dsthost->bulk_flow_count++;
 				flow->set = CAKE_SET_BULK;
 			} else {
 				/* we've moved it to the bulk rotation for
@@ -2131,6 +2140,8 @@ retry:
 				flow->set = CAKE_SET_SPARSE_WAIT;
 			}
 		}
+		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
+				  (prandom_u32() >> 16)) >> 16;
 		goto retry;
 	}
 
@@ -2151,6 +2162,11 @@ retry:
 					       &b->decaying_flows);
 				if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
+					if (cake_dsrc(q->flow_mode))
+						srchost->bulk_flow_count--;
+
+					if (cake_ddst(q->flow_mode))
+						dsthost->bulk_flow_count--;
 					b->decaying_flow_count++;
 				} else if (flow->set == CAKE_SET_SPARSE ||
 					   flow->set == CAKE_SET_SPARSE_WAIT) {
@@ -2164,8 +2180,14 @@ retry:
 				if (flow->set == CAKE_SET_SPARSE ||
 				    flow->set == CAKE_SET_SPARSE_WAIT)
 					b->sparse_flow_count--;
-				else if (flow->set == CAKE_SET_BULK)
+				else if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
+					if (cake_dsrc(q->flow_mode))
+						srchost->bulk_flow_count--;
+
+					if (cake_ddst(q->flow_mode))
+						dsthost->bulk_flow_count--;
+				}
 				else
 					b->decaying_flow_count--;
 
-- 
2.20.1



* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-15 19:22               ` George Amanakis
@ 2019-01-15 22:42                 ` Georgios Amanakis
  2019-01-16  3:34                   ` George Amanakis
  0 siblings, 1 reply; 29+ messages in thread
From: Georgios Amanakis @ 2019-01-15 22:42 UTC (permalink / raw)
  To: Cake List

[-- Attachment #1: Type: text/plain, Size: 5880 bytes --]

The patch I previously sent had the host_load manipulated only when
dual-dsthost is set, since that was what I was primarily testing. For
dual-srchost to behave the same way, line 2107 has to be changed too. I
will resubmit later today, in case anybody wants to test.
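
That is, the srchost line needs the same treatment as the dsthost one, roughly (a sketch of the intended change, mirroring the dsthost change in the previous patch):

	-	host_load = max(host_load, srchost->srchost_refcnt);
	+	host_load = max(host_load, srchost->bulk_flow_count);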

On Tue, Jan 15, 2019, 2:22 PM George Amanakis <gamanakis@gmail.com> wrote:

>
> I think what is happening here is that if a client has flows such as "a
> (bulk upload)" and "b (bulk download)", the incoming ACKs of flow "a"
> compete with the incoming bulk traffic on flow "b". With compete I mean
> in terms of flow selection.
>
> So if we adjust the host_load to be the same with the bulk_flow_count of
> *each* host, the problem seems to be resolved.
> I drafted a patch below.
>
> Pete's setup, tested with the patch (ingress in mbit/s):
> IP1: 8down  49.18mbit/s
> IP1: 1up    46.73mbit/s
> IP2: 1down  47.39mbit/s
> IP2: 8up    49.21mbit/s
>
>
> ---
>  sch_cake.c | 34 ++++++++++++++++++++++++++++------
>  1 file changed, 28 insertions(+), 6 deletions(-)
>
> diff --git a/sch_cake.c b/sch_cake.c
> index d434ae0..5c0f0e1 100644
> --- a/sch_cake.c
> +++ b/sch_cake.c
> @@ -148,6 +148,7 @@ struct cake_host {
>         u32 dsthost_tag;
>         u16 srchost_refcnt;
>         u16 dsthost_refcnt;
> +       u16 bulk_flow_count;
>  };
>
>  struct cake_heap_entry {
> @@ -1897,10 +1898,10 @@ static s32 cake_enqueue(struct sk_buff *skb,
> struct Qdisc *sch,
>                 q->last_packet_time = now;
>         }
>
> +       struct cake_host *srchost = &b->hosts[flow->srchost];
> +       struct cake_host *dsthost = &b->hosts[flow->dsthost];
>         /* flowchain */
>         if (!flow->set || flow->set == CAKE_SET_DECAYING) {
> -               struct cake_host *srchost = &b->hosts[flow->srchost];
> -               struct cake_host *dsthost = &b->hosts[flow->dsthost];
>                 u16 host_load = 1;
>
>                 if (!flow->set) {
> @@ -1927,6 +1928,11 @@ static s32 cake_enqueue(struct sk_buff *skb, struct
> Qdisc *sch,
>                 flow->set = CAKE_SET_BULK;
>                 b->sparse_flow_count--;
>                 b->bulk_flow_count++;
> +               if (cake_dsrc(q->flow_mode))
> +                       srchost->bulk_flow_count++;
> +
> +               if (cake_ddst(q->flow_mode))
> +                       dsthost->bulk_flow_count++;
>         }
>
>         if (q->buffer_used > q->buffer_max_used)
> @@ -2101,7 +2107,7 @@ retry:
>                 host_load = max(host_load, srchost->srchost_refcnt);
>
>         if (cake_ddst(q->flow_mode))
> -               host_load = max(host_load, dsthost->dsthost_refcnt);
> +               host_load = max(host_load, dsthost->bulk_flow_count);
>
>         WARN_ON(host_load > CAKE_QUEUES);
>
> @@ -2110,8 +2116,6 @@ retry:
>                 /* The shifted prandom_u32() is a way to apply dithering to
>                  * avoid accumulating roundoff errors
>                  */
> -               flow->deficit += (b->flow_quantum * quantum_div[host_load]
> +
> -                                 (prandom_u32() >> 16)) >> 16;
>                 list_move_tail(&flow->flowchain, &b->old_flows);
>
>                 /* Keep all flows with deficits out of the sparse and
> decaying
> @@ -2122,6 +2126,11 @@ retry:
>                         if (flow->head) {
>                                 b->sparse_flow_count--;
>                                 b->bulk_flow_count++;
> +                               if (cake_dsrc(q->flow_mode))
> +                                       srchost->bulk_flow_count++;
> +
> +                               if (cake_ddst(q->flow_mode))
> +                                       dsthost->bulk_flow_count++;
>                                 flow->set = CAKE_SET_BULK;
>                         } else {
>                                 /* we've moved it to the bulk rotation for
> @@ -2131,6 +2140,8 @@ retry:
>                                 flow->set = CAKE_SET_SPARSE_WAIT;
>                         }
>                 }
> +               flow->deficit += (b->flow_quantum * quantum_div[host_load]
> +
> +                                 (prandom_u32() >> 16)) >> 16;
>                 goto retry;
>         }
>
> @@ -2151,6 +2162,11 @@ retry:
>                                                &b->decaying_flows);
>                                 if (flow->set == CAKE_SET_BULK) {
>                                         b->bulk_flow_count--;
> +                                       if (cake_dsrc(q->flow_mode))
> +                                               srchost->bulk_flow_count--;
> +
> +                                       if (cake_ddst(q->flow_mode))
> +                                               dsthost->bulk_flow_count--;
>                                         b->decaying_flow_count++;
>                                 } else if (flow->set == CAKE_SET_SPARSE ||
>                                            flow->set ==
> CAKE_SET_SPARSE_WAIT) {
> @@ -2164,8 +2180,14 @@ retry:
>                                 if (flow->set == CAKE_SET_SPARSE ||
>                                     flow->set == CAKE_SET_SPARSE_WAIT)
>                                         b->sparse_flow_count--;
> -                               else if (flow->set == CAKE_SET_BULK)
> +                               else if (flow->set == CAKE_SET_BULK) {
>                                         b->bulk_flow_count--;
> +                                       if (cake_dsrc(q->flow_mode))
> +                                               srchost->bulk_flow_count--;
> +
> +                                       if (cake_ddst(q->flow_mode))
> +                                               dsthost->bulk_flow_count--;
> +                               }
>                                 else
>                                         b->decaying_flow_count--;
>
> --
> 2.20.1
>
>

[-- Attachment #2: Type: text/html, Size: 7738 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [Cake]  dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-15 22:42                 ` Georgios Amanakis
@ 2019-01-16  3:34                   ` George Amanakis
  2019-01-16  3:47                     ` gamanakis
  2019-01-18 10:06                     ` Toke Høiland-Jørgensen
  0 siblings, 2 replies; 29+ messages in thread
From: George Amanakis @ 2019-01-16  3:34 UTC (permalink / raw)
  To: cake

A better version of the patch for testing.

Setup:
IP{1,2}(flent) <----> Router <----> Server(netserver)

Router:
tc qdisc add dev enp1s0 root cake bandwidth 100mbit dual-srchost besteffort
tc qdisc add dev enp4s0 root cake bandwidth 100mbit dual-dsthost besteffort
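
(Per-run qdisc statistics can be checked with, e.g.,
tc -s qdisc show dev enp1s0.)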

IP1:
Data file written to ./tcp_8down-2019-01-15T222742.358874.flent.gz.
Summary of tcp_8down test run at 2019-01-16 03:27:42.358874:

                             avg       median          # data pts
 Ping (ms) ICMP   :         0.86         0.78 ms              342
 TCP download avg :         6.16         5.86 Mbits/s         301
 TCP download sum :        49.28        46.90 Mbits/s         301
 TCP download::1  :         6.23         5.86 Mbits/s         297
 TCP download::2  :         6.16         5.87 Mbits/s         297
 TCP download::3  :         6.15         5.87 Mbits/s         297
 TCP download::4  :         6.14         5.87 Mbits/s         297
 TCP download::5  :         6.15         5.87 Mbits/s         297
 TCP download::6  :         6.15         5.87 Mbits/s         297
 TCP download::7  :         6.15         5.87 Mbits/s         297
 TCP download::8  :         6.15         5.87 Mbits/s         297

Data file written to ./tcp_1up-2019-01-15T222743.387906.flent.gz.
Summary of tcp_1up test run at 2019-01-16 03:27:43.387906:

                           avg       median          # data pts
 Ping (ms) ICMP :         0.87         0.80 ms              343
 TCP upload     :        47.02        46.20 Mbits/s         265


IP2:
Data file written to ./tcp_1up-2019-01-15T222744.371050.flent.gz.
Summary of tcp_1up test run at 2019-01-16 03:27:44.371050:

                           avg       median          # data pts
 Ping (ms) ICMP :         0.89         0.77 ms              342
 TCP upload     :        46.89        46.36 Mbits/s         293
Data file written to ./tcp_8down-2019-01-15T222745.382941.flent.gz.
Summary of tcp_8down test run at 2019-01-16 03:27:45.382941:

                             avg       median          # data pts
 Ping (ms) ICMP   :         0.90         0.81 ms              343
 TCP download avg :         6.15         5.86 Mbits/s         301
 TCP download sum :        49.23        46.91 Mbits/s         301
 TCP download::1  :         6.15         5.87 Mbits/s         297
 TCP download::2  :         6.15         5.87 Mbits/s         297
 TCP download::3  :         6.15         5.87 Mbits/s         296
 TCP download::4  :         6.15         5.87 Mbits/s         297
 TCP download::5  :         6.15         5.87 Mbits/s         297
 TCP download::6  :         6.16         5.87 Mbits/s         297
 TCP download::7  :         6.16         5.87 Mbits/s         297
 TCP download::8  :         6.16         5.87 Mbits/s         297
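
The gist of the change, as a simplified sketch (not the literal diff
below): each cake_host gains a bulk_flow_count which is bumped when one
of its flows enters the bulk rotation and dropped when it leaves, and
the dequeue path derives host_load from that counter when replenishing
deficits, rather than from the refcount:

	host_load = 1;
	if (cake_dsrc(q->flow_mode))
		host_load = max(host_load, srchost->bulk_flow_count);

	if (cake_ddst(q->flow_mode))
		host_load = max(host_load, dsthost->bulk_flow_count);

	flow->deficit += (b->flow_quantum * quantum_div[host_load] +
			  (prandom_u32() >> 16)) >> 16;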



---
 sch_cake.c | 67 ++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 50 insertions(+), 17 deletions(-)

diff --git a/sch_cake.c b/sch_cake.c
index d434ae0..962a090 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -148,6 +148,7 @@ struct cake_host {
 	u32 dsthost_tag;
 	u16 srchost_refcnt;
 	u16 dsthost_refcnt;
+	u16 bulk_flow_count;
 };
 
 struct cake_heap_entry {
@@ -1921,12 +1922,22 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		flow->deficit = (b->flow_quantum *
 				 quantum_div[host_load]) >> 16;
 	} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
+		struct cake_host *srchost = &b->hosts[flow->srchost];
+		struct cake_host *dsthost = &b->hosts[flow->dsthost];
+
 		/* this flow was empty, accounted as a sparse flow, but actually
 		 * in the bulk rotation.
 		 */
 		flow->set = CAKE_SET_BULK;
 		b->sparse_flow_count--;
 		b->bulk_flow_count++;
+
+		if (cake_dsrc(q->flow_mode))
+			srchost->bulk_flow_count++;
+
+		if (cake_ddst(q->flow_mode))
+			dsthost->bulk_flow_count++;
+
 	}
 
 	if (q->buffer_used > q->buffer_max_used)
@@ -2097,23 +2108,8 @@ retry:
 	dsthost = &b->hosts[flow->dsthost];
 	host_load = 1;
 
-	if (cake_dsrc(q->flow_mode))
-		host_load = max(host_load, srchost->srchost_refcnt);
-
-	if (cake_ddst(q->flow_mode))
-		host_load = max(host_load, dsthost->dsthost_refcnt);
-
-	WARN_ON(host_load > CAKE_QUEUES);
-
 	/* flow isolation (DRR++) */
 	if (flow->deficit <= 0) {
-		/* The shifted prandom_u32() is a way to apply dithering to
-		 * avoid accumulating roundoff errors
-		 */
-		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-				  (prandom_u32() >> 16)) >> 16;
-		list_move_tail(&flow->flowchain, &b->old_flows);
-
 		/* Keep all flows with deficits out of the sparse and decaying
 		 * rotations.  No non-empty flow can go into the decaying
 		 * rotation, so they can't get deficits
@@ -2122,6 +2118,13 @@ retry:
 			if (flow->head) {
 				b->sparse_flow_count--;
 				b->bulk_flow_count++;
+
+				if (cake_dsrc(q->flow_mode))
+					srchost->bulk_flow_count++;
+
+				if (cake_ddst(q->flow_mode))
+					dsthost->bulk_flow_count++;
+
 				flow->set = CAKE_SET_BULK;
 			} else {
 				/* we've moved it to the bulk rotation for
@@ -2131,6 +2134,22 @@ retry:
 				flow->set = CAKE_SET_SPARSE_WAIT;
 			}
 		}
+
+		if (cake_dsrc(q->flow_mode))
+			host_load = max(host_load, srchost->bulk_flow_count);
+
+		if (cake_ddst(q->flow_mode))
+			host_load = max(host_load, dsthost->bulk_flow_count);
+
+		WARN_ON(host_load > CAKE_QUEUES);
+
+		/* The shifted prandom_u32() is a way to apply dithering to
+		 * avoid accumulating roundoff errors
+		 */
+		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
+				  (prandom_u32() >> 16)) >> 16;
+		list_move_tail(&flow->flowchain, &b->old_flows);
+
 		goto retry;
 	}
 
@@ -2151,6 +2170,13 @@ retry:
 					       &b->decaying_flows);
 				if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
+
+					if (cake_dsrc(q->flow_mode))
+						srchost->bulk_flow_count--;
+
+					if (cake_ddst(q->flow_mode))
+						dsthost->bulk_flow_count--;
+
 					b->decaying_flow_count++;
 				} else if (flow->set == CAKE_SET_SPARSE ||
 					   flow->set == CAKE_SET_SPARSE_WAIT) {
@@ -2164,9 +2190,16 @@ retry:
 				if (flow->set == CAKE_SET_SPARSE ||
 				    flow->set == CAKE_SET_SPARSE_WAIT)
 					b->sparse_flow_count--;
-				else if (flow->set == CAKE_SET_BULK)
+				else if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
-				else
+
+					if (cake_dsrc(q->flow_mode))
+						srchost->bulk_flow_count--;
+
+					if (cake_ddst(q->flow_mode))
+						dsthost->bulk_flow_count--;
+
+				} else
 					b->decaying_flow_count--;
 
 				flow->set = CAKE_SET_NONE;
-- 
2.20.1


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-16  3:34                   ` George Amanakis
@ 2019-01-16  3:47                     ` gamanakis
  2019-01-16  7:58                       ` Pete Heist
  2019-01-26  7:35                       ` Pete Heist
  2019-01-18 10:06                     ` Toke Høiland-Jørgensen
  1 sibling, 2 replies; 29+ messages in thread
From: gamanakis @ 2019-01-16  3:47 UTC (permalink / raw)
  To: cake

[-- Attachment #1: Type: text/plain, Size: 2738 bytes --]

Of course I pasted the results for IP1 and IP2 the wrong way. Sorry!
These are the correct results, along with the *.flent.gz files.

IP1: 
flent -H 192.168.1.2 tcp_8down &
Data file written to ./tcp_8down-2019-01-15T223703.709305.flent.gz.
Summary of tcp_8down test run at 2019-01-16 03:37:03.709305:

                             avg       median          # data pts
 Ping (ms) ICMP   :         0.78         0.72 ms              342
 TCP download avg :         6.03         5.83 Mbits/s         301
 TCP download sum :        48.24        46.65 Mbits/s         301
 TCP download::1  :         6.03         5.83 Mbits/s         298
 TCP download::2  :         6.03         5.83 Mbits/s         297
 TCP download::3  :         6.03         5.83 Mbits/s         297
 TCP download::4  :         6.03         5.83 Mbits/s         298
 TCP download::5  :         6.03         5.83 Mbits/s         298
 TCP download::6  :         6.03         5.83 Mbits/s         298
 TCP download::7  :         6.03         5.83 Mbits/s         297
 TCP download::8  :         6.03         5.83 Mbits/s         298


flent -H 192.168.1.2 tcp_1up &
Data file written to ./tcp_1up-2019-01-15T223704.711193.flent.gz.
Summary of tcp_1up test run at 2019-01-16 03:37:04.711193:

                           avg       median          # data pts
 Ping (ms) ICMP :         0.79         0.73 ms              342
 TCP upload     :        48.12        46.69 Mbits/s         294



IP2:
flent -H 192.168.1.2 tcp_1down &
Data file written to ./tcp_1down-2019-01-15T223705.693550.flent.gz.
Summary of tcp_1down test run at 2019-01-16 03:37:05.693550:

                           avg       median          # data pts
 Ping (ms) ICMP :         0.77         0.69 ms              341
 TCP download   :        48.10        46.65 Mbits/s         300


flent -H 192.168.1.2 tcp_8up &
Data file written to ./tcp_8up-2019-01-15T223706.706614.flent.gz.
Summary of tcp_8up test run at 2019-01-16 03:37:06.706614:

                           avg       median          # data pts
 Ping (ms) ICMP :         0.74         0.70 ms              341
 TCP upload avg :         6.03         5.83 Mbits/s         301
 TCP upload sum :        48.25        46.63 Mbits/s         301
 TCP upload::1  :         6.04         5.86 Mbits/s         226
 TCP upload::2  :         6.03         5.86 Mbits/s         226
 TCP upload::3  :         6.03         5.86 Mbits/s         226
 TCP upload::4  :         6.03         5.86 Mbits/s         225
 TCP upload::5  :         6.03         5.86 Mbits/s         226
 TCP upload::6  :         6.03         5.86 Mbits/s         226
 TCP upload::7  :         6.03         5.78 Mbits/s         220
 TCP upload::8  :         6.03         5.88 Mbits/s         277



[-- Attachment #2: tcp_8up-2019-01-15T223706.706614.flent.gz --]
[-- Type: application/octet-stream, Size: 51207 bytes --]

[-- Attachment #3: tcp_8down-2019-01-15T223703.709305.flent.gz --]
[-- Type: application/octet-stream, Size: 43192 bytes --]

[-- Attachment #4: tcp_1up-2019-01-15T223704.711193.flent.gz --]
[-- Type: application/octet-stream, Size: 16179 bytes --]

[-- Attachment #5: tcp_1down-2019-01-15T223705.693550.flent.gz --]
[-- Type: application/octet-stream, Size: 15376 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-16  3:47                     ` gamanakis
@ 2019-01-16  7:58                       ` Pete Heist
  2019-01-26  7:35                       ` Pete Heist
  1 sibling, 0 replies; 29+ messages in thread
From: Pete Heist @ 2019-01-16  7:58 UTC (permalink / raw)
  To: gamanakis; +Cc: cake, chromatix99, toke

Thanks for working on it, it looks promising! I’d be interested in hearing some more feedback on whether this is the right approach, but the experiments suggest it is. I should be able to put some more testing time into it in a few days...

> On Jan 16, 2019, at 4:47 AM, <gamanakis@gmail.com> <gamanakis@gmail.com> wrote:
> 
> Of course I pasted the results for IP1 and IP2 the wrong way. Sorry!
> These are the correct results, along with the *.flent.gz files.
> 
> IP1: 
> flent -H 192.168.1.2 tcp_8down &
> Data file written to ./tcp_8down-2019-01-15T223703.709305.flent.gz.
> Summary of tcp_8down test run at 2019-01-16 03:37:03.709305:
> 
>                             avg       median          # data pts
> Ping (ms) ICMP   :         0.78         0.72 ms              342
> TCP download avg :         6.03         5.83 Mbits/s         301
> TCP download sum :        48.24        46.65 Mbits/s         301
> TCP download::1  :         6.03         5.83 Mbits/s         298
> TCP download::2  :         6.03         5.83 Mbits/s         297
> TCP download::3  :         6.03         5.83 Mbits/s         297
> TCP download::4  :         6.03         5.83 Mbits/s         298
> TCP download::5  :         6.03         5.83 Mbits/s         298
> TCP download::6  :         6.03         5.83 Mbits/s         298
> TCP download::7  :         6.03         5.83 Mbits/s         297
> TCP download::8  :         6.03         5.83 Mbits/s         298
> 
> 
> flent -H 192.168.1.2 tcp_1up &
> Data file written to ./tcp_1up-2019-01-15T223704.711193.flent.gz.
> Summary of tcp_1up test run at 2019-01-16 03:37:04.711193:
> 
>                           avg       median          # data pts
> Ping (ms) ICMP :         0.79         0.73 ms              342
> TCP upload     :        48.12        46.69 Mbits/s         294
> 
> 
> 
> IP2:
> flent -H 192.168.1.2 tcp_1down &
> Data file written to ./tcp_1down-2019-01-15T223705.693550.flent.gz.
> Summary of tcp_1down test run at 2019-01-16 03:37:05.693550:
> 
>                           avg       median          # data pts
> Ping (ms) ICMP :         0.77         0.69 ms              341
> TCP download   :        48.10        46.65 Mbits/s         300
> 
> 
> flent -H 192.168.1.2 tcp_8up &
> Data file written to ./tcp_8up-2019-01-15T223706.706614.flent.gz.
> Summary of tcp_8up test run at 2019-01-16 03:37:06.706614:
> 
>                           avg       median          # data pts
> Ping (ms) ICMP :         0.74         0.70 ms              341
> TCP upload avg :         6.03         5.83 Mbits/s         301
> TCP upload sum :        48.25        46.63 Mbits/s         301
> TCP upload::1  :         6.04         5.86 Mbits/s         226
> TCP upload::2  :         6.03         5.86 Mbits/s         226
> TCP upload::3  :         6.03         5.86 Mbits/s         226
> TCP upload::4  :         6.03         5.86 Mbits/s         225
> TCP upload::5  :         6.03         5.86 Mbits/s         226
> TCP upload::6  :         6.03         5.86 Mbits/s         226
> TCP upload::7  :         6.03         5.78 Mbits/s         220
> TCP upload::8  :         6.03         5.88 Mbits/s         277
> 
> 
> <tcp_8up-2019-01-15T223706.706614.flent.gz><tcp_8down-2019-01-15T223703.709305.flent.gz><tcp_1up-2019-01-15T223704.711193.flent.gz><tcp_1down-2019-01-15T223705.693550.flent.gz>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-16  3:34                   ` George Amanakis
  2019-01-16  3:47                     ` gamanakis
@ 2019-01-18 10:06                     ` Toke Høiland-Jørgensen
  2019-01-18 12:07                       ` Georgios Amanakis
  1 sibling, 1 reply; 29+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-18 10:06 UTC (permalink / raw)
  To: George Amanakis, cake

George Amanakis <gamanakis@gmail.com> writes:

> A better version of the patch for testing.

So basically, you're changing the host fairness algorithm to only
consider bulk flows instead of all active flows to that host, right?
Seems reasonable to me. Jonathan, any opinion?

-Toke

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-18 10:06                     ` Toke Høiland-Jørgensen
@ 2019-01-18 12:07                       ` Georgios Amanakis
  2019-01-18 13:33                         ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 29+ messages in thread
From: Georgios Amanakis @ 2019-01-18 12:07 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Cake List, Pete Heist, chromatix99

[-- Attachment #1: Type: text/plain, Size: 571 bytes --]

Yes, exactly. Would be interesting to hear what Jonathan, Toke and others
think. I want to see if fairness is preserved in this case with sparse
flows only. Could flent do this?

On Fri, Jan 18, 2019, 5:07 AM Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> George Amanakis <gamanakis@gmail.com> writes:
>
> > A better version of the patch for testing.
>
> So basically, you're changing the host fairness algorithm to only
> consider bulk flows instead of all active flows to that host, right?
> Seems reasonable to me. Jonathan, any opinion?
>
> -Toke
>

[-- Attachment #2: Type: text/html, Size: 981 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-18 12:07                       ` Georgios Amanakis
@ 2019-01-18 13:33                         ` Toke Høiland-Jørgensen
  2019-01-18 13:40                           ` Sebastian Moeller
  2019-01-18 13:45                           ` Jonathan Morton
  0 siblings, 2 replies; 29+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-18 13:33 UTC (permalink / raw)
  To: Georgios Amanakis; +Cc: Cake List

Georgios Amanakis <gamanakis@gmail.com> writes:

> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
> others think. I want to see if fairness is preserved in this case with
> sparse flows only. Could flent do this?

Well, sparse flows are (by definition) not building a queue, so it
doesn't really make sense to talk about fairness for them. How would you
measure that?

This is also the reason I agree that they shouldn't be counted for host
fairness calculation purposes, BTW...

-Toke

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-18 13:33                         ` Toke Høiland-Jørgensen
@ 2019-01-18 13:40                           ` Sebastian Moeller
  2019-01-18 14:30                             ` Toke Høiland-Jørgensen
  2019-01-18 13:45                           ` Jonathan Morton
  1 sibling, 1 reply; 29+ messages in thread
From: Sebastian Moeller @ 2019-01-18 13:40 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Georgios Amanakis, Cake List

Hi Toke,

> On Jan 18, 2019, at 14:33, Toke Høiland-Jørgensen <toke@redhat.com> wrote:
> 
> Georgios Amanakis <gamanakis@gmail.com> writes:
> 
>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>> others think. I want to see if fairness is preserved in this case with
>> sparse flows only. Could flent do this?
> 
> Well, sparse flows are (by definition) not building a queue, so it
> doesn't really make sense to talk about fairness for them. How would you
> measure that?
> 
> This is also the reason I agree that they shouldn't be counted for host
> fairness calculation purposes, BTW...

	That leads to a question (revealing my lack of detailed knowledge): if there is a sufficient number of new flows (which should qualify as new/sparse) such that servicing all of them takes longer than each queue takes to accumulate new packets, at what point are these flows considered "unworthy" of the sparse-flow boost? Or, put differently, how is cake going to deal with a UDP flood in which the 5-tuple hash is different for every packet (say by spoofing ports or randomly picking dst addresses)?

Best Regards


> 
> -Toke
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-18 13:33                         ` Toke Høiland-Jørgensen
  2019-01-18 13:40                           ` Sebastian Moeller
@ 2019-01-18 13:45                           ` Jonathan Morton
  2019-01-18 14:32                             ` Toke Høiland-Jørgensen
  1 sibling, 1 reply; 29+ messages in thread
From: Jonathan Morton @ 2019-01-18 13:45 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: Georgios Amanakis, Cake List

>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>> others think. I want to see if fairness is preserved in this case with
>> sparse flows only. Could flent do this?
> 
> Well, sparse flows are (by definition) not building a queue, so it
> doesn't really make sense to talk about fairness for them. How would you
> measure that?
> 
> This is also the reason I agree that they shouldn't be counted for host
> fairness calculation purposes, BTW...

The trick is that we need to keep fairness of the deficit replenishments, which occur for sparse flows as well as bulk ones, but in smaller amounts.  The number of active flows is presently the stand-in for this.  It's possible to have a host backlogged with hundreds of new flows which are, by definition, sparse.
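
(For reference, quantum_div[] looks like a reciprocal table, so each
such replenishment works out to roughly b->flow_quantum / host_load
plus dithering -- the open question is just which flows host_load
should count.)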

I'm still trying to get my head around how the modified code works in detail.  It's possible that a different implementation would either be more concise and readable, or better model what is actually needed.  But I can't tell until I grok it.

 - Jonathan Morton


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-18 13:40                           ` Sebastian Moeller
@ 2019-01-18 14:30                             ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 29+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-18 14:30 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: Georgios Amanakis, Cake List

Sebastian Moeller <moeller0@gmx.de> writes:

> Hi Toke,
>
>> On Jan 18, 2019, at 14:33, Toke Høiland-Jørgensen <toke@redhat.com> wrote:
>> 
>> Georgios Amanakis <gamanakis@gmail.com> writes:
>> 
>>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>>> others think. I want to see if fairness is preserved in this case with
>>> sparse flows only. Could flent do this?
>> 
>> Well, sparse flows are (by definition) not building a queue, so it
>> doesn't really make sense to talk about fairness for them. How would you
>> measure that?
>> 
>> This is also the reason I agree that they shouldn't be counted for host
>> fairness calculation purposes, BTW...
>
> That leads to a question (revealing my lack of detailed knowledge):
> if there is a sufficient number of new flows (which should qualify as
> new/sparse) such that servicing all of them takes longer than each
> queue takes to accumulate new packets, at what point are these flows
> considered "unworthy" of the sparse-flow boost? Or, put differently,
> how is cake going to deal with a UDP flood in which the 5-tuple hash
> is different for every packet (say by spoofing ports or randomly
> picking dst addresses)?

Well, what is considered a sparse flow is a function of the flow rate
itself *as well as* the link rate, the number of competing flows, etc.
So a flow can be sparse on one link, but turn into a bulk flow on
another because that link has less capacity.
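
As a very rough rule of thumb (my paraphrase, not the exact condition
from the paper): a flow stays sparse as long as it delivers less than
about one quantum of data per scheduling round, which for MTU-sized
packets works out to something like flow_rate < link_rate / n_active.
Shrink the link or add competing flows and the same sender crosses
over into bulk.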

I explore this in some detail here:
https://doi.org/10.1109/LCOMM.2018.2871457

-Toke

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-18 13:45                           ` Jonathan Morton
@ 2019-01-18 14:32                             ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 29+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-18 14:32 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: Georgios Amanakis, Cake List

Jonathan Morton <chromatix99@gmail.com> writes:

>>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>>> others think. I want to see if fairness is preserved in this case with
>>> sparse flows only. Could flent do this?
>> 
>> Well, sparse flows are (by definition) not building a queue, so it
>> doesn't really make sense to talk about fairness for them. How would you
>> measure that?
>> 
>> This is also the reason I agree that they shouldn't be counted for host
>> fairness calculation purposes, BTW...
>
> The trick is that we need to keep fairness of the deficit
> replenishments, which occur for sparse flows as well as bulk ones, but
> in smaller amounts. The number of active flows is presently the
> stand-in for this. It's possible to have a host backlogged with
> hundreds of new flows which are, by definition, sparse.

Right, there's some care needed to ensure we don't get weird behaviour
during transients such as flow startup.

> I'm still trying to get my head around how the modified code works in
> detail.  It's possible that a different implementation would either be
> more concise and readable, or better model what is actually needed.
> But I can't tell until I grok it.

Cool, good to know you are on it; I'm happy to wait until you've had
some time to form an opinion on this :)

-Toke

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-16  3:47                     ` gamanakis
  2019-01-16  7:58                       ` Pete Heist
@ 2019-01-26  7:35                       ` Pete Heist
  2019-01-28  1:34                         ` Georgios Amanakis
  1 sibling, 1 reply; 29+ messages in thread
From: Pete Heist @ 2019-01-26  7:35 UTC (permalink / raw)
  To: gamanakis; +Cc: Cake List, Jonathan Morton, toke

I ran my original iperf3 test with and without the patch, through my one-armed router with hfsc+cake on egress each direction at 100Mbit:

Unpatched:

IP1 1-flow TCP up: 11.3
IP2 8-flow TCP up: 90.1
IP1 8-flow TCP down: 89.8
IP2 1-flow TCP down: 11.3
Jain’s fairness index, directional: 0.623 up, 0.631 down
Jain’s fairness index, aggregate: 0.997

Patched:

IP1 1-flow TCP up: 51.0
IP2 8-flow TCP up: 51.0
IP1 8-flow TCP down: 50.7
IP2 1-flow TCP down: 50.6
Jain’s fairness index, directional: 1.0 up, 0.999 down
Jain’s fairness index, aggregate: 0.999
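
(Jain’s index computed as (sum x_i)^2 / (n * sum x_i^2); e.g. the
unpatched upload pair gives (11.3 + 90.1)^2 / (2 * (11.3^2 + 90.1^2))
≈ 0.623.)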

So this confirms George’s result. :)

Obviously if we look at _aggregate_ fairness it’s essentially the same in both cases. I think directional fairness is what users would expect though.

Can anyone think of any potentially pathological cases from considering only bulk flows for fairness, that I can test? Otherwise, I’d like to see this idea taken in...

> On Jan 16, 2019, at 4:47 AM, gamanakis@gmail.com wrote:
> 
> Of course I pasted the results for IP1 and IP2 the wrong way. Sorry!
> These are the correct results, along with the *.flent.gz files.
> 
> IP1: 
> flent -H 192.168.1.2 tcp_8down &
> Data file written to ./tcp_8down-2019-01-15T223703.709305.flent.gz.
> Summary of tcp_8down test run at 2019-01-16 03:37:03.709305:
> 
>                             avg       median          # data pts
> Ping (ms) ICMP   :         0.78         0.72 ms              342
> TCP download avg :         6.03         5.83 Mbits/s         301
> TCP download sum :        48.24        46.65 Mbits/s         301
> TCP download::1  :         6.03         5.83 Mbits/s         298
> TCP download::2  :         6.03         5.83 Mbits/s         297
> TCP download::3  :         6.03         5.83 Mbits/s         297
> TCP download::4  :         6.03         5.83 Mbits/s         298
> TCP download::5  :         6.03         5.83 Mbits/s         298
> TCP download::6  :         6.03         5.83 Mbits/s         298
> TCP download::7  :         6.03         5.83 Mbits/s         297
> TCP download::8  :         6.03         5.83 Mbits/s         298
> 
> 
> flent -H 192.168.1.2 tcp_1up &
> Data file written to ./tcp_1up-2019-01-15T223704.711193.flent.gz.
> Summary of tcp_1up test run at 2019-01-16 03:37:04.711193:
> 
>                           avg       median          # data pts
> Ping (ms) ICMP :         0.79         0.73 ms              342
> TCP upload     :        48.12        46.69 Mbits/s         294
> 
> 
> 
> IP2:
> flent -H 192.168.1.2 tcp_1down &
> Data file written to ./tcp_1down-2019-01-15T223705.693550.flent.gz.
> Summary of tcp_1down test run at 2019-01-16 03:37:05.693550:
> 
>                           avg       median          # data pts
> Ping (ms) ICMP :         0.77         0.69 ms              341
> TCP download   :        48.10        46.65 Mbits/s         300
> 
> 
> flent -H 192.168.1.2 tcp_8up &
> Data file written to ./tcp_8up-2019-01-15T223706.706614.flent.gz.
> Summary of tcp_8up test run at 2019-01-16 03:37:06.706614:
> 
>                           avg       median          # data pts
> Ping (ms) ICMP :         0.74         0.70 ms              341
> TCP upload avg :         6.03         5.83 Mbits/s         301
> TCP upload sum :        48.25        46.63 Mbits/s         301
> TCP upload::1  :         6.04         5.86 Mbits/s         226
> TCP upload::2  :         6.03         5.86 Mbits/s         226
> TCP upload::3  :         6.03         5.86 Mbits/s         226
> TCP upload::4  :         6.03         5.86 Mbits/s         225
> TCP upload::5  :         6.03         5.86 Mbits/s         226
> TCP upload::6  :         6.03         5.86 Mbits/s         226
> TCP upload::7  :         6.03         5.78 Mbits/s         220
> TCP upload::8  :         6.03         5.88 Mbits/s         277
> 
> 
> <tcp_8up-2019-01-15T223706.706614.flent.gz><tcp_8down-2019-01-15T223703.709305.flent.gz><tcp_1up-2019-01-15T223704.711193.flent.gz><tcp_1down-2019-01-15T223705.693550.flent.gz>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic
  2019-01-26  7:35                       ` Pete Heist
@ 2019-01-28  1:34                         ` Georgios Amanakis
  0 siblings, 0 replies; 29+ messages in thread
From: Georgios Amanakis @ 2019-01-28  1:34 UTC (permalink / raw)
  To: Pete Heist; +Cc: Cake List, Jonathan Morton, Toke Høiland-Jørgensen

Thanks for testing, Pete! I should note, though, that the patch is
incorrect in terms of triple-isolate. It can be further improved by
differentiating between srchost and dsthost. The results are the same
nevertheless. The same principle can also be applied to the sparse
flows.

However, I completely understand Jonathan when he says that this might
not be the optimal solution, and perhaps a different model of
flow-selection is necessary (e.g. doing exactly what the man page
says: first decide based on host priority, and then based on priority
among the flows of that host).
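
Very roughly, I picture that model as something like this (hypothetical
pseudocode with made-up helper names, just to illustrate the two-level
selection -- not a patch against the current sch_cake structures):

	/* host-level DRR round first... */
	host = next_host_with_positive_deficit(b);
	if (!host) {
		replenish_all_active_hosts(b);
		host = next_host_with_positive_deficit(b);
	}

	/* ...then a flow-level DRR round within the chosen host */
	flow = next_flow_with_positive_deficit(host);
	if (!flow) {
		replenish_all_flows_of(host);
		flow = next_flow_with_positive_deficit(host);
	}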

On Sat, Jan 26, 2019 at 2:35 AM Pete Heist <pete@heistp.net> wrote:
>
> I ran my original iperf3 test with and without the patch, through my one-armed router with hfsc+cake on egress each direction at 100Mbit:
>
> Unpatched:
>
> IP1 1-flow TCP up: 11.3
> IP2 8-flow TCP up: 90.1
> IP1 8-flow TCP down: 89.8
> IP2 1-flow TCP down: 11.3
> Jain’s fairness index, directional: 0.623 up, 0.631 down
> Jain’s fairness index, aggregate: 0.997
>
> Patched:
>
> IP1 1-flow TCP up: 51.0
> IP2 8-flow TCP up: 51.0
> IP1 8-flow TCP down: 50.7
> IP2 1-flow TCP down: 50.6
> Jain’s fairness index, directional: 1.0 up, 0.999 down
> Jain’s fairness index, aggregate: 0.999
>
> So this confirms George’s result. :)
>
> Obviously if we look at _aggregate_ fairness it’s essentially the same in both cases. I think directional fairness is what users would expect though.
>
> Can anyone think of any potentially pathological cases from considering only bulk flows for fairness, that I can test? Otherwise, I’d like to see this idea taken in...
>
> > On Jan 16, 2019, at 4:47 AM, gamanakis@gmail.com wrote:
> >
> > Of course I pasted the results for IP1 and IP2 the wrong way. Sorry!
> > These are the correct results, along with the *.flent.gz files.
> >
> > IP1:
> > flent -H 192.168.1.2 tcp_8down &
> > Data file written to ./tcp_8down-2019-01-15T223703.709305.flent.gz.
> > Summary of tcp_8down test run at 2019-01-16 03:37:03.709305:
> >
> >                             avg       median          # data pts
> > Ping (ms) ICMP   :         0.78         0.72 ms              342
> > TCP download avg :         6.03         5.83 Mbits/s         301
> > TCP download sum :        48.24        46.65 Mbits/s         301
> > TCP download::1  :         6.03         5.83 Mbits/s         298
> > TCP download::2  :         6.03         5.83 Mbits/s         297
> > TCP download::3  :         6.03         5.83 Mbits/s         297
> > TCP download::4  :         6.03         5.83 Mbits/s         298
> > TCP download::5  :         6.03         5.83 Mbits/s         298
> > TCP download::6  :         6.03         5.83 Mbits/s         298
> > TCP download::7  :         6.03         5.83 Mbits/s         297
> > TCP download::8  :         6.03         5.83 Mbits/s         298
> >
> >
> > flent -H 192.168.1.2 tcp_1up &
> > Data file written to ./tcp_1up-2019-01-15T223704.711193.flent.gz.
> > Summary of tcp_1up test run at 2019-01-16 03:37:04.711193:
> >
> >                           avg       median          # data pts
> > Ping (ms) ICMP :         0.79         0.73 ms              342
> > TCP upload     :        48.12        46.69 Mbits/s         294
> >
> >
> >
> > IP2:
> > flent -H 192.168.1.2 tcp_1down &
> > Data file written to ./tcp_1down-2019-01-15T223705.693550.flent.gz.
> > Summary of tcp_1down test run at 2019-01-16 03:37:05.693550:
> >
> >                           avg       median          # data pts
> > Ping (ms) ICMP :         0.77         0.69 ms              341
> > TCP download   :        48.10        46.65 Mbits/s         300
> >
> >
> > flent -H 192.168.1.2 tcp_8up &
> > Data file written to ./tcp_8up-2019-01-15T223706.706614.flent.gz.
> > Summary of tcp_8up test run at 2019-01-16 03:37:06.706614:
> >
> >                           avg       median          # data pts
> > Ping (ms) ICMP :         0.74         0.70 ms              341
> > TCP upload avg :         6.03         5.83 Mbits/s         301
> > TCP upload sum :        48.25        46.63 Mbits/s         301
> > TCP upload::1  :         6.04         5.86 Mbits/s         226
> > TCP upload::2  :         6.03         5.86 Mbits/s         226
> > TCP upload::3  :         6.03         5.86 Mbits/s         226
> > TCP upload::4  :         6.03         5.86 Mbits/s         225
> > TCP upload::5  :         6.03         5.86 Mbits/s         226
> > TCP upload::6  :         6.03         5.86 Mbits/s         226
> > TCP upload::7  :         6.03         5.78 Mbits/s         220
> > TCP upload::8  :         6.03         5.88 Mbits/s         277
> >
> >
> > <tcp_8up-2019-01-15T223706.706614.flent.gz><tcp_8down-2019-01-15T223703.709305.flent.gz><tcp_1up-2019-01-15T223704.711193.flent.gz><tcp_1down-2019-01-15T223705.693550.flent.gz>
>

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2019-01-28  1:34 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-01 23:04 [Cake] dual-src/dsthost unfairness, only with bi-directional traffic Pete Heist
2019-01-03  3:57 ` Georgios Amanakis
2019-01-03  4:15   ` Georgios Amanakis
2019-01-03  5:18     ` Jonathan Morton
2019-01-03 10:46       ` Pete Heist
2019-01-03 11:03         ` Toke Høiland-Jørgensen
2019-01-03 13:02           ` Pete Heist
2019-01-03 13:20             ` Toke Høiland-Jørgensen
2019-01-03 16:35               ` Pete Heist
2019-01-03 18:24                 ` Georgios Amanakis
2019-01-03 22:06                 ` Pete Heist
2019-01-04  2:08                   ` Georgios Amanakis
2019-01-04  8:09                     ` Pete Heist
2019-01-04  7:37                   ` Pete Heist
2019-01-04 11:34             ` Pete Heist
2019-01-15 19:22               ` George Amanakis
2019-01-15 22:42                 ` Georgios Amanakis
2019-01-16  3:34                   ` George Amanakis
2019-01-16  3:47                     ` gamanakis
2019-01-16  7:58                       ` Pete Heist
2019-01-26  7:35                       ` Pete Heist
2019-01-28  1:34                         ` Georgios Amanakis
2019-01-18 10:06                     ` Toke Høiland-Jørgensen
2019-01-18 12:07                       ` Georgios Amanakis
2019-01-18 13:33                         ` Toke Høiland-Jørgensen
2019-01-18 13:40                           ` Sebastian Moeller
2019-01-18 14:30                             ` Toke Høiland-Jørgensen
2019-01-18 13:45                           ` Jonathan Morton
2019-01-18 14:32                             ` Toke Høiland-Jørgensen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox