I think I found out why this didn't show up with the veth scripts: dslreports.com generates traffic that cake classifies as "bulk". I modified rrul_be_nflows.conf to classify all generated traffic as CS1 and ran a simulation in which one client downloads simultaneously from 4 servers, 24 flows each. In this case the latency is huge, ~1000ms. If the traffic is classified as best-effort (CS0) instead, the latency is <50ms.

tc -s qdisc stats:

qdisc cake 1: dev mbox.l root refcnt 2 bandwidth 1800Kbit diffserv3 triple-isolate ack-filter rtt 100.0ms raw
 Sent 5750764 bytes 75026 pkt (dropped 64, overlimits 10957 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 45324b of 4Mb
 capacity estimate: 1800Kbit
                 Bulk  Best Effort      Voice
  thresh    112496bit     1800Kbit    450Kbit
  target      161.5ms       10.1ms     40.4ms
  interval    323.0ms      105.1ms    135.4ms
  pk_delay      278us        237us        1us
  av_delay       26us         66us        0us
  sp_delay        0us          2us        0us
  pkts          33229        41857          4
  bytes       2526174      3228646        168
  way_inds       1850         2538          0
  way_miss        121          355          1
  way_cols          0            0          0
  drops             0            0          0
  marks             0            0          0
  ack_drop         43           21          0
  sp_flows          0            1          1
  bk_flows          0            0          0
  un_flows          0            0          0
  max_len          94          722         42

qdisc cake 1: dev mbox.r root refcnt 2 bandwidth 9Mbit diffserv3 triple-isolate ingress rtt 100.0ms raw
 Sent 107963942 bytes 77681 pkt (dropped 24977, overlimits 161469 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 4199760b of 4Mb
 capacity estimate: 9Mbit
                 Bulk  Best Effort      Voice
  thresh    562496bit        9Mbit   2250Kbit
  target       32.3ms        5.0ms      8.1ms
  interval    127.3ms      100.0ms    103.1ms
  pk_delay       1.9s       14.5ms      754us
  av_delay    926.5ms        4.6ms       14us
  sp_delay     11.0ms          1us       11us
  pkts          48809        53846          3
  bytes      73487282     72197216        126
  way_inds       1894         1763          0
  way_miss        100          348          1
  way_cols          0            0          0
  drops         12843        12134          0
  marks             0            0          0
  ack_drop          0            0          0
  sp_flows         96            1          1
  bk_flows          0            0          0
  un_flows         96            0          0
  max_len        3028         3028         42
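For reference, here is a stand-alone sketch of the over_target check in cobalt_should_drop() that I refer to in my mail quoted below. This is not the kernel code: the "pre" variant is just the plain sojourn > p->target test, the "dynamic" variant only approximates what 0d8f30 does (scaling the effective target with the bulk flow count and the per-MTU serialisation time), and the numbers in main() are made up for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;          /* times in nanoseconds, as in cobalt */

struct cobalt_params {
	u64 target;            /* static sojourn target of the tin */
	u64 mtu_time;          /* time to serialise one MTU at the tin rate */
};

/* pre-0d8f30: a packet is "over target" as soon as its sojourn time
 * exceeds the static target */
static bool over_target_pre(u64 sojourn, const struct cobalt_params *p)
{
	return sojourn > p->target;
}

/* post-0d8f30 (approximation only): the effective target also scales
 * with the number of bulk flows times the per-MTU serialisation time,
 * so a tin carrying many flows tolerates a much larger standing queue
 * before the AQM reacts */
static bool over_target_dyn(u64 sojourn, const struct cobalt_params *p,
			    unsigned int bulk_flows)
{
	return sojourn > p->target &&
	       sojourn > p->mtu_time * bulk_flows * 2;
}

int main(void)
{
	const u64 MS = 1000000ULL;
	/* illustrative numbers: 5ms target, ~1.3ms to serialise one
	 * 1514-byte packet at ~9Mbit, 96 flows */
	struct cobalt_params p = { .target = 5 * MS, .mtu_time = 1350000ULL };
	u64 sojourn = 200 * MS;

	printf("pre-0d8f30:  over_target = %d\n", over_target_pre(sojourn, &p));
	printf("post-0d8f30: over_target = %d (96 flows)\n",
	       over_target_dyn(sojourn, &p, 96));
	return 0;
}

With many flows this (approximate) dynamic check stays false for sojourn times well above the static target, which would fit the latency I am seeing; reverting the check to the "pre" form is what brings the pings back down in my tests.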
George

On Mon, 2017-12-25 at 00:35 -0500, George Amanakis wrote:
> Dear All,
>
> merry christmas to the list members!
>
> I was doing some real-world testing using dslreports.com/speedtest with
> my 10/2mbps comcast cable line, and found out by bisecting that
> commit 0d8f30faa3d4bb2bc87a382f18d8e0f3e4e56ea (Dynamically adjust
> sojourn target according to flow count and serialisation delay)
> deteriorates latency even in ingress mode.
>
> In real-world, with 16 flows downloading from dslreports.com ping times
> are >500ms. This has never been the case with ingress mode before. I
> don't know yet why this didn't show up with the veth scripts.
>
> In sch_cake.c changing over_target to its pre 0d8f30 value
> "sojourn > p->target" brings ping times down to <50ms.
>
> tc -s qdisc show:
> qdisc cake 801f: dev ens4 root refcnt 2 bandwidth 12200Kbit diffserv3 dual-dsthost wash ingress rtt 100.0ms noatm overhead 18 via-ethernet mpu 64
>  Sent 23963379 bytes 17137 pkt (dropped 35, overlimits 32166 requeues 0)
>  backlog 0b 0p requeues 0
>  memory used: 920832b of 4Mb
>  capacity estimate: 12200Kbit
>                  Bulk  Best Effort      Voice
>   thresh    762496bit    12200Kbit   3050Kbit
>   target       23.8ms        5.0ms      6.0ms
>   interval    118.8ms      100.0ms    101.0ms
>   pk_delay    527.5ms        594us       42us
>   av_delay    315.3ms        144us        0us
>   sp_delay        1us          1us        0us
>   pkts          16197          954         21
>   bytes      23412978       600744       2647
>   way_inds          0            0          0
>   way_miss        105           62          3
>   way_cols          0            0          0
>   drops            35            0          0
>   marks           335            3          0
>   ack_drop          0            0          0
>   sp_flows          0            0          0
>   bk_flows          0            0          1
>   un_flows          0            0          0
>   max_len        4542         3392        254
>
> qdisc cake 8020: dev ens3 root refcnt 2 bandwidth 2500Kbit diffserv3 dual-srchost nat wash ack-filter rtt 100.0ms noatm overhead 18 via-ethernet mpu 64
>  Sent 2890199 bytes 15496 pkt (dropped 20, overlimits 4773 requeues 0)
>  backlog 0b 0p requeues 0
>  memory used: 41764b of 4Mb
>  capacity estimate: 2500Kbit
>                  Bulk  Best Effort      Voice
>   thresh    156248bit     2500Kbit    625Kbit
>   target      116.3ms        7.3ms     29.1ms
>   interval    232.6ms      102.3ms    124.1ms
>   pk_delay        0us       70.0ms        0us
>   av_delay        0us       45.2ms        0us
>   sp_delay        0us        5.7ms        0us
>   pkts              0        15515          1
>   bytes             0      2891673         42
>   way_inds          0            0          0
>   way_miss          0          141          1
>   way_cols          0            0          0
>   drops             0            0          0
>   marks             0           55          0
>   ack_drop          0           20          0
>   sp_flows          0            0          0
>   bk_flows          0            1          0
>   un_flows          0            0          0
>   max_len           0         4542         42
>
> Could somebody confirm this?
>
>
> George