From: Sebastian Moeller
Date: Mon, 27 Nov 2017 18:34:56 +0100
To: Dave Täht
Cc: Georgios Amanakis, Cake List
Message-Id: <2A5F940F-F713-4578-8123-5CAD98A9C4C3@gmx.de>
Subject: Re: [Cake] cake flenter results round 1

But 444.35 + 443.65 = 888, no?

> On Nov 27, 2017, at 18:33, Dave Taht wrote:
>
> georgios
>
> the result you got was "fair", but you should have seen something
> closer to 900mbit than 400.
>
> On Mon, Nov 27, 2017 at 8:17 AM, Georgios Amanakis wrote:
>> Dear Pete,
>>
>> I am trying to replicate the unfair behaviour you are seeing with
>> dual-{src,dst}host, albeit on different hardware, and I am getting a
>> fair distribution. The hardware is a Xeon E3-1220Lv2 (router) and
>> i3-3110M (clients), all running Arch Linux with the latest cake and a
>> patched iproute2-4.14.1, connected over Gbit Ethernet with TSO/GSO/GRO
>> enabled.
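[Editor's note: the fair split being discussed can be sanity-checked with quick arithmetic. This is only an illustrative sketch; the 900 Mbit shaped rate, the two hosts, and the twelve flows on Client B are taken from the figures quoted in this thread.]

```python
# Sanity check of cake's dual-dsthost fairness for the test in this thread.
# Inputs (shaped rate, host count, flow count) come from the quoted setup;
# everything else is illustrative.
shaped_mbit = 900.0
hosts = 2                       # Client A and Client B
flows_b = 12                    # Client B runs flent tcp_12down

per_host = shaped_mbit / hosts      # ideal per-host share under dual-dsthost
per_flow_b = per_host / flows_b     # ideal per-flow share on Client B

print(per_host)     # ideal 450.0 -- measured: 443.65 (A), 444.35 (B's sum)
print(per_flow_b)   # ideal 37.5  -- measured: ~37.0 per flow on Client B
```

This matches Sebastian's point: 444.35 + 443.65 = 888 Mbit/s, i.e. essentially the full 900 Mbit shaped rate split evenly between the two hosts, with a small amount lost to overhead.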
>>
>> Qdisc setup:
>> ----------------
>> Router:
>> qdisc cake 8003: dev ens4 root refcnt 2 bandwidth 900Mbit diffserv3
>> dual-dsthost rtt 100.0ms raw
>>
>> Client A (kernel default):
>> qdisc fq_codel 0: dev eno2 root refcnt 2 limit 10240p flows 1024
>> quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
>>
>> Client B (kernel default):
>> qdisc fq_codel 0: dev enp1s0 root refcnt 2 limit 10240p flows 1024
>> quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> ----------------
>>
>> CLI:
>> ----------------
>> Router:
>> netserver &
>>
>> Client A:
>> flent tcp_1down -H router
>>
>> Client B:
>> flent tcp_12down -H router
>> ----------------
>>
>> Results:
>> ----------------
>> Router:
>> qdisc cake 8003: root refcnt 2 bandwidth 900Mbit diffserv3
>> dual-dsthost rtt 100.0ms raw
>> Sent 7126680117 bytes 4725904 pkt (dropped 10, overlimits 4439745 requeues 0)
>> backlog 0b 0p requeues 0
>> memory used: 1224872b of 15140Kb
>> capacity estimate: 900Mbit
>>                  Bulk   Best Effort       Voice
>> thresh      56250Kbit       900Mbit     225Mbit
>> target          5.0ms         5.0ms       5.0ms
>> interval      100.0ms       100.0ms     100.0ms
>> pk_delay         14us         751us         7us
>> av_delay          2us         642us         1us
>> sp_delay          1us           1us         1us
>> pkts           109948       4601651       14315
>> bytes       160183242    6964893773     1618242
>> way_inds            0         21009           0
>> way_miss          160           188           5
>> way_cols            0             0           0
>> drops               0            10           0
>> marks               0             0           0
>> ack_drop            0             0           0
>> sp_flows            0             1           1
>> bk_flows            1             0           0
>> un_flows            0             0           0
>> max_len          7570         68130        1022
>>
>> Client A:
>>                          avg     median   # data pts
>> Ping (ms) ICMP   :      0.11       0.08 ms        350
>> TCP download     :    443.65     430.38 Mbits/s   301
>>
>> Client B:
>>                          avg     median   # data pts
>> Ping (ms) ICMP     :    0.09       0.06 ms        350
>> TCP download avg   :   37.03      35.87 Mbits/s   301
>> TCP download sum   :  444.35     430.40 Mbits/s   301
>> TCP download::1    :   37.00      35.87 Mbits/s   301
>> TCP download::10   :   37.01      35.87 Mbits/s   301
>> TCP download::11   :   37.02      35.87 Mbits/s   301
>> TCP download::12   :   37.00      35.87 Mbits/s   301
>> TCP download::2    :   37.03      35.87 Mbits/s   301
>> TCP download::3    :   36.99      35.87 Mbits/s   301
>> TCP download::4    :   37.03      35.87 Mbits/s   301
>> TCP download::5    :   37.07      35.87 Mbits/s   301
>> TCP download::6    :   37.00      35.87 Mbits/s   301
>> TCP download::7    :   37.12      35.87 Mbits/s   301
>> TCP download::8    :   37.05      35.87 Mbits/s   301
>> TCP download::9    :   37.03      35.87 Mbits/s   301
>> ----------------
>>
>> Does this suggest that it is indeed a problem of an underpowered CPU
>> in your case?
>>
>> George
>>
>> On Mon, Nov 27, 2017 at 10:53 AM, Pete Heist wrote:
>>>
>>> On Nov 27, 2017, at 3:48 PM, Jonathan Morton wrote:
>>>
>>> It's not at all obvious how we'd detect that. Packets are staying in
>>> the queue for less time than the codel target, which is exactly what
>>> you'd get if you weren't saturated at all.
>>>
>>> That makes complete sense when you put it that way. Cake has no way
>>> of knowing why the input rate is lower than expected, even if it's
>>> part of the cause.
>>>
>>> I don't think flent can know this either. It can't easily know the
>>> cause of its total output being lower than expected.
>>>
>>> All I know is that this is a common problem in deployments,
>>> particularly on low-end hardware like ER-Xs, and it can be tricky for
>>> users to figure out.
>>>
>>> I don't even think monitoring CPU in general would work. The CPU
>>> could be high because it's doing other calculations while there is
>>> still enough left for cake at a low rate, and there's no need to warn
>>> in that case. I'd be interested in any ideas on how to know this is
>>> happening in the system as a whole. So far, there are just various
>>> clues that one needs to piece together (no or few drops or marks,
>>> less total throughput than expected, high CPU without other external
>>> usage, etc.). Then it needs to be proven with a test.
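[Editor's note: Pete's list of clues (few or no drops/marks, sub-target delays, throughput well below the shaped rate) can be turned into a rough heuristic. The sketch below is only an illustration of that idea, not anything cake or flent actually provides; the 80 % throughput threshold, the drop/mark cutoff, and the sample stats text are assumptions modelled on the `tc -s qdisc` output quoted above.]

```python
# Rough heuristic for the CPU-starvation signature discussed in this thread:
# cake looks idle (almost no drops or marks) even though achieved throughput
# is well below the shaped rate. Thresholds are illustrative guesses.

def tin_counts(tc_output, field):
    """Sum a per-tin counter row (e.g. 'drops 0 10 0') across all tins."""
    for line in tc_output.splitlines():
        parts = line.split()
        if parts and parts[0] == field:
            return sum(int(x) for x in parts[1:])
    return 0

def cpu_starvation_suspected(tc_output, achieved_mbit, shaped_mbit):
    drops = tin_counts(tc_output, "drops")
    marks = tin_counts(tc_output, "marks")
    # Well below the shaped rate, yet the AQM never had to act:
    return achieved_mbit < 0.8 * shaped_mbit and (drops + marks) < 100

# Sample rows mirroring the cake stats quoted earlier in the thread.
sample = """\
drops              0            10           0
marks              0             0           0
"""

print(cpu_starvation_suspected(sample, 400, 900))  # True  -- low rate, idle AQM
print(cpu_starvation_suspected(sample, 888, 900))  # False -- near shaped rate
```

In Georgios's run above the heuristic would stay quiet (888 of 900 Mbit achieved), while the ~400 Mbit case Dave mentions would trip it, which is the pattern Pete describes having to piece together by hand.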
>>>
>>> Anyway, thanks, your clue was what I needed! I need to remember to
>>> review the qdisc stats when something unexpected happens.
>>>
>>> _______________________________________________
>>> Cake mailing list
>>> Cake@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cake
>
> --
> Dave Täht
> CEO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-669-226-2619