[Cake] cake flenter results round 1
Sebastian Moeller
moeller0 at gmx.de
Mon Nov 27 12:34:56 EST 2017
But 444.35 + 443.65 = 888 Mbit/s, no?
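That is, the two hosts together pretty much saturate the 900 Mbit
shaper and split it almost exactly 50/50:

----------------
$ echo '443.65 + 444.35' | bc
888.00
----------------

888 of 900 is ~98.7% of the configured rate, which looks like exactly
the per-host fairness dual-dsthost is supposed to give.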
> On Nov 27, 2017, at 18:33, Dave Taht <dave.taht at gmail.com> wrote:
>
> georgios
>
> the result you got was "fair", but you should have seen something
> closer to 900 Mbit than 400.
>
> On Mon, Nov 27, 2017 at 8:17 AM, Georgios Amanakis <gamanakis at gmail.com> wrote:
>> Dear Pete,
>>
>> I am trying to replicate the unfair behaviour you are seeing with
>> dual-{src,dst}host, albeit on different hardware, and I am getting a
>> fair distribution. The hardware is a Xeon E3-1220Lv2 (router) and an
>> i3-3110M (clients), all running Archlinux with the latest cake and a
>> patched iproute2-4.14.1, connected via Gbit ethernet with TSO/GSO/GRO
>> enabled.
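>>
>> (If you want to double-check the offload state while reproducing this,
>> something along these lines works on each interface:)
>> ----------------
>> ethtool -k ens4 | grep -E 'segmentation-offload|receive-offload'
>> ----------------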
>>
>> Qdisc setup:
>> ----------------
>> Router:
>> qdisc cake 8003: dev ens4 root refcnt 2 bandwidth 900Mbit diffserv3 dual-dsthost rtt 100.0ms raw
>>
>> Client A(kernel default):
>> qdisc fq_codel 0: dev eno2 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
>>
>> Client B (kernel default):
>> qdisc fq_codel 0: dev enp1s0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> ----------------
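>>
>> (For completeness, the router qdisc above can be created with
>> something like the following, assuming the patched iproute2
>> understands cake's keywords; the clients are untouched kernel
>> defaults:)
>> ----------------
>> tc qdisc replace dev ens4 root cake bandwidth 900Mbit diffserv3 dual-dsthost
>> ----------------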
>>
>>
>> Commands:
>> ----------------
>> Router:
>> netserver &
>>
>> Client A:
>> flent tcp_1down -H router
>>
>> Client B:
>> flent tcp_12down -H router
>> ----------------
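>>
>> (The two flent runs are started at essentially the same time, so the
>> single flow and the twelve flows actually compete for the shaper for
>> the whole test. If you want longer runs and the raw data kept around,
>> something like this should work:)
>> ----------------
>> flent tcp_1down  -H router -l 120 -D ./clientA   # on client A
>> flent tcp_12down -H router -l 120 -D ./clientB   # on client B
>> ----------------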
>>
>>
>> Results:
>> ----------------
>> Router:
>> qdisc cake 8003: root refcnt 2 bandwidth 900Mbit diffserv3 dual-dsthost rtt 100.0ms raw
>>  Sent 7126680117 bytes 4725904 pkt (dropped 10, overlimits 4439745 requeues 0)
>>  backlog 0b 0p requeues 0
>>  memory used: 1224872b of 15140Kb
>>  capacity estimate: 900Mbit
>>                   Bulk   Best Effort         Voice
>>   thresh      56250Kbit       900Mbit       225Mbit
>>   target          5.0ms         5.0ms         5.0ms
>>   interval      100.0ms       100.0ms       100.0ms
>>   pk_delay         14us         751us           7us
>>   av_delay          2us         642us           1us
>>   sp_delay          1us           1us           1us
>>   pkts           109948       4601651         14315
>>   bytes       160183242    6964893773       1618242
>>   way_inds            0         21009             0
>>   way_miss          160           188             5
>>   way_cols            0             0             0
>>   drops               0            10             0
>>   marks               0             0             0
>>   ack_drop            0             0             0
>>   sp_flows            0             1             1
>>   bk_flows            1             0             0
>>   un_flows            0             0             0
>>   max_len          7570         68130          1022
>>
>>
>> Client A:
>>                          avg       median          # data pts
>>  Ping (ms) ICMP :       0.11         0.08 ms              350
>>  TCP download   :     443.65       430.38 Mbits/s         301
>>
>>
>> Client B:
>>                            avg       median          # data pts
>>  Ping (ms) ICMP :         0.09         0.06 ms              350
>>  TCP download avg :      37.03        35.87 Mbits/s         301
>>  TCP download sum :     444.35       430.40 Mbits/s         301
>>  TCP download::1 :       37.00        35.87 Mbits/s         301
>>  TCP download::10 :      37.01        35.87 Mbits/s         301
>>  TCP download::11 :      37.02        35.87 Mbits/s         301
>>  TCP download::12 :      37.00        35.87 Mbits/s         301
>>  TCP download::2 :       37.03        35.87 Mbits/s         301
>>  TCP download::3 :       36.99        35.87 Mbits/s         301
>>  TCP download::4 :       37.03        35.87 Mbits/s         301
>>  TCP download::5 :       37.07        35.87 Mbits/s         301
>>  TCP download::6 :       37.00        35.87 Mbits/s         301
>>  TCP download::7 :       37.12        35.87 Mbits/s         301
>>  TCP download::8 :       37.05        35.87 Mbits/s         301
>>  TCP download::9 :       37.03        35.87 Mbits/s         301
>> ----------------
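>>
>> (The router stats above come from "tc -s qdisc show"; during a run
>> they can be watched live, e.g.:)
>> ----------------
>> watch -n 1 'tc -s qdisc show dev ens4'
>> ----------------
>> Note that av_delay in Best Effort stays well under the 5.0ms target
>> and there are only 10 drops, even though the shaper is fully
>> saturated.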
>>
>> Does this suggest that the unfairness you are seeing is indeed down to
>> an underpowered CPU in your case?
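>>
>> (One way to check: watch per-core load on the router while the test is
>> running; if a single core sits near 100% in softirq, the shaper is
>> CPU-bound. mpstat is in the sysstat package:)
>> ----------------
>> mpstat -P ALL 1     # watch the %soft and %idle columns per core
>> ----------------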
>>
>> George
>>
>>
>> On Mon, Nov 27, 2017 at 10:53 AM, Pete Heist <peteheist at gmail.com> wrote:
>>>
>>>
>>> On Nov 27, 2017, at 3:48 PM, Jonathan Morton <chromatix99 at gmail.com>
>>> wrote:
>>>
>>> It's not at all obvious how we'd detect that. Packets are staying in the
>>> queue for less time than the codel target, which is exactly what you'd get
>>> if you weren't saturated at all.
>>>
>>> That makes complete sense when you put it that way. Cake has no way of
>>> knowing why the input rate is lower than expected, even if cake itself
>>> is part of the cause.
>>>
>>> I don’t think flent can know this either. It can’t easily know the
>>> cause of its total throughput being lower than expected.
>>>
>>> All I know is that this is a common problem in deployments,
>>> particularly on low-end hardware like the ER-X, and it can be tricky
>>> for users to figure out.
>>>
>>> I don’t even think monitoring CPU in general would work. CPU usage
>>> could be high because the box is doing other work while there is still
>>> enough CPU left for cake at a low rate, and there’s no need to warn in
>>> that case. I’d be interested in any ideas on how to detect this in the
>>> system as a whole. So far there are just various clues that one needs
>>> to piece together (no or few drops or marks, less total throughput
>>> than expected, high CPU without other external usage, etc.), and then
>>> it needs to be proven with a test; a rough way to gather those clues
>>> is sketched below.
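>>>
>>> Something along these lines, where eth0 just stands in for the shaped
>>> interface:
>>>
>>> ----------------
>>> tc -s qdisc show dev eth0 | grep -E 'drop|mark'   # few or no drops/marks?
>>> top -bn1 | grep -E '%Cpu|ksoftirqd'               # softirq/ksoftirqd pegged?
>>> ----------------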
>>>
>>> Anyway thanks, your clue was what I needed! I need to remember to review
>>> the qdisc stats when something unexpected happens.
>>>
>
>
>
> --
>
> Dave Täht
> CEO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-669-226-2619