From: Dave Taht <dave.taht@gmail.com>
To: Aaron Wood <woody77@gmail.com>
Cc: "Toke Høiland-Jørgensen" <toke@toke.dk>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Still seeing bloat with a DOCSIS 3.1 modem
Date: Wed, 25 Mar 2020 12:18:31 -0700 [thread overview]
Message-ID: <CAA93jw4vHQzQ9OBuzqqx5Xx5J15WKOPTr32OXcXrns+1ZJCgFw@mail.gmail.com> (raw)
In-Reply-To: <CALQXh-PWeBj_Wei4oceYTYzNMh=hNeX6PhTMRkLoRaPj0AUC+w@mail.gmail.com>
On Wed, Mar 25, 2020 at 8:58 AM Aaron Wood <woody77@gmail.com> wrote:
>
> One other thought I've had with this, is that the apu2 is multi-core, and the i210 is multi-queue.
>
> Cake/htb aren't, iirc, set up to run on multiple cores (as the rate limiters then don't talk to each other). But with the correct tuple hashing in the i210, I _should_ be able to split things and do two cores at 500Mbps each (with lots of compute left over).
>
> Obviously, that puts a limit on single-connection rates, but as the number of connections climb, they should more or less even out (I remember Dave Taht showing the oddities that happen with say 4 streams and 2 cores, where it's common to end up with 3 streams on the same core). But assuming that the hashing function results in even sharing of streams, it should be fairly balanced (after plotting some binomial distributions with higher "n" values). Still not perfect, especially since streams aren't likely to all be elephants.
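Aaron's binomial point above can be checked by brute-force enumeration. This is a sketch assuming the NIC's hash spreads flows uniformly and independently over the cores (real RSS hashing is deterministic per tuple, so this is only the idealized case):

```python
from itertools import product

# Probability that 4 flows, hashed uniformly onto 2 cores,
# end up with 3 or more flows on the same core.
flows, cores = 4, 2
assignments = list(product(range(cores), repeat=flows))
uneven = sum(1 for a in assignments
             if max(a.count(c) for c in range(cores)) >= 3)
print(uneven / len(assignments))  # 10/16 = 0.625
```

So with only 4 flows and 2 cores, an uneven 3-1 or 4-0 split is actually more likely than a 2-2 split, which matches the oddity described above; the imbalance washes out as the flow count grows.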
We live with imperfect per-core tcp flow behavior already.

What I wanted to happen was the "list" ingress improvement to become
more generally available (I can't find the lwn link at the moment).
It has. I thought that then we could express a syntax of "tc qdisc add
dev eth0 ingress cake-mq bandwidth whatever", and it would rock.

I figured that getting rid of the cost of the existing ifb and tc mirred,
preserving a fast path through each hardware queue, and then using
rcu with a sloppy atomic lock to allocate the shaped bandwidth, merging
every ms or so, might be low-cost enough. Certainly folding
everything into a single queue has a cost!
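For reference, today's ingress-shaping path goes through an ifb device and a mirred redirect, which is the overhead being discussed. A minimal sketch of that path (interface names and the bandwidth figure are placeholders; requires root and a reasonably modern kernel for matchall):

```shell
# Current single-queue ingress path that a hypothetical "cake-mq"
# syntax would replace. eth0, ifb0 and 900mbit are placeholders.
tc qdisc add dev eth0 handle ffff: ingress
ip link add name ifb0 type ifb
ip link set dev ifb0 up
# Redirect all ingress traffic on eth0 through ifb0...
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0
# ...and shape it with a single-queue cake instance there.
tc qdisc add dev ifb0 root cake bandwidth 900mbit besteffort
```

Every ingress packet takes the mirred hop and is funneled from all hardware queues into the one ifb0 queue, which is exactly the cost the proposed per-queue fast path would avoid.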
I was (before money ran out) prototyping adding a shared shaper to mq
at one point (no rcu). There have been so many other things
tossed around (bpf?).

As for better load balancing, google "RSS++" if you must.
> On Wed, Mar 25, 2020 at 4:03 AM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>
>> Sebastian Moeller <moeller0@gmx.de> writes:
>>
>> > Hi Toke,
>> >
>> >
>> >> On Mar 25, 2020, at 09:58, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>> >>
>> >> Aaron Wood <woody77@gmail.com> writes:
>> >>
>> >>> I recently upgraded service from 150up, 10dn Mbps to xfinity's gigabit
>> >>> (with 35Mbps up) tier, and picked up a DOCSIS 3.1 modem to go with it.
>> >>>
>> >>> Flent test results are here:
>> >>> https://burntchrome.blogspot.com/2020/03/bufferbloat-with-comcast-gigabit-with.html
>> >>>
>> >>> tl;dr: 1000ms of upstream bufferbloat
>> >>>
>> >>> But it's DOCSIS 3.1, so why isn't PIE working? Theory: It's in DOCSIS 3.0
>> >>> upstream mode based on the status LEDs. Hopefully it will go away if I can
>> >>> convince it to run in DOCSIS 3.1 mode.
>> >>
>> >> I think that while PIE is "mandatory to implement" in DOCSIS 3.1, the
>> >> ISP still has to turn it on? So maybe yelling at them will work? (ha!)
>> >>
>> >>> At the moment, however, my WRT1900AC isn't up to the task of dealing with
>> >>> these sorts of downstream rates.
>> >>>
>> >>> So I'm looking at the apu2, which from this post:
>> >>> https://forum.openwrt.org/t/comparative-throughput-testing-including-nat-sqm-wireguard-and-openvpn/44724
>> >>>
>> >>> Will certainly get most of the way there.
>> >>
>> >> My Turris Omnia is doing fine on my 1Gbps connection (although that
>> >> hardly suffers from bloat, so I'm not doing any shaping; did try it
>> >> though, and it has no problem with running CAKE at 1Gbps).
>> >
>> > Well, doing local network flent RRUL stress tests indicated that
>> > my omnia (at that time with TOS4/OpenWrt 18) only allowed up to
>> > 500/500 Mbps shaping with bidirectionally saturating traffic
>> > with full MTU-sized packets. So unidirectional CAKE at 1Gbps
>> > can work, but under full bidirectional load I did not manage
>> > that. What did I do wrong?
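[For reproducibility: a bidirectional stress test of the kind described above can be run with flent's rrul test. The hostname below is a placeholder; it needs a netserver instance on the far end.]

```shell
# RRUL: 4 TCP flows up + 4 down plus latency probes, saturating
# both directions at once. netperf.example.net is a placeholder.
flent rrul -H netperf.example.net -l 60 \
    -t "omnia-bidir-shaping" -o rrul-bidir.png
```

Unidirectional tests (e.g. tcp_ndown or tcp_nup) will understate the shaping load, since the CPU only has to shape and account in one direction at a time.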
>>
>> Hmm, not sure I've actually done full bidirectional shaping. And trying
>> it now, it does seem to be struggling...
>>
>> -Toke
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Make Music, Not War
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729