[LibreQoS] Fwd: [PATCH 0/3] softirq: uncontroversial change
From: Dave Taht @ 2022-12-22 22:52 UTC (permalink / raw)
To: libreqos
This is pretty neat.
---------- Forwarded message ---------
From: Jakub Kicinski <kuba@kernel.org>
Date: Thu, Dec 22, 2022 at 2:40 PM
Subject: [PATCH 0/3] softirq: uncontroversial change
To: <peterz@infradead.org>, <tglx@linutronix.de>
Cc: <jstultz@google.com>, <edumazet@google.com>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
Jakub Kicinski <kuba@kernel.org>
Catching up on LWN, I ran across the article about softirq
changes, and then I noticed fresh patches in Peter's tree.
So it's probably wise for me to throw these out there.
My (can I say Meta's?) problem is the opposite of what the RT
sensitive people complain about. In the current scheme, once
ksoftirqd is woken, no network processing happens until it runs.
When networking is genuinely overloaded that's probably fair; the
problem is that we confuse latency tweaks with overload protection.
We have a need_resched() check in the loop condition, which is a
latency tweak.
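For reference, a minimal sketch of that pre-patch exit logic in
kernel/softirq.c (simplified from mainline 5.x, details elided):

	/* Simplified from __do_softirq() in kernel/softirq.c (5.x).
	 * MAX_SOFTIRQ_TIME is the 2ms "work allowance" below. */
	#define MAX_SOFTIRQ_TIME	msecs_to_jiffies(2)
	#define MAX_SOFTIRQ_RESTART	10

	asmlinkage __visible void __softirq_entry __do_softirq(void)
	{
		unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
		int max_restart = MAX_SOFTIRQ_RESTART;
		__u32 pending;

	restart:
		/* ... run the pending softirq handlers ... */

		pending = local_softirq_pending();
		if (pending) {
			/* need_resched() is the latency tweak: yield
			 * because a task wants the CPU, not because
			 * softirq work itself is excessive... */
			if (time_before(jiffies, end) &&
			    !need_resched() && --max_restart)
				goto restart;

			/* ...yet both exits funnel into the same
			 * "overload" path. */
			wakeup_softirqd();
		}
	}

Both the expired time budget (overload protection) and need_resched()
(a latency tweak) land in wakeup_softirqd(), which is exactly the
conflation described above.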
Most often we defer to ksoftirqd because we're trying to be nice
and let user space respond quickly, not because there is an
overload. But user space may not be nice, and may sit on the CPU
for 10ms+. Also, the softirq "work allowance" is 2ms, which is
uncomfortably close to the timer tick, but that's another story.
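The deferral itself is the gate below (the function patch 1 renames);
again simplified from mainline: while the ksoftirqd task is runnable,
softirqs raised from IRQ context are left to it instead of being run
on interrupt exit, so nothing happens until the scheduler gets to it.

	static bool ksoftirqd_running(unsigned long pending)
	{
		struct task_struct *tsk = __this_cpu_read(ksoftirqd);

		if (pending & SOFTIRQ_NOW_MASK)
			return false;
		return tsk && task_is_running(tsk);
	}

	asmlinkage __visible void do_softirq(void)
	{
		__u32 pending;
		unsigned long flags;

		if (in_interrupt())
			return;

		local_irq_save(flags);
		pending = local_softirq_pending();

		/* Defer to ksoftirqd whenever it has been woken. */
		if (pending && !ksoftirqd_running(pending))
			do_softirq_own_stack();

		local_irq_restore(flags);
	}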
We have a softirq latency tracker in our prod kernel which catches
8ms+ stalls of net Tx (packets queued to the NIC but no NAPI
cleanup within 8ms), and with these patches applied on 5.19 a
fully loaded web machine sees a drop in stalls from 1.8 stalls/sec
to 0.16/sec. I also see a 50% drop in outgoing TCP retransmissions
and a ~10% drop in non-TLP incoming ones. This is not a
network-heavy workload, so most of the retransmissions are due to
scheduling artifacts.
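(The tracker itself is internal, but the idea is simple enough to
sketch. A hypothetical version, with invented names and not the
actual code: stamp the queue on xmit, stamp it again on NAPI Tx
cleanup, and report queues where 8ms pass between the two.)

	/* Hypothetical 8ms Tx-stall detector; names are invented
	 * for illustration, this is not the prod tracker. */
	struct txq_stall_watch {
		unsigned long last_xmit;   /* last packet queued   */
		unsigned long last_clean;  /* last NAPI Tx cleanup */
	};

	static void txq_note_xmit(struct txq_stall_watch *w)
	{
		w->last_xmit = jiffies;
	}

	static void txq_note_clean(struct txq_stall_watch *w)
	{
		w->last_clean = jiffies;
	}

	/* Checked periodically: packets were queued after the last
	 * cleanup, and 8ms+ have passed since they were queued. */
	static bool txq_stalled(const struct txq_stall_watch *w)
	{
		return time_after(w->last_xmit, w->last_clean) &&
		       time_after(jiffies,
				  w->last_xmit + msecs_to_jiffies(8));
	}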
The network latency in a datacenter (around 10us) is a neat
~1000x lower than the scheduling granularity.
These patches (patch 2 is "the meat") change what we recognize
as overload. Instead of just checking whether ksoftirqd has been
woken, they also cap how long we consider ourselves to be in
overload, with a time limit that differs depending on whether we
yield due to real resource exhaustion or just due to hitting that
need_resched().
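(To make that concrete, a sketch of the idea only; the helper names
and the two limits are illustrative, not lifted from the patches:)

	/* Sketch of the capped-overload concept; constants and
	 * names are illustrative, not the actual patch. */
	static DEFINE_PER_CPU(unsigned long, overload_until);

	/* Called where we currently just wakeup_softirqd(): record
	 * why we yielded and for how long deferring to ksoftirqd
	 * should still count as overload. */
	static void note_softirq_deferral(bool resources_exhausted)
	{
		unsigned long limit = resources_exhausted ?
			msecs_to_jiffies(100) :	/* real overload  */
			msecs_to_jiffies(2);	/* need_resched() */

		__this_cpu_write(overload_until, jiffies + limit);
	}

	/* Replaces the bare "is ksoftirqd runnable?" check: past
	 * the cap we stop deferring and go back to handling
	 * softirqs inline. */
	static bool ksoftirqd_should_handle(unsigned long pending)
	{
		struct task_struct *tsk = __this_cpu_read(ksoftirqd);

		if (time_after(jiffies, __this_cpu_read(overload_until)))
			return false;
		return tsk && task_is_running(tsk);
	}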
I hope the core concept is not entirely idiotic. It'd be great if
we could get this in, or fold an equivalent concept into the
ongoing work of others, because thanks to various "scheduler
improvements" this problem gets worse every time we upgrade the
production kernel :(
Jakub Kicinski (3):
softirq: rename ksoftirqd_running() -> ksoftirqd_should_handle()
softirq: avoid spurious stalls due to need_resched()
softirq: don't yield if only expedited handlers are pending
kernel/softirq.c | 29 ++++++++++++++++++++++-------
1 file changed, 22 insertions(+), 7 deletions(-)
--
2.38.1
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC