Message-Id: <20221222221244.1290833-1-kuba@kernel.org>
Date: Thu, 22 Dec 2022 14:12:41 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: peterz@...radead.org, tglx@...utronix.de
Cc: jstultz@...gle.com, edumazet@...gle.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, Jakub Kicinski <kuba@...nel.org>
Subject: [PATCH 0/3] softirq: uncontroversial change

Catching up on LWN I ran across the article about softirq
changes, and then I noticed fresh patches in Peter's tree.
So it's probably wise for me to throw these out there.

My (can I say Meta's?) problem is the opposite of what the
RT-sensitive people complain about. In the current scheme, once
ksoftirqd is woken no network processing happens until it runs.
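
To be concrete, this is roughly the check I mean (kernel/softirq.c
as of ~5.19, trimmed):

	/* Whenever this returns true, do_softirq()/invoke_softirq()
	 * skip inline processing and leave all pending softirqs to
	 * the ksoftirqd thread. */
	static bool ksoftirqd_running(unsigned long pending)
	{
		struct task_struct *tsk = __this_cpu_read(ksoftirqd);

		/* HI and TASKLET softirqs are still run inline */
		if (pending & SOFTIRQ_NOW_MASK)
			return false;
		/* "woken but not yet on the CPU" counts as running */
		return tsk && task_is_running(tsk);
	}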

When networking really is overloaded, deferring like that is
probably fair; the problem is that we confuse latency tweaks with
overload protection. We have a need_resched() check in the loop
condition (which is a latency tweak). Most often we defer to
ksoftirqd because we're trying to be nice and let user space
respond quickly, not because there is an overload. But user space
may not be nice, and may sit on the CPU for 10ms+. Also the sirq's
"work allowance" is 2ms, which is uncomfortably close to the timer
tick, but that's another story.
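
For reference, that loop exit looks roughly like this (again ~5.19,
heavily trimmed):

	#define MAX_SOFTIRQ_TIME  msecs_to_jiffies(2)
	#define MAX_SOFTIRQ_RESTART 10

	/* end of __do_softirq(); end and max_restart are set from the
	 * defines above at function entry. The 2ms budget, the restart
	 * cap and the need_resched() latency tweak all share one exit
	 * path, and all of them end in wakeup_softirqd(), i.e. all of
	 * them get treated as overload. */
	pending = local_softirq_pending();
	if (pending) {
		if (time_before(jiffies, end) && !need_resched() &&
		    --max_restart)
			goto restart;

		wakeup_softirqd();
	}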

We have a sirq latency tracker in our prod kernel which catches
8ms+ stalls of net Tx (packets queued to the NIC but no NAPI
cleanup within 8ms). With these patches applied on 5.19, a fully
loaded web machine sees a drop from 1.8 stalls/sec to 0.16/sec.
I also see a 50% drop in outgoing TCP retransmissions and a ~10%
drop in non-TLP incoming ones. This is not a network-heavy
workload, so most of the rtx are due to scheduling artifacts.

The network latency in a datacenter (around 10us) is a neat
~1000x lower than the scheduling granularity.

These patches (patch 2 is "the meat") change what we recognize
as overload. Instead of just checking whether "ksoftirqd is woken",
they also cap how long we consider ourselves to be in overload,
with a time limit that differs based on whether we yielded due
to real resource exhaustion vs just hitting that need_resched().
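
In pseudo-C the idea is something like the sketch below. To be
clear, this is not the code in patch 2 -- the names (other than
ksoftirqd_should_handle(), see patch 1) and the constants are made
up purely for illustration:

	/* Concept sketch only. Each deferral records a per-CPU
	 * deadline; once it passes we stop pretending we're
	 * overloaded and process softirqs inline again. */
	static DEFINE_PER_CPU(unsigned long, overload_until);

	static void note_deferral(bool resource_exhaustion)
	{
		/* real resource exhaustion earns a longer "overload"
		 * window than stepping aside for need_resched() */
		unsigned long len = resource_exhaustion ?
				    msecs_to_jiffies(100) :
				    msecs_to_jiffies(2);

		__this_cpu_write(overload_until, jiffies + len);
	}

	static bool ksoftirqd_should_handle(unsigned long pending)
	{
		if (!ksoftirqd_running(pending))
			return false;
		/* honor "ksoftirqd is woken" only while the window
		 * is still open */
		return time_before(jiffies,
				   __this_cpu_read(overload_until));
	}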

I hope the core concept is not entirely idiotic. It'd be great
if we could get this in, or fold an equivalent concept into the
ongoing work from others, because thanks to various "scheduler
improvements" this problem gets worse every time we upgrade the
production kernel :(

Jakub Kicinski (3):
  softirq: rename ksoftirqd_running() -> ksoftirqd_should_handle()
  softirq: avoid spurious stalls due to need_resched()
  softirq: don't yield if only expedited handlers are pending

 kernel/softirq.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

-- 
2.38.1