Message-ID: <20251123121827.1304-3-qq570070308@gmail.com>
Date: Sun, 23 Nov 2025 20:18:26 +0800
From: Xie Yuanbin <qq570070308@...il.com>
To: tglx@...utronix.de,
peterz@...radead.org,
david@...nel.org,
riel@...riel.com,
segher@...nel.crashing.org,
hpa@...or.com,
arnd@...db.de,
mingo@...hat.com,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
vschneid@...hat.com,
bp@...en8.de,
dave.hansen@...ux.intel.com,
luto@...nel.org,
linux@...linux.org.uk,
mathieu.desnoyers@...icios.com,
paulmck@...nel.org,
pjw@...nel.org,
palmer@...belt.com,
aou@...s.berkeley.edu,
alex@...ti.fr,
hca@...ux.ibm.com,
gor@...ux.ibm.com,
agordeev@...ux.ibm.com,
borntraeger@...ux.ibm.com,
svens@...ux.ibm.com,
davem@...emloft.net,
andreas@...sler.com,
acme@...nel.org,
namhyung@...nel.org,
mark.rutland@....com,
alexander.shishkin@...ux.intel.com,
jolsa@...nel.org,
irogers@...gle.com,
adrian.hunter@...el.com,
james.clark@...aro.org,
anna-maria@...utronix.de,
frederic@...nel.org,
nathan@...nel.org,
nick.desaulniers+lkml@...il.com,
morbo@...gle.com,
justinstitt@...gle.com,
thuth@...hat.com,
akpm@...ux-foundation.org,
lorenzo.stoakes@...cle.com,
anshuman.khandual@....com,
nysal@...ux.ibm.com,
max.kellermann@...os.com,
urezki@...il.com,
ryan.roberts@....com
Cc: linux-kernel@...r.kernel.org,
x86@...nel.org,
linux-arm-kernel@...ts.infradead.org,
linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org,
sparclinux@...r.kernel.org,
linux-perf-users@...r.kernel.org,
llvm@...ts.linux.dev,
Xie Yuanbin <qq570070308@...il.com>
Subject: [PATCH v4 2/3] sched: Make raw_spin_rq_unlock() inline
raw_spin_rq_unlock() is short, and is called in hot code paths
such as finish_lock_switch().
Make raw_spin_rq_unlock() inline to avoid the function call overhead
on those paths.
Signed-off-by: Xie Yuanbin <qq570070308@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Rik van Riel <riel@...riel.com>
Cc: Segher Boessenkool <segher@...nel.crashing.org>
Cc: David Hildenbrand (Red Hat) <david@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: H. Peter Anvin (Intel) <hpa@...or.com>
---
V3->V4: https://lore.kernel.org/20251113105227.57650-3-qq570070308@gmail.com
-- Revise commit message
kernel/sched/core.c | 5 -----
kernel/sched/sched.h | 6 +++++-
2 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f2931af76405..0f9e9f54d0a8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -678,11 +678,6 @@ bool raw_spin_rq_trylock(struct rq *rq)
}
}
-void raw_spin_rq_unlock(struct rq *rq)
-{
- raw_spin_unlock(rq_lockp(rq));
-}
-
/*
* double_rq_lock - safely lock two runqueues
*/
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index bbf513b3e76c..a60b238cb0f5 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1551,13 +1551,17 @@ static inline void lockdep_assert_rq_held(struct rq *rq)
extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
extern bool raw_spin_rq_trylock(struct rq *rq);
-extern void raw_spin_rq_unlock(struct rq *rq);
static inline void raw_spin_rq_lock(struct rq *rq)
{
raw_spin_rq_lock_nested(rq, 0);
}
+static inline void raw_spin_rq_unlock(struct rq *rq)
+{
+ raw_spin_unlock(rq_lockp(rq));
+}
+
static inline void raw_spin_rq_lock_irq(struct rq *rq)
{
local_irq_disable();
--
2.51.0