Message-ID: <20260124171546.43398-3-qq570070308@gmail.com>
Date: Sun, 25 Jan 2026 01:15:45 +0800
From: Xie Yuanbin <qq570070308@...il.com>
To: peterz@...radead.org,
tglx@...nel.org,
riel@...riel.com,
segher@...nel.crashing.org,
david@...nel.org,
hpa@...or.com,
arnd@...db.de,
mingo@...hat.com,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
vschneid@...hat.com,
bp@...en8.de,
dave.hansen@...ux.intel.com,
luto@...nel.org,
houwenlong.hwl@...group.com
Cc: linux-kernel@...r.kernel.org,
x86@...nel.org,
Xie Yuanbin <qq570070308@...il.com>
Subject: [PATCH v6 2/3] sched: Make raw_spin_rq_unlock() inline
raw_spin_rq_unlock() is short and is called on some hot paths,
such as finish_lock_switch().

Make raw_spin_rq_unlock() inline to avoid the function-call
overhead on those paths.
Signed-off-by: Xie Yuanbin <qq570070308@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...nel.org>
Cc: Rik van Riel <riel@...riel.com>
Cc: Segher Boessenkool <segher@...nel.crashing.org>
Cc: David Hildenbrand (Red Hat) <david@...nel.org>
Cc: H. Peter Anvin (Intel) <hpa@...or.com>
---
kernel/sched/core.c | 5 -----
kernel/sched/sched.h | 9 ++++++---
2 files changed, 6 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7de5ceb9878b..12d3c42960f2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -687,11 +687,6 @@ bool raw_spin_rq_trylock(struct rq *rq)
}
}
-void raw_spin_rq_unlock(struct rq *rq)
-{
- raw_spin_unlock(rq_lockp(rq));
-}
-
/*
* double_rq_lock - safely lock two runqueues
*/
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b0d920aa0acb..2daa63b760dd 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1604,15 +1604,18 @@ extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
extern bool raw_spin_rq_trylock(struct rq *rq)
__cond_acquires(true, __rq_lockp(rq));
-extern void raw_spin_rq_unlock(struct rq *rq)
- __releases(__rq_lockp(rq));
-
static inline void raw_spin_rq_lock(struct rq *rq)
__acquires(__rq_lockp(rq))
{
raw_spin_rq_lock_nested(rq, 0);
}
+static inline void raw_spin_rq_unlock(struct rq *rq)
+ __releases(__rq_lockp(rq))
+{
+ raw_spin_unlock(rq_lockp(rq));
+}
+
static inline void raw_spin_rq_lock_irq(struct rq *rq)
__acquires(__rq_lockp(rq))
{
--
2.51.0