Message-ID: <164526608700.16921.17683501386513808570.tip-bot2@tip-bot2>
Date: Sat, 19 Feb 2022 10:21:27 -0000
From: "tip-bot2 for Mark Rutland" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Mark Rutland <mark.rutland@....com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ard Biesheuvel <ardb@...nel.org>,
Frederic Weisbecker <frederic@...nel.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/core] sched/preempt: Simplify
irqentry_exit_cond_resched() callers

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     4624a14f4daa8ab4578d274555fd8847254ce339
Gitweb:        https://git.kernel.org/tip/4624a14f4daa8ab4578d274555fd8847254ce339
Author:        Mark Rutland <mark.rutland@....com>
AuthorDate:    Mon, 14 Feb 2022 16:52:12
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Sat, 19 Feb 2022 11:11:08 +01:00

sched/preempt: Simplify irqentry_exit_cond_resched() callers

Currently callers of irqentry_exit_cond_resched() need to be aware of
whether the function should be indirected via a static call, leading to
ugly ifdeffery in callers.

Save them the hassle with a static inline wrapper that does the right
thing. The raw_irqentry_exit_cond_resched() will also be useful in
subsequent patches which will add conditional wrappers for preemption
functions.

Note: in arch/x86/entry/common.c, xen_pv_evtchn_do_upcall() always calls
irqentry_exit_cond_resched() directly, even when PREEMPT_DYNAMIC is in
use. I believe this is a latent bug (which this patch corrects), but I'm
not entirely certain this wasn't deliberate.
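
To illustrate the shape of the change, here is a standalone sketch in plain
userspace C (not kernel code): a function pointer stands in for the kernel's
static_call machinery, and PREEMPT_DYNAMIC_SKETCH is a made-up stand-in for
CONFIG_PREEMPT_DYNAMIC. Callers use one name and the header decides how it
is resolved.

#include <stdio.h>

/* The "raw" implementation, analogous to raw_irqentry_exit_cond_resched(). */
static void raw_irqentry_exit_cond_resched(void)
{
	printf("conditional reschedule\n");
}

#ifdef PREEMPT_DYNAMIC_SKETCH
/*
 * Stand-in for DECLARE/DEFINE_STATIC_CALL(): an indirected call whose
 * target can be switched at runtime, modelled here as a function pointer.
 */
static void (*irqentry_exit_cond_resched_call)(void) =
	raw_irqentry_exit_cond_resched;

#define irqentry_exit_cond_resched()	irqentry_exit_cond_resched_call()
#else
/* Without the dynamic option the wrapper is simply the direct call. */
#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
#endif

/* A caller, analogous to irqentry_exit(): no ifdeffery at the call site. */
int main(void)
{
	irqentry_exit_cond_resched();
	return 0;
}

Building with or without -DPREEMPT_DYNAMIC_SKETCH changes how the call is
routed, but the call site itself stays identical, which is the point of the
wrapper added below.
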
Signed-off-by: Mark Rutland <mark.rutland@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Ard Biesheuvel <ardb@...nel.org>
Acked-by: Frederic Weisbecker <frederic@...nel.org>
Link: https://lore.kernel.org/r/20220214165216.2231574-4-mark.rutland@arm.com
---
 include/linux/entry-common.h |  9 ++++++---
 kernel/entry/common.c        | 12 ++++--------
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index a01ac1a..dfd84c5 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -454,11 +454,14 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  *
  * Conditional reschedule with additional sanity checks.
  */
-void irqentry_exit_cond_resched(void);
+void raw_irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
-#define irqentry_exit_cond_resched_dynamic_enabled	irqentry_exit_cond_resched
+#define irqentry_exit_cond_resched_dynamic_enabled	raw_irqentry_exit_cond_resched
 #define irqentry_exit_cond_resched_dynamic_disabled	NULL
-DECLARE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
+#define irqentry_exit_cond_resched()	static_call(irqentry_exit_cond_resched)()
+#else
+#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
 #endif
 
 /**
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index bad7136..1739ca7 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -380,7 +380,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 	return ret;
 }
 
-void irqentry_exit_cond_resched(void)
+void raw_irqentry_exit_cond_resched(void)
 {
 	if (!preempt_count()) {
 		/* Sanity check RCU and thread stack */
@@ -392,7 +392,7 @@ void irqentry_exit_cond_resched(void)
 	}
 }
 #ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+DEFINE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
 #endif
 
 noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
@@ -420,13 +420,9 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 		}
 
 		instrumentation_begin();
-		if (IS_ENABLED(CONFIG_PREEMPTION)) {
-#ifdef CONFIG_PREEMPT_DYNAMIC
-			static_call(irqentry_exit_cond_resched)();
-#else
+		if (IS_ENABLED(CONFIG_PREEMPTION))
 			irqentry_exit_cond_resched();
-#endif
-		}
+
 		/* Covers both tracing and lockdep */
 		trace_hardirqs_on();
 		instrumentation_end();
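
Pieced together from the hunks above, the preemption check at the tail of
irqentry_exit() therefore reduces to the following excerpt (reconstructed
from the diff context only; the rest of the function is untouched):

		instrumentation_begin();
		if (IS_ENABLED(CONFIG_PREEMPTION))
			irqentry_exit_cond_resched();

		/* Covers both tracing and lockdep */
		trace_hardirqs_on();
		instrumentation_end();

Whether that call is routed through the static call (CONFIG_PREEMPT_DYNAMIC)
or goes straight to raw_irqentry_exit_cond_resched() is now decided once in
the header rather than at every call site.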