Message-ID: <20250815030633.448613-6-ruanjinjie@huawei.com>
Date: Fri, 15 Aug 2025 11:06:30 +0800
From: Jinjie Ruan <ruanjinjie@...wei.com>
To: <catalin.marinas@....com>, <will@...nel.org>, <oleg@...hat.com>,
<sstabellini@...nel.org>, <mark.rutland@....com>, <ada.coupriediaz@....com>,
<mbenes@...e.cz>, <broonie@...nel.org>, <anshuman.khandual@....com>,
<ryan.roberts@....com>, <chenl311@...natelecom.cn>, <liaochang1@...wei.com>,
<kristina.martsenko@....com>, <leitao@...ian.org>, <ardb@...nel.org>,
<linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
<xen-devel@...ts.xenproject.org>
CC: <ruanjinjie@...wei.com>
Subject: [PATCH v8 5/8] entry: Add arch_irqentry_exit_need_resched() for arm64

Compared with the generic entry code, arm64 performs additional checks
when deciding whether to reschedule on return from an interrupt. So
introduce arch_irqentry_exit_need_resched() into the need_resched()
condition of the generic raw_irqentry_exit_cond_resched(), with a
default that simply returns true. This will allow arm64 to implement
an architecture-specific version when it switches over to the generic
entry code.
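For illustration only (not part of this patch): with this pattern, an
architecture overrides the default by defining both the function and a
same-named macro in its asm/entry-common.h, which suppresses the
generic fallback under the #ifndef in the hunk below. A minimal sketch,
where arch_preemption_unsafe() is a hypothetical helper standing in for
whatever condition the architecture needs to check:

static inline bool arch_irqentry_exit_need_resched(void)
{
	/*
	 * Return false to suppress preemption on this IRQ return while
	 * the (hypothetical) arch-specific condition holds.
	 */
	if (arch_preemption_unsafe())
		return false;

	return true;
}
#define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched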
Suggested-by: Ada Couprie Diaz <ada.coupriediaz@....com>
Suggested-by: Mark Rutland <mark.rutland@....com>
Suggested-by: Kevin Brodsky <kevin.brodsky@....com>
Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Jinjie Ruan <ruanjinjie@...wei.com>
---
 kernel/entry/common.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 408d28b5179d..f62e1d1b2063 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -143,6 +143,20 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 	return ret;
 }
 
+/**
+ * arch_irqentry_exit_need_resched - Architecture-specific need-resched check
+ *
+ * Invoked from raw_irqentry_exit_cond_resched() to check if resched is needed.
+ * Defaults to returning true.
+ *
+ * The main purpose is to permit an arch to avoid preemption of a task from an IRQ.
+ */
+static inline bool arch_irqentry_exit_need_resched(void);
+
+#ifndef arch_irqentry_exit_need_resched
+static inline bool arch_irqentry_exit_need_resched(void) { return true; }
+#endif
+
 void raw_irqentry_exit_cond_resched(void)
 {
 	if (!preempt_count()) {
@@ -150,7 +164,7 @@ void raw_irqentry_exit_cond_resched(void)
 		rcu_irq_exit_check_preempt();
 		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 			WARN_ON_ONCE(!on_thread_stack());
-		if (need_resched())
+		if (need_resched() && arch_irqentry_exit_need_resched())
 			preempt_schedule_irq();
 	}
 }
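
For reference, the arm64 override this hook enables can be sketched
from the checks arm64 already performs in arm64_preempt_schedule_irq()
in arch/arm64/kernel/entry-common.c; the exact shape in the follow-up
patches may differ:

static inline bool arch_irqentry_exit_need_resched(void)
{
	/*
	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when
	 * GIC priority masking is used the GIC irqchip driver will clear
	 * DAIF.IF using gic_arch_enable_irqs() for normal IRQs. If
	 * anything is set in DAIF we must have handled an NMI, so skip
	 * preemption.
	 */
	if (system_uses_irq_prio_masking() && read_sysreg(daif))
		return false;

	/*
	 * Preempting a task from an IRQ means we leave copies of PSTATE
	 * on the stack. cpufeature's enable calls may modify PSTATE, but
	 * resuming one of these preempted tasks would undo those changes.
	 * Only allow preemption once cpufeatures have been enabled.
	 */
	if (!system_capabilities_finalized())
		return false;

	return true;
}
#define arch_irqentry_exit_need_resched arch_irqentry_exit_need_resched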
--
2.34.1