Message-ID: <20241025100700.3714552-9-ruanjinjie@huawei.com>
Date: Fri, 25 Oct 2024 18:06:49 +0800
From: Jinjie Ruan <ruanjinjie@...wei.com>
To: <oleg@...hat.com>, <linux@...linux.org.uk>, <will@...nel.org>,
<mark.rutland@....com>, <catalin.marinas@....com>, <sstabellini@...nel.org>,
<maz@...nel.org>, <tglx@...utronix.de>, <peterz@...radead.org>,
<luto@...nel.org>, <kees@...nel.org>, <wad@...omium.org>,
<akpm@...ux-foundation.org>, <samitolvanen@...gle.com>, <arnd@...db.de>,
<ojeda@...nel.org>, <rppt@...nel.org>, <hca@...ux.ibm.com>,
<aliceryhl@...gle.com>, <samuel.holland@...ive.com>, <paulmck@...nel.org>,
<aquini@...hat.com>, <petr.pavlu@...e.com>, <ruanjinjie@...wei.com>,
<viro@...iv.linux.org.uk>, <rmk+kernel@...linux.org.uk>, <ardb@...nel.org>,
<wangkefeng.wang@...wei.com>, <surenb@...gle.com>,
<linus.walleij@...aro.org>, <yangyj.ee@...il.com>, <broonie@...nel.org>,
<mbenes@...e.cz>, <puranjay@...nel.org>, <pcc@...gle.com>,
<guohanjun@...wei.com>, <sudeep.holla@....com>,
<Jonathan.Cameron@...wei.com>, <prarit@...hat.com>, <liuwei09@...tc.cn>,
<dwmw@...zon.co.uk>, <oliver.upton@...ux.dev>, <kristina.martsenko@....com>,
<ptosi@...gle.com>, <frederic@...nel.org>, <vschneid@...hat.com>,
<thiago.bauermann@...aro.org>, <joey.gouly@....com>,
<liuyuntao12@...wei.com>, <leobras@...hat.com>,
<linux-kernel@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
<xen-devel@...ts.xenproject.org>
Subject: [PATCH -next v4 08/19] arm64: entry: Rework arm64_preempt_schedule_irq()
Rework arm64_preempt_schedule_irq() so that the decision of whether to
reschedule is made in a separate check function,
arm64_irqentry_exit_need_resched().
No functional changes.
Signed-off-by: Jinjie Ruan <ruanjinjie@...wei.com>
---
arch/arm64/kernel/entry-common.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
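For reference, this is roughly how the reworked helper and its call site
read after the patch, reconstructed from the hunks below (a sketch only:
the in-function comments are condensed and unchanged context is elided):

static inline bool arm64_irqentry_exit_need_resched(void)
{
	if (!need_irq_preemption())
		return false;

	/* thread_info::preempt_count covers both count and need_resched */
	if (READ_ONCE(current_thread_info()->preempt_count) != 0)
		return false;

	/* Anything set in DAIF means we handled an NMI; skip preemption */
	if (system_uses_irq_prio_masking() && read_sysreg(daif))
		return false;

	/* Only allow preemption once cpufeatures have been finalized */
	if (!system_capabilities_finalized())
		return false;

	return true;
}

/* ... and exit_to_kernel_mode() acts on the result: */

	if (arm64_irqentry_exit_need_resched())
		preempt_schedule_irq();
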
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index b57f6dc66115..a3414fb599fa 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -69,10 +69,10 @@ DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
#define need_irq_preemption() (IS_ENABLED(CONFIG_PREEMPTION))
#endif
-static void __sched arm64_preempt_schedule_irq(void)
+static inline bool arm64_irqentry_exit_need_resched(void)
{
if (!need_irq_preemption())
- return;
+ return false;
/*
* Note: thread_info::preempt_count includes both thread_info::count
@@ -80,7 +80,7 @@ static void __sched arm64_preempt_schedule_irq(void)
* preempt_count().
*/
if (READ_ONCE(current_thread_info()->preempt_count) != 0)
- return;
+ return false;
/*
* DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
@@ -89,7 +89,7 @@ static void __sched arm64_preempt_schedule_irq(void)
* DAIF we must have handled an NMI, so skip preemption.
*/
if (system_uses_irq_prio_masking() && read_sysreg(daif))
- return;
+ return false;
/*
* Preempting a task from an IRQ means we leave copies of PSTATE
@@ -99,8 +99,10 @@ static void __sched arm64_preempt_schedule_irq(void)
* Only allow a task to be preempted once cpufeatures have been
* enabled.
*/
- if (system_capabilities_finalized())
- preempt_schedule_irq();
+ if (!system_capabilities_finalized())
+ return false;
+
+ return true;
}
/*
@@ -127,7 +129,8 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs,
return;
}
- arm64_preempt_schedule_irq();
+ if (arm64_irqentry_exit_need_resched())
+ preempt_schedule_irq();
trace_hardirqs_on();
} else {
--
2.34.1