Message-ID: <20200630111012.GM4800@hirez.programming.kicks-ass.net>
Date: Tue, 30 Jun 2020 13:10:12 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org, Andy Lutomirski <luto@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 1/2] x86/entry: Use should_resched() in
idtentry_exit_cond_resched()
On Tue, Jun 30, 2020 at 12:22:08PM +0200, Sebastian Andrzej Siewior wrote:
> The TIF_NEED_RESCHED bit is folded on x86 into the preemption counter.
> So instead of checking the preemption counter against zero via
> preempt_count() and later checking the TIF_NEED_RESCHED bit via
> need_resched(), we can use should_resched(), which does both checks in
> one go.
> The functional difference is that we no longer enter the if statement
> when preempt_count == 0 but TIF_NEED_RESCHED is not set.
>
> Use should_resched() instead of need_resched() + preempt_count().
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
> arch/x86/entry/common.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index bd3f14175193c..212382f61b8e4 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -612,13 +612,12 @@ bool noinstr idtentry_enter_cond_rcu(struct pt_regs *regs)
>
> static void idtentry_exit_cond_resched(struct pt_regs *regs, bool may_sched)
> {
> - if (may_sched && !preempt_count()) {
> + if (may_sched && should_resched(0)) {
> /* Sanity check RCU and thread stack */
> rcu_irq_exit_check_preempt();
> if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
> WARN_ON_ONCE(!on_thread_stack());
This was done on purpose; your change avoids hitting this WARN.

The thing is, if we could preempt (but don't necessarily have to) we want
to validate that we're on the thread stack.
> - if (need_resched())
> - preempt_schedule_irq();
> + preempt_schedule_irq();
> }
> /* Covers both tracing and lockdep */
> trace_hardirqs_on();
> --
> 2.27.0
>