Message-ID: <20200313035201.GB190951@google.com>
Date: Thu, 12 Mar 2020 23:52:01 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: paulmck@...nel.org
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com, mingo@...nel.org, jiangshanlai@...il.com,
dipankar@...ibm.com, akpm@...ux-foundation.org,
mathieu.desnoyers@...icios.com, josh@...htriplett.org,
tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, "# 5.5.x" <stable@...r.kernel.org>
Subject: Re: [PATCH RFC tip/core/rcu 1/2] rcu: Don't acquire lock in NMI
handler in rcu_nmi_enter_common()
On Thu, Mar 12, 2020 at 07:40:45PM -0700, paulmck@...nel.org wrote:
> From: "Paul E. McKenney" <paulmck@...nel.org>
>
> The rcu_nmi_enter_common() function can be invoked both in interrupt
> and NMI handlers. If it is invoked from process context (as opposed
> to userspace or idle context) on a nohz_full CPU, it might acquire the
> CPU's leaf rcu_node structure's ->lock. Because this lock is held only
> with interrupts disabled, an interrupt handler can never interrupt a
> CPU that is holding it, so acquiring the lock from an interrupt handler
> is safe. An NMI, however, is not masked by disabled interrupts, so
> acquiring the lock from an NMI handler can result in self-deadlock.
>
> This commit therefore adds "irq" to the "if" condition so as to only
> acquire the ->lock from irq handlers or process context, never from
> an NMI handler.
I think Peter's new lockdep changes for NMI would also catch this issue.
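To make the deadlock concrete, here is a minimal user-space analogy
(a hypothetical illustration only, not the kernel code path: a signal
handler stands in for the NMI, and a pthread mutex stands in for the
rcu_node structure's ->lock):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the NMI handler: it tries to take the lock that the
 * interrupted context on this very thread already holds.  (Taking a
 * mutex in a signal handler is not async-signal-safe, which is
 * exactly the point of the analogy.) */
static void nmi_like_handler(int sig)
{
	pthread_mutex_lock(&lock);	/* blocks forever: self-deadlock */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	signal(SIGUSR1, nmi_like_handler);

	pthread_mutex_lock(&lock);	/* like holding ->lock with irqs off */
	raise(SIGUSR1);			/* like an NMI: irqs-off does not mask it */
	pthread_mutex_unlock(&lock);	/* never reached */

	printf("never printed\n");
	return 0;
}

Build with -pthread; the program hangs in the handler, which is the
point.  With the "irq &&" check added below, the NMI path never even
attempts the acquisition, so the analogous situation cannot arise.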
>
> Fixes: 5b14557b073c ("rcu: Avoid tick_dep_set_cpu() misordering")
Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
thanks,
- Joel
> Reported-by: Thomas Gleixner <tglx@...utronix.de>
> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> Cc: <stable@...r.kernel.org> # 5.5.x
> ---
> kernel/rcu/tree.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index d3f52c3..f7d3e48 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -825,7 +825,7 @@ static __always_inline void rcu_nmi_enter_common(bool irq)
> rcu_cleanup_after_idle();
>
> incby = 1;
> - } else if (tick_nohz_full_cpu(rdp->cpu) &&
> + } else if (irq && tick_nohz_full_cpu(rdp->cpu) &&
> rdp->dynticks_nmi_nesting == DYNTICK_IRQ_NONIDLE &&
> READ_ONCE(rdp->rcu_urgent_qs) &&
> !READ_ONCE(rdp->rcu_forced_tick)) {
> --
> 2.9.5
>