Message-ID: <20190116164725.GC1910@brain-police>
Date: Wed, 16 Jan 2019 16:47:26 +0000
From: Will Deacon <will.deacon@....com>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Zhenzhong Duan <zhenzhong.duan@...cle.com>,
James Morse <james.morse@....com>,
Borislav Petkov <bp@...en8.de>,
SRINIVAS <srinivas.eeda@...cle.com>
Subject: Re: [PATCH] locking/qspinlock: Add bug check for exceeding MAX_NODES
On Tue, Jan 15, 2019 at 04:55:44PM -0500, Waiman Long wrote:
> On some architectures, nested NMIs can themselves take spinlocks in a
> nested fashion. Even though the chance of having more than 4 nested
> spinlocks under contention is extremely small, it could still happen
> one day and lead to a system panic.
>
> What we don't want is silent corruption followed by a system panic
> somewhere else. So add a BUG_ON() check to make sure that a system
> panic caused by this condition shows the correct root cause.
>
> Signed-off-by: Waiman Long <longman@...hat.com>
> ---
> kernel/locking/qspinlock.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 8a8c3c2..f823221 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -412,6 +412,16 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> idx = node->count++;
> tail = encode_tail(smp_processor_id(), idx);
>
> + /*
> + * 4 nodes are allocated based on the assumption that there will
> + * not be nested NMIs taking spinlocks. That may not be true in
> + * some architectures even though the chance of needing more than
> + * 4 nodes will still be extremely unlikely. Adding a bug check
> + * here to make sure there won't be a silent corruption in case
> + * this condition happens.
> + */
> + BUG_ON(idx >= MAX_NODES);
> +
Hmm, I really don't like the idea of putting a BUG_ON() on the spin_lock()
path. I'd prefer it if (a) we didn't add extra conditional code for the
common case and (b) didn't bring down the machine. Could we emit a
lockdep-style splat, instead?
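Roughly (just a sketch of the idea, not a tested patch -- the trylock
fallback is only one option for keeping the lock functional after the
warning), something along the lines of:

	if (unlikely(idx >= MAX_NODES)) {
		/*
		 * Warn once, then avoid touching a node we don't own by
		 * falling back to spinning on the lock word directly.
		 */
		WARN_ON_ONCE(1);
		while (!queued_spin_trylock(lock))
			cpu_relax();
		goto release;	/* assumes the existing node-release path */
	}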
Will