Message-Id: <1547589344-11504-1-git-send-email-longman@redhat.com>
Date: Tue, 15 Jan 2019 16:55:44 -0500
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>
Cc: linux-kernel@...r.kernel.org,
Zhenzhong Duan <zhenzhong.duan@...cle.com>,
James Morse <james.morse@....com>,
Borislav Petkov <bp@...en8.de>,
SRINIVAS <srinivas.eeda@...cle.com>,
Waiman Long <longman@...hat.com>
Subject: [PATCH] locking/qspinlock: Add bug check for exceeding MAX_NODES
On some architectures, it is possible for nested NMIs to take
spinlocks while spinlocks are already held. The 4 per-CPU queue nodes
cover the 4 contexts that can normally nest on a CPU: task, soft IRQ,
hard IRQ and NMI, so a nested NMI taking a contended spinlock would
need a 5th node. Even though the chance of needing more than 4 nested
spinlocks with contention is extremely small, it could still happen
someday and lead to a system panic. What we don't want is silent
memory corruption that causes a system panic somewhere else later. So
add a BUG_ON() check to make sure that a panic caused by this
condition shows the correct root cause.
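
To illustrate the failure mode, here is a minimal user-space sketch
(the names mcs_node/grab_node and the use of assert() in place of
BUG_ON() are illustrative assumptions, not the kernel code) of a fixed
per-CPU node array indexed by a nesting count:

#include <assert.h>

#define MAX_NODES 4	/* task, soft IRQ, hard IRQ, NMI */

struct mcs_node {
	struct mcs_node *next;
	int locked;
};

static struct mcs_node nodes[MAX_NODES];	/* per-CPU in the kernel */
static int count;				/* nesting level on this CPU */

static struct mcs_node *grab_node(void)
{
	int idx = count++;

	/*
	 * Without this check, idx == MAX_NODES would index one slot
	 * past the array and silently corrupt adjacent memory.
	 */
	assert(idx < MAX_NODES);	/* stands in for BUG_ON() */
	return &nodes[idx];
}

int main(void)
{
	int i;

	/* task -> soft IRQ -> hard IRQ -> NMI: all 4 nodes in use */
	for (i = 0; i < 4; i++)
		grab_node();

	/*
	 * A nested NMI taking a contended spinlock would be a 5th
	 * level of nesting; the check fires here instead of letting
	 * corrupted memory crash the system somewhere else later.
	 */
	grab_node();

	return 0;
}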
Signed-off-by: Waiman Long <longman@...hat.com>
---
kernel/locking/qspinlock.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 8a8c3c2..f823221 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -412,6 +412,16 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
 
+	/*
+	 * 4 nodes are allocated based on the assumption that there will
+	 * not be nested NMIs taking spinlocks. That may not be true on
+	 * some architectures even though the chance of needing more
+	 * than 4 nodes is still extremely small. Add a bug check here
+	 * to make sure there won't be silent corruption in case this
+	 * condition ever happens.
+	 */
+	BUG_ON(idx >= MAX_NODES);
+
 	node = grab_mcs_node(node, idx);
 
 	/*
--
1.8.3.1