Message-ID: <159593340996.4006.11350923667085911342.tip-bot2@tip-bot2>
Date: Tue, 28 Jul 2020 10:50:09 -0000
From: "tip-bot2 for peterz@...radead.org" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: locking/nmi] locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs
The following commit has been merged into the locking/nmi branch of tip:
Commit-ID: ed00495333ccc80fc8fb86fb43773c3c2a499466
Gitweb: https://git.kernel.org/tip/ed00495333ccc80fc8fb86fb43773c3c2a499466
Author: peterz@...radead.org <peterz@...radead.org>
AuthorDate: Mon, 27 Jul 2020 14:48:52 +02:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Mon, 27 Jul 2020 15:13:29 +02:00
locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs
Prior to commit:
859d069ee1dd ("lockdep: Prepare for NMI IRQ state tracking")
IRQ state tracking was disabled in NMIs due to nmi_enter()
doing lockdep_off() -- with the obvious requirement that NMI entry
call nmi_enter() before trace_hardirqs_off().
[ AFAICT, PowerPC and SH violate this order on their NMI entry ]
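For illustration, a minimal kernel-style sketch of the ordering this
relies on (the handler name is hypothetical; nmi_enter()/nmi_exit() and
trace_hardirqs_off() are the generic helpers):

  /*
   * Hypothetical architecture NMI entry, sketched with generic helpers
   * only: nmi_enter() must run before any IRQ-state tracking so that
   * lockdep can see it is in NMI context (previously via lockdep_off(),
   * with this patch via in_nmi()).
   */
  void arch_handle_nmi(struct pt_regs *regs)
  {
          nmi_enter();                  /* marks NMI context        */
          trace_hardirqs_off();         /* tracking is now safe     */

          /* ... actual NMI handling ... */

          nmi_exit();
  }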
However, that commit explicitly changed lockdep_hardirqs_*() to ignore
lockdep_off(), which breaks every architecture that has irq-tracing in
its NMI entry and hasn't been fixed up (x86 being the only one fixed
at this point).
The reason for this change is that by ignoring lockdep_off() we can:

 - get rid of 'current->lockdep_recursion' in lockdep_assert_irqs*(),
   which was going to give header-recursion issues with the seqlock
   rework (see the sketch after this list);

 - allow these lockdep_assert_*() macros to function in NMI context.
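For the first point, a rough sketch of the shape such an assertion can
take once it keys off per-CPU tracking state instead of
current->lockdep_recursion (illustrative; the exact kernel definition
differs in detail):

  /*
   * Illustrative only: checking the per-CPU hardirq tracking state
   * avoids dereferencing current (no task_struct / sched.h needed in
   * headers) and keeps working when the assert fires in NMI context.
   */
  #define lockdep_assert_irqs_disabled()                                 \
  do {                                                                   \
          WARN_ON_ONCE(debug_locks && this_cpu_read(hardirqs_enabled));  \
  } while (0)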
Restore the previous state of things and allow an architecture to opt
in to the NMI IRQ tracking support; however, instead of relying on
lockdep_off(), rely on in_nmi(). Both are part of nmi_enter(), so the
overall entry ordering doesn't need to change.
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Link: https://lore.kernel.org/r/20200727124852.GK119549@hirez.programming.kicks-ass.net
---
arch/x86/Kconfig.debug | 3 +++
kernel/locking/lockdep.c | 8 +++++++-
lib/Kconfig.debug | 6 ++++++
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
index 0dd319e..ee1d3c5 100644
--- a/arch/x86/Kconfig.debug
+++ b/arch/x86/Kconfig.debug
@@ -3,6 +3,9 @@
config TRACE_IRQFLAGS_SUPPORT
def_bool y
+config TRACE_IRQFLAGS_NMI_SUPPORT
+ def_bool y
+
config EARLY_PRINTK_USB
bool
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index d595623..8b0b28b 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3712,6 +3712,9 @@ void noinstr lockdep_hardirqs_on(unsigned long ip)
* and not rely on hardware state like normal interrupts.
*/
if (unlikely(in_nmi())) {
+ if (!IS_ENABLED(CONFIG_TRACE_IRQFLAGS_NMI))
+ return;
+
/*
* Skip:
* - recursion check, because NMI can hit lockdep;
@@ -3773,7 +3776,10 @@ void noinstr lockdep_hardirqs_off(unsigned long ip)
* they will restore the software state. This ensures the software
* state is consistent inside NMIs as well.
*/
- if (unlikely(!in_nmi() && (current->lockdep_recursion & LOCKDEP_RECURSION_MASK)))
+	if (in_nmi()) {
+ if (!IS_ENABLED(CONFIG_TRACE_IRQFLAGS_NMI))
+ return;
+	} else if (current->lockdep_recursion & LOCKDEP_RECURSION_MASK)
return;
/*
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 9ad9210..fa964b5 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1325,11 +1325,17 @@ config WW_MUTEX_SELFTEST
endmenu # lock debugging
config TRACE_IRQFLAGS
+ depends on TRACE_IRQFLAGS_SUPPORT
bool
help
Enables hooks to interrupt enabling and disabling for
either tracing or lock debugging.
+config TRACE_IRQFLAGS_NMI
+ def_bool y
+ depends on TRACE_IRQFLAGS
+ depends on TRACE_IRQFLAGS_NMI_SUPPORT
+
config STACKTRACE
bool "Stack backtrace support"
depends on STACKTRACE_SUPPORT