Message-Id: <20210504190526.22347-7-ricardo.neri-calderon@linux.intel.com>
Date: Tue, 4 May 2021 12:05:16 -0700
From: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...e.de>
Cc: "H. Peter Anvin" <hpa@...or.com>, Ashok Raj <ashok.raj@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>,
Nicholas Piggin <npiggin@...il.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Stephane Eranian <eranian@...gle.com>,
Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
Ricardo Neri <ricardo.neri@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
Andi Kleen <andi.kleen@...el.com>
Subject: [RFC PATCH v5 06/16] x86/nmi: Add an NMI_WATCHDOG NMI handler category
Add NMI_WATCHDOG as a new category of NMI handler. This new category
is to be used with the HPET-based hardlockup detector. This detector
does not have a direct way of checking whether the HPET timer is the
source of the NMI. Instead, it indirectly estimates it using the
time-stamp counter. Therefore, false positives may occur if another NMI
arrives within the estimated time window. For this reason, the handler
of the detector should be called after all the NMI_LOCAL handlers. A
simple way of achieving this is with a new NMI handler category.
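
For context only (not part of this patch): a detector could hook into the
new category through the existing register_nmi_handler() interface, as in
the rough sketch below. hpet_hld_is_source(), hpet_hld_nmi_handler() and
the "hpet_hld" name are hypothetical placeholders; only the NMI_WATCHDOG
type comes from this patch.

  #include <linux/init.h>
  #include <linux/types.h>
  #include <linux/ptrace.h>
  #include <asm/nmi.h>

  /* Hypothetical helper: stands in for the detector's TSC-window check. */
  static bool hpet_hld_is_source(void)
  {
          return false;
  }

  static int hpet_hld_nmi_handler(unsigned int type, struct pt_regs *regs)
  {
          /* Not from the HPET timer: let other handlers inspect the NMI. */
          if (!hpet_hld_is_source())
                  return NMI_DONE;

          /* A real detector would check for a hardlockup here. */
          return NMI_HANDLED;
  }

  static int __init hpet_hld_init(void)
  {
          /* The new category is processed only after all NMI_LOCAL handlers. */
          return register_nmi_handler(NMI_WATCHDOG, hpet_hld_nmi_handler,
                                      0, "hpet_hld");
  }
  early_initcall(hpet_hld_init);

Keeping the detector in its own category, handled after NMI_LOCAL in
default_do_nmi(), means a perf NMI that happens to fall inside the
estimated TSC window is consumed by its own handler first, which reduces
the false positives described above.
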
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Ashok Raj <ashok.raj@...el.com>
Cc: Andi Kleen <andi.kleen@...el.com>
Cc: Tony Luck <tony.luck@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Ravi V. Shankar" <ravi.v.shankar@...el.com>
Cc: x86@...nel.org
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
---
Changes since v4:
* None
Changes since v3:
* None
Changes since v2:
* Introduced this patch.
Changes since v1:
* N/A
---
 arch/x86/include/asm/nmi.h |  1 +
 arch/x86/kernel/nmi.c      | 10 ++++++++++
 2 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/nmi.h b/arch/x86/include/asm/nmi.h
index 1cb9c17a4cb4..4a0d5b562c91 100644
--- a/arch/x86/include/asm/nmi.h
+++ b/arch/x86/include/asm/nmi.h
@@ -28,6 +28,7 @@ enum {
         NMI_UNKNOWN,
         NMI_SERR,
         NMI_IO_CHECK,
+        NMI_WATCHDOG,
         NMI_MAX
 };
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index bf250a339655..5016bc45e16c 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -61,6 +61,10 @@ static struct nmi_desc nmi_desc[NMI_MAX] =
.lock = __RAW_SPIN_LOCK_UNLOCKED(&nmi_desc[3].lock),
.head = LIST_HEAD_INIT(nmi_desc[3].head),
},
+ {
+ .lock = __RAW_SPIN_LOCK_UNLOCKED(&nmi_desc[4].lock),
+ .head = LIST_HEAD_INIT(nmi_desc[4].head),
+ },
};
@@ -168,6 +172,8 @@ int __register_nmi_handler(unsigned int type, struct nmiaction *action)
          */
         WARN_ON_ONCE(type == NMI_SERR && !list_empty(&desc->head));
         WARN_ON_ONCE(type == NMI_IO_CHECK && !list_empty(&desc->head));
+        WARN_ON_ONCE(type == NMI_WATCHDOG && !list_empty(&desc->head));
+
         /*
          * some handlers need to be executed first otherwise a fake
@@ -380,6 +386,10 @@ static noinstr void default_do_nmi(struct pt_regs *regs)
         }
         raw_spin_unlock(&nmi_reason_lock);
+        handled = nmi_handle(NMI_WATCHDOG, regs);
+        if (handled == NMI_HANDLED)
+                return;
+
         /*
          * Only one NMI can be latched at a time. To handle
          * this we may process multiple nmi handlers at once to
--
2.17.1