Message-Id: <1557842534-4266-10-git-send-email-ricardo.neri-calderon@linux.intel.com>
Date: Tue, 14 May 2019 07:02:02 -0700
From: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...e.de>
Cc: Ashok Raj <ashok.raj@...el.com>, Joerg Roedel <joro@...tes.org>,
Andi Kleen <andi.kleen@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
Ricardo Neri <ricardo.neri@...el.com>,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
Subject: [RFC PATCH v3 09/21] x86/nmi: Add a NMI_WATCHDOG NMI handler category
Add NMI_WATCHDOG as a new category of NMI handler. This new category
is to be used with the HPET-based hardlockup detector. This detector
does not have a direct way of checking if the HPET timer is the source of
the NMI. Instead, it indirectly estimates it using the Time Stamp Counter.
Therefore, we may get false positives if another NMI occurs within the
estimated time window. For this reason, we want the handler of the
detector to be called after all the NMI_LOCAL handlers. A simple way
of achieving this is with a new NMI handler category.
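As an illustration (not part of this patch), a detector could register its
handler in the new category via the existing register_nmi_handler() helper.
The handler and helper names below (hpet_hld_nmi_handler, is_hpet_hld_nmi,
inspect_for_hardlockups) are hypothetical sketches, not code from this series:

	/*
	 * Hedged sketch: a hardlockup detector handler registered in the
	 * new NMI_WATCHDOG category so it runs after all NMI_LOCAL handlers.
	 */
	static int hpet_hld_nmi_handler(unsigned int type, struct pt_regs *regs)
	{
		/* Hypothetical check that the HPET timer caused this NMI. */
		if (!is_hpet_hld_nmi())
			return NMI_DONE;

		/* Hypothetical hardlockup inspection. */
		inspect_for_hardlockups(regs);
		return NMI_HANDLED;
	}

	/* Registration in the new category. */
	register_nmi_handler(NMI_WATCHDOG, hpet_hld_nmi_handler, 0, "hpet_hld");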
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
---
arch/x86/include/asm/nmi.h | 1 +
arch/x86/kernel/nmi.c | 10 ++++++++++
2 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/nmi.h b/arch/x86/include/asm/nmi.h
index 75ded1d13d98..75aa98313cde 100644
--- a/arch/x86/include/asm/nmi.h
+++ b/arch/x86/include/asm/nmi.h
@@ -29,6 +29,7 @@ enum {
NMI_UNKNOWN,
NMI_SERR,
NMI_IO_CHECK,
+ NMI_WATCHDOG,
NMI_MAX
};
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 3755d0310026..a43213f0ab26 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -62,6 +62,10 @@ static struct nmi_desc nmi_desc[NMI_MAX] =
.lock = __RAW_SPIN_LOCK_UNLOCKED(&nmi_desc[3].lock),
.head = LIST_HEAD_INIT(nmi_desc[3].head),
},
+ {
+ .lock = __RAW_SPIN_LOCK_UNLOCKED(&nmi_desc[4].lock),
+ .head = LIST_HEAD_INIT(nmi_desc[4].head),
+ },
};
@@ -172,6 +176,8 @@ int __register_nmi_handler(unsigned int type, struct nmiaction *action)
*/
WARN_ON_ONCE(type == NMI_SERR && !list_empty(&desc->head));
WARN_ON_ONCE(type == NMI_IO_CHECK && !list_empty(&desc->head));
+ WARN_ON_ONCE(type == NMI_WATCHDOG && !list_empty(&desc->head));
+
/*
* some handlers need to be executed first otherwise a fake
@@ -382,6 +388,10 @@ static void default_do_nmi(struct pt_regs *regs)
}
raw_spin_unlock(&nmi_reason_lock);
+ handled = nmi_handle(NMI_WATCHDOG, regs);
+ if (handled == NMI_HANDLED)
+ return;
+
/*
* Only one NMI can be latched at a time. To handle
* this we may process multiple nmi handlers at once to
--
2.17.1