Date:   Wed, 27 Feb 2019 08:05:16 -0800
From:   Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...e.de>
Cc:     Ashok Raj <ashok.raj@...el.com>, Andi Kleen <andi.kleen@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>, x86@...nel.org,
        linux-kernel@...r.kernel.org,
        Ricardo Neri <ricardo.neri@...el.com>,
        Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
        "H. Peter Anvin" <hpa@...or.com>, Tony Luck <tony.luck@...el.com>,
        Clemens Ladisch <clemens@...isch.de>,
        Arnd Bergmann <arnd@...db.de>,
        Philippe Ombredanne <pombredanne@...b.com>,
        Kate Stewart <kstewart@...uxfoundation.org>,
        "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
        Mimi Zohar <zohar@...ux.ibm.com>,
        Jan Kiszka <jan.kiszka@...mens.com>,
        Nick Desaulniers <ndesaulniers@...gle.com>,
        Masahiro Yamada <yamada.masahiro@...ionext.com>,
        Nayna Jain <nayna@...ux.ibm.com>
Subject: [RFC PATCH v2 12/14] x86/watchdog/hardlockup/hpet: Determine if HPET timer caused NMI

The only direct method to determine whether an HPET timer caused an
interrupt is to read the Interrupt Status register. Unfortunately,
reading HPET registers is slow and, therefore, it is not recommended to
read them while in NMI context. Furthermore, status is not available if
the interrupt is generated via the Front Side Bus.

An indirect method is to compute the expected value of the time-stamp
counter at the time of the interrupt and verify that its actual value
is within a range of the expected value. Since the hardlockup detector
operates in seconds, high precision is not needed. This implementation
considers that the HPET caused the NMI if the time-stamp counter reads
the expected value +/- 1.5%. This value is selected as it is roughly
equivalent to 1/64 and the division can be performed using bit shifts.
Experimentally, the error in the estimation is consistently less than
1%.
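
For illustration, here is a minimal user-space sketch of the tolerance
arithmetic. The watchdog_thresh and tsc_khz values below are made-up
examples, not taken from this patch:

  #include <stdio.h>

  int main(void)
  {
          /* Illustrative values only: 10 s watchdog period, 2 GHz TSC. */
          unsigned long long watchdog_thresh = 10;   /* seconds */
          unsigned long long tsc_khz = 2000000;      /* kHz, i.e. 2 GHz */

          /* Expected TSC ticks elapsed when the HPET timer fires. */
          unsigned long long tsc_delta = watchdog_thresh * tsc_khz * 1000ULL;

          /* Tolerance: delta / 64 (~1.56%), computed with a bit shift. */
          unsigned long long tsc_error = tsc_delta >> 6;

          printf("tsc_delta: %llu ticks\n", tsc_delta);
          printf("tsc_error: %llu ticks (%.2f%% of the delta)\n",
                 tsc_error, 100.0 * (double)tsc_error / (double)tsc_delta);

          return 0;
  }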

Also, only read the time-stamp counter of the handling CPU (the one
targeted by the HPET timer). This avoids variability of the time-stamp
counter across CPUs.

Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Ashok Raj <ashok.raj@...el.com>
Cc: Andi Kleen <andi.kleen@...el.com>
Cc: Tony Luck <tony.luck@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Clemens Ladisch <clemens@...isch.de>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Philippe Ombredanne <pombredanne@...b.com>
Cc: Kate Stewart <kstewart@...uxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>
Cc: Mimi Zohar <zohar@...ux.ibm.com>
Cc: Jan Kiszka <jan.kiszka@...mens.com>
Cc: Nick Desaulniers <ndesaulniers@...gle.com>
Cc: Masahiro Yamada <yamada.masahiro@...ionext.com>
Cc: Nayna Jain <nayna@...ux.ibm.com>
Cc: "Ravi V. Shankar" <ravi.v.shankar@...el.com>
Cc: x86@...nel.org
Suggested-by: Andi Kleen <andi.kleen@...el.com> 
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
---
 arch/x86/include/asm/hpet.h         |  2 ++
 arch/x86/kernel/watchdog_hld_hpet.c | 28 +++++++++++++++++++++++++---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/hpet.h b/arch/x86/include/asm/hpet.h
index 15dc3b576496..09763340c911 100644
--- a/arch/x86/include/asm/hpet.h
+++ b/arch/x86/include/asm/hpet.h
@@ -123,6 +123,8 @@ struct hpet_hld_data {
 	u32		num;
 	u32		flags;
 	u64		ticks_per_second;
+	u64		tsc_next;
+	u64		tsc_next_error;
 	u32		handling_cpu;
 	struct cpumask	cpu_monitored_mask;
 	struct msi_msg	msi_msg;
diff --git a/arch/x86/kernel/watchdog_hld_hpet.c b/arch/x86/kernel/watchdog_hld_hpet.c
index cfa284da4bf6..65b4699f249a 100644
--- a/arch/x86/kernel/watchdog_hld_hpet.c
+++ b/arch/x86/kernel/watchdog_hld_hpet.c
@@ -55,6 +55,11 @@ static inline void set_comparator(struct hpet_hld_data *hdata,
  *
  * Reprogram the timer to expire within watchdog_thresh seconds in the future.
  *
+ * Also compute the expected value of the time-stamp counter at the time of
+ * expiration as well as a deviation from the expected value. The maximum
+ * deviation is ~1.5%, which can be computed easily by shifting right by 6
+ * positions the delta between the current and expected time-stamp values.
+ *
  * Returns:
  *
  * None
@@ -62,7 +67,18 @@ static inline void set_comparator(struct hpet_hld_data *hdata,
 static void kick_timer(struct hpet_hld_data *hdata, bool force)
 {
 	bool kick_needed = force || !(hdata->flags & HPET_DEV_PERI_CAP);
-	unsigned long new_compare, count;
+	unsigned long tsc_curr, tsc_delta, new_compare, count;
+
+	/* Start by obtaining the current TSC and HPET counts. */
+	tsc_curr = rdtsc();
+
+	if (kick_needed)
+		count = get_count();
+
+	tsc_delta = (unsigned long)watchdog_thresh * (unsigned long)tsc_khz
+		    * 1000L;
+	hdata->tsc_next = tsc_curr + tsc_delta;
+	hdata->tsc_next_error = tsc_delta >> 6;
 
 	/*
 	 * Update the comparator in increments of watch_thresh seconds relative
@@ -74,8 +90,6 @@ static void kick_timer(struct hpet_hld_data *hdata, bool force)
 	 */
 
 	if (kick_needed) {
-		count = get_count();
-
 		new_compare = count + watchdog_thresh * hdata->ticks_per_second;
 
 		set_comparator(hdata, new_compare);
@@ -147,6 +161,14 @@ static void set_periodic(struct hpet_hld_data *hdata)
  */
 static bool is_hpet_wdt_interrupt(struct hpet_hld_data *hdata)
 {
+	if (smp_processor_id() == hdata->handling_cpu) {
+		unsigned long tsc_curr;
+
+		tsc_curr = rdtsc();
+		if (abs(tsc_curr - hdata->tsc_next) < hdata->tsc_next_error)
+			return true;
+	}
+
 	return false;
 }
 
-- 
2.17.1
