Message-Id: <20190625142135.22112-1-kan.liang@linux.intel.com>
Date:   Tue, 25 Jun 2019 07:21:35 -0700
From:   kan.liang@...ux.intel.com
To:     mingo@...hat.com, jolsa@...nel.org, peterz@...radead.org,
        linux-kernel@...r.kernel.org
Cc:     ak@...ux.intel.com, Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH] perf/x86/intel: Fix spurious NMI on fixed counter

From: Kan Liang <kan.liang@...ux.intel.com>

If a user first samples a PEBS event on a fixed counter and then
samples a non-PEBS event on the same fixed counter on Icelake, it
triggers a spurious NMI. For example:

  perf record -e 'cycles:p' -a
  perf record -e 'cycles' -a

The error message for the spurious NMI:

  [June 21 15:38] Uhhuh. NMI received for unknown reason 30 on CPU 2.
  [  +0.000000] Do you have a strange power saving mode enabled?
  [  +0.000000] Dazed and confused, but trying to continue

The issue was introduced by the following commit:

  commit 6f55967ad9d9 ("perf/x86/intel: Fix race in intel_pmu_disable_event()")

The commit moves intel_pmu_pebs_disable() after
intel_pmu_disable_fixed(), which returns immediately. As a result, the
related bit in the PEBS_ENABLE MSR is never cleared for the fixed
counter. When a non-PEBS event later runs on the same fixed counter,
the bit in PEBS_ENABLE is still set, which triggers the spurious NMI.
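
Roughly, the disable path after that commit looks like this
(paraphrased from the code being patched below; the early return for
fixed counters skips the later PEBS disable):

	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
		intel_pmu_disable_fixed(hwc);
		return;			/* fixed counter: bail out here */
	}

	x86_pmu_disable_event(event);

	/* never reached for fixed counters, so its PEBS_ENABLE bit stays set */
	intel_pmu_pebs_disable(event);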

Check and disable PEBS for the fixed counter after intel_pmu_disable_fixed().

Reported-by: Yi, Ammy <ammy.yi@...el.com>
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
Fixes: 6f55967ad9d9 ("perf/x86/intel: Fix race in intel_pmu_disable_event()")
---
 arch/x86/events/intel/core.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 4377bf6a6f82..464316218b77 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2160,12 +2160,10 @@ static void intel_pmu_disable_event(struct perf_event *event)
 	cpuc->intel_ctrl_host_mask &= ~(1ull << hwc->idx);
 	cpuc->intel_cp_status &= ~(1ull << hwc->idx);
 
-	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
+	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL))
 		intel_pmu_disable_fixed(hwc);
-		return;
-	}
-
-	x86_pmu_disable_event(event);
+	else
+		x86_pmu_disable_event(event);
 
 	/*
 	 * Needs to be called after x86_pmu_disable_event,
-- 
2.14.5
