Message-Id: <20250115184318.2854459-1-kan.liang@linux.intel.com>
Date: Wed, 15 Jan 2025 10:43:16 -0800
From: kan.liang@...ux.intel.com
To: peterz@...radead.org,
	mingo@...hat.com,
	acme@...nel.org,
	namhyung@...nel.org,
	irogers@...gle.com,
	adrian.hunter@...el.com,
	linux-kernel@...r.kernel.org,
	linux-perf-users@...r.kernel.org
Cc: ak@...ux.intel.com,
	eranian@...gle.com,
	dapeng1.mi@...ux.intel.com,
	Kan Liang <kan.liang@...ux.intel.com>,
	stable@...r.kernel.org
Subject: [PATCH V9 1/3] perf/x86/intel: Avoid pmu_disable/enable if !cpuc->enabled in sample read

From: Kan Liang <kan.liang@...ux.intel.com>

The WARN_ON(this_cpu_read(cpu_hw_events.enabled)) in
intel_pmu_save_and_restart_reload() is triggered when doing a sample
read of topdown events.

In an NMI handler, cpu_hw_events.enabled is updated to indicate the
status of the core PMU. The generic pmu->pmu_disable_count, which is
updated by the perf_pmu_disable()/perf_pmu_enable() pair, is not
touched. However, that pair is invoked when a sample read occurs in
the NMI handler, so perf_pmu_enable() mistakenly sets cpuc->enabled
again.
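
For reference, the generic pair looks roughly like this (simplified
from kernel/events/core.c). Since the NMI path disables the core PMU
without touching this counter, the pair invoked by the sample read
starts from a count of zero, and its closing perf_pmu_enable() call
re-enables the PMU, setting cpuc->enabled inside the NMI handler:

	void perf_pmu_disable(struct pmu *pmu)
	{
		int *count = this_cpu_ptr(pmu->pmu_disable_count);

		if (!(*count)++)
			pmu->pmu_disable(pmu);
	}

	void perf_pmu_enable(struct pmu *pmu)
	{
		int *count = this_cpu_ptr(pmu->pmu_disable_count);

		if (!--(*count))
			pmu->pmu_enable(pmu);
	}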

Avoid the perf_pmu_disable()/perf_pmu_enable() pair if the core PMU
is already disabled.

Fixes: 7b2c05a15d29 ("perf/x86/intel: Generic support for hardware TopDown metrics")
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
Cc: stable@...r.kernel.org
---

A new patch to fix an issue found on a legacy platform.
(It is not related to the counters snapshotting feature.)

But since it also touches the sample-read code, the patches that
enable the counters snapshotting feature must be applied on top of
this patch. This patch itself can be applied separately.
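
To illustrate the failure mode and the fix outside the kernel, here
is a minimal stand-alone sketch; all names in it are hypothetical
stand-ins (a plain flag models cpuc->enabled and a nesting counter
models pmu->pmu_disable_count), not kernel code:

	#include <assert.h>
	#include <stdio.h>

	static int enabled = 1;		/* models cpuc->enabled */
	static int disable_count;	/* models pmu->pmu_disable_count */

	/* The generic pair: callbacks run only at the outermost level. */
	static void pmu_disable(void) { if (!disable_count++) enabled = 0; }
	static void pmu_enable(void)  { if (!--disable_count) enabled = 1; }

	int main(void)
	{
		/* NMI entry disables the core PMU directly; the generic
		 * nesting counter is not touched and stays at zero. */
		enabled = 0;

		/* Sample read in the NMI handler, before the fix: */
		pmu_disable();		/* count 0 -> 1 */
		pmu_enable();		/* count 1 -> 0, sets enabled = 1 */
		printf("before fix: enabled = %d (WARN_ON fires)\n", enabled);

		/* With the fix: skip the pair when already disabled. */
		enabled = 0;
		int pmu_enabled = enabled;

		if (pmu_enabled)
			pmu_disable();
		if (pmu_enabled)
			pmu_enable();
		assert(enabled == 0);	/* the PMU stays disabled */
		printf("after fix:  enabled = %d\n", enabled);
		return 0;
	}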


 arch/x86/events/intel/core.c | 7 +++++--
 arch/x86/events/intel/ds.c   | 9 ++++++---
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2a2824e9c50d..bce423ad3fad 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2778,15 +2778,18 @@ DEFINE_STATIC_CALL(intel_pmu_update_topdown_event, x86_perf_event_update);
 static void intel_pmu_read_topdown_event(struct perf_event *event)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	int pmu_enabled = cpuc->enabled;
 
 	/* Only need to call update_topdown_event() once for group read. */
 	if ((cpuc->txn_flags & PERF_PMU_TXN_READ) &&
 	    !is_slots_event(event))
 		return;
 
-	perf_pmu_disable(event->pmu);
+	if (pmu_enabled)
+		perf_pmu_disable(event->pmu);
 	static_call(intel_pmu_update_topdown_event)(event);
-	perf_pmu_enable(event->pmu);
+	if (pmu_enabled)
+		perf_pmu_enable(event->pmu);
 }
 
 static void intel_pmu_read_event(struct perf_event *event)
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index ba74e1198328..81b6ec8e824e 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -2096,11 +2096,14 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
 
 void intel_pmu_auto_reload_read(struct perf_event *event)
 {
-	WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));
+	int pmu_enabled = this_cpu_read(cpu_hw_events.enabled);
 
-	perf_pmu_disable(event->pmu);
+	WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));
+	if (pmu_enabled)
+		perf_pmu_disable(event->pmu);
 	intel_pmu_drain_pebs_buffer();
-	perf_pmu_enable(event->pmu);
+	if (pmu_enabled)
+		perf_pmu_enable(event->pmu);
 }
 
 /*
-- 
2.38.1

