Message-Id: <20230421184529.3320912-2-kan.liang@linux.intel.com>
Date:   Fri, 21 Apr 2023 11:45:29 -0700
From:   kan.liang@...ux.intel.com
To:     peterz@...radead.org, mingo@...hat.com,
        linux-kernel@...r.kernel.org
Cc:     eranian@...gle.com, ak@...ux.intel.com,
        Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH V4 2/2] perf/x86/intel/ds: Delay the threshold update

From: Kan Liang <kan.liang@...ux.intel.com>

For adaptive PEBS, the update of pebs_record_size has already been delayed
to the point right before the new pebs_data_cfg takes effect. However, the
DS threshold is still updated in the event_add stage. Since the threshold
is calculated from pebs_record_size, it may be computed from stale data at
that point. The value is corrected later in the event_enable stage, so
there is no real harm, but the logic is convoluted and hard to follow.

Move the threshold update to the event_enable stage, where all the
configuration has been settled.

Steal the highest bit of cpuc->pebs_data_cfg to track whether a threshold
update is required; the threshold only needs to be updated once.

It's possible that the first event is eligible for large PEBS while the
second event is not. The current perf implementation may then update the
threshold twice in the event_add stage. This patch also improves such
cases by avoiding the extra update.
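
In outline, the new flow is (a simplified sketch of what the diff below
does, not additional code):

	/* event_add path, pebs_update_state(): only record that a
	 * threshold update is needed, using the stolen SW bit.
	 */
	if (cpuc->n_pebs == 1)
		cpuc->pebs_data_cfg = PEBS_UPDATE_DS_SW;
	...
	cpuc->pebs_data_cfg |= pebs_data_cfg | PEBS_UPDATE_DS_SW;

	/* event_enable path, intel_pmu_pebs_enable(): the SW bit is
	 * stripped before the MSR write, and the threshold is then
	 * recomputed exactly once.
	 */
	wrmsrl(MSR_PEBS_DATA_CFG, get_pebs_datacfg_hw(cpuc->pebs_data_cfg));
	if (cpuc->pebs_data_cfg & PEBS_UPDATE_DS_SW) {
		cpuc->pebs_data_cfg &= ~PEBS_UPDATE_DS_SW;
		pebs_update_threshold(cpuc);
	}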

No functional change.

Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---

This is a cleanup patch to address the comment at
https://lore.kernel.org/lkml/20230414102908.GC83892@hirez.programming.kicks-ass.net/
It doesn't fix any real issue; it only makes the logic clearer and more
consistent.
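
For reviewers, the SW-flag handling can be exercised in isolation; here is a
minimal standalone userspace sketch of the set/strip/consume-once pattern
(illustrative only; the names mirror the patch, but nothing below is kernel
code):

	#include <stdint.h>
	#include <stdio.h>

	#define BIT_ULL(n)		(1ULL << (n))
	#define PEBS_UPDATE_DS_SW	BIT_ULL(63)	/* SW-only flag in the top bit */

	/* Mirrors get_pebs_datacfg_hw(): strip the SW flag before "HW" sees it */
	static inline uint64_t cfg_hw(uint64_t config)
	{
		return config & ~PEBS_UPDATE_DS_SW;
	}

	int main(void)
	{
		uint64_t cfg = PEBS_UPDATE_DS_SW;	/* first PEBS event: request update */

		cfg |= 0x1ULL | PEBS_UPDATE_DS_SW;	/* new data cfg bits, request again */

		/* The value that would reach the MSR never contains the SW flag. */
		printf("hw config: %#llx\n", (unsigned long long)cfg_hw(cfg));

		/* Consume the update request exactly once. */
		if (cfg & PEBS_UPDATE_DS_SW) {
			cfg &= ~PEBS_UPDATE_DS_SW;
			printf("threshold updated once\n");
		}
		return 0;
	}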

 arch/x86/events/intel/ds.c        | 34 ++++++++++++-------------------
 arch/x86/include/asm/perf_event.h |  8 ++++++++
 2 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 94043232991c..554a58318787 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1229,12 +1229,14 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
 		  struct perf_event *event, bool add)
 {
 	struct pmu *pmu = event->pmu;
+
 	/*
 	 * Make sure we get updated with the first PEBS
 	 * event. It will trigger also during removal, but
 	 * that does not hurt:
 	 */
-	bool update = cpuc->n_pebs == 1;
+	if (cpuc->n_pebs == 1)
+		cpuc->pebs_data_cfg = PEBS_UPDATE_DS_SW;
 
 	if (needed_cb != pebs_needs_sched_cb(cpuc)) {
 		if (!needed_cb)
@@ -1242,7 +1244,7 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
 		else
 			perf_sched_cb_dec(pmu);
 
-		update = true;
+		cpuc->pebs_data_cfg |= PEBS_UPDATE_DS_SW;
 	}
 
 	/*
@@ -1252,28 +1254,15 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
 	if (x86_pmu.intel_cap.pebs_baseline && add) {
 		u64 pebs_data_cfg;
 
-		/* Clear pebs_data_cfg for first PEBS. */
-		if (cpuc->n_pebs == 1)
-			cpuc->pebs_data_cfg = 0;
-
 		pebs_data_cfg = pebs_update_adaptive_cfg(event);
 
 		/*
 		 * Only update the pebs_data_cfg here. The pebs_record_size
 		 * will be updated later when the new pebs_data_cfg takes effect.
 		 */
-		if (pebs_data_cfg & ~cpuc->pebs_data_cfg)
-			cpuc->pebs_data_cfg |= pebs_data_cfg;
+		if (pebs_data_cfg & ~get_pebs_datacfg_hw(cpuc->pebs_data_cfg))
+			cpuc->pebs_data_cfg |= pebs_data_cfg | PEBS_UPDATE_DS_SW;
 	}
-
-	/*
-	 * For the adaptive PEBS, the threshold will be updated later
-	 * when the new pebs_data_cfg takes effect.
-	 * The threshold may not be accurate before that, but that
-	 * does not hurt.
-	 */
-	if (update)
-		pebs_update_threshold(cpuc);
 }
 
 void intel_pmu_pebs_add(struct perf_event *event)
@@ -1355,7 +1344,7 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 
 	if (x86_pmu.intel_cap.pebs_baseline) {
 		hwc->config |= ICL_EVENTSEL_ADAPTIVE;
-		if (cpuc->pebs_data_cfg != cpuc->active_pebs_data_cfg) {
+		if (get_pebs_datacfg_hw(cpuc->pebs_data_cfg) != cpuc->active_pebs_data_cfg) {
 			/*
 			 * drain_pebs() assumes uniform record size;
 			 * hence we need to drain when changing said
@@ -1363,11 +1352,14 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 			 */
 			intel_pmu_drain_large_pebs(cpuc);
 			adaptive_pebs_record_size_update();
-			pebs_update_threshold(cpuc);
-			wrmsrl(MSR_PEBS_DATA_CFG, cpuc->pebs_data_cfg);
-			cpuc->active_pebs_data_cfg = cpuc->pebs_data_cfg;
+			wrmsrl(MSR_PEBS_DATA_CFG, get_pebs_datacfg_hw(cpuc->pebs_data_cfg));
+			cpuc->active_pebs_data_cfg = get_pebs_datacfg_hw(cpuc->pebs_data_cfg);
 		}
 	}
+	if (cpuc->pebs_data_cfg & PEBS_UPDATE_DS_SW) {
+		cpuc->pebs_data_cfg &= ~PEBS_UPDATE_DS_SW;
+		pebs_update_threshold(cpuc);
+	}
 
 	if (idx >= INTEL_PMC_IDX_FIXED) {
 		if (x86_pmu.intel_cap.pebs_format < 5)
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8fc15ed5e60b..259a2a8afe2b 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -121,6 +121,14 @@
 #define PEBS_DATACFG_LBRS	BIT_ULL(3)
 #define PEBS_DATACFG_LBR_SHIFT	24
 
+/* Steal the highest bit of pebs_data_cfg for SW usage */
+#define PEBS_UPDATE_DS_SW	BIT_ULL(63)
+
+static inline u64 get_pebs_datacfg_hw(u64 config)
+{
+	return config & ~PEBS_UPDATE_DS_SW;
+}
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
-- 
2.35.1
