Message-ID: <20241217133707.GB2354@noisy.programming.kicks-ass.net>
Date: Tue, 17 Dec 2024 14:37:07 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: kan.liang@...ux.intel.com
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
	irogers@...gle.com, linux-kernel@...r.kernel.org,
	linux-perf-users@...r.kernel.org, ak@...ux.intel.com,
	eranian@...gle.com
Subject: Re: [PATCH V5 4/4] perf/x86/intel: Support PEBS counters snapshotting

On Mon, Dec 16, 2024 at 12:45:05PM -0800, kan.liang@...ux.intel.com wrote:

> @@ -2049,6 +2102,40 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
>  		}
>  	}
>  
> +	if (format_group & (PEBS_DATACFG_CNTR | PEBS_DATACFG_METRICS)) {
> +		struct pebs_cntr_header *cntr = next_record;
> +		int bit;
> +
> +		next_record += sizeof(struct pebs_cntr_header);
> +
> +		for_each_set_bit(bit, (unsigned long *)&cntr->cntr, INTEL_PMC_MAX_GENERIC) {
> +			x86_perf_event_update(cpuc->events[bit], (u64 *)next_record);
> +			next_record += sizeof(u64);
> +		}

I still don't much like any of this -- the next_record value might be in
the past relative to what is already in the event.

Why can't you use something like the below -- that gives you a count
value matching the pmc value you put in, as long as it is 'near' the
current value.

---
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 8f218ac0d445..3cf8b4f2b2c1 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -154,6 +154,26 @@ u64 x86_perf_event_update(struct perf_event *event)
 	return new_raw_count;
 }
 
+u64 x86_perf_event_pmc_to_count(struct perf_event *event, u64 pmc)
+{
+	struct hw_perf_event *hwc = &event->hw;
+	int shift = 64 - x86_pmu.cntval_bits;
+	u64 prev_pmc, prev_count;
+	u64 delta;
+
+	do {
+		prev_pmc = local64_read(&hwc->prev_count);
+		barrier();
+		prev_count = local64_read(&event->count);
+		barrier();
+	} while (prev_pmc != local64_read(&hwc->prev_count));
+
+	delta = (pmc << shift) - (prev_pmc << shift);
+	delta >>= shift;
+
+	return prev_count + delta;
+}
+
 /*
  * Find and validate any extra registers to set up.
  */
