Message-ID: <20250410082041.GY9833@noisy.programming.kicks-ass.net>
Date: Thu, 10 Apr 2025 10:20:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Sandipan Das <sandipan.das@....com>
Cc: linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
	mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
	mark.rutland@....com, alexander.shishkin@...ux.intel.com,
	jolsa@...nel.org, irogers@...gle.com, adrian.hunter@...el.com,
	kan.liang@...ux.intel.com, tglx@...utronix.de, bp@...en8.de,
	dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
	eranian@...gle.com, songliubraving@...a.com, ravi.bangoria@....com,
	ananth.narayan@....com
Subject: Re: [PATCH 2/4] perf/x86/amd/uncore: Use hrtimer for handling
 overflows

On Wed, Apr 09, 2025 at 01:27:07PM +0530, Sandipan Das wrote:
> Uncore counters do not provide mechanisms such as interrupts to report
> overflows, so the accumulated user-visible count becomes incorrect when
> more than one overflow occurs between two successive read requests for
> the same event: prev_count goes stale and the computed delta is wrong.
> 
> To avoid this, start an hrtimer that periodically initiates a pmu->read()
> of the active counters to keep prev_count up-to-date. For this approach
> to be effective, the hrtimer period must be shorter than the shortest
> time in which any counter can overflow.
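
(A quick sanity check of that constraint, with the numbers here being
illustrative assumptions rather than values taken from the patch: a 48-bit
counter incrementing at 5e9 events/sec cannot wrap in less than
2^48 / 5e9 ~= 56000 seconds, so any hrtimer period comfortably below that
bound keeps prev_count from going stale.)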
> 
> Signed-off-by: Sandipan Das <sandipan.das@....com>
> ---
>  arch/x86/events/amd/uncore.c | 72 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 72 insertions(+)
> 
> diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
> index 010024f09f2c..ee6528f2189f 100644
> --- a/arch/x86/events/amd/uncore.c
> +++ b/arch/x86/events/amd/uncore.c
> @@ -21,6 +21,7 @@
>  #define NUM_COUNTERS_NB		4
>  #define NUM_COUNTERS_L2		4
>  #define NUM_COUNTERS_L3		6
> +#define NUM_COUNTERS_MAX	64
>  
>  #define RDPMC_BASE_NB		6
>  #define RDPMC_BASE_LLC		10
> @@ -38,6 +39,10 @@ struct amd_uncore_ctx {
>  	int refcnt;
>  	int cpu;
>  	struct perf_event **events;
> +	unsigned long active_mask[BITS_TO_LONGS(NUM_COUNTERS_MAX)];
> +	int nr_active;
> +	struct hrtimer hrtimer;
> +	u64 hrtimer_duration;
>  };
>  
>  struct amd_uncore_pmu {
> @@ -87,6 +92,51 @@ static struct amd_uncore_pmu *event_to_amd_uncore_pmu(struct perf_event *event)
>  	return container_of(event->pmu, struct amd_uncore_pmu, pmu);
>  }
>  
> +static enum hrtimer_restart amd_uncore_hrtimer(struct hrtimer *hrtimer)
> +{
> +	struct amd_uncore_ctx *ctx;
> +	struct perf_event *event;
> +	unsigned long flags;
> +	int bit;
> +
> +	ctx = container_of(hrtimer, struct amd_uncore_ctx, hrtimer);
> +
> +	if (!ctx->nr_active || ctx->cpu != smp_processor_id())
> +		return HRTIMER_NORESTART;
> +
> +	/*
> +	 * Disable local interrupts to prevent pmu->start() or pmu->stop()
> +	 * from interrupting the update process
> +	 */
> +	local_irq_save(flags);
> +
> +	for_each_set_bit(bit, ctx->active_mask, NUM_COUNTERS_MAX) {
> +		event = ctx->events[bit];
> +		event->pmu->read(event);
> +	}
> +
> +	local_irq_restore(flags);
> +
> +	hrtimer_forward_now(hrtimer, ns_to_ktime(ctx->hrtimer_duration));
> +	return HRTIMER_RESTART;
> +}
> +
> +static void amd_uncore_start_hrtimer(struct amd_uncore_ctx *ctx)
> +{
> +	hrtimer_start(&ctx->hrtimer, ns_to_ktime(ctx->hrtimer_duration),
> +		      HRTIMER_MODE_REL_PINNED);
> +}

So I know you copied this from the Intel uncore driver; but would not
both be improved by using HRTIMER_MODE_HARD?

It makes no sense to me to bounce the thing to SoftIRQ only to then
disable IRQs in the handler again. Not to mention that the whole SoftIRQ
thing delays things further, giving more room/time to reach overflow
wrap.
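
For reference, a minimal sketch of what the hard-mode variant could look
like (amd_uncore_init_hrtimer() is a made-up helper name for illustration,
not something from the posted patch; HRTIMER_MODE_REL_HARD and
HRTIMER_MODE_REL_PINNED_HARD are the existing hrtimer modes that make the
callback run straight from the timer interrupt):

	static void amd_uncore_init_hrtimer(struct amd_uncore_ctx *ctx)
	{
		/* Hard mode: the callback runs in hard interrupt context. */
		hrtimer_init(&ctx->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
		ctx->hrtimer.function = amd_uncore_hrtimer;
	}

	static void amd_uncore_start_hrtimer(struct amd_uncore_ctx *ctx)
	{
		hrtimer_start(&ctx->hrtimer, ns_to_ktime(ctx->hrtimer_duration),
			      HRTIMER_MODE_REL_PINNED_HARD);
	}

Since a hard-mode callback already runs with interrupts disabled, it cannot
race with pmu->start()/stop() on the same CPU, and the
local_irq_save()/local_irq_restore() pair in amd_uncore_hrtimer() would no
longer be needed.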
