Message-ID: <20240806080757.GF12673@noisy.programming.kicks-ass.net>
Date: Tue, 6 Aug 2024 10:07:57 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Namhyung Kim <namhyung@...nel.org>
Cc: "Liang, Kan" <kan.liang@...ux.intel.com>,
	Ingo Molnar <mingo@...nel.org>, Mark Rutland <mark.rutland@....com>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Ravi Bangoria <ravi.bangoria@....com>,
	Stephane Eranian <eranian@...gle.com>,
	Ian Rogers <irogers@...gle.com>, Mingwei Zhang <mizhang@...gle.com>
Subject: Re: [PATCH v2] perf/core: Optimize event reschedule for a PMU

On Tue, Aug 06, 2024 at 09:56:30AM +0200, Peter Zijlstra wrote:

> Does this help? What would be an easy reproducer?
> 
> ---
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index c67fc43fe877..4a04611333d9 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -179,23 +179,27 @@ static void perf_ctx_lock(struct perf_cpu_context *cpuctx,
>  	}
>  }
>  
> +static inline void __perf_ctx_unlock(struct perf_event_context *ctx)
> +{
> +	/*
> +	 * If ctx_sched_in() didn't again set any ALL flags, clean up
> +	 * after ctx_sched_out() by clearing is_active.
> +	 */
> +	if (ctx->is_active & EVENT_FROZEN) {
> +		if (!(ctx->is_active & EVENT_ALL))
> +			ctx->is_active = 0;
> +		else
> +			ctx->is_active &= ~EVENT_FROZEN;
> +	}
> +	raw_spin_unlock(&ctx->lock);
> +}
> +
>  static void perf_ctx_unlock(struct perf_cpu_context *cpuctx,
>  			    struct perf_event_context *ctx)
>  {
> -	if (ctx) {
> -		/*
> -		 * If ctx_sched_in() didn't again set any ALL flags, clean up
> -		 * after ctx_sched_out() by clearing is_active.
> -		 */
> -		if (ctx->is_active & EVENT_FROZEN) {
> -			if (!(ctx->is_active & EVENT_ALL))
> -				ctx->is_active = 0;
> -			else
> -				ctx->is_active &= ~EVENT_FROZEN;
> -		}
> -		raw_spin_unlock(&ctx->lock);
> -	}
> -	raw_spin_unlock(&cpuctx->ctx.lock);
> +	if (ctx)
> +		__perf_ctx_unlock(ctx);
> +	__perf_ctx_unlock(&cpuctx->ctx.lock);

Obviously that wants to be just: &cpuctx->ctx :-)

>  }
>  
>  #define TASK_TOMBSTONE ((void *)-1L)
