Message-ID: <20240730204048.GU33588@noisy.programming.kicks-ass.net>
Date: Tue, 30 Jul 2024 22:40:48 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Ingo Molnar <mingo@...nel.org>, Mark Rutland <mark.rutland@....com>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Ravi Bangoria <ravi.bangoria@....com>,
	Kan Liang <kan.liang@...ux.intel.com>,
	Stephane Eranian <eranian@...gle.com>,
	Ian Rogers <irogers@...gle.com>, Mingwei Zhang <mizhang@...gle.com>
Subject: Re: [PATCH] perf/core: Optimize event reschedule for a PMU

On Tue, Jul 30, 2024 at 12:19:25PM -0700, Namhyung Kim wrote:
> @@ -2728,13 +2727,62 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
>  		perf_ctx_enable(task_ctx, false);
>  }
>  
> +static void __perf_pmu_resched(struct pmu *pmu,
> +			       struct perf_event_context *task_ctx,
> +			       enum event_type_t event_type)
> +{
> +	bool cpu_event = !!(event_type & EVENT_CPU);
> +	struct perf_event_pmu_context *epc = NULL;
> +	struct perf_cpu_pmu_context *cpc = this_cpu_ptr(pmu->cpu_pmu_context);
> +
> +	/*
> +	 * If pinned groups are involved, flexible groups also need to be
> +	 * scheduled out.
> +	 */
> +	if (event_type & EVENT_PINNED)
> +		event_type |= EVENT_FLEXIBLE;
> +
> +	event_type &= EVENT_ALL;
> +
> +	perf_pmu_disable(pmu);
> +	if (task_ctx) {
> +		if (WARN_ON_ONCE(!cpc->task_epc || cpc->task_epc->ctx != task_ctx))
> +			goto out;
> +
> +		epc = cpc->task_epc;
> +		__pmu_ctx_sched_out(epc, event_type);
> +	}
> +
> +	/*
> +	 * Decide which cpu ctx groups to schedule out based on the types
> +	 * of events that caused rescheduling:
> +	 *  - EVENT_CPU: schedule out corresponding groups;
> +	 *  - EVENT_PINNED task events: schedule out EVENT_FLEXIBLE groups;
> +	 *  - otherwise, do nothing more.
> +	 */
> +	if (cpu_event)
> +		__pmu_ctx_sched_out(&cpc->epc, event_type);
> +	else if (event_type & EVENT_PINNED)
> +		__pmu_ctx_sched_out(&cpc->epc, EVENT_FLEXIBLE);
> +
> +	__pmu_ctx_sched_in(&cpc->epc, EVENT_PINNED);
> +	if (task_ctx)
> +		__pmu_ctx_sched_in(epc, EVENT_PINNED);
> +	__pmu_ctx_sched_in(&cpc->epc, EVENT_FLEXIBLE);
> +	if (task_ctx)
> +		__pmu_ctx_sched_in(epc, EVENT_FLEXIBLE);
> +
> +out:
> +	perf_pmu_enable(pmu);
> +}

I so dislike duplication...

So let's see, ctx_resched() has pmu_ctx iterations in:

  perf_ctx_{en,dis}able()
  ctx_sched_{in,out}()

Can't we punch a 'struct pmu *pmu' argument through those callchains and
short-circuit the iteration when !NULL?
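
Something like this perhaps -- a rough sketch against the current shape
of perf_ctx_disable(); the @pmu argument and the early continue are the
new, untested bits:

/* Sketch only; the @pmu filtering is the proposed addition. */
static void perf_ctx_disable(struct perf_event_context *ctx,
			     struct pmu *pmu, bool cgroup)
{
	struct perf_event_pmu_context *pmu_ctx;

	list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) {
		if (cgroup && !pmu_ctx->nr_cgroups)
			continue;
		/* short-circuit: when a specific pmu is given, skip the rest */
		if (pmu && pmu_ctx->pmu != pmu)
			continue;
		perf_pmu_disable(pmu_ctx->pmu);
	}
}

ctx_sched_{in,out}() would grow the same argument and filter their
pmu_ctx loop the same way; existing callers pass NULL and behave
exactly as before.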

The alternative would be to lift the iteration, I suppose.
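
That is, split the ctx->is_active bookkeeping from the pmu_ctx loop so
a single-pmu caller can skip the loop entirely. Hypothetical helper
names, just to show the shape:

/* sketch: bookkeeping separated from the per-pmu work */
static void __ctx_sched_out_prepare(struct perf_event_context *ctx,
				    enum event_type_t event_type)
{
	/* the current ctx->is_active / timestamp updates of ctx_sched_out() */
}

static void ctx_sched_out(struct perf_event_context *ctx,
			  enum event_type_t event_type)
{
	struct perf_event_pmu_context *pmu_ctx;

	__ctx_sched_out_prepare(ctx, event_type);
	list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry)
		__pmu_ctx_sched_out(pmu_ctx, event_type);
}

__perf_pmu_resched() would then do __ctx_sched_out_prepare() plus
__pmu_ctx_sched_out(epc, ...) for just its PMU, and the iteration
lives in exactly one place.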
