Date:   Tue, 2 Aug 2022 11:43:03 +0530
From:   Ravi Bangoria <ravi.bangoria@....com>
To:     peterz@...radead.org
Cc:     acme@...nel.org, alexander.shishkin@...ux.intel.com,
        jolsa@...hat.com, namhyung@...nel.org, songliubraving@...com,
        eranian@...gle.com, alexey.budankov@...ux.intel.com,
        ak@...ux.intel.com, mark.rutland@....com, megha.dey@...el.com,
        frederic@...nel.org, maddy@...ux.ibm.com, irogers@...gle.com,
        kim.phillips@....com, linux-kernel@...r.kernel.org,
        santosh.shukla@....com, ravi.bangoria@....com
Subject: Re: [RFC v2] perf: Rewrite core context handling

[...]

>  /*
> @@ -2718,7 +2706,6 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
>  			struct perf_event_context *task_ctx,
>  			enum event_type_t event_type)
>  {
> -	enum event_type_t ctx_event_type;
>  	bool cpu_event = !!(event_type & EVENT_CPU);
>  
>  	/*
> @@ -2728,11 +2715,13 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
>  	if (event_type & EVENT_PINNED)
>  		event_type |= EVENT_FLEXIBLE;
>  
> -	ctx_event_type = event_type & EVENT_ALL;
> +	event_type &= EVENT_ALL;
>  
> -	perf_pmu_disable(cpuctx->ctx.pmu);
> -	if (task_ctx)
> -		task_ctx_sched_out(cpuctx, task_ctx, event_type);
> +	perf_ctx_disable(&cpuctx->ctx);
> +	if (task_ctx) {
> +		perf_ctx_disable(task_ctx);
> +		task_ctx_sched_out(task_ctx, event_type);
> +	}
>  
>  	/*
>  	 * Decide which cpu ctx groups to schedule out based on the types
> @@ -2742,17 +2731,20 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
>  	 *  - otherwise, do nothing more.
>  	 */
>  	if (cpu_event)
> -		cpu_ctx_sched_out(cpuctx, ctx_event_type);
> -	else if (ctx_event_type & EVENT_PINNED)
> -		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
> +		ctx_sched_out(&cpuctx->ctx, event_type);
> +	else if (event_type & EVENT_PINNED)
> +		ctx_sched_out(&cpuctx->ctx, EVENT_FLEXIBLE);
>  
>  	perf_event_sched_in(cpuctx, task_ctx, current);
> -	perf_pmu_enable(cpuctx->ctx.pmu);
> +
> +	perf_ctx_enable(&cpuctx->ctx);
> +	if (task_ctx)
> +		perf_ctx_enable(task_ctx);
>  }

ctx_resched() reschedules the entire perf_event_context when a new event is
added to the context or an existing event in it is enabled. We could probably
optimize it by rescheduling only the affected pmu_ctx.
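A rough sketch of what that narrower reschedule could look like. The helper
names __pmu_ctx_sched_out()/__pmu_ctx_sched_in() and the pmu_ctx->pmu field
are assumptions on my side, not something the patch defines; this is only to
illustrate the idea, not a compile-tested change:

	/*
	 * Reschedule only the perf_event_pmu_context that the new or
	 * newly enabled event belongs to, instead of every PMU in the
	 * perf_event_context.
	 */
	static void pmu_ctx_resched(struct perf_cpu_context *cpuctx,
				    struct perf_event_pmu_context *pmu_ctx,
				    enum event_type_t event_type)
	{
		/* Pinned events may displace flexible ones, as in ctx_resched(). */
		if (event_type & EVENT_PINNED)
			event_type |= EVENT_FLEXIBLE;
		event_type &= EVENT_ALL;

		/* Disable only this PMU, not the whole context. */
		perf_pmu_disable(pmu_ctx->pmu);

		__pmu_ctx_sched_out(pmu_ctx, event_type);	/* hypothetical helper */
		__pmu_ctx_sched_in(pmu_ctx, event_type);	/* hypothetical helper */

		perf_pmu_enable(pmu_ctx->pmu);
	}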

Thanks,
Ravi
