Date:	Tue, 1 Feb 2011 18:27:57 +0100
From:	Oleg Nesterov <oleg@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Ingo Molnar <mingo@...e.hu>,
	Alan Stern <stern@...land.harvard.edu>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Paul Mackerras <paulus@...ba.org>,
	Prasad <prasad@...ux.vnet.ibm.com>,
	Roland McGrath <roland@...hat.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf: Cure task_oncpu_function_call() races

On 02/01, Peter Zijlstra wrote:
>
> Oleg, I've actually run-tested the below and all seems well (clearly
> I've never actually hit the races found before either, so in that
> respect it's not a conclusive test).
>
> Can you agree with this patch?

You know, I had already written the I-think-it-is-correct email. But then
I decided to read the patch once again.

> -static void __perf_event_remove_from_context(void *info)
> +static int __perf_remove_from_context(void *info)
>  {
>  	struct perf_event *event = info;
>  	struct perf_event_context *ctx = event->ctx;
>  	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
>
> -	/*
> -	 * If this is a task context, we need to check whether it is
> -	 * the current task context of this cpu. If not it has been
> -	 * scheduled out before the smp call arrived.
> -	 */
> -	if (ctx->task && cpuctx->task_ctx != ctx)
> -		return;

OK, I think this is right... event_sched_out() will see
PERF_EVENT_STATE_INACTIVE if perf_event_task_sched_in() has not been
called yet.
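
To spell out why: event_sched_out() starts with a state check, so an
event that was never scheduled in on this CPU is simply left alone.
A sketch from memory, not the literal code:

	static void event_sched_out(struct perf_event *event,
				    struct perf_cpu_context *cpuctx,
				    struct perf_event_context *ctx)
	{
		/* never scheduled in => nothing to tear down here */
		if (event->state != PERF_EVENT_STATE_ACTIVE)
			return;
		/* ... stop the PMU counter, update times, etc ... */
	}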

But,

> -static void perf_event_remove_from_context(struct perf_event *event)
> +static void perf_remove_from_context(struct perf_event *event)
>  {
> ...
>  	raw_spin_lock_irq(&ctx->lock);
>  	/*
> -	 * If the context is active we need to retry the smp call.
> +	 * If we failed to find a running task, but find it running now that
> +	 * we've acquired the ctx->lock, retry.
>  	 */
> -	if (ctx->nr_active && !list_empty(&event->group_entry)) {
> +	if (task_curr(task)) {
>  		raw_spin_unlock_irq(&ctx->lock);
>  		goto retry;
>  	}
>
>  	/*
> -	 * The lock prevents that this context is scheduled in so we
> -	 * can remove the event safely, if the call above did not
> -	 * succeed.
> +	 * Since the task isn't running, it's safe to remove the event; our
> +	 * holding the ctx->lock ensures the task won't get scheduled in.
>  	 */
> -	if (!list_empty(&event->group_entry))
> -		list_del_event(event, ctx);
> +	list_del_event(event, ctx);

This looks suspicious (and the same applies to perf_install_in_context).

Unlike the IPI handler, this path can observe a schedule() that is still
in progress, in any intermediate state. In particular, we can see
rq->curr == next (so that task_curr() returns false) before "prev" has
called perf_event_task_sched_out(), i.e. while ctx is still active on
that CPU.

So we either have to check ctx->is_active as well, or schedule() should
change rq->curr only after perf_event_task_sched_out().
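
IOW, the first option would look something like the sketch below
(checking ctx->is_active under ctx->lock, names as in the quoted patch;
an illustration only, not a tested fix):

	raw_spin_lock_irq(&ctx->lock);
	/*
	 * task_curr() alone can miss a schedule() in flight:
	 * rq->curr may already be "next" while "prev" has not yet
	 * run perf_event_task_sched_out(), so also recheck whether
	 * this context is still active before deleting the event.
	 */
	if (task_curr(task) || ctx->is_active) {
		raw_spin_unlock_irq(&ctx->lock);
		goto retry;
	}
	list_del_event(event, ctx);
	raw_spin_unlock_irq(&ctx->lock);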

> @@ -753,13 +819,13 @@ void perf_event_disable(struct perf_event *event)
> ...
>  	 */
>  	if (event->state == PERF_EVENT_STATE_ACTIVE) {
>  		raw_spin_unlock_irq(&ctx->lock);
> +		/*
> +		 * Reload the task pointer, it might have been changed by
> +		 * a concurrent perf_event_context_sched_out().
> +		 */
> +		task = ctx->task;
>  		goto retry;
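
Context for the comment above: ctx->task can change under us because of
the ctx-swap optimization on the sched_out path. Roughly the following,
sketched from memory of the code of that era, so the details are
best-effort:

	/* in perf_event_context_sched_out(), both ctx locks held */
	if (context_equiv(ctx, next_ctx)) {
		/*
		 * prev's and next's contexts are clones of each other:
		 * swap the two contexts instead of unscheduling and
		 * rescheduling every event.  The tasks' ctx pointers
		 * are exchanged as well, so after this ctx->task
		 * points at a different task.
		 */
		ctx->task = next;
		next_ctx->task = task;
	}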

I am wondering why only perf_event_disable() needs this...

Just curious: is this equally needed even without this patch?

Oleg.
