Message-ID: <20110127165712.GC25060@redhat.com>
Date: Thu, 27 Jan 2011 17:57:12 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Frederic Weisbecker <fweisbec@...il.com>,
Ingo Molnar <mingo@...e.hu>,
Alan Stern <stern@...land.harvard.edu>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Paul Mackerras <paulus@...ba.org>,
Prasad <prasad@...ux.vnet.ibm.com>,
Roland McGrath <roland@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: Q: perf_install_in_context/perf_event_enable are racy?
On 01/27, Peter Zijlstra wrote:
>
> Right, so the fact of introducing extra scheduling makes me feel
> uncomfortable... the whole purpose is to observe without perturbing (as
> much as possible).
Yes, agreed.
Well, otoh the patch removes the code which sets ->task_ctx from
__perf_install_in_context() and __perf_event_enable(), and perhaps
we could simplify things further, but anyway I agree.
> Should, I think, suffice to get the ctx in sync with the task state. We've
> got the following cases:
> 1) task is in the middle of scheduling in
> 2) task is in the middle of scheduling out
> 3) task is running
>
> Without __ARCH_WANT_INTERRUPT_ON_CTXSW everything is boring and works:
> for 1, the IPI will be delayed until 3; for 2, the IPI finds another task
> and the next schedule-in will sort things.
>
> With it, however, things are more interesting. 2 seems to be adequately
> covered by the patch I just sent: the IPI will bail and the next
> sched-in of the relevant task will pick matters up. 1 otoh doesn't seem
> covered: the IPI will bail, leaving us stranded.
Hmm, yes... Strangely, I missed that when I was thinking about in_ctxsw.
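IOW, if I understand your patch correctly, the losing sequence for 1 is
roughly this (I am only guessing at the details of in_ctxsw, so treat
this as a sketch):

	CPU X, switching to T:		CPU Y:

	rq->in_ctxsw = 1;
	context_switch() ...		perf_install_in_context();
					task_oncpu_function_call(T, ...);
					  smp_call_function_single(X, ...);
	<IPI> sees in_ctxsw != 0,
	      bails out
	rq->in_ctxsw = 0;

T scheduled in before the new event was visible in its ctx, the IPI did
nothing, and nobody retries. The event can't be scheduled in until the
next context switch, if that ever happens.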
Perhaps we could change task_oncpu_function_call() so that it returns
-EAGAIN in case it hits in_ctxsw != 0? If the caller sees -EAGAIN, it
should always retry, even if !ctx->is_active.
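Something like the sketch below, perhaps. Completely untested,
function_call_data and the trampoline are just names I made up for
illustration, and I am guessing that your patch keeps in_ctxsw in
the rq:

	struct function_call_data {
		void (*func)(void *info);
		void *info;
		int ret;
	};

	static void task_oncpu_function_trampoline(void *data)
	{
		struct function_call_data *fcd = data;

		/* We interrupted a context switch, tell the caller to retry */
		if (this_rq()->in_ctxsw) {	/* wherever your patch puts it */
			fcd->ret = -EAGAIN;
			return;
		}

		fcd->ret = 0;
		fcd->func(fcd->info);
	}

	int task_oncpu_function_call(struct task_struct *p,
				     void (*func)(void *info), void *info)
	{
		struct function_call_data fcd = {
			.func = func, .info = info, .ret = -ESRCH,
		};

		preempt_disable();
		if (task_curr(p))
			smp_call_function_single(task_cpu(p),
					task_oncpu_function_trampoline,
					&fcd, 1);
		preempt_enable();

		return fcd.ret;
	}

Then perf_install_in_context() and perf_event_enable() would simply
loop:

	do {
		err = task_oncpu_function_call(task, __perf_install_in_context,
					       event);
	} while (err == -EAGAIN);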
Oleg.