Message-ID: <20110328151511.GA3608@redhat.com>
Date: Mon, 28 Mar 2011 17:15:11 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Jiri Olsa <jolsa@...hat.com>, Paul Mackerras <paulus@...ba.org>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH,RFC] perf: panic due to invalid cpu context task_ctx value
On 03/28, Peter Zijlstra wrote:
>
> --- linux-2.6.orig/kernel/perf_event.c
> +++ linux-2.6/kernel/perf_event.c
> @@ -1767,7 +1767,6 @@ static void ctx_sched_out(struct perf_ev
>  	struct perf_event *event;
> 
>  	raw_spin_lock(&ctx->lock);
> -	perf_pmu_disable(ctx->pmu);
>  	ctx->is_active = 0;
>  	if (likely(!ctx->nr_events))
>  		goto out;
> @@ -1777,6 +1776,7 @@ static void ctx_sched_out(struct perf_ev
>  	if (!ctx->nr_active)
>  		goto out;
> 
> +	perf_pmu_disable(ctx->pmu);
>  	if (event_type & EVENT_PINNED) {
>  		list_for_each_entry(event, &ctx->pinned_groups, group_entry)
>  			group_sched_out(event, cpuctx, ctx);
> @@ -1786,8 +1786,8 @@ static void ctx_sched_out(struct perf_ev
>  		list_for_each_entry(event, &ctx->flexible_groups, group_entry)
>  			group_sched_out(event, cpuctx, ctx);
>  	}
> -out:
>  	perf_pmu_enable(ctx->pmu);
> +out:
>  	raw_spin_unlock(&ctx->lock);
Yes, thanks.
Probably this doesn't matter from the performance POV, but IMHO this
makes the code more understandable. This is important for occasional
readers like me ;)
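
FWIW, to make the control flow explicit for other occasional readers,
the function should look roughly like this with your change applied.
This is only a sketch reconstructed from the diff above, not verbatim
kernel code; the unchanged middle of the function is elided:

	static void ctx_sched_out(struct perf_event_context *ctx,
				  struct perf_cpu_context *cpuctx,
				  enum event_type_t event_type)
	{
		struct perf_event *event;

		raw_spin_lock(&ctx->lock);
		ctx->is_active = 0;
		if (likely(!ctx->nr_events))
			goto out;	/* nothing was ever scheduled in */

		/* ... unchanged middle of the function elided ... */

		if (!ctx->nr_active)
			goto out;	/* nothing is currently scheduled in */

		perf_pmu_disable(ctx->pmu);	/* only when there is real work */
		if (event_type & EVENT_PINNED) {
			list_for_each_entry(event, &ctx->pinned_groups, group_entry)
				group_sched_out(event, cpuctx, ctx);
		}
		if (event_type & EVENT_FLEXIBLE) {
			list_for_each_entry(event, &ctx->flexible_groups, group_entry)
				group_sched_out(event, cpuctx, ctx);
		}
		perf_pmu_enable(ctx->pmu);	/* balanced: runs iff disable ran */
	out:
		raw_spin_unlock(&ctx->lock);
	}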
Could you answer another question? It is not immediately clear why
ctx_sched_in() does not check nr_active != 0 before doing
ctx_XXX_sched_in(). I guess the only reason is perf_rotate_context()
and the similar logic in perf_event_context_sched_in(): if we are
doing, say, cpu_ctx_sched_out(FLEXIBLE) + cpu_ctx_sched_in(FLEXIBLE),
then ->nr_active can be zero after cpu_ctx_sched_out(), and the
sched_in still has to run to reschedule the groups that were just
switched out.
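
To spell out the path I mean, schematically (a sketch of the rotation
logic from my reading of the code, not verbatim; the task-ctx NULL
checks and extra arguments are elided):

	/*
	 * perf_rotate_context(), schematically. After the sched_out
	 * below, ->nr_active is 0 for the flexible groups we just
	 * removed, yet the sched_in must still run; a nr_active != 0
	 * check in ctx_sched_in() would turn it into a nop and
	 * nothing would ever be rotated back in.
	 */
	cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);	/* ->nr_active can drop to 0 */
	task_ctx_sched_out(ctx, EVENT_FLEXIBLE);

	rotate_ctx(&cpuctx->ctx);			/* rotate the flexible groups */
	rotate_ctx(ctx);

	cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE);	/* must not bail early */
	task_ctx_sched_in(ctx, EVENT_FLEXIBLE);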
Is my understanding correct? Or is there another reason?
Oleg.