Message-ID: <CABPqkBSRTDbN7Sq9Pg0GOUFNiKGpVgOBY82ZMK--4Tgy9=iZDA@mail.gmail.com>
Date: Tue, 2 Oct 2012 15:34:42 +0200
From: Stephane Eranian <eranian@...gle.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Jiri Olsa <jolsa@...hat.com>, LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Ingo Molnar <mingo@...e.hu>, Paul Mackerras <paulus@...ba.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [PATCH] perf cgroups: Fix perf_cgroup_switch schedule in warning
On Tue, Oct 2, 2012 at 3:10 PM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> On Tue, 2012-10-02 at 14:48 +0200, Stephane Eranian wrote:
>> Not sure, I understand what active_pmu represents.
>
> Its a 'random' pmu of those that share the cpuctx, exactly so you can
> limit pmu iterations to those with unique cpuctx instances.
>
> Its assigned when we create a cpuctx to the pmu creating it, its
> re-assigned on pmu destruction (if that ever were to happen).
>
Yeah, I saw that. active_pmu points to whatever PMU last claimed the
shared cpuctx.
But I guess what is confusing is the name. It has nothing to do
with active vs. inactive; they are all active.
In perf_cgroup_switch(), we must go over all system-wide events from all
PMUs to sched out the ones monitoring ONLY the current cgroup, and we must
do this only once per switch. So given that the events are linked off of
the cpuctx, what matters is that we visit each unique cpuctx exactly once.
Hence, I think your patch solves the problem, though why it works is
kind of obscure.
> I realize the name isn't really helping but at the time I couldn't come
> up with anything better :/
>
> If you've got a good suggestion I'd be glad to rename it.
How about unique_pmu? And adding a comment in perf_cgroup_switch():
+ /* ensure we process each cpuctx only once */
+ if (cpuctx->active_pmu != pmu)
+ continue;