Message-ID: <20170120141834.GA22152@leverpostej>
Date: Fri, 20 Jan 2017 14:18:34 +0000
From: Mark Rutland <mark.rutland@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: David Carrillo-Cisneros <davidcc@...gle.com>,
linux-kernel@...r.kernel.org, "x86@...nel.org" <x86@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Andi Kleen <ak@...ux.intel.com>,
Kan Liang <kan.liang@...el.com>, Borislav Petkov <bp@...e.de>,
Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Vince Weaver <vince@...ter.net>, Paul Turner <pjt@...gle.com>,
Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH v2 2/2] perf/core: Remove perf_cpu_context::unique_pmu
On Fri, Jan 20, 2017 at 10:20:33AM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 11:24:54AM -0800, David Carrillo-Cisneros wrote:
> > cpuctx->unique_pmu was originally introduced as a way to identify cpuctxs
> > with shared pmus in order to avoid visiting the same cpuctx more than once
> > in a for_each_pmu loop.
> >
> > cpuctx->unique_pmu == cpuctx->pmu in non-software task contexts since they
> > have only one pmu per cpuctx. Since perf_pmu_sched_task is only called in
> > hw contexts, this patch replaces cpuctx->unique_pmu by cpuctx->pmu in it.
> >
> > The change above, together with the previous patch in this series, removed
> > the remaining uses of cpuctx->unique_pmu, so we remove it altogether.
> >
> > Signed-off-by: David Carrillo-Cisneros <davidcc@...gle.com>
> > Acked-by: Mark Rutland <mark.rutland@....com>
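
(For anyone following along: the mechanism being removed is easy to model
outside the kernel. Below is a toy userspace sketch with made-up names, not
kernel code: several pmus share one cpuctx, and unique_pmu marks a single
canonical pmu per cpuctx so that a for_each_pmu-style walk only acts on each
cpuctx once.)

#include <stdio.h>

/* Toy stand-ins for struct pmu / struct perf_cpu_context. */
struct toy_pmu;

struct toy_cpuctx {
	struct toy_pmu *unique_pmu;
	int visits;
};

struct toy_pmu {
	const char *name;
	struct toy_cpuctx *cpuctx;
};

int main(void)
{
	struct toy_cpuctx shared = { 0 }, priv = { 0 };
	struct toy_pmu sw_a = { "sw_a", &shared };
	struct toy_pmu sw_b = { "sw_b", &shared };	/* shares a cpuctx */
	struct toy_pmu hw   = { "hw",   &priv };
	struct toy_pmu *pmus[] = { &sw_a, &sw_b, &hw };
	int i;

	/* The first pmu seen for a given cpuctx becomes its unique_pmu. */
	for (i = 0; i < 3; i++)
		if (!pmus[i]->cpuctx->unique_pmu)
			pmus[i]->cpuctx->unique_pmu = pmus[i];

	/* A for_each_pmu-style walk that acts once per cpuctx. */
	for (i = 0; i < 3; i++)
		if (pmus[i]->cpuctx->unique_pmu == pmus[i])
			pmus[i]->cpuctx->visits++;

	printf("shared cpuctx visited %d time(s), private %d time(s)\n",
	       shared.visits, priv.visits);	/* prints 1 and 1 */
	return 0;
}
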
>
> > @@ -8572,37 +8572,10 @@ static struct perf_cpu_context __percpu *find_pmu_context(int ctxn)
> > return NULL;
> > }
> >
> > -static void update_pmu_context(struct pmu *pmu, struct pmu *old_pmu)
> > -{
> > - int cpu;
> > -
> > - for_each_possible_cpu(cpu) {
> > - struct perf_cpu_context *cpuctx;
> > -
> > - cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu);
> > -
> > - if (cpuctx->unique_pmu == old_pmu)
> > - cpuctx->unique_pmu = pmu;
> > - }
> > -}
> > -
> > static void free_pmu_context(struct pmu *pmu)
> > {
> > - struct pmu *i;
> > -
> > mutex_lock(&pmus_lock);
> > - /*
> > - * Like a real lame refcount.
> > - */
> > - list_for_each_entry(i, &pmus, entry) {
> > - if (i->pmu_cpu_context == pmu->pmu_cpu_context) {
> > - update_pmu_context(i, pmu);
> > - goto out;
> > - }
> > - }
> > -
> > free_percpu(pmu->pmu_cpu_context);
> > -out:
> > mutex_unlock(&pmus_lock);
> > }
>
> This very much relies on us never calling perf_pmu_unregister() on the
> software PMUs afaict. A condition not mention in the Changelog.
Ah; I had not considered that this could leave perf_pmu_sched_task()
seeing a stale cpuctx->ctx.pmu.
That said, is this not already a problem elsewhere? We don't update
ctx->pmu in perf_pmu_unregister(), so this would be a problem for any
path using ctx->pmu today (e.g. perf_event_context_sched_in()).
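
Roughly (again a toy model with made-up names, not the actual
perf_event_context_sched_in() code): ctx->pmu is captured when the context
is set up and never rewritten, so once the pmu it points at has been torn
down, any later user goes through stale state.

#include <stdio.h>
#include <string.h>

/* Toy model: a context caches a pointer to the pmu that created it. */
struct toy_pmu {
	char name[16];
};

struct toy_ctx {
	struct toy_pmu *pmu;
};

static struct toy_pmu sw_pmu = { "sw_pmu" };
static struct toy_ctx ctx = { &sw_pmu };	/* set once, never updated */

/* Stand-in for perf_pmu_unregister(): tears the pmu down... */
static void toy_unregister(struct toy_pmu *pmu)
{
	memset(pmu, 0, sizeof(*pmu));		/* ...but ctx.pmu is untouched */
}

/* Stand-in for a ctx->pmu user such as a sched_in path. */
static void toy_sched_in(struct toy_ctx *c)
{
	printf("sched_in via pmu '%s'\n", c->pmu->name);
}

int main(void)
{
	toy_sched_in(&ctx);		/* fine: "sw_pmu" */
	toy_unregister(&sw_pmu);
	toy_sched_in(&ctx);		/* stale: the pmu's state is gone */
	return 0;
}
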
Thanks,
Mark.