Message-ID: <20161020085803.GA31721@krava>
Date: Thu, 20 Oct 2016 10:58:03 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: CAI Qian <caiqian@...hat.com>, Rob Herring <robh@...nel.org>,
Kan Liang <kan.liang@...el.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [4.9-rc1+] intel_uncore builtin +
CONFIG_DEBUG_TEST_DRIVER_REMOVE kernel panic

On Thu, Oct 20, 2016 at 07:39:44AM +0200, Peter Zijlstra wrote:
> On Wed, Oct 19, 2016 at 09:19:43PM +0200, Jiri Olsa wrote:
> > I think the reason here is that we presume pmu devices are always added,
> > but we add them only if pmu_bus_running (in perf_event_sysfs_init)
> > is set which might happen after uncore initcall
> >
> > attached patch fixes the issue for me
>
> Right, we never expected to be unloaded before userspace runs.
>
> Strictly speaking we should only read pmu_bus_running while holding
> pmus_lock, that way we're serialized against perf_event_sysfs_init()
> flipping it while we're being removed etc..
>
> With the current setup the introduced race is harmless, but who knows
> what other crazy these device people will come up with ;-)
>
right, did not think of that ;-)
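
fwiw, the flip itself already happens under pmus_lock -- perf_event_sysfs_init()
looks roughly like this (simplified sketch from memory, not a verbatim copy of
core.c):

static int perf_event_sysfs_init(void)
{
	struct pmu *pmu;
	int ret;

	mutex_lock(&pmus_lock);

	ret = bus_register(&pmu_bus);
	if (ret)
		goto unlock;

	/* add sysfs devices for pmus that registered before the bus was up */
	list_for_each_entry(pmu, &pmus, entry)
		pmu_dev_alloc(pmu);	/* error handling omitted in this sketch */

	pmu_bus_running = 1;
	ret = 0;

unlock:
	mutex_unlock(&pmus_lock);
	return ret;
}
device_initcall(perf_event_sysfs_init);

so reading pmu_bus_running under the same mutex in perf_pmu_unregister()
should be enough to serialize against it.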

also I had not noticed the device_remove_file call for pmu->nr_addr_filters,
and we can save one lock/unlock call later on.. I'm testing the attached patch
now
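
to recap the ordering that triggers the original panic, as I read it
(simplified sketch of the call flow, not the actual code):

	perf_pmu_register()		/* uncore initcall, runs early */
		mutex_lock(&pmus_lock);
		if (pmu_bus_running)	/* still 0, so pmu->dev is never added */
			pmu_dev_alloc(pmu);
		mutex_unlock(&pmus_lock);

	perf_pmu_unregister()		/* CONFIG_DEBUG_TEST_DRIVER_REMOVE remove path */
		device_del(pmu->dev);	/* device was never added -> panic */
		put_device(pmu->dev);

	perf_event_sysfs_init()		/* device_initcall, might only run after both */
		...
		pmu_bus_running = 1;

the patch below makes the unregister side check pmu_bus_running (under
pmus_lock) before touching pmu->dev, same as the register side does.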
thanks,
jirka
---
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c6e47e97b33f..224dffbc3b9b 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8581,24 +8581,24 @@ static void update_pmu_context(struct pmu *pmu, struct pmu *old_pmu)
}
}
+/*
+ * Must be called with pmus_lock held.
+ */
static void free_pmu_context(struct pmu *pmu)
{
struct pmu *i;
- mutex_lock(&pmus_lock);
/*
* Like a real lame refcount.
*/
list_for_each_entry(i, &pmus, entry) {
if (i->pmu_cpu_context == pmu->pmu_cpu_context) {
update_pmu_context(i, pmu);
- goto out;
+ return;
}
}
free_percpu(pmu->pmu_cpu_context);
-out:
- mutex_unlock(&pmus_lock);
}
/*
@@ -8869,11 +8869,15 @@ void perf_pmu_unregister(struct pmu *pmu)
free_percpu(pmu->pmu_disable_count);
if (pmu->type >= PERF_TYPE_MAX)
idr_remove(&pmu_idr, pmu->type);
- if (pmu->nr_addr_filters)
- device_remove_file(pmu->dev, &dev_attr_nr_addr_filters);
- device_del(pmu->dev);
- put_device(pmu->dev);
+ mutex_lock(&pmus_lock);
+ if (pmu_bus_running) {
+ if (pmu->nr_addr_filters)
+ device_remove_file(pmu->dev, &dev_attr_nr_addr_filters);
+ device_del(pmu->dev);
+ put_device(pmu->dev);
+ }
free_pmu_context(pmu);
+ mutex_unlock(&pmus_lock);
}
EXPORT_SYMBOL_GPL(perf_pmu_unregister);