Message-ID: <20250312125753.GL19424@noisy.programming.kicks-ass.net>
Date: Wed, 12 Mar 2025 13:57:53 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Ravi Bangoria <ravi.bangoria@....com>
Cc: mingo@...nel.org, lucas.demarchi@...el.com,
linux-kernel@...r.kernel.org, acme@...nel.org, namhyung@...nel.org,
mark.rutland@....com, alexander.shishkin@...ux.intel.com,
jolsa@...nel.org, irogers@...gle.com, adrian.hunter@...el.com,
kan.liang@...ux.intel.com
Subject: Re: [PATCH v3 7/7] perf: Make perf_pmu_unregister() useable
On Mon, Mar 10, 2025 at 10:16:09PM +0530, Ravi Bangoria wrote:
> On 08-Mar-25 1:03 AM, Peter Zijlstra wrote:
> > Previously it was only safe to call perf_pmu_unregister() if there
> > were no active events of that pmu around -- which was impossible to
> > guarantee since it races in all sorts of ways against perf_init_event().
> >
> > Rework the whole thing by:
> >
> > - keeping track of all events for a given pmu
> >
> > - 'hiding' the pmu from perf_init_event()
> >
> > - waiting for the appropriate (s)rcu grace periods such that all
> > prior references to the PMU have completed
> >
> > - detaching all still existing events of that pmu (see first point)
> > and moving them to a new REVOKED state.
> >
> > - actually freeing the pmu data.
> >
> > Where notably the new REVOKED state must inhibit all event actions
> > from reaching code that wants to use event->pmu.
>
> I think perf_event_init_task() failure path can still race with
> perf_pmu_unregister() and trigger a WARN():
>
> CPU 1                                   CPU 2
>
> perf_event_init_task()
> perf_event_free_task()
>   perf_free_event(event1)
>     /* event1->refcount is 1 */
>                                         perf_pmu_unregister()
>                                           pmu_detach_events()
>                                             pmu_get_event(pmu) /* picks event1 */
>                                               atomic_long_inc_not_zero(&event1->refcount)
>     /* event1->refcount became 2 (by CPU 2) */
>     free_event(event1)
>       WARN()
>
Yeah, I think most free_event() users are broken at this point. Let me
rip all that out. Not really worth the trouble anymore.