Message-ID: <20241217115219.GH12500@noisy.programming.kicks-ass.net>
Date: Tue, 17 Dec 2024 12:52:19 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Ravi Bangoria <ravi.bangoria@....com>
Cc: "mingo@...nel.org" <mingo@...nel.org>,
"lucas.demarchi@...el.com" <lucas.demarchi@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"willy@...radead.org" <willy@...radead.org>,
"acme@...nel.org" <acme@...nel.org>,
"namhyung@...nel.org" <namhyung@...nel.org>,
"mark.rutland@....com" <mark.rutland@....com>,
"alexander.shishkin@...ux.intel.com" <alexander.shishkin@...ux.intel.com>,
"jolsa@...nel.org" <jolsa@...nel.org>,
"irogers@...gle.com" <irogers@...gle.com>,
"adrian.hunter@...el.com" <adrian.hunter@...el.com>,
"kan.liang@...ux.intel.com" <kan.liang@...ux.intel.com>
Subject: Re: [PATCH 19/19] perf: Make perf_pmu_unregister() useable
On Tue, Dec 17, 2024 at 10:12:16AM +0100, Peter Zijlstra wrote:
>
>
> Oh sorry, I seem to have missed this email :/
>
> On Mon, Nov 25, 2024 at 09:40:28AM +0530, Ravi Bangoria wrote:
> > Hi Peter,
> >
> > > @@ -6507,6 +6540,7 @@ static void perf_mmap_close(struct vm_ar
> > > unsigned long size = perf_data_size(rb);
> > > bool detach_rest = false;
> > >
> > > + /* FIXIES vs perf_pmu_unregister() */
> > > if (event->pmu->event_unmapped)
> > > event->pmu->event_unmapped(event, vma->vm_mm);
> >
> > I assume you are already aware of the race between __pmu_detach_event()
> > and perf_mmap_close() since you have put that comment.
>
> That comment was mostly about how we can't fix up the whole
> ->event_unmapped() thing and have to abort pmu_unregister for it.
>
> > In any case, below sequence of operations triggers a splat when
> > perf_mmap_close() tries to access event->rb, event->pmu etc. which
> > were already freed by __pmu_detach_event().
> >
> > Sequence:
> >
> > Kernel Userspace
> > ------ ---------
> > perf_pmu_register()
> > fd = perf_event_open()
> > p = mmap(fd)
> > perf_pmu_unregister()
> > munmap(p)
> > close(fd)
>
> Right, let me go have a look. Thanks!
Bah, that's a right mess indeed -- how did I ever miss all that.
The easiest solution is probably to leave the RB around on detach, but
now I need to remember why I did that in the first place :/
Oh.. I think I mostly did that to serialize against perf_mmap(), which
should reject creating further maps. But we can do that without actually
detaching the RB -- we only need to acquire and release mmap_mutex.
Ah, there's that perf_event_stop() inside of ring_buffer_attach(), that
must not happen after detach, obviously. So that must be dealt with.
Hmm, also if we leave ->rb around, then we need to deal with
perf_event_set_output(), someone could try and redirect their things
into our buffer -- which isn't technically broken, but still weird.
Something like the below.
How did you test -- perf-fuzzer or something?
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1742,7 +1742,7 @@ static inline bool needs_branch_stack(st
static inline bool has_aux(struct perf_event *event)
{
- return event->pmu->setup_aux;
+ return event->pmu && event->pmu->setup_aux;
}
static inline bool has_aux_action(struct perf_event *event)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5409,7 +5409,6 @@ static void _free_event(struct perf_even
security_perf_event_free(event);
if (event->rb) {
- WARN_ON_ONCE(!event->pmu);
/*
* Can happen when we close an event with re-directed output.
*
@@ -12023,7 +12022,10 @@ static void __pmu_detach_event(struct pm
*/
scoped_guard (mutex, &event->mmap_mutex) {
WARN_ON_ONCE(pmu->event_unmapped);
- ring_buffer_attach(event, NULL);
+ /*
+ * Mostly an empty lock sequence, such that perf_mmap(), which
+ * relies on mmap_mutex, is sure to observe the state change.
+ */
}
perf_event_free_bpf_prog(event);
@@ -12823,6 +12825,9 @@ perf_event_set_output(struct perf_event
goto unlock;
if (output_event) {
+ if (output_event->state <= PERF_EVENT_STATE_REVOKED)
+ goto unlock;
+
/* get the rb we want to redirect to */
rb = ring_buffer_get(output_event);
if (!rb)