Message-Id: <20241104135519.463607258@infradead.org>
Date: Mon, 04 Nov 2024 14:39:26 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...nel.org,
lucas.demarchi@...el.com
Cc: linux-kernel@...r.kernel.org,
peterz@...radead.org,
willy@...radead.org,
acme@...nel.org,
namhyung@...nel.org,
mark.rutland@....com,
alexander.shishkin@...ux.intel.com,
jolsa@...nel.org,
irogers@...gle.com,
adrian.hunter@...el.com,
kan.liang@...ux.intel.com
Subject: [PATCH 17/19] perf: Remove retry loop from perf_mmap()

AFAICT there is no actual benefit from the mutex drop on retry. The
'worst' case scenario is that we instantly re-gain the mutex without
perf_mmap_close() getting it. So we might as well make that the normal
case.

Reflow the code to make the ring buffer detach case naturally flow
into the no ring buffer case.
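
For illustration, a condensed before/after sketch of the control flow
this patch changes (simplified pseudocode distilled from the diff below,
not the exact kernel source; error paths and the AUX branch are omitted):

	/* Before: on a lost race, drop the mutex and start over. */
again:
	mutex_lock(&event->mmap_mutex);
	if (event->rb) {
		if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
			/* perf_mmap_close() won the race. */
			ring_buffer_attach(event, NULL);
			mutex_unlock(&event->mmap_mutex);
			goto again;	/* typically just re-takes the mutex */
		}
		goto unlock;		/* success */
	}
	/* ... set up a new ring buffer ... */

	/* After: keep the mutex held and fall through. */
	mutex_lock(&event->mmap_mutex);
	if (event->rb) {
		if (atomic_inc_not_zero(&event->rb->mmap_count)) {
			ret = 0;
			goto unlock;	/* success */
		}
		/* Lost the race: detach and continue as if !event->rb. */
		ring_buffer_attach(event, NULL);
	}
	/* ... set up a new ring buffer ... */
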
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/events/core.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6665,26 +6665,31 @@ static int perf_mmap(struct file *file,
 			return -EINVAL;
 
 		WARN_ON_ONCE(event->ctx->parent_ctx);
-again:
 		mutex_lock(&event->mmap_mutex);
+
 		if (event->rb) {
 			if (data_page_nr(event->rb) != nr_pages) {
 				ret = -EINVAL;
 				goto unlock;
 			}
 
-			if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
+			if (atomic_inc_not_zero(&event->rb->mmap_count)) {
 				/*
-				 * Raced against perf_mmap_close(); remove the
-				 * event and try again.
+				 * Success -- managed to mmap() the same buffer
+				 * multiple times.
 				 */
-				ring_buffer_attach(event, NULL);
-				mutex_unlock(&event->mmap_mutex);
-				goto again;
+				ret = 0;
+				goto unlock;
 			}
 
-			goto unlock;
+			/*
+			 * Raced against perf_mmap_close()'s
+			 * atomic_dec_and_mutex_lock(); remove the
+			 * event and continue as if !event->rb.
+			 */
+			ring_buffer_attach(event, NULL);
 		}
+
 	} else {
 		/*
 		 * AUX area mapping: if rb->aux_nr_pages != 0, it's already
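
As an aside (editorial note, not part of the patch): the "mmap() the
same buffer multiple times" success path above is reachable from
ordinary userspace by mapping one event fd twice with identical size.
A minimal sketch, with error handling mostly trimmed and an arbitrary
software counter chosen for the example:

	/* Two mmap()s of one perf event fd: the second, same-sized one
	 * hits the event->rb && atomic_inc_not_zero(&rb->mmap_count)
	 * path changed above.
	 */
	#include <linux/perf_event.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct perf_event_attr attr;
		/* Ring buffer must be 1 control page + 2^n data pages. */
		size_t len = (size_t)sysconf(_SC_PAGESIZE) * (1 + 8);
		void *m1, *m2;
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_SOFTWARE;
		attr.config = PERF_COUNT_SW_CPU_CLOCK;

		fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
		if (fd < 0)
			return 1;

		/* First mmap() creates the ring buffer ... */
		m1 = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		/* ... the second one attaches to the existing rb. */
		m2 = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

		printf("map1=%p map2=%p\n", m1, m2);

		munmap(m2, len);
		munmap(m1, len);
		close(fd);
		return 0;
	}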