Message-ID: <20240624152732.1231678-6-bigeasy@linutronix.de>
Date: Mon, 24 Jun 2024 17:15:18 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: Adrian Hunter <adrian.hunter@...el.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Daniel Bristot de Oliveira <bristot@...nel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
Jiri Olsa <jolsa@...nel.org>,
Kan Liang <kan.liang@...ux.intel.com>,
Marco Elver <elver@...gle.com>,
Mark Rutland <mark.rutland@....com>,
Namhyung Kim <namhyung@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: [PATCH v4 5/6] perf: Don't disable preemption in perf_pending_task().

perf_pending_task() is invoked in task context and disables preemption
because perf_swevent_get_recursion_context() used to access per-CPU
variables. The other reason is to create an RCU read section while
accessing the perf_event.

The recursion counter is no longer a per-CPU counter, so disabling
preemption is no longer required. The RCU read section is still needed
but must now be created explicitly.

Replace the preemption-disable section with an explicit RCU read section.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
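Not part of the commit: below is a minimal userspace sketch of the
pattern described above, using liburcu rather than the kernel API so it
compiles and runs standalone (link with -lurcu). The reader touches the
shared object only inside rcu_read_lock()/rcu_read_unlock(), and the
freeing side waits in synchronize_rcu() before the free, so all reader
accesses are complete by then. The struct and variable names are
illustrative only, not taken from perf.

/*
 * Hypothetical demo, not kernel code: guard reader accesses with an
 * explicit RCU read section instead of disabling preemption.
 */
#include <stdlib.h>
#include <urcu.h>		/* userspace RCU, link with -lurcu */

struct event {
	int pending_work;
};

static struct event *shared_event;

static void reader(void)
{
	struct event *ev;

	rcu_read_lock();	/* the explicit RCU read section */
	ev = rcu_dereference(shared_event);
	if (ev)
		ev->pending_work = 0;
	rcu_read_unlock();
}

int main(void)
{
	struct event *ev = calloc(1, sizeof(*ev));

	rcu_register_thread();
	rcu_assign_pointer(shared_event, ev);

	reader();

	rcu_assign_pointer(shared_event, NULL);
	synchronize_rcu();	/* all RCU readers above are done by now */
	free(ev);

	rcu_unregister_thread();
	return 0;
}
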
kernel/events/core.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index fc9a78e1fb4aa..f75aa9f14c979 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5208,10 +5208,9 @@ static void perf_pending_task_sync(struct perf_event *event)
}
/*
- * All accesses related to the event are within the same
- * non-preemptible section in perf_pending_task(). The RCU
- * grace period before the event is freed will make sure all
- * those accesses are complete by then.
+ * All accesses related to the event are within the same RCU section in
+ * perf_pending_task(). The RCU grace period before the event is freed
+ * will make sure all those accesses are complete by then.
*/
rcuwait_wait_event(&event->pending_work_wait, !event->pending_work, TASK_UNINTERRUPTIBLE);
}
@@ -6842,7 +6841,7 @@ static void perf_pending_task(struct callback_head *head)
* critical section as the ->pending_work reset. See comment in
* perf_pending_task_sync().
*/
- preempt_disable_notrace();
+ rcu_read_lock();
/*
* If we 'fail' here, that's OK, it means recursion is already disabled
* and we won't recurse 'further'.
@@ -6855,10 +6854,10 @@ static void perf_pending_task(struct callback_head *head)
local_dec(&event->ctx->nr_pending);
rcuwait_wake_up(&event->pending_work_wait);
}
+ rcu_read_unlock();
if (rctx >= 0)
perf_swevent_put_recursion_context(rctx);
- preempt_enable_notrace();
}
#ifdef CONFIG_GUEST_PERF_EVENTS
--
2.45.2