Message-ID: <CAEf4Bzb7MCv87ZEPXvH7APk9yvmtCWvuUO5ShEaLvz_DLfNqpw@mail.gmail.com>
Date: Fri, 9 May 2025 14:49:37 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
bpf@...r.kernel.org, x86@...nel.org, Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Josh Poimboeuf <jpoimboe@...nel.org>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>
Subject: Re: [PATCH v8 12/18] unwind deferred: Use SRCU in unwind_deferred_task_work()
On Fri, May 9, 2025 at 9:54 AM Steven Rostedt <rostedt@...dmis.org> wrote:
>
> From: Steven Rostedt <rostedt@...dmis.org>
>
> Instead of using the callback_mutex to protect the linked list of callbacks
> in unwind_deferred_task_work(), use SRCU. This function gets called every
> time a task that has a requested stack trace to record exits.
> This can happen for many tasks on several CPUs at the same time. A mutex
> here is a bottleneck and can cause contention that slows down performance.
>
> As the callbacks themselves are allowed to sleep, regular RCU can not be
> used to protect the list. Instead use SRCU, as that still allows the
> callbacks to sleep and the list can be read without needing to hold the
> callback_mutex.
>
> Link: https://lore.kernel.org/all/ca9bd83a-6c80-4ee0-a83c-224b9d60b755@efficios.com/
>
> Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
> ---
> kernel/unwind/deferred.c | 33 +++++++++++++++++++++++++--------
> 1 file changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/unwind/deferred.c b/kernel/unwind/deferred.c
> index 7ae0bec5b36a..5d6976ee648f 100644
> --- a/kernel/unwind/deferred.c
> +++ b/kernel/unwind/deferred.c
> @@ -13,10 +13,11 @@
>
> #define UNWIND_MAX_ENTRIES 512
>
> -/* Guards adding to and reading the list of callbacks */
> +/* Guards adding to or removing from the list of callbacks */
> static DEFINE_MUTEX(callback_mutex);
> static LIST_HEAD(callbacks);
> static unsigned long unwind_mask;
> +DEFINE_STATIC_SRCU(unwind_srcu);
>
> /*
> * Read the task context timestamp, if this is the first caller then
> @@ -108,6 +109,7 @@ static void unwind_deferred_task_work(struct callback_head *head)
> struct unwind_work *work;
> u64 timestamp;
> struct task_struct *task = current;
> + int idx;
>
> if (WARN_ON_ONCE(!info->pending))
> return;
> @@ -133,13 +135,15 @@ static void unwind_deferred_task_work(struct callback_head *head)
>
> timestamp = info->timestamp;
>
> - guard(mutex)(&callback_mutex);
> - list_for_each_entry(work, &callbacks, list) {
> + idx = srcu_read_lock(&unwind_srcu);
nit: you could have used guard(srcu)(&unwind_srcu) ?
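For illustration, a rough sketch of what I mean (assuming the srcu scope
guard from <linux/srcu.h>; the loop body is unchanged from the patch):

```c
	/* Sketch only: the guard takes the SRCU read lock here and
	 * drops it automatically when the scope ends, so the idx
	 * variable and the explicit srcu_read_unlock() go away.
	 */
	guard(srcu)(&unwind_srcu);
	list_for_each_entry_srcu(work, &callbacks, list,
				 srcu_read_lock_held(&unwind_srcu)) {
		if (task->unwind_mask & (1UL << work->bit)) {
			work->func(work, &trace, timestamp);
			clear_bit(work->bit, &current->unwind_mask);
		}
	}
```

Not a big deal either way, just saves the manual index bookkeeping.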
> + list_for_each_entry_srcu(work, &callbacks, list,
> + srcu_read_lock_held(&unwind_srcu)) {
> if (task->unwind_mask & (1UL << work->bit)) {
> work->func(work, &trace, timestamp);
> clear_bit(work->bit, &current->unwind_mask);
> }
> }
> + srcu_read_unlock(&unwind_srcu, idx);
> }
>
> static int unwind_deferred_request_nmi(struct unwind_work *work, u64 *timestamp)
[...]