Message-ID: <20250604100052.GH38114@noisy.programming.kicks-ass.net>
Date: Wed, 4 Jun 2025 12:00:52 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Luo Gengkun <luogengkun@...weicloud.com>
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
mark.rutland@....com, alexander.shishkin@...ux.intel.com,
jolsa@...nel.org, irogers@...gle.com, adrian.hunter@...el.com,
kan.liang@...ux.intel.com, davidcc@...gle.com,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] perf/core: Fix
WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0) in perf_cgroup_switch
On Wed, Jun 04, 2025 at 03:39:24AM +0000, Luo Gengkun wrote:
> There may be concurrency between perf_cgroup_switch and
> perf_cgroup_event_disable. Consider the following scenario: after a new
> perf cgroup event is created on CPU0, the new event may not trigger
> a reprogramming, causing ctx->is_active to be 0. In this case, when CPU1
> disables this perf event, it executes __perf_remove_from_context->
> list_del_event->perf_cgroup_event_disable on CPU1, which causes a race
> with perf_cgroup_switch running on CPU0.
>
> The following describes the details of this concurrency scenario:
>
>              CPU0                                           CPU1
>
> perf_cgroup_switch:
>    ...
>    # cpuctx->cgrp is not NULL here
>    if (READ_ONCE(cpuctx->cgrp) == NULL)
>       return;
>
>                                       perf_remove_from_context:
>                                          ...
>                                          raw_spin_lock_irq(&ctx->lock);
>                                          ...
>                                          # ctx->is_active == 0 because a reprogramming was
>                                          # not triggered, so CPU1 can do __perf_remove_from_context
>                                          # for CPU0
>                                          __perf_remove_from_context:
>                                             perf_cgroup_event_disable:
>                                                ...
>                                                if (--ctx->nr_cgroups)
>                                                ...
>
>    # this warning will happen because CPU1 changed
>    # ctx.nr_cgroups to 0.
>    WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0);
>
> To fix this problem, expand the lock-holding critical section in
> perf_cgroup_switch.
>
> Fixes: db4a835601b7 ("perf/core: Set cgroup in CPU contexts for new cgroup events")
> Signed-off-by: Luo Gengkun <luogengkun@...weicloud.com>
> ---
Right, so how about we simply re-check the condition once we take the
lock?

Also, take the opportunity to convert to guard instead of adding goto
unlock.
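
For illustration only (not part of the patch): the re-check is the usual
check, lock, re-check pattern. A minimal, self-contained userspace sketch
of that pattern, with a pthread mutex and the GCC/Clang cleanup attribute
standing in for the kernel's perf_ctx_lock/guard() machinery; every name
below is made up for the sketch and is not kernel API:

	/* sketch.c - illustration only */
	#include <pthread.h>
	#include <stddef.h>

	static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
	static void *cur_cgrp;			/* stand-in for cpuctx->cgrp */

	/* destructor: runs automatically when the guard variable leaves scope */
	static void unlock_cleanup(pthread_mutex_t **lockp)
	{
		pthread_mutex_unlock(*lockp);
	}

	/* poor man's guard(): take the lock now, drop it on every scope exit */
	#define GUARD_LOCK(lock)						\
		pthread_mutex_t *__guard					\
		__attribute__((cleanup(unlock_cleanup), unused)) =		\
		(pthread_mutex_lock(lock), (lock))

	void cgroup_switch_sketch(void *next_cgrp)
	{
		/* optimistic unlocked check (the kernel uses READ_ONCE() here) */
		if (cur_cgrp == next_cgrp)
			return;

		GUARD_LOCK(&ctx_lock);

		/* re-check under the lock: a racing disable may have cleared it */
		if (cur_cgrp == NULL)
			return;			/* cleanup handler unlocks here */

		cur_cgrp = next_cgrp;		/* ...reprogram cgroup events... */
		/* cleanup handler unlocks on this exit path too */
	}
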
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -207,6 +207,19 @@ static void perf_ctx_unlock(struct perf_
 	__perf_ctx_unlock(&cpuctx->ctx);
 }
 
+typedef struct {
+	struct perf_cpu_context *cpuctx;
+	struct perf_event_context *ctx;
+} class_perf_ctx_lock_t;
+
+static inline void class_perf_ctx_lock_destructor(class_perf_ctx_lock_t *_T)
+{ perf_ctx_unlock(_T->cpuctx, _T->ctx); }
+
+static inline class_perf_ctx_lock_t
+class_perf_ctx_lock_constructor(struct perf_cpu_context *cpuctx,
+				struct perf_event_context *ctx)
+{ perf_ctx_lock(cpuctx, ctx); return (class_perf_ctx_lock_t){ cpuctx, ctx }; }
+
 #define TASK_TOMBSTONE ((void *)-1L)
 
 static bool is_kernel_event(struct perf_event *event)
@@ -944,7 +957,13 @@ static void perf_cgroup_switch(struct ta
 	if (READ_ONCE(cpuctx->cgrp) == cgrp)
 		return;
 
-	perf_ctx_lock(cpuctx, cpuctx->task_ctx);
+	guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx);
+	/*
+	 * Re-check, could've raced vs perf_remove_from_context().
+	 */
+	if (READ_ONCE(cpuctx->cgrp) == NULL)
+		return;
+
 	perf_ctx_disable(&cpuctx->ctx, true);
 	ctx_sched_out(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
 
@@ -962,7 +981,6 @@ static void perf_cgroup_switch(struct ta
 	ctx_sched_in(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP);
 	perf_ctx_enable(&cpuctx->ctx, true);
 
-	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 }
 
 static int perf_cgroup_ensure_storage(struct perf_event *event,
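
For completeness (this is a paraphrase of include/linux/cleanup.h, not part
of the patch): with the constructor/destructor pair added above, the
guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx) line expands to roughly the
following; the real macro generates a unique variable name, __scope here is
made up:

	/* approximate expansion of: guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx); */
	class_perf_ctx_lock_t __scope
		__attribute__((cleanup(class_perf_ctx_lock_destructor))) =
		class_perf_ctx_lock_constructor(cpuctx, cpuctx->task_ctx);

So the unlock runs on every exit from perf_cgroup_switch(), including the
early return added by the re-check, which is why the explicit
perf_ctx_unlock() at the end can be dropped.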