Message-ID: <164914734200.389.12009651583412320175.tip-bot2@tip-bot2>
Date: Tue, 05 Apr 2022 08:29:02 -0000
From: "tip-bot2 for Chengming Zhou" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Chengming Zhou <zhouchengming@...edance.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: perf/urgent] perf/core: Always set cpuctx cgrp when enable cgroup event
The following commit has been merged into the perf/urgent branch of tip:
Commit-ID: e19cd0b6fa5938c51d7b928010d584f0de93913a
Gitweb: https://git.kernel.org/tip/e19cd0b6fa5938c51d7b928010d584f0de93913a
Author: Chengming Zhou <zhouchengming@...edance.com>
AuthorDate: Tue, 29 Mar 2022 23:45:23 +08:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 05 Apr 2022 09:59:45 +02:00
perf/core: Always set cpuctx cgrp when enable cgroup event

When enabling a cgroup event, setting cpuctx->cgrp is conditional on the
current task's cgroup matching the event's cgroup, so it has to be done
for every new event. That brings complexity but no advantage. To keep it
simple, this patch always sets cpuctx->cgrp when the first cgroup event
is enabled, and resets it to NULL when the last cgroup event is disabled.

Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lore.kernel.org/r/20220329154523.86438-5-zhouchengming@bytedance.com
---
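
As an illustration of the resulting first-enable/last-disable pairing, here is a
minimal, self-contained userspace C sketch. The fake_* types and function names
are invented for this example and are not the kernel implementation; the point is
only that the cgroup pointer is published on the first enable and cleared on the
last disable.

/*
 * Minimal userspace sketch of the first-enable/last-disable pattern:
 * the cgroup pointer is set when the first cgroup event is enabled and
 * cleared when the last one is disabled.  All fake_* names are invented
 * for illustration only.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct fake_cgroup { const char *name; };
struct fake_cpuctx { struct fake_cgroup *cgrp; };
struct fake_ctx {
        int nr_cgroups;
        struct fake_cpuctx *cpuctx;
};

static void fake_cgroup_event_enable(struct fake_ctx *ctx, struct fake_cgroup *cur)
{
        if (ctx->nr_cgroups++)          /* not the first cgroup event: nothing to do */
                return;
        ctx->cpuctx->cgrp = cur;        /* first event: publish the current cgroup */
}

static void fake_cgroup_event_disable(struct fake_ctx *ctx)
{
        if (--ctx->nr_cgroups)          /* not the last cgroup event: keep the pointer */
                return;
        ctx->cpuctx->cgrp = NULL;       /* last event gone: reset */
}

int main(void)
{
        struct fake_cpuctx cpuctx = { .cgrp = NULL };
        struct fake_ctx ctx = { .nr_cgroups = 0, .cpuctx = &cpuctx };
        struct fake_cgroup cur = { .name = "demo" };

        fake_cgroup_event_enable(&ctx, &cur);
        fake_cgroup_event_enable(&ctx, &cur);
        assert(cpuctx.cgrp == &cur);    /* set by the first enable */

        fake_cgroup_event_disable(&ctx);
        assert(cpuctx.cgrp == &cur);    /* one event still enabled */

        fake_cgroup_event_disable(&ctx);
        assert(cpuctx.cgrp == NULL);    /* cleared by the last disable */

        printf("first-enable/last-disable pattern holds\n");
        return 0;
}

The counter in the sketch plays the same role as ctx->nr_cgroups in
perf_cgroup_event_enable()/perf_cgroup_event_disable(): only the 0->1 and 1->0
transitions touch the published pointer.
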
kernel/events/core.c | 18 ++----------------
1 file changed, 2 insertions(+), 16 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index bdeb41f..23bb197 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -967,22 +967,10 @@ perf_cgroup_event_enable(struct perf_event *event, struct perf_event_context *ct
          */
         cpuctx = container_of(ctx, struct perf_cpu_context, ctx);
 
-        /*
-         * Since setting cpuctx->cgrp is conditional on the current @cgrp
-         * matching the event's cgroup, we must do this for every new event,
-         * because if the first would mismatch, the second would not try again
-         * and we would leave cpuctx->cgrp unset.
-         */
-        if (ctx->is_active && !cpuctx->cgrp) {
-                struct perf_cgroup *cgrp = perf_cgroup_from_task(current, ctx);
-
-                if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup))
-                        cpuctx->cgrp = cgrp;
-        }
-
         if (ctx->nr_cgroups++)
                 return;
 
+        cpuctx->cgrp = perf_cgroup_from_task(current, ctx);
         list_add(&cpuctx->cgrp_cpuctx_entry,
                         per_cpu_ptr(&cgrp_cpuctx_list, event->cpu));
 }
@@ -1004,9 +992,7 @@ perf_cgroup_event_disable(struct perf_event *event, struct perf_event_context *c
         if (--ctx->nr_cgroups)
                 return;
 
-        if (ctx->is_active && cpuctx->cgrp)
-                cpuctx->cgrp = NULL;
-
+        cpuctx->cgrp = NULL;
         list_del(&cpuctx->cgrp_cpuctx_entry);
 }
 