Message-Id: <20210517195405.3079458-3-robh@kernel.org>
Date: Mon, 17 May 2021 14:54:02 -0500
From: Rob Herring <robh@...nel.org>
To: Will Deacon <will@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Mark Rutland <mark.rutland@....com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...hat.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Ian Rogers <irogers@...gle.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
honnappa.nagarahalli@....com, Zachary.Leaf@....com,
Raphael Gault <raphael.gault@....com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Namhyung Kim <namhyung@...nel.org>,
Itaru Kitayama <itaru.kitayama@...il.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: [PATCH v8 2/5] perf: Track per-PMU sched_task() callback users
From: Kan Liang <kan.liang@...ux.intel.com>
Currently, perf only tracks per-CPU sched_task() callback users, which
doesn't work if a callback user is a task. For example, dirty counters
have to be cleared to prevent data leakage when a task with userspace
counter access is scheduled in. Such a task may be created on one CPU
but run on another, so it cannot be tracked with a per-CPU variable. A
global variable does not work either because of hybrid PMUs.
Add a per-PMU variable to track the callback users.
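To illustrate the intended usage (a minimal sketch, not part of this
patch; the my_pmu_* hooks below are hypothetical, and the real arm64
user is added later in this series), a driver that needs the
task-scheduled callback would take a reference on the new counter when
such an event is created and drop it when the event is destroyed:

  #include <linux/atomic.h>
  #include <linux/perf_event.h>

  static void my_pmu_event_destroy(struct perf_event *event)
  {
          /* Drop the reference taken in my_pmu_event_init() */
          atomic_dec(&event->pmu->sched_cb_usage);
  }

  static int my_pmu_event_init(struct perf_event *event)
  {
          /* ... hardware/event setup elided ... */

          /* Request pmu->sched_task() on every context switch in/out */
          atomic_inc(&event->pmu->sched_cb_usage);
          event->destroy = my_pmu_event_destroy;
          return 0;
  }

While the counter is non-zero, the sched-out/sched-in paths below call
pmu->sched_task() regardless of the per-CPU cpuctx->sched_cb_usage, so
a task-based user works no matter which CPU the task ends up on.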
Suggested-by: Rob Herring <robh@...nel.org>
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
[robh: Also call sched_task() for sched out cases]
Signed-off-by: Rob Herring <robh@...nel.org>
---
include/linux/perf_event.h | 3 +++
kernel/events/core.c | 8 +++++---
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 4cf081e22b76..a88d52e80864 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -300,6 +300,9 @@ struct pmu {
/* number of address filters this PMU can do */
unsigned int nr_addr_filters;
+ /* Track the per PMU sched_task() callback users */
+ atomic_t sched_cb_usage;
+
/*
* Fully disable/enable this PMU, can be used to protect from the PMI
* as well as for lazy/batch writing of the MSRs.
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2e947a485898..6d0507c23240 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3448,7 +3448,8 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
perf_pmu_disable(pmu);
- if (cpuctx->sched_cb_usage && pmu->sched_task)
+ if (pmu->sched_task &&
+ (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usage)))
pmu->sched_task(ctx, false);
/*
@@ -3488,7 +3489,8 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
raw_spin_lock(&ctx->lock);
perf_pmu_disable(pmu);
- if (cpuctx->sched_cb_usage && pmu->sched_task)
+ if (pmu->sched_task &&
+ (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usage)))
pmu->sched_task(ctx, false);
task_ctx_sched_out(cpuctx, ctx, EVENT_ALL);
@@ -3851,7 +3853,7 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
perf_event_sched_in(cpuctx, ctx, task);
- if (cpuctx->sched_cb_usage && pmu->sched_task)
+ if (pmu->sched_task && (cpuctx->sched_cb_usage || atomic_read(&pmu->sched_cb_usage)))
pmu->sched_task(cpuctx->task_ctx, true);
perf_pmu_enable(pmu);
--
2.27.0