Message-Id: <20210608145228.36595-1-leo.yan@linaro.org>
Date: Tue, 8 Jun 2021 22:52:27 +0800
From: Leo Yan <leo.yan@...aro.org>
To: Will Deacon <will@...nel.org>, Mark Rutland <mark.rutland@....com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc: Leo Yan <leo.yan@...aro.org>
Subject: [PATCH v1 1/2] arm64: perf: Correct per-thread mode if the event is not supported
When the perf tool runs in per-thread mode, armpmu_event_init() defers
event validation to armpmu_add(); the main reason is that the PMU
selected in the init phase can mismatch the CPU the profiled task is
scheduled on.
For example, on an Arm big.LITTLE platform with two clusters, every
cluster has its own dedicated PMU. If the event initialization happens
on the LITTLE cluster, its corresponding PMU is selected, but the
profiled task may be scheduled on the big cluster; handling this case
is deferred to armpmu_add().
Usually, we should report failure in the first place, so users can
easily locate the issue they are facing. In per-thread mode, the
profiled task can be migrated to any CPU, and therefore the event must
be enabled on any CPU. In other words, if a PMU detects that it cannot
support a process-following event, it can directly return -EOPNOTSUPP
to stop profiling.
This patch adds a check for per-thread mode: if the event is not
supported, return -EOPNOTSUPP.
Signed-off-by: Leo Yan <leo.yan@...aro.org>
---
drivers/perf/arm_pmu.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index d4f7f1f9cc77..aedea060ca8b 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -502,9 +502,9 @@ static int armpmu_event_init(struct perf_event *event)
/*
* Reject CPU-affine events for CPUs that are of a different class to
* that which this PMU handles. Process-following events (where
- * event->cpu == -1) can be migrated between CPUs, and thus we have to
- * reject them later (in armpmu_add) if they're scheduled on a
- * different class of CPU.
+ * event->cpu == -1) can be migrated between CPUs, and thus we will
+ * reject them when map_event() detects an absent entry below, or
+ * later (in armpmu_add) if they're scheduled on a different class of CPU.
*/
if (event->cpu != -1 &&
!cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
@@ -514,8 +514,16 @@ static int armpmu_event_init(struct perf_event *event)
if (has_branch_stack(event))
return -EOPNOTSUPP;
- if (armpmu->map_event(event) == -ENOENT)
+ if (armpmu->map_event(event) == -ENOENT) {
+ /*
* The process-following event is not supported on the current
* PMU, so return -EOPNOTSUPP to stop perf at the initialization
* phase.
+ */
+ if (event->cpu == -1)
+ return -EOPNOTSUPP;
return -ENOENT;
+ }
return __hw_perf_event_init(event);
}
--
2.25.1