Message-Id: <20230913125956.3652667-1-tero.kristo@linux.intel.com>
Date: Wed, 13 Sep 2023 15:59:56 +0300
From: Tero Kristo <tero.kristo@...ux.intel.com>
To: x86@...nel.org, bp@...en8.de, dave.hansen@...ux.intel.com,
tglx@...utronix.de
Cc: hpa@...or.com, irogers@...gle.com, jolsa@...nel.org,
namhyung@...nel.org, adrian.hunter@...el.com, acme@...nel.org,
mingo@...hat.com, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, alexander.shishkin@...ux.intel.com,
linux-perf-users@...r.kernel.org, peterz@...radead.org,
mark.rutland@....com
Subject: [PATCHv2 2/2] perf/core: Allow reading package events from perf_event_read_local
Per-package perf events are typically registered on a single CPU only, but
they can be read from any CPU within the package.

Currently perf_event_read() maps the event CPU according to the topology
information to avoid an unnecessary SMP call, whereas perf_event_read_local()
compares the raw CPU numbers and fails the read if the calling CPU is not
exactly the CPU the event was registered on. Allow the same mapping in
perf_event_read_local() when the event supports it
(PERF_EV_CAP_READ_ACTIVE_PKG).
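
For reference, the existing mapping used by perf_event_read() looks roughly
like the sketch below (a simplified rendering of __perf_event_read_cpu(),
shown only to illustrate the topology check, not part of this patch):

static int __perf_event_read_cpu(struct perf_event *event, int event_cpu)
{
	u16 local_pkg, event_pkg;

	if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) {
		int local_cpu = smp_processor_id();

		event_pkg = topology_physical_package_id(event_cpu);
		local_pkg = topology_physical_package_id(local_cpu);

		/* Same package: redirect the read to the local CPU */
		if (event_pkg == local_pkg)
			return local_cpu;
	}

	return event_cpu;
}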
This allows users such as BPF programs to correctly read per-package perf
events from any CPU within the package.
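
As an illustration of that use case, a BPF program could read such an event
through a BPF_MAP_TYPE_PERF_EVENT_ARRAY slot from whichever in-package CPU it
happens to run on. The map name, slot index and attach point below are
arbitrary choices made for this sketch, not part of this patch:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} pkg_events SEC(".maps");	/* slot 0: fd of the per-package event */

SEC("kprobe/do_nanosleep")
int read_pkg_event(void *ctx)
{
	struct bpf_perf_event_value val = {};

	/*
	 * Slot 0 holds an event opened on one CPU of this package; with
	 * this change perf_event_read_local() accepts the read from any
	 * CPU in the same package instead of failing with -EINVAL.
	 */
	if (bpf_perf_event_read_value(&pkg_events, 0, &val, sizeof(val)))
		return 0;

	bpf_printk("pkg counter: %llu", val.counter);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";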
Signed-off-by: Tero Kristo <tero.kristo@...ux.intel.com>
---
v2:
* prevent an illegal array access when event->oncpu == -1
* split the event->cpu / event->oncpu handling into separate local variables
kernel/events/core.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4c72a41f11af..6b343bac0a71 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4425,6 +4425,9 @@ static int __perf_event_read_cpu(struct perf_event *event, int event_cpu)
 {
 	u16 local_pkg, event_pkg;
 
+	if (event_cpu < 0 || event_cpu >= nr_cpu_ids)
+		return event_cpu;
+
 	if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) {
 		int local_cpu = smp_processor_id();
 
@@ -4528,6 +4531,8 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
 {
 	unsigned long flags;
 	int ret = 0;
+	int event_cpu;
+	int event_oncpu;
 
 	/*
 	 * Disabling interrupts avoids all counter scheduling (context
@@ -4551,15 +4556,22 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
 		goto out;
 	}
 
+	/*
+	 * Get the event CPU numbers, and adjust them to local if the event is
+	 * a per-package event that can be read locally
+	 */
+	event_oncpu = __perf_event_read_cpu(event, event->oncpu);
+	event_cpu = __perf_event_read_cpu(event, event->cpu);
+
 	/* If this is a per-CPU event, it must be for this CPU */
 	if (!(event->attach_state & PERF_ATTACH_TASK) &&
-	    event->cpu != smp_processor_id()) {
+	    event_cpu != smp_processor_id()) {
 		ret = -EINVAL;
 		goto out;
 	}
 
 	/* If this is a pinned event it must be running on this CPU */
-	if (event->attr.pinned && event->oncpu != smp_processor_id()) {
+	if (event->attr.pinned && event_oncpu != smp_processor_id()) {
 		ret = -EBUSY;
 		goto out;
 	}
@@ -4569,7 +4581,7 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
 	 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
 	 * oncpu == -1).
 	 */
-	if (event->oncpu == smp_processor_id())
+	if (event_oncpu == smp_processor_id())
 		event->pmu->read(event);
 
 	*value = local64_read(&event->count);
--
2.40.1