Message-ID: <f337ed92d3e3d519ce4b5d4f23616053ca8a1726.1769063941.git.sandipan.das@amd.com>
Date: Thu, 22 Jan 2026 12:15:05 +0530
From: Sandipan Das <sandipan.das@....com>
To: <linux-perf-users@...r.kernel.org>, <linux-kernel@...r.kernel.org>
CC: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim
<namhyung@...nel.org>, Mark Rutland <mark.rutland@....com>, "Alexander
Shishkin" <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
James Clark <james.clark@...aro.org>, Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
<x86@...nel.org>, "H . Peter Anvin" <hpa@...or.com>,
<stable@...r.kernel.org>, Ravi Bangoria <ravi.bangoria@....com>, "Ananth
Narayan" <ananth.narayan@....com>, Sandipan Das <sandipan.das@....com>
Subject: [PATCH] perf/x86/amd/uncore: Use Node ID to identify DF and UMC domains
For DF and UMC PMUs, a single context is shared across all CPUs that are
connected to the same Data Fabric (DF) instance. Currently, the Socket ID
is used to identify DF instances. This works for configurations with a
single IO Die (IOD) per socket but fails in the following cases:

* Older Zen 1 processors, where each chiplet has its own DF instance
  instead of a single IOD.
* Configurations with multiple IODs in a single socket.
Address this by using the Node ID available in ECX[7:0] of CPUID leaf
0x8000001e which is already provided by topology_amd_node_id(). Replace
the use of topology_logical_package_id() with topology_amd_node_id() in
order to correctly identify domains for context sharing.
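For reference, the Node ID lookup that topology_amd_node_id() relies on
can be sketched in userspace roughly as below. This is an illustrative
sketch only, not kernel code: it assumes a GCC/Clang toolchain providing
<cpuid.h>, and the helper names amd_node_id_from_ecx() and
read_amd_node_id() are invented for this example.

```c
#include <cpuid.h>   /* __get_cpuid(), GCC/Clang CPUID wrapper */

/*
 * On AMD processors, CPUID leaf 0x8000001e reports topology
 * extensions: ECX[7:0] carries the Node ID and ECX[10:8] the
 * nodes-per-processor count. This helper extracts the Node ID bits.
 */
static unsigned int amd_node_id_from_ecx(unsigned int ecx)
{
	return ecx & 0xff;
}

/* Returns 0 and fills *node_id on success, -1 if the leaf is absent. */
static int read_amd_node_id(unsigned int *node_id)
{
	unsigned int eax, ebx, ecx, edx;

	/* Leaf 0x8000001e exists only if the max extended leaf covers it. */
	if (!__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx) ||
	    eax < 0x8000001e)
		return -1;
	if (!__get_cpuid(0x8000001e, &eax, &ebx, &ecx, &edx))
		return -1;

	*node_id = amd_node_id_from_ecx(ecx);
	return 0;
}
```

On a multi-IOD or Zen 1 part, CPUs on different nodes within one socket
report different Node IDs here, which is why this value can distinguish
DF instances where the Socket ID cannot.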
Fixes: 07888daa056e ("perf/x86/amd/uncore: Move discovery and registration")
Signed-off-by: Sandipan Das <sandipan.das@....com>
---
arch/x86/events/amd/uncore.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 9293ce50574d..9a13a9f21d2f 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -700,7 +700,7 @@ void amd_uncore_df_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
info.split.aux_data = 0;
info.split.num_pmcs = NUM_COUNTERS_NB;
info.split.gid = 0;
- info.split.cid = topology_logical_package_id(cpu);
+ info.split.cid = topology_amd_node_id(cpu);
if (pmu_version >= 2) {
ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
@@ -999,8 +999,8 @@ void amd_uncore_umc_ctx_scan(struct amd_uncore *uncore, unsigned int cpu)
cpuid(EXT_PERFMON_DEBUG_FEATURES, &eax, &ebx.full, &ecx, &edx);
info.split.aux_data = ecx; /* stash active mask */
info.split.num_pmcs = ebx.split.num_umc_pmc;
- info.split.gid = topology_logical_package_id(cpu);
- info.split.cid = topology_logical_package_id(cpu);
+ info.split.gid = topology_amd_node_id(cpu);
+ info.split.cid = topology_amd_node_id(cpu);
*per_cpu_ptr(uncore->info, cpu) = info;
}
--
2.43.0