Message-Id: <1498545653-6755-3-git-send-email-suravee.suthikulpanit@amd.com>
Date:   Tue, 27 Jun 2017 01:40:53 -0500
From:   Suravee Suthikulpanit <suravee.suthikulpanit@....com>
To:     x86@...nel.org, linux-kernel@...r.kernel.org,
        stable@...r.kernel.org
Cc:     bp@...en8.de, bp@...e.de, leo.duran@....com, yazen.ghannam@....com,
        Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Subject: [PATCH 2/2] x86/CPU/AMD: Use L3 Cache info from CPUID to determine LLC ID

The NumSharingCache field of CPUID_Fn8000001D_EAX_x03 (Cache Properties,
L3) should be used to determine whether the last-level cache (LLC) ID is
the same as the die ID or the core-complex (CCX) ID. In the former case,
the number of threads sharing the L3 equals the total number of threads
in the die (cores times SMT siblings). This leaf is available when the
CPU topology extension is supported (e.g. since family 15h).
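
For reference, a minimal user-space sketch of reading this field (not
part of the patch; it assumes GCC's <cpuid.h> helper __get_cpuid_count
and that NumSharingCache lives in EAX[25:14] of Fn8000_001D, ECX=3,
encoded as the number of sharing threads minus one):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID Fn8000_001D, ECX=3: L3 cache properties (needs TOPOEXT). */
	if (!__get_cpuid_count(0x8000001d, 3, &eax, &ebx, &ecx, &edx) ||
	    (eax & 0x1f) == 0) {	/* CacheType == 0: no such cache */
		printf("No L3 cache properties leaf\n");
		return 1;
	}

	/* NumSharingCache: EAX[25:14], value is (sharing threads - 1). */
	unsigned int num_sharing = ((eax >> 14) & 0xfff) + 1;

	printf("Threads sharing the L3: %u\n", num_sharing);
	return 0;
}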

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Signed-off-by: Leo Duran <leo.duran@....com>
Signed-off-by: Yazen Ghannam <yazen.ghannam@....com>
Cc: <stable@...r.kernel.org> # v4.10+
---
 arch/x86/kernel/cpu/amd.c | 40 ++++++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 16 deletions(-)
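
As a worked example of the ccx_id derivation introduced below (not part
of the patch; the topology values are assumptions modelled on a
family-17h-style part with 8 cores per die, 2 threads per core, and an
L3 shared by 4 cores, i.e. l3_num_threads_sharing == 8):

#include <stdio.h>

/* Mirrors the ccx_id computation in amd_get_llc_id(), for illustration. */
static unsigned int ccx_id(unsigned int die, unsigned int core,
			   unsigned int cores_per_die,
			   unsigned int threads_per_core,
			   unsigned int l3_sharing_threads)
{
	unsigned int cpu_id = die * cores_per_die + core;

	return cpu_id * threads_per_core / l3_sharing_threads;
}

int main(void)
{
	printf("%u %u %u\n",
	       ccx_id(0, 3, 8, 2, 8),	/* die 0, core 3 -> CCX 0 */
	       ccx_id(0, 5, 8, 2, 8),	/* die 0, core 5 -> CCX 1 */
	       ccx_id(1, 0, 8, 2, 8));	/* die 1, core 0 -> CCX 2 */
	return 0;
}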

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 2f5869c..faa4ec3 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -306,6 +306,29 @@ static int nearby_node(int apicid)
 
 #ifdef CONFIG_SMP
 
+static void amd_get_llc_id(struct cpuinfo_x86 *c)
+{
+	int cpu = smp_processor_id();
+
+	/* Default LLC is at the node level. */
+	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
+
+	/*
+	 * We may have multiple LLCs per die if L3 caches exist.
+	 * Currently, the only case where LLC (L3) is not
+	 * at the die level is when LLC is at the core complex (CCX) level.
+	 * So, enumerate cpu_llc_id using ccx_id.
+	 */
+	if (l3_num_threads_sharing &&
+	    l3_num_threads_sharing < (c->x86_max_cores * smp_num_siblings)) {
+		u32 cpu_id = (c->phys_proc_id * c->x86_max_cores) + c->cpu_core_id;
+		u32 ccx_id = cpu_id * smp_num_siblings / l3_num_threads_sharing;
+
+		per_cpu(cpu_llc_id, cpu) = ccx_id;
+		pr_debug("Use ccx ID as llc ID: %#x\n", ccx_id);
+	}
+}
+
 /*
  * Per Documentation/x86/topology.c, the kernel works with
  *  {packages, cores, threads}, and we will map:
@@ -321,12 +344,9 @@ static int nearby_node(int apicid)
  *     Assumption: Number of cores in each internal node is the same.
  * (2) cpu_core_id is derived from either CPUID topology extension
  *     or initial APIC_ID.
- * (3) cpu_llc_id is either L3 or per-node
  */
 static void amd_get_topology(struct cpuinfo_x86 *c)
 {
-	int cpu = smp_processor_id();
-
 	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
 		u32 eax, ebx, ecx, edx;
 
@@ -405,19 +425,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 
 	/* core id has to be in the [0 .. cores_per_die - 1] range */
 	c->cpu_core_id %= c->x86_max_cores;
-
-	/* Default LLC is at the die level. */
-	per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
-
-	/*
-	 * We may have multiple LLCs if L3 caches exist, so check if we
-	 * have an L3 cache by looking at the L3 cache CPUID leaf.
-	 * For family17h, LLC is at the core complex level.
-	 * Core complex id is ApicId[3].
-	 */
-	if (cpuid_edx(0x80000006) && c->x86 == 0x17)
-		per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
-
 }
 #endif
 
@@ -799,6 +806,7 @@ static void init_amd(struct cpuinfo_x86 *c)
 #ifdef CONFIG_SMP
 	if (c->extended_cpuid_level >= 0x80000008) {
 		amd_get_topology(c);
+		amd_get_llc_id(c);
 		srat_detect_node(c);
 	}
 #endif
-- 
2.7.4
