Message-Id: <1501009980-2273-1-git-send-email-suravee.suthikulpanit@amd.com>
Date:   Tue, 25 Jul 2017 14:13:00 -0500
From:   Suravee Suthikulpanit <suravee.suthikulpanit@....com>
To:     linux-kernel@...r.kernel.org, x86@...nel.org
Cc:     tglx@...utronix.de, mingo@...hat.com, hpa@...or.com, bp@...e.de,
        peterz@...radead.org, Yazen.Ghannam@....com,
        Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Subject: [PATCH] x86/amd: Only fixup cpu_core_id for pre-family17h

The current fixup causes cpu_core_id on family17h with a downcore
configuration to be incorrect, as shown here:

  NODE: 0
  processor  0 core id : 0
  processor  1 core id : 1
  processor  2 core id : 2
  processor  3 core id : 4
  processor  4 core id : 5
  processor  5 core id : 0

  NODE: 1
  processor  6 core id : 2
  processor  7 core id : 3
  processor  8 core id : 4
  processor  9 core id : 0
  processor 10 core id : 1
  processor 11 core id : 2

This could cause issues for code that relies on cpu_core_id being unique,
at least within a node.
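
For reference, the duplicate values above are exactly what the old modulo
fixup produces when fed the non-contiguous family17h CoreIds. Below is a
minimal user-space sketch, not kernel code, assuming cus_per_node = 6 and
the node-1 CoreIds taken from the second listing further down:

#include <stdio.h>

/*
 * Stand-alone demo of how the pre-family17h fixup
 * (cpu_core_id %= cus_per_node) collapses the non-contiguous
 * family17h CoreIds of node 1 into the duplicate values above.
 */
int main(void)
{
	unsigned int core_ids[] = { 8, 9, 10, 12, 13, 14 };	/* node 1, downcored */
	unsigned int cus_per_node = 6;	/* x86_max_cores / nodes_per_socket = 12 / 2 */
	unsigned int i;

	for (i = 0; i < sizeof(core_ids) / sizeof(core_ids[0]); i++)
		printf("CoreId %2u -> cpu_core_id %u\n",
		       core_ids[i], core_ids[i] % cus_per_node);

	/* Prints 2 3 4 0 1 2: cpu_core_id 2 occurs twice within the node. */
	return 0;
}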

Although the fixup is not strictly needed, it has been in place since
before family17h, so keep it, but apply it only on pre-family17h systems.
For family17h and later, cpu_core_id will instead hold
CPUID_Fn8000001E_EBX[CoreId], which is guaranteed to be unique for each
core within a socket. Here is an example of the new cpu_core_id numbering
scheme (a sketch of reading this field follows the listing):

  NODE: 0
  processor  0 core id : 0
  processor  1 core id : 1
  processor  2 core id : 2
  processor  3 core id : 4
  processor  4 core id : 5
  processor  5 core id : 6

  NODE: 1
  processor  6 core id : 8
  processor  7 core id : 9
  processor  8 core id : 10
  processor  9 core id : 12
  processor 10 core id : 13
  processor 11 core id : 14
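
As a rough illustration of where the new value comes from (not part of the
patch itself): CoreId is bits [7:0] of CPUID_Fn8000001E_EBX, so a small
user-space sketch can read it directly. The leaf number and bit position
follow the AMD documentation; everything else here is illustrative only.

#include <stdio.h>
#include <cpuid.h>

/*
 * Stand-alone sketch (x86 user space, GCC/clang cpuid.h): read the CoreId
 * that family17h and later now report as cpu_core_id. CoreId is in
 * bits [7:0] of CPUID_Fn8000001E_EBX; the leaf is only meaningful when
 * topology extensions (TOPOEXT) are supported.
 */
int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x8000001e, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID leaf 0x8000001e not available\n");
		return 1;
	}

	printf("CoreId (new cpu_core_id): %u\n", ebx & 0xff);
	return 0;
}

Pinning the program to each CPU in turn (e.g. with taskset -c <n>) should
reproduce the per-processor values in the listings above.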

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
---
Note: This patch is a rework following the discussion of another patch
      series (https://lkml.org/lkml/2017/7/24/139), which is now taking
      a different approach.

 arch/x86/kernel/cpu/amd.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index bb5abe8..223dd8c 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -297,6 +297,28 @@ static int nearby_node(int apicid)
 #endif
 
 /*
+ * Only fixup cpu_core_id for pre-family17h systems to be in the
+ * [0 .. cores_per_node - 1] range. This is not really needed,
+ * but mainly kept so that we do not break any existing code,
+ * which may make this assumption on older platforms.
+ *
+ * This is not applicable to family17h and later, where cpu_core_id is
+ * the CoreId from CPUID_Fn8000001E_EBX. That value is non-contiguous on
+ * downcored configurations, so applying the modulo there could produce
+ * invalid (non-unique) cpu_core_id values.
+ */
+static void __fixup_multi_node(struct cpuinfo_x86 *c)
+{
+	u32 cus_per_node;
+
+	if (c->x86 >= 0x17)
+		return;
+
+	cus_per_node = c->x86_max_cores / nodes_per_socket;
+	c->cpu_core_id %= cus_per_node;
+}
+
+/*
  * Fixup core topology information for
  * (1) AMD multi-node processors
  *     Assumption: Number of cores in each internal node is the same.
@@ -353,15 +375,10 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 	} else
 		return;
 
-	/* fixup multi-node processor information */
 	if (nodes_per_socket > 1) {
-		u32 cus_per_node;
-
 		set_cpu_cap(c, X86_FEATURE_AMD_DCM);
-		cus_per_node = c->x86_max_cores / nodes_per_socket;
 
-		/* core id has to be in the [0 .. cores_per_node - 1] range */
-		c->cpu_core_id %= cus_per_node;
+		__fixup_multi_node(c);
 	}
 }
 #endif
-- 
2.7.4
