Date:   Wed, 26 Jul 2017 16:17:36 +0200
From:   Borislav Petkov <bp@...e.de>
To:     Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Cc:     linux-kernel@...r.kernel.org, x86@...nel.org, tglx@...utronix.de,
        mingo@...hat.com, hpa@...or.com, peterz@...radead.org,
        Yazen.Ghannam@....com
Subject: Re: [PATCH] x86/amd: Only fixup cpu_core_id for pre-family17h

On Tue, Jul 25, 2017 at 02:13:00PM -0500, Suravee Suthikulpanit wrote:
> Current fixup causes cpu_core_id for family17 w/ downcore configuration
> to be incorrect as shown here:

Applied but did some heavy massaging:

---
From: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Date: Tue, 25 Jul 2017 14:13:00 -0500
Subject: [PATCH] x86/amd: Limit cpu_core_id fixup to families older than F17h

The current cpu_core_id fixup causes downcored F17h configurations to
report incorrect core IDs:

  NODE: 0
  processor  0 core id : 0
  processor  1 core id : 1
  processor  2 core id : 2
  processor  3 core id : 4
  processor  4 core id : 5
  processor  5 core id : 0

  NODE: 1
  processor  6 core id : 2
  processor  7 core id : 3
  processor  8 core id : 4
  processor  9 core id : 0
  processor 10 core id : 1
  processor 11 core id : 2

Code that relies on cpu_core_id, such as match_smt(), which builds the
thread siblings masks used by the scheduler, is misled.
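
To see why, here is a minimal, self-contained sketch of sibling
matching in the spirit of match_smt(); the struct and helper are
simplified stand-ins for illustration, not the kernel's actual code:

  #include <stdio.h>

  struct cpu_info {
          unsigned int phys_proc_id;      /* physical package id */
          unsigned int cpu_core_id;       /* core id within the package */
  };

  /* Simplified stand-in for match_smt(): same package + same core id
   * is taken to mean "thread siblings".
   */
  static int smt_siblings(const struct cpu_info *a, const struct cpu_info *b)
  {
          return a->phys_proc_id == b->phys_proc_id &&
                 a->cpu_core_id == b->cpu_core_id;
  }

  int main(void)
  {
          /* Processors 0 and 5 from the broken NODE 0 listing above:
           * two distinct cores which both ended up with core id 0.
           */
          struct cpu_info p0 = { .phys_proc_id = 0, .cpu_core_id = 0 };
          struct cpu_info p5 = { .phys_proc_id = 0, .cpu_core_id = 0 };

          printf("wrongly treated as siblings: %d\n", smt_siblings(&p0, &p5));
          return 0;
  }

Two distinct cores comparing as siblings means the scheduler's SMT
masks end up grouping unrelated cores.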

So, limit the fixup to pre-F17h machines. On F17h and later, the new
value of cpu_core_id will be CPUID_Fn8000001E_EBX[CoreId], which is
guaranteed to be unique for each core within a socket (a user-space
sketch for reading this field follows the listings below).

This way we have:

  NODE: 0
  processor  0 core id : 0
  processor  1 core id : 1
  processor  2 core id : 2
  processor  3 core id : 4
  processor  4 core id : 5
  processor  5 core id : 6

  NODE: 1
  processor  6 core id : 8
  processor  7 core id : 9
  processor  8 core id : 10
  processor  9 core id : 12
  processor 10 core id : 13
  processor 11 core id : 14
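
For reference, the CoreId field can also be inspected from user space
with raw CPUID. A minimal sketch, assuming GCC's <cpuid.h> and the
F17h layout where CoreId sits in EBX[7:0] of leaf 0x8000001e:

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          /* Leaf 0x8000001e is only valid when the extended topology
           * leaf is supported; __get_cpuid_count() returns 0 otherwise.
           */
          if (!__get_cpuid_count(0x8000001e, 0, &eax, &ebx, &ecx, &edx)) {
                  fprintf(stderr, "CPUID leaf 0x8000001e not supported\n");
                  return 1;
          }

          /* CoreId of the CPU this happens to run on, as used for
           * cpu_core_id on F17h and later.
           */
          printf("CoreId: %u\n", ebx & 0xff);
          return 0;
  }

(Pin the process to one CPU with taskset to read a specific core's
value.)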

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@....com>
Cc: Yazen Ghannam <Yazen.Ghannam@....com>
Cc: x86-ml <x86@...nel.org>
Link: http://lkml.kernel.org/r/1501009980-2273-1-git-send-email-suravee.suthikulpanit@amd.com
[ Heavily massaged. ]
Signed-off-by: Borislav Petkov <bp@...e.de>
---
 arch/x86/kernel/cpu/amd.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 110ca5d2bb87..aa3cf163fd19 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -297,6 +297,22 @@ static int nearby_node(int apicid)
 }
 #endif
 
+/*
+ * Fix up cpu_core_id for pre-F17h systems to be in the
+ * [0 .. cores_per_node - 1] range. Not really needed but
+ * kept so as not to break existing setups.
+ */
+static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
+{
+	u32 cus_per_node;
+
+	if (c->x86 >= 0x17)
+		return;
+
+	cus_per_node = c->x86_max_cores / nodes_per_socket;
+	c->cpu_core_id %= cus_per_node;
+}
+
 /*
  * Fixup core topology information for
  * (1) AMD multi-node processors
@@ -354,15 +370,9 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
 	} else
 		return;
 
-	/* fixup multi-node processor information */
 	if (nodes_per_socket > 1) {
-		u32 cus_per_node;
-
 		set_cpu_cap(c, X86_FEATURE_AMD_DCM);
-		cus_per_node = c->x86_max_cores / nodes_per_socket;
-
-		/* core id has to be in the [0 .. cores_per_node - 1] range */
-		c->cpu_core_id %= cus_per_node;
+		legacy_fixup_core_id(c);
 	}
 }
 #endif
-- 
2.14.0.rc0
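
As a rough illustration of what legacy_fixup_core_id() does, here is a
standalone sketch of the modulo folding with made-up topology numbers
(a hypothetical 12-core, 2-node pre-F17h part; the values are
assumptions, not taken from the patch):

  #include <stdio.h>

  int main(void)
  {
          /* Assumed example topology: 12 cores spread over 2 nodes. */
          unsigned int x86_max_cores    = 12;
          unsigned int nodes_per_socket = 2;
          unsigned int cus_per_node = x86_max_cores / nodes_per_socket;
          unsigned int core_id;

          /* Fold the socket-wide core id into the
           * [0 .. cores_per_node - 1] range, as the legacy fixup does.
           */
          for (core_id = 0; core_id < x86_max_cores; core_id++)
                  printf("core id %2u -> %u\n", core_id,
                         core_id % cus_per_node);

          return 0;
  }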

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
