Date:   Thu, 08 Jun 2017 12:39:28 -0700
From:   Dave Hansen <dave.hansen@...ux.intel.com>
To:     linux-kernel@...r.kernel.org
Cc:     Dave Hansen <dave.hansen@...ux.intel.com>, tony.luck@...el.com,
        tim.c.chen@...ux.intel.com, peterz@...radead.org, bp@...en8.de,
        rientjes@...gle.com, imammedo@...hat.com,
        torvalds@...ux-foundation.org, prarit@...hat.com,
        toshi.kani@...com, brice.goglin@...il.com, hpa@...ux.intel.com,
        mingo@...nel.org
Subject: [PATCH] x86, sched: allow topologies where NUMA nodes share an LLC


From: Dave Hansen <dave.hansen@...ux.intel.com>

Our SMP boot code makes a series of assumptions about NUMA node
layout, which are enforced via topology_sane().  Once upon a
time, we verified that a CPU package only contained a single node
(fixed in cebf15eb0).  Today, we verify that SMT siblings and
LLCs do not span nodes.

The SMT siblings assumption is safe, but the LLC assumption is
violated on current hardware.

Remove the "sanity" check on LLC spanning NUMA nodes.  Also make
sure to set 'x86_has_numa_in_package = true' which ensures that
we use the x86_numa_in_package_topology[].  The default topology
layers NUMA "outside" of the cache, which is wrong when the cache
spans multiple nodes.

This fixes the warnings, but it does mean that the LLC is no
longer consulted in scheduling decisions when the LLC is shared
at a boundary that is not also a NUMA node boundary.

Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Luck, Tony <tony.luck@...el.com>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Igor Mammedov <imammedo@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Prarit Bhargava <prarit@...hat.com>
Cc: Toshi Kani <toshi.kani@...com>
Cc: brice.goglin@...il.com
Cc: "H. Peter Anvin" <hpa@...ux.intel.com>
Cc: Ingo Molnar <mingo@...nel.org>
---

 b/arch/x86/kernel/smpboot.c |   15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff -puN arch/x86/kernel/smpboot.c~x86-numa-nodes-share-llc arch/x86/kernel/smpboot.c
--- a/arch/x86/kernel/smpboot.c~x86-numa-nodes-share-llc	2017-06-01 14:46:40.562159566 -0700
+++ b/arch/x86/kernel/smpboot.c	2017-06-01 15:01:43.994157313 -0700
@@ -460,7 +460,7 @@ static bool match_llc(struct cpuinfo_x86
 
 	if (per_cpu(cpu_llc_id, cpu1) != BAD_APICID &&
 	    per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2))
-		return topology_sane(c, o, "llc");
+		return true;
 
 	return false;
 }
@@ -520,7 +520,8 @@ static struct sched_domain_topology_leve
 
 /*
  * Set if a package/die has multiple NUMA nodes inside.
- * AMD Magny-Cours and Intel Cluster-on-Die have this.
+ * AMD Magny-Cours, Intel Cluster-on-Die, and Intel
+ * Sub-NUMA Clustering have this.
  */
 static bool x86_has_numa_in_package;
 
@@ -548,9 +549,13 @@ void set_cpu_sibling_map(int cpu)
 		if ((i == cpu) || (has_smt && match_smt(c, o)))
 			link_mask(topology_sibling_cpumask, cpu, i);
 
-		if ((i == cpu) || (has_mp && match_llc(c, o)))
-			link_mask(cpu_llc_shared_mask, cpu, i);
-
+		if ((i == cpu) || (has_mp && match_llc(c, o))) {
+			/* LLC may be shared across NUMA nodes */
+			if (topology_same_node(c, o))
+				link_mask(cpu_llc_shared_mask, cpu, i);
+			else
+				x86_has_numa_in_package = true;
+		}
 	}
 
 	/*
_
