Date:   Mon, 23 Dec 2019 16:16:19 +0800
From:   z00214469 <prime.zeng@...ilicon.com>
To:     <sudeep.holla@....com>
CC:     <linuxarm@...wei.com>, z00214469 <prime.zeng@...ilicon.com>,
        "Greg Kroah-Hartman" <gregkh@...uxfoundation.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        <linux-kernel@...r.kernel.org>
Subject: [PATCH] cpu-topology: warn if NUMA configurations conflict with lower layer

From the sched domain's perspective, the DIE layer should be larger
than, or at least equal to, the MC layer. In some cases the MC layer is
defined by the arch-specific hardware (MPIDR, for example), while the
NUMA layout can be defined by the user, e.g. with the following system
configuration:
*************************************
NUMA:      	 0-2,  3-7
core_siblings:   0-3,  4-7
*************************************
Per the current code, core 3's MC cpu map falls back to 3-7 (its
core_siblings is 0-3 while its NUMA node map is 3-7).
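For reference, here is a minimal userspace sketch of the subset check that
cpu_coregroup_mask() performs (plain unsigned bitmasks instead of struct
cpumask, and not the kernel code itself), using the example masks above for
core 3:

/*
 * Toy model of the cpu_coregroup_mask() fallback: start from the NUMA
 * node mask and only narrow it to core_sibling when core_sibling is a
 * subset of the node.  Masks mirror the example above for CPU 3.
 */
#include <stdio.h>

int main(void)
{
	unsigned int node_mask    = 0xf8;	/* CPUs 3-7: the node holding CPU 3 */
	unsigned int core_sibling = 0x0f;	/* CPUs 0-3: from MPIDR/firmware */
	unsigned int core_mask    = node_mask;	/* initial MC mask = node mask */

	/* equivalent of cpumask_subset(core_sibling, core_mask) */
	if ((core_sibling & ~core_mask) == 0)
		core_mask = core_sibling;	/* not NUMA in package: use siblings */
	else
		printf("core_sibling 0x%x not a subset of node 0x%x, MC mask stays 0x%x (CPUs 3-7)\n",
		       core_sibling, node_mask, core_mask);

	return 0;
}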

For the MC sched level, when the sched groups are built:
step 1: core 3's sched group chain is built as 3->4->5->6->7->3
step 2: core 4's sched group chain is built as 4->5->6->7->4
After step 2, core 3's sched groups at the MC level are overlapped, and,
more importantly, the while (sg != sg->groups) iteration never
terminates, i.e. it falls into a dead loop.
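The dead loop can be seen with a toy userspace model of the circular group
chain (ordinary structs and next pointers, not the scheduler's real
sched_group lists; the bail-out counter is only there so the demo halts):

#include <stdio.h>

struct group { int cpu; struct group *next; };

static struct group g[8];		/* one group per CPU */

/* link CPUs first..last into a ring: first -> ... -> last -> first */
static void build_chain(int first, int last)
{
	for (int i = first; i < last; i++)
		g[i].next = &g[i + 1];
	g[last].next = &g[first];
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		g[i].cpu = i;

	build_chain(3, 7);		/* step 1: core 3's chain 3->4->5->6->7->3 */
	build_chain(4, 7);		/* step 2: core 4's chain 4->5->6->7->4     */

	/* walk core 3's chain the way the scheduler does: until back at first */
	struct group *first = &g[3], *sg = first;
	int steps = 0;
	do {
		printf("%d ", sg->cpu);
		sg = sg->next;
	} while (sg != first && ++steps < 20);	/* bail out so the demo ends */

	if (sg != first)
		printf("... never returns to 3 -> dead loop in the kernel\n");
	return 0;
}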

Obviously, the NUMA node containing CPUs 3-7 conflicts with the MC-level
cpu map, but unfortunately there is currently no way to even detect such
cases.

This patch prints a warning message to help diagnose the above cases.
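With the example configuration above, the emitted message (built from the
format string added below; illustrative only, and since pr_warn_once() is
used only the first CPU that hits the check is reported) would look like:

  Warning: suspicious broken topology: cpu:[3]'s core_sibling:[0-3] not a subset of numa node:[3-7]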

Signed-off-by: Zeng Tao <prime.zeng@...ilicon.com>
---
 drivers/base/arch_topology.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 1eb81f11..5fe44b3 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -439,10 +439,18 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask)) {
 		/* not numa in package, lets use the package siblings */
 		core_mask = &cpu_topology[cpu].core_sibling;
-	}
+	} else
+		pr_warn_once("Warning: suspicious broken topology: cpu:[%d]'s core_sibling:[%*pbl] not a subset of numa node:[%*pbl]\n",
+			cpu, cpumask_pr_args(&cpu_topology[cpu].core_sibling),
+			cpumask_pr_args(core_mask));
+
 	if (cpu_topology[cpu].llc_id != -1) {
 		if (cpumask_subset(&cpu_topology[cpu].llc_sibling, core_mask))
 			core_mask = &cpu_topology[cpu].llc_sibling;
+		else
+			pr_warn_once("Warning: suspicious broken topology: cpu:[%d]'s llc_sibling:[%*pbl] not a subset of numa node:[%*pbl]\n",
+				cpu, cpumask_pr_args(&cpu_topology[cpu].llc_sibling),
+				cpumask_pr_args(core_mask));
 	}
 
 	return core_mask;
-- 
2.8.1
