Message-ID: <20241031073401.13034-1-arighi@nvidia.com>
Date: Thu, 31 Oct 2024 08:34:01 +0100
From: Andrea Righi <arighi@...dia.com>
To: Tejun Heo <tj@...nel.org>,
	David Vernet <void@...ifault.com>
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH sched_ext/for-6.13] sched_ext: Do not enable LLC/NUMA optimizations when domains overlap

When the LLC and NUMA domains fully overlap, enabling both optimizations
in the built-in idle CPU selection policy is redundant, as it leads to
searching for an idle CPU within the same domain twice.

Likewise, if all online CPUs are within a single LLC domain, LLC
optimization is unnecessary.

Therefore, detect overlapping domains and enable topology optimizations
only when necessary.

Fixes: 860a45219bce ("sched_ext: Introduce NUMA awareness to the default idle selection policy")
Signed-off-by: Andrea Righi <arighi@...dia.com>
---
 kernel/sched/ext.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index fc7f15eefe54..82acbaffd5a7 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3140,7 +3140,7 @@ static void update_selcpu_topology(void)
 {
 	bool enable_llc = false, enable_numa = false;
 	struct sched_domain *sd;
-	const struct cpumask *cpus;
+	const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
 	s32 cpu = cpumask_first(cpu_online_mask);
 
 	/*
@@ -3154,16 +3154,29 @@ static void update_selcpu_topology(void)
 	rcu_read_lock();
 	sd = rcu_dereference(per_cpu(sd_llc, cpu));
 	if (sd) {
-		cpus = sched_domain_span(sd);
-		if (cpumask_weight(cpus) < num_possible_cpus())
+		llc_cpus = sched_domain_span(sd);
+		if (cpumask_weight(llc_cpus) < num_possible_cpus())
 			enable_llc = true;
 	}
 	sd = highest_flag_domain(cpu, SD_NUMA);
 	if (sd) {
-		cpus = sched_group_span(sd->groups);
-		if (cpumask_weight(cpus) < num_possible_cpus())
+		numa_cpus = sched_group_span(sd->groups);
+		if (cpumask_weight(numa_cpus) < num_possible_cpus())
 			enable_numa = true;
 	}
+	/*
+	 * If the NUMA domain perfectly overlaps with the LLC domain, enable
+	 * LLC optimization only, as checking for an idle CPU in the same
+	 * domain twice is redundant.
+	 */
+	if (enable_numa && enable_llc && cpumask_equal(numa_cpus, llc_cpus))
+		enable_numa = false;
+	/*
+	 * If all the online CPUs are in the same LLC domain, there is no
+	 * reason to enable LLC optimization.
+	 */
+	if (enable_llc && cpumask_equal(llc_cpus, cpu_online_mask))
+		enable_llc = false;
 	rcu_read_unlock();
 
 	pr_debug("sched_ext: LLC idle selection %s\n",
-- 
2.47.0

