Message-Id: <7453e3f901878608959f23dacaa36dfc0432c05b.1764801860.git.tim.c.chen@linux.intel.com>
Date: Wed,  3 Dec 2025 15:07:35 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	K Prateek Nayak <kprateek.nayak@....com>,
	"Gautham R . Shenoy" <gautham.shenoy@....com>,
	Vincent Guittot <vincent.guittot@...aro.org>
Cc: Chen Yu <yu.c.chen@...el.com>,
	Juri Lelli <juri.lelli@...hat.com>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>,
	Mel Gorman <mgorman@...e.de>,
	Valentin Schneider <vschneid@...hat.com>,
	Madadi Vineeth Reddy <vineethr@...ux.ibm.com>,
	Hillf Danton <hdanton@...a.com>,
	Shrikanth Hegde <sshegde@...ux.ibm.com>,
	Jianyong Wu <jianyong.wu@...look.com>,
	Yangyu Chen <cyy@...self.name>,
	Tingyin Duan <tingyin.duan@...il.com>,
	Vern Hao <vernhao@...cent.com>,
	Vern Hao <haoxing990@...il.com>,
	Len Brown <len.brown@...el.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Aubrey Li <aubrey.li@...el.com>,
	Zhao Liu <zhao1.liu@...el.com>,
	Chen Yu <yu.chen.surf@...il.com>,
	Adam Li <adamli@...amperecomputing.com>,
	Aaron Lu <ziqianlu@...edance.com>,
	Tim Chen <tim.c.chen@...el.com>,
	linux-kernel@...r.kernel.org,
	Libo Chen <libo.chen@...cle.com>
Subject: [PATCH v2 16/23] sched/cache: Introduce sched_cache_present to enable cache aware scheduling for multi-LLC NUMA nodes

From: Chen Yu <yu.c.chen@...el.com>

Cache-aware load balancing should only be enabled if there is more
than one LLC within a NUMA node. Introduce sched_cache_present to
indicate whether the platform has this topology.

Suggested-by: Libo Chen <libo.chen@...cle.com>
Suggested-by: Adam Li <adamli@...amperecomputing.com>
Signed-off-by: Chen Yu <yu.c.chen@...el.com>
Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---

Notes:
    v1->v2:
    	Use the flag sched_cache_present to indicate whether a platform
    	supports cache aware scheduling. Change this flag from a static
    	key to a plain boolean; there should be only one static key
    	controlling cache aware scheduling. (Peter Zijlstra)
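
    	For illustration only, not part of this patch: a later patch in
    	the series would presumably gate cache-aware paths on this flag.
    	A minimal sketch of such a consumer, assuming a hypothetical
    	helper named sched_cache_enabled() living in topology.c (where
    	the static flag is visible):

    	#ifdef CONFIG_SCHED_CACHE
    	/* Hypothetical helper, not in this patch: reports whether
    	 * resize_llc_pref() saw a NUMA node with multiple LLCs. */
    	static inline bool sched_cache_enabled(void)
    	{
    		return sched_cache_present;
    	}
    	#else
    	static inline bool sched_cache_enabled(void)
    	{
    		return false;
    	}
    	#endif

    	As a worked example of the imb_numa_nr hunk below: on a NUMA
    	domain spanning 64 CPUs built from 16-CPU LLCs,
    	nr_llcs = 64 / 16 = 4, so imb = 4 and has_multi_llcs becomes
    	true; with a single 64-CPU LLC, nr_llcs = 1 and
    	imb = 64 >> 3 = 8, and the flag stays false.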

 kernel/sched/topology.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d583399fc6a1..9799e3a9a609 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -24,6 +24,8 @@ int max_llcs;
 
 #ifdef CONFIG_SCHED_CACHE
 
+static bool sched_cache_present;
+
 static unsigned int *alloc_new_pref_llcs(unsigned int *old, unsigned int **gc)
 {
 	unsigned int *new = NULL;
@@ -54,7 +56,7 @@ static void populate_new_pref_llcs(unsigned int *old, unsigned int *new)
 		new[i] = old[i];
 }
 
-static int resize_llc_pref(void)
+static int resize_llc_pref(bool has_multi_llcs)
 {
 	unsigned int *__percpu *tmp_llc_pref;
 	int i, ret = 0;
@@ -102,6 +104,11 @@ static int resize_llc_pref(void)
 		rq_unlock_irqrestore(rq, &rf);
 	}
 
+	if (has_multi_llcs) {
+		sched_cache_present = true;
+		pr_info_once("Cache aware load balance is enabled on the platform.\n");
+	}
+
 release_old:
 	/*
 	 * Load balance is done under rcu_lock.
@@ -124,7 +131,7 @@ static int resize_llc_pref(void)
 
 #else
 
-static int resize_llc_pref(void)
+static int resize_llc_pref(bool has_multi_llcs)
 {
 	max_llcs = new_max_llcs;
 	return 0;
@@ -2644,6 +2651,7 @@ static int
 build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)
 {
 	enum s_alloc alloc_state = sa_none;
+	bool has_multi_llcs = false;
 	struct sched_domain *sd;
 	struct s_data d;
 	struct rq *rq = NULL;
@@ -2736,10 +2744,12 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 				 * between LLCs and memory channels.
 				 */
 				nr_llcs = sd->span_weight / child->span_weight;
-				if (nr_llcs == 1)
+				if (nr_llcs == 1) {
 					imb = sd->span_weight >> 3;
-				else
+				} else {
 					imb = nr_llcs;
+					has_multi_llcs = true;
+				}
 				imb = max(1U, imb);
 				sd->imb_numa_nr = imb;
 
@@ -2787,7 +2797,7 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	if (has_cluster)
 		static_branch_inc_cpuslocked(&sched_cluster_active);
 
-	resize_llc_pref();
+	resize_llc_pref(has_multi_llcs);
 
 	if (rq && sched_debug_verbose)
 		pr_info("root domain span: %*pbl\n", cpumask_pr_args(cpu_map));
-- 
2.32.0

