Message-Id: <a541cdf9b97f523f6b8067271847a986db5ba768.1746611892.git.yu.c.chen@intel.com>
Date: Wed,  7 May 2025 19:17:15 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Cc: mkoutny@...e.com,
	Ingo Molnar <mingo@...hat.com>,
	Tejun Heo <tj@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Jonathan Corbet <corbet@....net>,
	Mel Gorman <mgorman@...e.de>,
	Michal Hocko <mhocko@...nel.org>,
	Muchun Song <muchun.song@...ux.dev>,
	Roman Gushchin <roman.gushchin@...ux.dev>,
	Shakeel Butt <shakeel.butt@...ux.dev>,
	"Chen, Tim C" <tim.c.chen@...el.com>,
	Aubrey Li <aubrey.li@...el.com>,
	Libo Chen <libo.chen@...cle.com>,
	K Prateek Nayak <kprateek.nayak@....com>,
	Madadi Vineeth Reddy <vineethr@...ux.ibm.com>,
	Venkat Rao Bagalkote <venkat88@...ux.ibm.com>,
	"Jain, Ayush" <ayushjai@....com>,
	cgroups@...r.kernel.org,
	linux-doc@...r.kernel.org,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Chen Yu <yu.chen.surf@...mail.com>,
	Ayush Jain <Ayush.jain3@....com>,
	Chen Yu <yu.c.chen@...el.com>
Subject: [PATCH v4 1/2] sched/numa: fix task swap by skipping kernel threads

From: Libo Chen <libo.chen@...cle.com>

Task swapping is triggered when there are no idle CPUs in
task A's preferred node. In this case, the NUMA load balancer
chooses a task B on A's preferred node and swaps B with A. This
helps improve NUMA locality without introducing load imbalance
between nodes.

In the current implementation, B is not required to have any
NUMA preference of its own; the selection only aims not to
increase the load imbalance between nodes. That is to say, a
kernel thread might be chosen as B. However, kernel threads are
not supposed to be covered by NUMA balancing, because NUMA
balancing only considers user pages via VMAs.

Fix this by not considering kernel threads as swap targets in
task_numa_compare(). This could be extended beyond kernel
threads in the future by verifying that a swap candidate has a
valid NUMA preference, i.e. by inspecting the candidate's
numa_preferred_nid and numa_faults. For now, keep the code
simple.

Suggested-by: Michal Koutny <mkoutny@...e.com>
Tested-by: Ayush Jain <Ayush.jain3@....com>
Signed-off-by: Libo Chen <libo.chen@...cle.com>
Signed-off-by: Chen Yu <yu.c.chen@...el.com>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0fb9bf995a47..d1af2e084a2a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2273,7 +2273,8 @@ static bool task_numa_compare(struct task_numa_env *env,
 
 	rcu_read_lock();
 	cur = rcu_dereference(dst_rq->curr);
-	if (cur && ((cur->flags & PF_EXITING) || is_idle_task(cur)))
+	if (cur && ((cur->flags & PF_EXITING) || is_idle_task(cur) ||
+		    !cur->mm))
 		cur = NULL;
 
 	/*
-- 
2.25.1

