Message-Id: <20241128084858.25220-1-jiahao.kernel@gmail.com>
Date: Thu, 28 Nov 2024 16:48:58 +0800
From: Hao Jia <jiahao.kernel@...il.com>
To: mingo@...hat.com,
	peterz@...radead.org,
	mingo@...nel.org,
	juri.lelli@...hat.com,
	vincent.guittot@...aro.org,
	dietmar.eggemann@....com,
	rostedt@...dmis.org,
	bsegall@...gle.com,
	mgorman@...e.de,
	bristot@...hat.com,
	vschneid@...hat.com
Cc: linux-kernel@...r.kernel.org,
	Hao Jia <jiahao1@...iang.com>
Subject: [PATCH] sched/core: Do not migrate ineligible tasks in sched_balance_rq()

From: Hao Jia <jiahao1@...iang.com>

When the PLACE_LAG scheduling feature is enabled, a task that is
ineligible (lag < 0) on the source CPU runqueue will also be
ineligible after it is migrated to the destination CPU runqueue,
because place_entity() preserves the task's original equivalent
lag. A task that was ineligible before migration therefore
remains ineligible after it.
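
For reference, eligibility here is the EEVDF lag test: an entity
is eligible iff its lag (the queue's weighted average vruntime V
minus the entity's own vruntime v_i) is non-negative. A minimal
standalone sketch of the idea, with an illustrative helper name
and without the load-weighted bookkeeping that the in-tree
entity_eligible()/vruntime_eligible() actually perform:

	/* Sketch only, not the in-tree implementation. */
	static inline bool sketch_entity_eligible(u64 avg_vruntime,
						  u64 se_vruntime)
	{
		/* lag = V - v_i; a negative lag means ineligible. */
		s64 lag = (s64)(avg_vruntime - se_vruntime);

		return lag >= 0;
	}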

Therefore, skip migrating ineligible tasks until they become
eligible, just as tasks throttled by cfs_bandwidth are skipped,
to cut down on ineffective migrations.

Signed-off-by: Hao Jia <jiahao1@...iang.com>
---
 kernel/sched/fair.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fbdca89c677f..5564e16b6fdb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9358,13 +9358,14 @@ static inline int migrate_degrades_locality(struct task_struct *p,
 static
 int can_migrate_task(struct task_struct *p, struct lb_env *env)
 {
+	struct cfs_rq *cfs_rq = task_cfs_rq(p);
 	int tsk_cache_hot;
 
 	lockdep_assert_rq_held(env->src_rq);
 
 	/*
 	 * We do not migrate tasks that are:
-	 * 1) throttled_lb_pair, or
+	 * 1) throttled_lb_pair, or task ineligible, or
 	 * 2) cannot be migrated to this CPU due to cpus_ptr, or
 	 * 3) running (obviously), or
 	 * 4) are cache-hot on their current CPU.
@@ -9372,6 +9373,10 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
 		return 0;
 
+	if (sched_feat(PLACE_LAG) && cfs_rq->nr_running &&
+			!entity_eligible(cfs_rq, &p->se))
+		return 0;
+
 	/* Disregard percpu kthreads; they are where they need to be. */
 	if (kthread_is_per_cpu(p))
 		return 0;
-- 
2.34.1

