Message-Id: <20220217154403.6497-6-wuyun.abel@bytedance.com>
Date:   Thu, 17 Feb 2022 23:44:01 +0800
From:   Abel Wu <wuyun.abel@...edance.com>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>
Cc:     linux-kernel@...r.kernel.org
Subject: [RFC PATCH 5/5] sched/fair: favor cpu capacity for idle tasks

Unlike select_idle_sibling(), where we need to find a
not-so-bad candidate ASAP, the slowpath gives us more
tolerance: ignore sched-idle cpus for idle tasks, since
they prefer cpu capacity over latency. Besides, spreading
out idle tasks is also good for the latency of normal
tasks.
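
To make the intent concrete, here is a small userspace toy
model of the policy (not kernel code; all names below are
hypothetical, and it simplifies the real slowpath, which ranks
fully idle cpus by exit latency rather than taking the first):

#include <stdbool.h>
#include <stdio.h>

enum cpu_state { CPU_BUSY, CPU_SCHED_IDLE, CPU_FULLY_IDLE };

/*
 * Toy model only: a normal task returns the first sched-idle
 * cpu it finds (low wakeup latency), while an idle-policy task
 * skips sched-idle cpus and keeps scanning for a fully idle one
 * (whole-cpu capacity), falling back to a sched-idle cpu only
 * if no fully idle cpu exists.
 */
static int pick_cpu(const enum cpu_state *cpus, int nr, bool task_is_idle)
{
	int fallback = -1;

	for (int i = 0; i < nr; i++) {
		if (cpus[i] == CPU_FULLY_IDLE)
			return i;

		/* Mirrors "if (!ignore_si && sched_idle_cpu(i))". */
		if (cpus[i] == CPU_SCHED_IDLE) {
			if (!task_is_idle)
				return i;
			if (fallback < 0)
				fallback = i;
		}
	}
	return fallback;
}

int main(void)
{
	enum cpu_state cpus[] = { CPU_BUSY, CPU_SCHED_IDLE, CPU_FULLY_IDLE };

	/* Normal task grabs cpu 1 (sched-idle); idle task holds out for cpu 2. */
	printf("normal task -> cpu %d\n", pick_cpu(cpus, 3, false));
	printf("idle task   -> cpu %d\n", pick_cpu(cpus, 3, true));
	return 0;
}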

Signed-off-by: Abel Wu <wuyun.abel@...edance.com>
---
 kernel/sched/fair.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1d8f396e6f41..57f1d8c43228 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6007,6 +6007,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
 static int
 find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 {
+	bool ignore_si = task_h_idle(p);
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
 	u64 latest_idle_timestamp = 0;
@@ -6025,7 +6026,13 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 		if (!sched_core_cookie_match(rq, p))
 			continue;
 
-		if (sched_idle_cpu(i))
+	/*
+	 * Idle tasks prefer cpu capacity over latency.
+	 * Spreading out idle tasks is also good for the
+	 * latency of normal tasks, since they won't suffer
+	 * high cpu wakeup delay.
+	 */
+		if (!ignore_si && sched_idle_cpu(i))
 			return i;
 
 		if (available_idle_cpu(i)) {
-- 
2.11.0
