Date:   Mon, 22 Feb 2021 15:33:20 +0800
From:   ultrachin@....com
To:     vincent.guittot@...aro.org
Cc:     linux-kernel@...r.kernel.org, mingo@...hat.com,
        peterz@...radead.org, juri.lelli@...hat.com,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, heddchen@...cent.com,
        xiaoggchen@...cent.com
Subject: [PATCH V2] sched: pull tasks when CPU is about to run SCHED_IDLE tasks

From: Chen Xiaoguang <xiaoggchen@...cent.com>

To use machines efficiently we usually deploy online tasks and
offline tasks on the same machine.

Online tasks are more important than offline tasks and are latency
sensitive, so we should make sure online tasks preempt offline tasks
as soon as possible whenever there are online tasks waiting to run.

Online tasks use the SCHED_NORMAL policy and offline tasks use the
SCHED_IDLE policy. This patch decreases the latency of online tasks
by doing a load balance before an offline task runs.
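
For context, this is how an offline (background) task can be placed
under SCHED_IDLE from userspace. This is only an illustrative sketch,
not part of the patch; the helper name is made up and error handling
is minimal:

#define _GNU_SOURCE
#include <sched.h>	/* sched_setscheduler(), SCHED_IDLE */
#include <stdio.h>

/*
 * Demote the calling thread to SCHED_IDLE so it only runs when no
 * SCHED_NORMAL (online) work is runnable on its CPU.
 */
static int make_task_offline(void)
{
	struct sched_param sp = { .sched_priority = 0 };	/* must be 0 for SCHED_IDLE */

	if (sched_setscheduler(0 /* self */, SCHED_IDLE, &sp) == -1) {
		perror("sched_setscheduler(SCHED_IDLE)");
		return -1;
	}
	return 0;
}

int main(void)
{
	if (make_task_offline())
		return 1;
	/* ... offline (batch) work would run here ... */
	return 0;
}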

Signed-off-by: Chen Xiaoguang <xiaoggchen@...cent.com>
Signed-off-by: Chen He <heddchen@...cent.com>
---
v1 -> v2:
 - Add checking in balance_fair
 - Remove task state checking in pick_next_task_fair
 - Add comment about the change
---
 kernel/sched/fair.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8a8bd7b..80b69a2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6833,7 +6833,13 @@ static void task_dead_fair(struct task_struct *p)
 static int
 balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
-	if (rq->nr_running)
+	/*
+	 * Return 1 if SCHED_NORMAL tasks exist.
+	 * Otherwise, if the rq contains only SCHED_IDLE tasks, do a load
+	 * balance to try to pull SCHED_NORMAL tasks to run so as to reduce
+	 * the latency of SCHED_NORMAL tasks.
+	 */
+	if (rq->nr_running && !sched_idle_rq(rq))
 		return 1;
 
 	return newidle_balance(rq, rf) != 0;
@@ -7013,6 +7019,14 @@ struct task_struct *
 	struct task_struct *p;
 	int new_tasks;
 
+	/*
+	 * Before the CPU switches from running a SCHED_NORMAL task to a
+	 * SCHED_IDLE task, do a load balance trying to pull SCHED_NORMAL
+	 * tasks to run so as to reduce the latency of SCHED_NORMAL tasks.
+	 */
+	if (sched_idle_rq(rq) && prev && prev->policy != SCHED_IDLE)
+		goto idle;
+
 again:
 	if (!sched_fair_runnable(rq))
 		goto idle;
-- 
1.8.3.1
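
For reviewers not familiar with it, the sched_idle_rq() helper used
above already exists in kernel/sched/fair.c and (roughly, quoted from
memory rather than from this tree) checks that the rq is non-empty and
that every runnable CFS task on it has the SCHED_IDLE policy:

static int sched_idle_rq(struct rq *rq)
{
	return unlikely(rq->nr_running == rq->cfs.idle_h_nr_running &&
			rq->nr_running);
}

So the new balance_fair() condition only falls through to
newidle_balance() when the rq is empty or contains nothing but
SCHED_IDLE tasks, and the pick_next_task_fair() hook only jumps to the
idle-balance path when a non-SCHED_IDLE prev is about to hand the CPU
to such an rq.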
