Date:   Tue, 14 Feb 2017 16:38:05 +0100
From:   Uladzislau Rezki <urezki@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...hat.com>, Mike Galbraith <efault@....de>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>,
        Uladzislau 2 Rezki <uladzislau2.rezki@...ymobile.com>
Subject: [PATCH] sched: ignore task_h_load for CPU_NEWLY_IDLE

From: Uladzislau 2 Rezki <uladzislau2.rezki@...ymobile.com>

The load balancer calculates an imbalance factor for a particular sched
domain and tries to steal up to the prescribed amount of weighted load.
However, a small imbalance factor can sometimes prevent us from stealing
any tasks at all. When a CPU is newly idle, it should steal the first
task that meets the migration criteria.
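
To illustrate the cut-off involved (a standalone C sketch under assumed
names, not the actual kernel code): detach_tasks() skips a task whose
weighted load overshoots the remaining imbalance, and the idea here is
to waive that check for a newly idle CPU on a preemptible kernel. The
helper should_skip_task() below is hypothetical and only models the
decision:

    #include <stdbool.h>

    enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE };

    /* Hypothetical helper modelling the "(load / 2) > imbalance" cut-off. */
    static bool should_skip_task(enum cpu_idle_type idle, bool preempt_kernel,
                                 unsigned long load, unsigned long imbalance)
    {
            /* A newly idle CPU on a preemptible kernel takes the first
             * migratable task regardless of how heavy it is. */
            if (preempt_kernel && idle == CPU_NEWLY_IDLE)
                    return false;

            /* Otherwise keep the usual threshold: skip tasks whose load
             * is more than twice the remaining imbalance. */
            return (load / 2) > imbalance;
    }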

There is a slight improvement when it comes to frame drops (in my
case measured as drops per two seconds). The test case is a left
finger swipe on the display, repeated 21 times; each iteration lasts
2 seconds, with a 1 second sleep between iterations. The first column
is before the patch, the second after:

0   Framedrops:  7    5
1   Framedrops:  5    3
2   Framedrops:  8    5
3   Framedrops:  4    5
4   Framedrops:  3    3
5   Framedrops:  6    4
6   Framedrops:  3    2
7   Framedrops:  3    4
8   Framedrops:  5    3
9   Framedrops:  3    3
10  Framedrops:  7    4
11  Framedrops:  3    4
12  Framedrops:  3    3
13  Framedrops:  3    3
14  Framedrops:  3    5
15  Framedrops:  7    3
16  Framedrops:  5    3
17  Framedrops:  3    2
18  Framedrops:  5    3
19  Framedrops:  4    3
20  Framedrops:  3    2

max is 8 vs 5; min is 3 vs 2; the averages work out to roughly 4.4
vs 3.4 drops per iteration. The load applied during the test is not
significant and is "light".

Signed-off-by: Uladzislau 2 Rezki <uladzislau2.rezki@...ymobile.com>
---
 kernel/sched/fair.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6559d19..b56b0c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6802,6 +6802,15 @@ static int detach_tasks(struct lb_env *env)
 		if (env->idle != CPU_NOT_IDLE && env->src_rq->nr_running <= 1)
 			break;
 
+		/*
+		 * Another CPU can place tasks, since we do not hold the dst_rq
+		 * lock while balancing. If the newly idle CPU already picked up
+		 * work, bail out to reduce latency on CONFIG_PREEMPT kernels.
+		 */
+		if (IS_ENABLED(CONFIG_PREEMPT) && env->idle == CPU_NEWLY_IDLE &&
+				env->dst_rq->nr_running > 0)
+			break;
+
 		p = list_first_entry(tasks, struct task_struct, se.group_node);
 
 		env->loop++;
@@ -6824,7 +6833,8 @@ static int detach_tasks(struct lb_env *env)
 		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
 			goto next;
 
-		if ((load / 2) > env->imbalance)
+		if ((!IS_ENABLED(CONFIG_PREEMPT) || env->idle != CPU_NEWLY_IDLE) &&
+				(load / 2) > env->imbalance)
 			goto next;
 
 		detach_task(p, env);
-- 
2.1.4
