Message-Id: <1500038464-8742-3-git-send-email-josef@toxicpanda.com>
Date:   Fri, 14 Jul 2017 13:20:59 +0000
From:   Josef Bacik <josef@...icpanda.com>
To:     mingo@...hat.com, peterz@...radead.org,
        linux-kernel@...r.kernel.org, umgwanakikbuti@...il.com,
        tj@...nel.org, kernel-team@...com
Cc:     Josef Bacik <jbacik@...com>
Subject: [PATCH 2/7] sched/fair: calculate runnable_weight slightly differently

From: Josef Bacik <jbacik@...com>

Our runnable_weight currently looks like this

runnable_weight = shares * runnable_load_avg / load_avg

The goal is to scale the group's runnable weight based on the ratio of its
runnable_load_avg to its load_avg.  The problem with this is that it biases us
towards tasks that never go to sleep.  Tasks that go to sleep have their
runnable_load_avg decayed pretty hard, which drastically reduces the runnable
weight of groups with interactive tasks.  To fix this imbalance we tweak the
calculation slightly, so that in the ideal case it is still the above, but in
the interactive case it is

runnable_weight = shares * runnable_weight / load_weight

which will make the weight distribution fairer between interactive and
non-interactive groups.

Signed-off-by: Josef Bacik <jbacik@...com>
---
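For illustration (not part of the commit itself): a minimal userspace sketch
of the old vs. new calculation, using made-up numbers for a group whose
interactive task has just woken up, so its PELT averages are decayed while its
instantaneous weights are at full value.  It assumes shares = 1024 and treats
scale_load_down() as a no-op.

/*
 * Illustration only: all numbers are made up, and scale_load_down()
 * is treated as a no-op.
 */
#include <stdio.h>

#define MIN_SHARES 2L

static long max_l(long a, long b) { return a > b ? a : b; }

static long clamp_l(long v, long lo, long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	long shares = 1024;
	/* the task just slept, so its PELT sums have decayed... */
	long runnable_load_avg = 200, load_avg = 900;
	/* ...but the instantaneous weights are still at full value */
	long runnable_weight = 1024, load_weight = 1024;
	long old_w, new_w;

	/* old: shares * runnable_load_avg / max(load_avg, runnable_load_avg) */
	old_w = shares * runnable_load_avg;
	if (old_w)
		old_w /= max_l(load_avg, runnable_load_avg);

	/* new: fold the instantaneous weights into numerator and divider */
	new_w = shares * max_l(runnable_weight, runnable_load_avg);
	if (new_w) {
		long divider = max_l(load_avg, load_weight);

		divider = max_l(1, divider);
		new_w /= divider;
	}
	new_w = clamp_l(new_w, MIN_SHARES, shares);

	printf("old runnable weight: %ld\n", old_w);	/* 227 */
	printf("new runnable weight: %ld\n", new_w);	/* 1024 */
	return 0;
}

With the old formula the group gets only ~227 of its 1024 shares worth of
runnable weight; with the new one the instantaneous weights dominate and it
gets the full 1024, clamped to [MIN_SHARES, shares].
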
 kernel/sched/fair.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 326bc55..5d4489e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2880,9 +2880,15 @@ static void update_cfs_group(struct sched_entity *se)
 	 * Note: we need to deal with very sporadic 'runnable > load' cases
 	 * due to numerical instability.
 	 */
-	runnable = shares * gcfs_rq->avg.runnable_load_avg;
-	if (runnable)
-		runnable /= max(gcfs_rq->avg.load_avg, gcfs_rq->avg.runnable_load_avg);
+	runnable = shares * max(scale_load_down(gcfs_rq->runnable_weight),
+				gcfs_rq->avg.runnable_load_avg);
+	if (runnable) {
+		long divider = max(gcfs_rq->avg.load_avg,
+				   scale_load_down(gcfs_rq->load.weight));
+		divider = max_t(long, 1, divider);
+		runnable /= divider;
+	}
+	runnable = clamp_t(long, runnable, MIN_SHARES, shares);
 
 	reweight_entity(cfs_rq_of(se), se, shares, runnable);
 }
-- 
2.9.3
