Message-Id: <1262596842-17392-1-git-send-email-sjayaraman@suse.de>
Date:	Mon,  4 Jan 2010 14:50:42 +0530
From:	Suresh Jayaraman <sjayaraman@...e.de>
To:	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	linux-kernel@...r.kernel.org, Suresh Jayaraman <sjayaraman@...e.de>
Subject: [RFC][PATCH] sched: avoid huge bonus to sleepers on busy machines

As I understand it, the idea of sleeper fairness is to treat sleeping tasks
similarly to the ones on the runqueue and to credit the sleepers so that they
get CPU time as if they had been running.

Currently, when fair sleepers are enabled, a task that was sleeping seems to
get a bonus of cfs_rq->min_vruntime - sched_latency (in most cases). While
gentle fair sleepers halve this effect, there is still a chance that on busy
machines with a large number of tasks, the sleepers get a huge, undue bonus.
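
To put illustrative numbers on it (assumed, not measured): if the effective
sched_latency on a box works out to 20ms, a waking task is placed up to ~20ms
of virtual time behind min_vruntime, or ~10ms with GENTLE_FAIR_SLEEPERS,
whether 2 or 200 tasks are runnable; neither the length of the sleep nor the
load on the runqueue has any influence on the size of the bonus.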

Here's a patch that avoids this by computing the CPU time the sleeping task
would have been entitled to during that period, taking into account only the
current cfs_rq->nr_running, and thus tries to make the bonus adaptive.
Compile-tested only.
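
To make the intended arithmetic concrete, here is a small userspace sketch of
the bonus computation (not kernel code; the threshold, sleep length and
nr_running values below are made up for illustration):

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	unsigned long thresh = 10000000UL;	/* ~10ms: threshold after gentle halving (assumed) */
	unsigned long delta_exec = 40000000UL;	/* rq clock - se->exec_start: slept ~40ms (assumed) */
	unsigned long nr_running = 10;		/* tasks currently on the cfs_rq (assumed) */
	unsigned long sleeper_bonus;

	/* entitled share of the sleep period, as in the patch */
	if (nr_running > 1)
		sleeper_bonus = delta_exec / nr_running;	/* 4ms here */
	else
		sleeper_bonus = delta_exec;

	/* the waking task is credited with the smaller of the two */
	printf("sleeper bonus = %lu ns\n", MIN(thresh, sleeper_bonus));
	return 0;
}

With these numbers the bonus drops from the fixed ~10ms to 4ms, and it shrinks
further as nr_running grows.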

Signed-off-by: Suresh Jayaraman <sjayaraman@...e.de>
---
 kernel/sched_fair.c |   11 ++++++++++-
 1 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 42ac3c9..d81fcb3 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -739,6 +739,15 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 	/* sleeps up to a single latency don't count. */
 	if (!initial && sched_feat(FAIR_SLEEPERS)) {
 		unsigned long thresh = sysctl_sched_latency;
+		unsigned long delta_exec = (unsigned long)
+					(rq_of(cfs_rq)->clock - se->exec_start);
+		unsigned long sleeper_bonus;
+
+		/* entitled share of CPU time adapted to current nr_running */
+		if (likely(cfs_rq->nr_running > 1))
+			sleeper_bonus = delta_exec/cfs_rq->nr_running;
+		else
+			sleeper_bonus = delta_exec;
 
 		/*
 		 * Convert the sleeper threshold into virtual time.
@@ -757,7 +766,7 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 		if (sched_feat(GENTLE_FAIR_SLEEPERS))
 			thresh >>= 1;
 
-		vruntime -= thresh;
+		vruntime -= min(thresh, sleeper_bonus);
 	}
 
 	/* ensure we never gain time by being placed backwards. */
--