Date:	Mon, 27 Sep 2010 17:29:57 -0700
From:	Nikhil Rao <ncrao@...gle.com>
To:	Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
	Mike Galbraith <efault@....de>
Cc:	Venkatesh Pallipadi <venki@...gle.com>,
	linux-kernel@...r.kernel.org, Nikhil Rao <ncrao@...gle.com>
Subject: [PATCH 2/3] sched: drop group_capacity to 1 only if remote group has no running tasks

When SD_PREFER_SIBLING is set on a sched domain, drop group_capacity to 1
only if the remote sched group has no running tasks. This addresses the case
where two tasks run on one socket while the other socket is idle: dropping
the capacity to 1 lets the excess task spread to the idle socket. If the
remote group has >=1 running task, spreading further makes no difference
from a cache-sharing perspective, so its capacity is left unchanged.
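
A minimal user-space sketch of the new check, for exercising the condition
in isolation. This is not kernel code: struct group_stats and
apply_prefer_sibling() are hypothetical stand-ins for the kernel's
sg_lb_stats and update_sd_lb_stats(), kept just close enough to show which
groups keep their capacity.

#include <stdio.h>

#define min(a, b) ((a) < (b) ? (a) : (b))

struct group_stats {
	unsigned long group_capacity;	/* task slots this group offers */
	unsigned long sum_nr_running;	/* tasks currently running in it */
};

static void apply_prefer_sibling(struct group_stats *sgs, int prefer_sibling)
{
	/*
	 * Mirror of the patched condition: only a group with no running
	 * tasks has its capacity squashed to 1; a group that already runs
	 * a task keeps its full capacity, since moving its excess tasks
	 * around buys no cache-sharing benefit.
	 */
	if (prefer_sibling && !sgs->sum_nr_running)
		sgs->group_capacity = min(sgs->group_capacity, 1UL);
}

int main(void)
{
	struct group_stats idle = { .group_capacity = 4, .sum_nr_running = 0 };
	struct group_stats busy = { .group_capacity = 4, .sum_nr_running = 2 };

	apply_prefer_sibling(&idle, 1);
	apply_prefer_sibling(&busy, 1);

	printf("idle group capacity: %lu\n", idle.group_capacity); /* 1 */
	printf("busy group capacity: %lu\n", busy.group_capacity); /* 4 */
	return 0;
}

Built with any C compiler, this prints capacity 1 for the idle group and 4
for the busy one, matching the behavior described above.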

Signed-off-by: Nikhil Rao <ncrao@...gle.com>
---
 kernel/sched_fair.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index de8a6a0..33a7985 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2548,7 +2548,7 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
 		 * first, lower the sg capacity to one so that we'll try
 		 * and move all the excess tasks away.
 		 */
-		if (prefer_sibling)
+		if (prefer_sibling && !sgs.sum_nr_running)
 			sgs.group_capacity = min(sgs.group_capacity, 1UL);
 
 		if (local_group) {
-- 
1.7.1

