Message-ID: <5011F323.10202@intel.com>
Date:	Fri, 27 Jul 2012 09:47:15 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Mike Galbraith <efault@....de>
CC:	mingo@...hat.com, peterz@...radead.org,
	linux-kernel@...r.kernel.org, suresh.b.siddha@...el.com
Subject: Re: [PATCH 1/2] sched: recover SD_WAKE_AFFINE in select_task_rq_fair
 and code clean up

On 07/26/2012 05:37 PM, Mike Galbraith wrote:

> On Thu, 2012-07-26 at 13:27 +0800, Alex Shi wrote:
> 
>> 	if (affine_sd) {
>> -		if (cpu == prev_cpu || wake_affine(affine_sd, p, sync))
>> +		if (wake_affine(affine_sd, p, sync))
>>  			prev_cpu = cpu;
>>  
>>  		new_cpu = select_idle_sibling(p, prev_cpu);
> 
> Hm, if cpu == prev_cpu, asking wake_affine() if it's ok to put wakee
> back where it came from is wasted cycles.. that's where the task is
> headed regardless of reply.
> 
> -Mike
> 

Sure. I modified the patch as below:

===
From 610515185d8a98c14c7c339c25381bc96cd99d93 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@...el.com>
Date: Thu, 26 Jul 2012 08:55:34 +0800
Subject: [PATCH 1/3] sched: recover SD_WAKE_AFFINE in select_task_rq_fair and
 code clean up

Since the power saving code has been removed from the scheduler, its
remaining implementation in this function is dead code and even pollutes
other logic: 'want_sd' never gets a chance to be set to 0, which removes
the effect of SD_WAKE_AFFINE here.

So, clean up the obsolete code and some other unnecessary code.

Signed-off-by: Alex Shi <alex.shi@...el.com>
---
 kernel/sched/fair.c |   32 +++-----------------------------
 1 files changed, 3 insertions(+), 29 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 22321db..53fd8db 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2686,7 +2686,6 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 	int prev_cpu = task_cpu(p);
 	int new_cpu = cpu;
 	int want_affine = 0;
-	int want_sd = 1;
 	int sync = wake_flags & WF_SYNC;
 
 	if (p->nr_cpus_allowed == 1)
@@ -2704,48 +2703,23 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 			continue;
 
 		/*
-		 * If power savings logic is enabled for a domain, see if we
-		 * are not overloaded, if so, don't balance wider.
-		 */
-		if (tmp->flags & (SD_PREFER_LOCAL)) {
-			unsigned long power = 0;
-			unsigned long nr_running = 0;
-			unsigned long capacity;
-			int i;
-
-			for_each_cpu(i, sched_domain_span(tmp)) {
-				power += power_of(i);
-				nr_running += cpu_rq(i)->cfs.nr_running;
-			}
-
-			capacity = DIV_ROUND_CLOSEST(power, SCHED_POWER_SCALE);
-
-			if (nr_running < capacity)
-				want_sd = 0;
-		}
-
-		/*
 		 * If both cpu and prev_cpu are part of this domain,
 		 * cpu is a valid SD_WAKE_AFFINE target.
 		 */
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
 			affine_sd = tmp;
-			want_affine = 0;
-		}
-
-		if (!want_sd && !want_affine)
 			break;
+		}
 
 		if (!(tmp->flags & sd_flag))
 			continue;
 
-		if (want_sd)
-			sd = tmp;
+		sd = tmp;
 	}
 
 	if (affine_sd) {
-		if (cpu == prev_cpu || wake_affine(affine_sd, p, sync))
+		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
 			prev_cpu = cpu;
 
 		new_cpu = select_idle_sibling(p, prev_cpu);
-- 
1.7.5.4
