Message-ID: <20070828222718.GI1894@linux-os.sc.intel.com>
Date:	Tue, 28 Aug 2007 15:27:19 -0700
From:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>
To:	mingo@...e.hu
Cc:	Ingo Molnar <mingo@...e.hu>, nickpiggin@...oo.com.au,
	linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: [patch] sched: fix broken smt/mc optimizations with CFS

On Mon, Aug 27, 2007 at 12:31:03PM -0700, Siddha, Suresh B wrote:
> Essentially I observed that nice 0 tasks still end up on two cores of the
> same package, without getting spread out to two different packages. This
> behavior is the same without this fix, and this fix doesn't help in any way.

Ingo, the appended patch seems to fix the issue and, as far as I can
test, looks OK to me.

This is a quick fix for .23. Peter Williams and I plan to look at code
cleanups in this area (HT/MC optimizations) post-.23.

BTW, with this fix, do you want to retain the current FUZZ value?

thanks,
suresh
--

Try to fix the MC/HT scheduler optimization breakage again, without
breaking the FUZZ logic.

First, fix the check
	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task)
by replacing it with
	if (*imbalance < busiest_load_per_task)

The current check is always false for nice 0 tasks, because
SCHED_LOAD_SCALE_FUZZ is the same as busiest_load_per_task for nice 0
tasks.
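
For illustration, here is a minimal standalone sketch of that
arithmetic (hedged: it assumes the nice 0 task weight of 1024 and
SCHED_LOAD_SCALE_FUZZ equal to SCHED_LOAD_SCALE, per the description
above; it is plain userspace C, not kernel code):

	#include <stdio.h>

	#define SCHED_LOAD_SCALE	1024UL	/* assumed nice 0 task weight */
	#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE

	int main(void)
	{
		unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
		unsigned long imbalance;

		/* The old check can never fire for any non-negative
		 * imbalance when SCHED_LOAD_SCALE_FUZZ equals
		 * busiest_load_per_task. */
		for (imbalance = 0; imbalance < 3; imbalance++)
			printf("imbalance=%lu old=%d new=%d\n", imbalance,
			       imbalance + SCHED_LOAD_SCALE_FUZZ <
					busiest_load_per_task,
			       imbalance < busiest_load_per_task);
		return 0;
	}

This prints old=0 for every value, while the new check fires whenever
*imbalance is below one task's load.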

With the above change, *imbalance was getting reset to 0 in the
corner-case condition, making the FUZZ logic fail. Fix that by not
corrupting *imbalance: update *imbalance only when we find that the
HT/MC optimization is needed.
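
To make the control-flow change concrete, here is a short sketch of
the two behaviors in the small-imbalance path (simplified from the
patch below; out_balanced stands for the find_busiest_group() exit
path that zeroes *imbalance and returns NULL):

	/* Before: failing the throughput test discarded the group
	 * entirely, zeroing *imbalance on the way out. */
	if (pwr_move <= pwr_now)
		goto out_balanced;
	*imbalance = busiest_load_per_task;

	/* After: *imbalance is left untouched unless moving one task
	 * actually gains throughput (the HT/MC optimization case). */
	if (pwr_move > pwr_now)
		*imbalance = busiest_load_per_task;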

Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
---

diff --git a/kernel/sched.c b/kernel/sched.c
index 9fe473a..03e5e8d 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2511,7 +2511,7 @@ group_next:
 	 * a think about bumping its value to force at least one task to be
 	 * moved
 	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
+	if (*imbalance < busiest_load_per_task) {
 		unsigned long tmp, pwr_now, pwr_move;
 		unsigned int imbn;
 
@@ -2563,10 +2563,8 @@ small_imbalance:
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move <= pwr_now)
-			goto out_balanced;
-
-		*imbalance = busiest_load_per_task;
+		if (pwr_move > pwr_now)
+			*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;
