Message-ID: <20160622205911.GW30909@twins.programming.kicks-ass.net>
Date: Wed, 22 Jun 2016 22:59:11 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jirka Hladky <jhladky@...hat.com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Kamil Kolakowski <kkolakow@...hat.com>
Subject: Re: Kernel 4.7rc3 - Performance drop 30-40% for SPECjbb2005 and
SPECjvm2008 benchmarks against 4.6 kernel
On Wed, Jun 22, 2016 at 04:41:06PM +0200, Jirka Hladky wrote:
> This commit is bad:
> 2159197 - Peter Zijlstra, 8 weeks ago : sched/core: Enable increased
> load resolution on 64-bit kernels
>
> Could you please have a look?
Yes, that is indeed the culprit.
The below 'revert' makes it go fast again. I'll try and figure out
what's wrong tomorrow.
---
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index bf6fea9..e7e312b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -55,7 +55,7 @@ static inline void cpu_load_update_active(struct rq *this_rq) { }
* Really only required when CONFIG_FAIR_GROUP_SCHED is also set, but to
* increase coverage and consistency always enable it on 64bit platforms.
*/
-#ifdef CONFIG_64BIT
+#if 0 // def CONFIG_64BIT
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
# define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT)
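
For reference, a stand-alone user-space sketch (not kernel code) of what the two
branches of that #ifdef do to the nice-0 weight. It assumes SCHED_FIXEDPOINT_SHIFT
is 10 and the nice-0 weight is 1024, as in mainline around this time; the _hi/_lo
names are made up for the comparison.

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT  10

/* CONFIG_64BIT branch: weights carry 10 extra fixed-point bits internally. */
#define NICE_0_LOAD_SHIFT_HI    (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
#define scale_load_hi(w)        ((unsigned long)(w) << SCHED_FIXEDPOINT_SHIFT)
#define scale_load_down_hi(w)   ((unsigned long)(w) >> SCHED_FIXEDPOINT_SHIFT)

/* #if 0 (or 32-bit) branch: weights are used as-is, no extra resolution. */
#define NICE_0_LOAD_SHIFT_LO    (SCHED_FIXEDPOINT_SHIFT)
#define scale_load_lo(w)        ((unsigned long)(w))
#define scale_load_down_lo(w)   ((unsigned long)(w))

int main(void)
{
        unsigned long nice_0_weight = 1024;     /* weight of a nice-0 task */

        /* 1024 << 10 = 1048576 with increased resolution, 1024 without */
        printf("high resolution: NICE_0_LOAD = %lu (shift %d)\n",
               scale_load_hi(nice_0_weight), NICE_0_LOAD_SHIFT_HI);
        printf("low  resolution: NICE_0_LOAD = %lu (shift %d)\n",
               scale_load_lo(nice_0_weight), NICE_0_LOAD_SHIFT_LO);

        /* scale_load_down() undoes the extra shift at the boundaries */
        printf("scaled back down: %lu\n",
               scale_load_down_hi(scale_load_hi(nice_0_weight)));
        return 0;
}

So the hack above simply drops the extra 10 bits of load resolution again,
which is why it behaves like a revert of 2159197 for these benchmarks.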