Message-ID: <20140318091656.GQ25546@laptop.programming.kicks-ass.net>
Date: Tue, 18 Mar 2014 10:16:56 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Davidlohr Bueso <davidlohr@...com>
Cc: mingo@...nel.org, hpa@...or.com, linux-kernel@...r.kernel.org,
tglx@...utronix.de, hpa@...ux.intel.com, len.brown@...el.com,
linux-tip-commits@...r.kernel.org
Subject: Re: [tip:x86/urgent] x86 idle: Repair large-server 50-watt
idle-power regression
On Mon, Mar 17, 2014 at 05:20:10PM -0700, Davidlohr Bueso wrote:
> On Thu, 2013-12-19 at 11:51 -0800, tip-bot for Len Brown wrote:
> > Commit-ID: 40e2d7f9b5dae048789c64672bf3027fbb663ffa
> > Gitweb: http://git.kernel.org/tip/40e2d7f9b5dae048789c64672bf3027fbb663ffa
> > Author: Len Brown <len.brown@...el.com>
> > AuthorDate: Wed, 18 Dec 2013 16:44:57 -0500
> > Committer: H. Peter Anvin <hpa@...ux.intel.com>
> > CommitDate: Thu, 19 Dec 2013 11:47:39 -0800
> >
> > x86 idle: Repair large-server 50-watt idle-power regression
>
> FYI this commit can cause some non-trivial performance regressions on
> larger core-count systems. That is not surprising given the nature of
> the change, since intel_idle now does more cacheline invalidations, but
> I still wanted to let you guys know. For instance, on a 160-core
> Westmere system, aim7 throughput can drop in a number of tests,
> anywhere from -10% to -25%.
>
> I guess it comes down to one of those performance-vs-energy trade-offs.
> And sure, max_cstate can be set to overcome this, but it's still
> performance that was previously taken for granted.
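
(For reference, the knob mentioned above is presumably the
intel_idle.max_cstate= boot parameter, e.g.:

	intel_idle.max_cstate=0

which disables the intel_idle driver entirely; positive values cap the
deepest C-state the driver will use.)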
-10% to -25% seems like a lot for a single cacheline flush. Also, I
would expect the idle periods to be very short while running aim7. So
could it be that the cache flush is actually taking longer than the
expected idle time?
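
For reference, the idle-entry path in question looks roughly like this;
a simplified sketch from my reading of arch/x86/include/asm/mwait.h, so
the exact guards and barriers may differ:

	static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
	{
		/*
		 * Erratum workaround: flush the monitored cacheline before
		 * arming MONITOR, on CPUs flagged with CLFLUSH_MONITOR.
		 */
		if (static_cpu_has(X86_FEATURE_CLFLUSH_MONITOR)) {
			mb();
			clflush((void *)&current_thread_info()->flags);
			mb();
		}

		__monitor((void *)&current_thread_info()->flags, 0, 0);
		if (!need_resched())
			__mwait(eax, ecx);
	}

If aim7 keeps the sleep intervals that short, the clflush and the fences
around it are paid on every idle entry while buying almost no residency,
which would fit the numbers you are seeing.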