Date:	Tue, 18 Mar 2014 19:14:04 -0700
From:	Jason Low <jason.low2@...com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Davidlohr Bueso <davidlohr@...com>, Ingo Molnar <mingo@...nel.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>, hpa@...ux.intel.com,
	len.brown@...el.com, linux-tip-commits@...r.kernel.org
Subject: Re: [tip:x86/urgent] x86 idle: Repair large-server 50-watt idle-power regression

On Tue, Mar 18, 2014 at 2:16 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, Mar 17, 2014 at 05:20:10PM -0700, Davidlohr Bueso wrote:
>> On Thu, 2013-12-19 at 11:51 -0800, tip-bot for Len Brown wrote:
>> > Commit-ID:  40e2d7f9b5dae048789c64672bf3027fbb663ffa
>> > Gitweb:     http://git.kernel.org/tip/40e2d7f9b5dae048789c64672bf3027fbb663ffa
>> > Author:     Len Brown <len.brown@...el.com>
>> > AuthorDate: Wed, 18 Dec 2013 16:44:57 -0500
>> > Committer:  H. Peter Anvin <hpa@...ux.intel.com>
>> > CommitDate: Thu, 19 Dec 2013 11:47:39 -0800
>> >
>> > x86 idle: Repair large-server 50-watt idle-power regression
>>
>> FYI this commit can cause some non trivial performance regressions for
>> larger core count systems. While not surprising because of the nature of
>> the change, having intel_idle do more cacheline invalidations, I still
>> wanted to let you guys know. For instance, on a 160 core Westmere
>> system, aim7 throughput can go down in a number of tests, anywhere from
>> -10% to -25%.
>>
>> I guess it comes down to one of those performance vs energy things. And
>> sure, max_cstate can be set to overcome this, but it's still something
>> that was previously taken for granted.
>
> -10% to -25% seems a lot for a single cacheline flush. Also I would
> expect the expected idle time to be very short while running aim7. So
> could it be the cacheflush is actually taking longer than the expected
> idle time?

Could we consider conditionally skipping the cacheline flush when the
approximate average CPU idle time is very short? For instance, skip the
flush if the CPU's average idle time is less than sched_migration_cost
(or a new "cacheline_flush_penalty" value)?