Message-ID: <20140107111341.GS31570@twins.programming.kicks-ass.net>
Date:	Tue, 7 Jan 2014 12:13:41 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
Cc:	Vincent Guittot <vincent.guittot@...aro.org>,
	linux-kernel@...r.kernel.org, mingo@...nel.org, pjt@...gle.com,
	Morten.Rasmussen@....com, cmetcalf@...era.com, tony.luck@...el.com,
	alex.shi@...aro.org, linaro-kernel@...ts.linaro.org, rjw@...k.pl,
	paulmck@...ux.vnet.ibm.com, corbet@....net, tglx@...utronix.de,
	len.brown@...el.com, arjan@...ux.intel.com,
	amit.kucheria@...aro.org, james.hogan@...tec.com,
	schwidefsky@...ibm.com, heiko.carstens@...ibm.com,
	Dietmar.Eggemann@....com
Subject: Re: [RFC] sched: CPU topology try

On Tue, Jan 07, 2014 at 04:09:39PM +0530, Preeti U Murthy wrote:
> On 01/07/2014 03:20 PM, Peter Zijlstra wrote:
> > On Tue, Jan 07, 2014 at 03:10:21PM +0530, Preeti U Murthy wrote:
> >> What if we want to add arch-specific flags to the NUMA domain? Currently,
> >> with Peter's patch (https://lkml.org/lkml/2013/11/5/239) and this patch,
> >> the arch can modify the sd flags of the topology levels up to just before
> >> the NUMA domain. In sd_init_numa(), the flags for the NUMA domain get
> >> initialized. Perhaps we need to call into the arch here to probe for
> >> additional flags?
> > 
> > What are you thinking of? I was hoping all NUMA details were captured in
> > the distance table.
> > 
> > It's far easier to talk specifics in this case.
> > 
> If the processor can be core-gated, then there is very little power
> saving to be gained from consolidating all the load onto a single node
> in a NUMA domain. Whether it is 6 cores on one node or 3 cores each on
> two nodes, power is drawn by 6 cores either way. So I was thinking that
> under this circumstance we might want to set the SD_SHARE_POWERDOMAIN
> flag at the NUMA domain and spread the load if that favours the
> workload.

So Intel has so far not said a lot of sensible things about power
management on their multi-socket platforms.

And I've not heard anything at all from IBM on the POWER chips.

What I know from the Intel side is that package idle hardly saves
anything compared to the DRAM power and the cost of having to do
remote memory accesses.

In other words, I'm not at all considering power aware scheduling for
NUMA systems until someone starts talking sense :-)
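
A minimal sketch of the idea in the quoted question, assuming a
hypothetical arch_sd_numa_flags() hook (the name and its placement are
invented for illustration; this is not an existing kernel interface):

	/*
	 * Weak default: architectures with nothing to add contribute
	 * no extra flags to the NUMA domain.
	 */
	int __weak arch_sd_numa_flags(void)
	{
		return 0;
	}

	/*
	 * In sd_init_numa(), where the flags for each NUMA level are
	 * initialized, the arch could be probed like this:
	 */
	static void sd_numa_set_flags(struct sched_domain *sd)
	{
		sd->flags |= SD_NUMA | arch_sd_numa_flags();
	}

	/*
	 * A core-gated machine could then override the hook to ask
	 * for load to be spread across nodes, as proposed above:
	 */
	int arch_sd_numa_flags(void)
	{
		return SD_SHARE_POWERDOMAIN;
	}

The weak-symbol default keeps architectures with no special power
topology untouched; only a machine whose cores can be gated
individually would override it.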
