Message-ID: <1395544326.3460.98.camel@pasglop>
Date:	Sun, 23 Mar 2014 14:12:06 +1100
From:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
To:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
Cc:	Vincent Guittot <vincent.guittot@...aro.org>, peterz@...radead.org,
	mingo@...nel.org, linux-kernel@...r.kernel.org,
	tony.luck@...el.com, fenghua.yu@...el.com, schwidefsky@...ibm.com,
	james.hogan@...tec.com, cmetcalf@...era.com,
	linux@....linux.org.uk, linux-arm-kernel@...ts.infradead.org,
	dietmar.eggemann@....com, linaro-kernel@...ts.linaro.org
Subject: Re: [PATCH v3 6/6] sched: powerpc: Add SD_SHARE_POWERDOMAIN for SMT
 level

On Sun, 2014-03-23 at 07:19 +0530, Preeti U Murthy wrote:
> We were discussing the impact of this consolidation and we are not
> sure it will yield good power efficiency. So we want to experiment
> with the power-aware scheduler to find the "sweet spot" for the
> number of threads to consolidate to, and more importantly whether
> there is one such number at all. If not, we would not want to go this
> way at all. Hence it looks best if this patch is dropped until we
> validate it. We don't want the code getting in and then out if we
> find out later that there are no benefits to it.
> 
> I am sorry that I suggested this patch a bit prematurely, while it is
> still at the experimentation and validation stage. When you release
> the load balancing patchset for the power-aware scheduler I shall
> validate this patch. But until then it's best if it does not get
> merged.

It's quite possible that we will never find a single "sweet spot" that
is correct for all workloads.

Ideally, the "target" number of used threads per core should be a
tunable, so that the user / distro can tune, for a given workload,
whether to pack cores and how tightly to pack them versus spreading the
workload. It's akin to trading off scheduling for performance vs. power
(though lower performance usually means higher energy use overall, of
course, since jobs run for longer).
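
For illustration only, here is a minimal userspace sketch of the kind
of knob I mean. The tunable name (sched_packing_threads_per_core) and
the should_pack() helper are hypothetical, made up for this example;
nothing like them exists in the patch set or the kernel today:

/*
 * Hypothetical sketch of a per-core packing target gating the
 * pack-vs-spread decision at task placement time.
 */
#include <stdio.h>

/* Hypothetical tunable: target number of busy threads per core.
 * 0 means "never pack, always spread for performance". */
static unsigned int sched_packing_threads_per_core = 2;

/* Should a waking task be packed onto a core that already has
 * 'busy_threads_on_core' of its SMT threads busy, or be spread
 * to an idle core instead? */
static int should_pack(unsigned int busy_threads_on_core)
{
	if (sched_packing_threads_per_core == 0)
		return 0;	/* spread: pure performance policy */
	return busy_threads_on_core < sched_packing_threads_per_core;
}

int main(void)
{
	for (unsigned int busy = 0; busy <= 4; busy++)
		printf("core with %u busy threads -> %s\n", busy,
		       should_pack(busy) ? "pack here" : "spread");
	return 0;
}

With the target set to 2, an SMT4 core would accept a second thread
before the scheduler falls back to spreading; setting it to 0 recovers
today's spread-for-performance behaviour.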

In any case, we need to experiment.

Cheers,
Ben.
 

