Date:	Mon, 16 May 2016 16:00:32 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Michael Neuling <mikey@...ling.org>
Cc:	Matt Fleming <matt@...eblueprint.co.uk>, mingo@...nel.org,
	linux-kernel@...r.kernel.org, clm@...com, mgalbraith@...e.de,
	tglx@...utronix.de, fweisbec@...il.com, srikar@...ux.vnet.ibm.com,
	anton@...ba.org, oliver <oohall@...il.com>,
	"Shreyas B. Prabhu" <shreyas@...ux.vnet.ibm.com>
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with
 sched_domain_shared

On Fri, May 13, 2016 at 10:12:26AM +1000, Michael Neuling wrote:

> > Basically; and if so, if it's cheap enough to shoot a task to an idle
> > core to avoid queueing. Assuming there still is some cache residency on
> > the old core, the inter-core fill should be much cheaper than fetching
> > it off-package (either remote cache or DRAM).
> 
> So I think that will apply on POWER8.
> 
> In 10.4.2 it says "The L3.1 ECO Caches will be snooped and provide
> intervention data similar to the L2 and L3.0 caches on the
> chip". That should be much faster than going to another chip or DIMM.
> 
> So migrating to another core on the same chip should be faster than
> going off-chip.
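
For context, the wake-time behaviour we're talking about is
select_idle_sibling() in kernel/sched/fair.c, whose search is bounded by
the sd_llc domain. Very roughly (a simplified sketch, not the exact
mainline code):

static int sketch_select_idle_cpu(int target)
{
	struct sched_domain *sd = rcu_dereference(per_cpu(sd_llc, target));
	int cpu;

	/* If the target CPU is already idle, wake the task there. */
	if (idle_cpu(target))
		return target;

	if (!sd)
		return target;

	/* Otherwise look for any idle CPU inside the LLC domain. */
	for_each_cpu(cpu, sched_domain_span(sd)) {
		if (idle_cpu(cpu))
			return cpu;
	}

	return target;
}

So widening the LLC to the whole package, as the patch below does, widens
this search accordingly.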

OK; so something like the below might be what you want to play with.
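
Each entry in the arrays below is a struct sched_domain_topology_level;
roughly, trimmed from include/linux/sched.h of this era:

typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
typedef int (*sched_domain_flags_f)(void);

struct sched_domain_topology_level {
	sched_domain_mask_f	mask;		/* CPUs spanned at this level */
	sched_domain_flags_f	sd_flags;	/* e.g. SD_SHARE_PKG_RESOURCES */
	/* ... */
#ifdef CONFIG_SCHED_DEBUG
	char			*name;		/* set by SD_INIT_NAME() */
#endif
};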

---
 arch/powerpc/kernel/smp.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 55c924b65f71..1a54fa8a3323 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -782,6 +782,23 @@ static struct sched_domain_topology_level powerpc_topology[] = {
 	{ NULL, },
 };
 
+static struct sched_domain_topology_level powerpc8_topology[] = {
+#ifdef CONFIG_SCHED_SMT
+	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
+#endif
+#ifdef CONFIG_SCHED_MC
+	/*
+	 * Model the L3.1 cache and set the LLC to the whole package.
+	 *
+	 * This also ensures we try to move woken tasks to idle cores inside
+	 * the package to avoid queueing.
+	 */
+	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
+#endif
+	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
+	{ NULL, },
+};
+
 void __init smp_cpus_done(unsigned int max_cpus)
 {
 	cpumask_var_t old_mask;
@@ -806,7 +823,10 @@ void __init smp_cpus_done(unsigned int max_cpus)
 
 	dump_numa_cpu_topology();
 
-	set_sched_topology(powerpc_topology);
+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+		set_sched_topology(powerpc8_topology);
+	else
+		set_sched_topology(powerpc_topology);
 
 }
 

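The MC level ends up acting as the LLC because cpu_core_flags() returns
SD_SHARE_PKG_RESOURCES, and the scheduler takes sd_llc to be the highest
domain carrying that flag; paraphrasing update_top_cache_domain() in
kernel/sched/core.c:

static void sketch_update_top_cache_domain(int cpu)
{
	struct sched_domain *sd;

	/* Highest domain sharing package resources == the LLC domain. */
	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);

	rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
	if (sd)
		per_cpu(sd_llc_size, cpu) = cpumask_weight(sched_domain_span(sd));
}

So with the patch above, the wake-up path is allowed to consider every
core in the package when it looks for an idle CPU.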