Message-ID: <1463018737.28449.38.camel@neuling.org>
Date:	Thu, 12 May 2016 12:05:37 +1000
From:	Michael Neuling <mikey@...ling.org>
To:	Peter Zijlstra <peterz@...radead.org>,
	Matt Fleming <matt@...eblueprint.co.uk>
Cc:	mingo@...nel.org, linux-kernel@...r.kernel.org, clm@...com,
	mgalbraith@...e.de, tglx@...utronix.de, fweisbec@...il.com,
	srikar@...ux.vnet.ibm.com, anton@...ba.org,
	oliver <oohall@...il.com>,
	"Shreyas B. Prabhu" <shreyas@...ux.vnet.ibm.com>
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with
 sched_domain_shared

On Wed, 2016-05-11 at 20:24 +0200, Peter Zijlstra wrote:
> On Wed, May 11, 2016 at 02:33:45PM +0200, Peter Zijlstra wrote:
> > 
> > Hmm, PPC folks; what does your topology look like?
> > 
> > Currently your sched_domain_topology, as per arch/powerpc/kernel/smp.c
> > seems to suggest your cores do not share cache at all.
> > 
> > https://en.wikipedia.org/wiki/POWER7 seems to agree and states
> > 
> >   "4 MB L3 cache per C1 core"
> > 
> > And http://www-03.ibm.com/systems/resources/systems_power_software_i_perfmgmt_underthehood.pdf
> > also explicitly draws pictures with the L3 per core.
> > 
> > _however_, that same document describes L3 inter-core fill and lateral
> > cast-out, which sounds like the L3s work together to form a node wide
> > caching system.
> > 
> > Do we want to model this co-operative L3 slices thing as a sort of
> > node-wide LLC for the purpose of the scheduler ?
> Going back a generation; Power6 seems to have a shared L3 (off package)
> between the two cores on the package. The current topology does not
> reflect that at all.
> 
> And going forward a generation; Power8 seems to share the per-core
> (chiplet) L3 amongst all cores (chiplets) + it has the Centaur (memory
> controller) 16M L4.

Yep, L1/L2/L3 is per core on POWER8 and POWER7.  POWER6 and POWER5 (both
dual-core chips) had a shared off-chip cache.

The POWER8 L4 is really a bit different as it's out in the memory
controller.  It's more of a memory DIMM buffer as it can only cache data
associated with the physical addresses on those DIMMs.

> So it seems the current topology setup is not describing these chips
> very well. Also note that the arch topology code can runtime select a
> topology, so you could make that topo setup micro-arch specific.

We are planning on making some topology changes for the upcoming P9, which
will share L2/L3 amongst pairs of cores (24 cores per chip).

FWIW our P9 upstreaming is still in its infancy since P9 is not released
yet.

Mike
