Date:	Thu, 12 May 2016 21:07:52 +1000
From:	Michael Neuling <mikey@...ling.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Matt Fleming <matt@...eblueprint.co.uk>, mingo@...nel.org,
	linux-kernel@...r.kernel.org, clm@...com, mgalbraith@...e.de,
	tglx@...utronix.de, fweisbec@...il.com, srikar@...ux.vnet.ibm.com,
	anton@...ba.org, oliver <oohall@...il.com>,
	"Shreyas B. Prabhu" <shreyas@...ux.vnet.ibm.com>
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with
 sched_domain_shared

On Thu, 2016-05-12 at 07:07 +0200, Peter Zijlstra wrote:
> On Thu, May 12, 2016 at 12:05:37PM +1000, Michael Neuling wrote:
> > 
> > On Wed, 2016-05-11 at 20:24 +0200, Peter Zijlstra wrote:
> > > 
> > > On Wed, May 11, 2016 at 02:33:45PM +0200, Peter Zijlstra wrote:
> > > > 
> > > > 
> > > > Hmm, PPC folks; what does your topology look like?
> > > > 
> > > > Currently your sched_domain_topology, as per arch/powerpc/kernel/smp.c,
> > > > seems to suggest your cores do not share cache at all.
> > > > 
> > > > https://en.wikipedia.org/wiki/POWER7 seems to agree and states
> > > > 
> > > >   "4 MB L3 cache per C1 core"
> > > > 
> > > > And http://www-03.ibm.com/systems/resources/systems_power_software_i_perfmgmt_underthehood.pdf
> > > > also explicitly draws pictures with the L3 per core.
> > > > 
> > > > _however_, that same document describes L3 inter-core fill and lateral
> > > > cast-out, which sounds like the L3s work together to form a node-wide
> > > > caching system.
> > > > 
> > > > Do we want to model this co-operative L3 slices thing as a sort of
> > > > node-wide LLC for the purpose of the scheduler?
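
For reference, a minimal sketch of what such a chip-wide LLC level could look
like in powerpc_topology[] (arch/powerpc/kernel/smp.c). The shared_cache_mask()
/ CACHE level below is purely illustrative, not mainline code; it just marks a
level spanning all cores on the chip with SD_SHARE_PKG_RESOURCES so the
scheduler treats the co-operating L3 slices as one LLC:

static int powerpc_shared_cache_flags(void)
{
	/* tell the scheduler this level shares a last-level cache */
	return SD_SHARE_PKG_RESOURCES;
}

static const struct cpumask *shared_cache_mask(int cpu)
{
	/* all cores on the same chip, i.e. the co-operating L3 slices */
	return cpu_core_mask(cpu);
}

static struct sched_domain_topology_level powerpc_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
	/* hypothetical chip-wide LLC level covering the combined L3 */
	{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};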
> > > Going back a generation: Power6 seems to have a shared L3 (off
> > > package) between the two cores on the package. The current topology
> > > does not reflect that at all.
> > > 
> > > And going forward a generation: Power8 seems to share the per-core
> > > (chiplet) L3 amongst all cores (chiplets), and it has the Centaur
> > > (memory controller) 16M L4.
> > Yep, L1/L2/L3 is per core on POWER8 and POWER7.  POWER6 and POWER5 (both
> > dual-core chips) had a shared off-chip cache.
> But as per the above, Power7 and Power8 have explicit logic to share the
> per-core L3 with the other cores.
> 
> How effective is that? From some of the slides/documents I've looked at,
> the L3s are connected with a high-speed fabric, suggesting that the
> cross-core sharing should be fairly efficient.

I'm not sure.  I thought it was mostly private, but if one core was sleeping
or not experiencing much cache pressure, another core could use its L3 for
some things. But I'm fuzzy on the exact properties, sorry.

> In which case it would make sense to treat/model the combined L3 as a
> single large LLC covering all cores.

Are you thinking it would be much cheaper to migrate a task to another core
inside this chip than to one off chip?
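
If the combined L3 were modelled as a single LLC, cpus_share_cache() would
return true for any two cores on the chip, and wake-up placement would then
prefer an idle core on the same chip over pulling the task off chip. Roughly
(an illustrative simplification of the kernel/sched/fair.c behaviour, not the
actual code):

/* illustrative sketch only, not mainline code */
static int sketch_wakeup_cpu(struct task_struct *p, int prev_cpu, int this_cpu)
{
	if (cpus_share_cache(this_cpu, prev_cpu))
		/* same LLC: any idle core on the chip is a cheap target */
		return select_idle_sibling(p, prev_cpu);

	/* different LLC (off chip): treated as an expensive migration;
	 * the real code lets wake_affine()/load balancing decide */
	return this_cpu;
}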

Mikey
