Date:	Thu, 12 May 2016 07:07:50 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Michael Neuling <mikey@...ling.org>
Cc:	Matt Fleming <matt@...eblueprint.co.uk>, mingo@...nel.org,
	linux-kernel@...r.kernel.org, clm@...com, mgalbraith@...e.de,
	tglx@...utronix.de, fweisbec@...il.com, srikar@...ux.vnet.ibm.com,
	anton@...ba.org, oliver <oohall@...il.com>,
	"Shreyas B. Prabhu" <shreyas@...ux.vnet.ibm.com>
Subject: Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with
 sched_domain_shared

On Thu, May 12, 2016 at 12:05:37PM +1000, Michael Neuling wrote:
> On Wed, 2016-05-11 at 20:24 +0200, Peter Zijlstra wrote:
> > On Wed, May 11, 2016 at 02:33:45PM +0200, Peter Zijlstra wrote:
> > > 
> > > Hmm, PPC folks; what does your topology look like?
> > > 
> > > Currently your sched_domain_topology, as per arch/powerpc/kernel/smp.c
> > > seems to suggest your cores do not share cache at all.
> > > 
> > > https://en.wikipedia.org/wiki/POWER7 seems to agree and states
> > > 
> > >   "4 MB L3 cache per C1 core"
> > > 
> > > And http://www-03.ibm.com/systems/resources/systems_power_software_i_perfmgmt_underthehood.pdf
> > > also explicitly draws pictures with the L3 per core.
> > > 
> > > _however_, that same document describes L3 inter-core fill and lateral
> > > cast-out, which sounds like the L3s work together to form a node wide
> > > caching system.
> > > 
> > > Do we want to model these co-operative L3 slices as a sort of
> > > node-wide LLC for the purposes of the scheduler?
> > Going back a generation; Power6 seems to have a shared L3 (off package)
> > between the two cores on the package. The current topology does not
> > reflect that at all.
> > 
> > And going forward a generation; Power8 seems to share the per-core
> > (chiplet) L3 amongst all cores (chiplets), plus it has the Centaur
> > (memory controller) 16M L4.
> 
> Yep, L1/L2/L3 are per core on POWER8 and POWER7.  POWER6 and POWER5 (both
> dual-core chips) had a shared off-chip cache.

But as per the above, Power7 and Power8 have explicit logic to share the
per-core L3 with the other cores.
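
FWIW, the current table in arch/powerpc/kernel/smp.c is just SMT + DIE,
with no shared-cache level in between (roughly, from memory, so do
double-check against your tree):

  static struct sched_domain_topology_level powerpc_topology[] = {
  #ifdef CONFIG_SCHED_SMT
  	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
  #endif
  	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
  	{ NULL, },
  };

That is what makes the scheduler think your cores share nothing.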

How effective is that? From some of the slides/documents I've looked at,
the L3s are connected with a high-speed fabric, suggesting that the
cross-core sharing should be fairly efficient.

In which case it would make sense to treat/model the combined L3 as a
single large LLC covering all cores.
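
Something like the below (completely untested sketch; shared_cache_mask
and powerpc_shared_cache_flags are names I made up here, the point being
the extra level with SD_SHARE_PKG_RESOURCES between SMT and DIE):

  static inline const struct cpumask *shared_cache_mask(int cpu)
  {
  	/* all cores whose L3 slices co-operate; the whole node/die here */
  	return cpu_cpu_mask(cpu);
  }

  static int powerpc_shared_cache_flags(void)
  {
  	return SD_SHARE_PKG_RESOURCES;
  }

  static struct sched_domain_topology_level powerpc_topology[] = {
  #ifdef CONFIG_SCHED_SMT
  	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
  #endif
  	{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
  	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
  	{ NULL, },
  };

That would make select_idle_sibling() and friends search across all
cores on the node instead of just the SMT siblings.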
