Message-ID: <20120426161509.GE2407@linux.vnet.ibm.com>
Date:	Thu, 26 Apr 2012 09:15:09 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...ymtl.ca, josh@...htriplett.org,
	niv@...ibm.com, tglx@...utronix.de, rostedt@...dmis.org,
	Valdis.Kletnieks@...edu, dhowells@...hat.com,
	eric.dumazet@...il.com, darren@...art.com, fweisbec@...il.com,
	patches@...aro.org
Subject: Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss
 initialization latencies for large systems

On Thu, Apr 26, 2012 at 05:28:57PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-04-26 at 07:12 -0700, Paul E. McKenney wrote:
> > On Thu, Apr 26, 2012 at 02:51:47PM +0200, Peter Zijlstra wrote:
> 
> > > Wouldn't it be much better to match the rcu fanout tree to the physical
> > > topology of the machine?
> > 
> > From what I am hearing, doing so requires me to morph the rcu_node tree
> > at run time.  I might eventually become courageous/inspired/senile
> > enough to try this, but not yet.  ;-)
> 
> Yes, boot time with possibly some hotplug hooks.

Has anyone actually measured any slowdown due to the rcu_node structure
not matching the topology?  (But see also below.)

> > Actually, some of this topology shifting seems to me like a firmware
> > bug.  Why not arrange the Linux-visible numbering in a way to promote
> > locality for code sequencing through the CPUs?
> 
> I'm not sure.. but it seems well established on x86 to first enumerate
> the cores (thread 0) and then the sibling threads (thread 1) -- one
> 'advantage' is that if you boot with max_cpus=$half you get all cores
> instead of half the cores.
> 
> OTOH it does make linear iteration of the cpus 'funny' :-)

Like I said, firmware bug.  Seems like the fix should be there as well.
Perhaps there needs to be two CPU numberings, one for people wanting
whole cores and another for people who want cache locality.  Yes, this
could be confusing, but keep in mind that you are asking every kernel
subsystem to keep its own version of the cache-locality numbering,
and that will be even more confusing.

> Also, a fanout of 16 is nice when your machine doesn't have HT and has a
> 2^n core count, but some popular machines these days have 6/10 cores per
> socket, resulting in your fanout splitting caches.

That is easy.  Such systems can set CONFIG_RCU_FANOUT to 6, 12, 10,
or 20, depending on preference.  With a patch intended for 3.6, they
could set the smallest reasonable value at build time and adjust to
the hardware using the boot parameter.

http://www.gossamer-threads.com/lists/linux/kernel/1524864

I expect to make other similar changes over time, but will be proceeding
cautiously.

							Thanx, Paul

