Message-ID: <1335454137.13683.95.camel@twins>
Date: Thu, 26 Apr 2012 17:28:57 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: paulmck@...ux.vnet.ibm.com
Cc: linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
dipankar@...ibm.com, akpm@...ux-foundation.org,
mathieu.desnoyers@...ymtl.ca, josh@...htriplett.org,
niv@...ibm.com, tglx@...utronix.de, rostedt@...dmis.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com,
eric.dumazet@...il.com, darren@...art.com, fweisbec@...il.com,
patches@...aro.org
Subject: Re: [PATCH RFC tip/core/rcu 6/6] rcu: Reduce cache-miss initialization latencies for large systems

On Thu, 2012-04-26 at 07:12 -0700, Paul E. McKenney wrote:
> On Thu, Apr 26, 2012 at 02:51:47PM +0200, Peter Zijlstra wrote:
> > Wouldn't it be much better to match the rcu fanout tree to the physical
> > topology of the machine?
>
> From what I am hearing, doing so requires me to morph the rcu_node tree
> at run time. I might eventually become courageous/inspired/senile
> enough to try this, but not yet. ;-)
Yes, boot time with possibly some hotplug hooks.
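
(Purely as an illustration -- not the RCU code itself -- here is a minimal
user-space sketch of the topology information such a boot-time layout, plus
hotplug hooks, could be derived from: it just groups CPUs by the
physical_package_id attribute the kernel already exports in sysfs. The
sysfs paths are the standard topology files; the array sizes are arbitrary.)

/*
 * Sketch only (user space, not kernel code): group CPUs by physical
 * package using the standard sysfs topology files.  This is roughly
 * the information a topology-matched rcu_node layout would have to
 * track at boot and on hotplug.
 */
#include <stdio.h>

#define MAX_CPUS 1024

int main(void)
{
        int pkg_of[MAX_CPUS];
        int cpu, pkg, max_pkg = -1;

        for (cpu = 0; cpu < MAX_CPUS; cpu++) {
                char path[128];
                FILE *f;

                pkg_of[cpu] = -1;
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
                         cpu);
                f = fopen(path, "r");
                if (!f)
                        continue;       /* no such CPU */
                if (fscanf(f, "%d", &pkg_of[cpu]) != 1)
                        pkg_of[cpu] = -1;
                fclose(f);
                if (pkg_of[cpu] > max_pkg)
                        max_pkg = pkg_of[cpu];
        }

        /* One line per package: the CPUs a topology-matched leaf would cover. */
        for (pkg = 0; pkg <= max_pkg; pkg++) {
                printf("package %d:", pkg);
                for (cpu = 0; cpu < MAX_CPUS; cpu++)
                        if (pkg_of[cpu] == pkg)
                                printf(" %d", cpu);
                printf("\n");
        }
        return 0;
}
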
> Actually, some of this topology shifting seems to me like a firmware
> bug. Why not arrange the Linux-visible numbering in a way to promote
> locality for code sequencing through the CPUs?
I'm not sure... but it seems well established on x86 to first enumerate
the cores (thread 0) and then the sibling threads (thread 1) -- one
'advantage' is that if you boot with maxcpus=$half you get all the cores
instead of half the cores.
OTOH it does make linear iteration of the cpus 'funny' :-)
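
(To make the 'funny' concrete: a toy calculation for a made-up 2-socket,
8-cores-per-socket, HT box enumerated that way, thread-0 of every core
first, then the thread-1 siblings. All numbers are hypothetical; it only
shows that HT siblings land NR_CORES apart in the numbering and that
maxcpus=<nr_cores> brings up exactly one thread per core.)

/*
 * Hypothetical 2-socket, 8-cores-per-socket, 2-threads-per-core box,
 * enumerated the way described above: all thread-0s first (socket by
 * socket), then all thread-1 siblings.  Pure arithmetic, no real
 * topology is consulted.
 */
#include <stdio.h>

#define SOCKETS  2
#define CORES    8                      /* per socket */
#define THREADS  2                      /* per core */
#define NR_CORES (SOCKETS * CORES)
#define NR_CPUS  (NR_CORES * THREADS)

int main(void)
{
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                int thread  = cpu / NR_CORES;           /* 0 first, then siblings */
                int socket  = (cpu % NR_CORES) / CORES;
                int core    = cpu % CORES;
                int sibling = (cpu + NR_CORES) % NR_CPUS;

                printf("cpu%2d: socket %d core %d thread %d (HT sibling: cpu%d)\n",
                       cpu, socket, core, thread, sibling);
        }
        printf("maxcpus=%d would bring up exactly one thread per core\n",
               NR_CORES);
        return 0;
}
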
Also, a fanout of 16 is nice when your machine doesn't have HT and has a
2^n core count, but some popular machines these days have 6 or 10 cores
per socket, so the fanout groups end up splitting caches.
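
(Worked through for a made-up 4-socket, 10-cores-per-socket, HT box, 80
logical CPUs, enumerated as above: with a leaf fanout of 16, every
leaf-level group ends up spanning more than one socket, hence more than
one shared cache.)

/*
 * Hypothetical 4-socket, 10-cores-per-socket, HT box (80 logical CPUs),
 * same thread-0s-first enumeration.  With a leaf fanout of 16, count
 * how many sockets each leaf-level group of CPUs touches.
 */
#include <stdio.h>

#define SOCKETS   4
#define CORES    10                     /* per socket */
#define THREADS   2
#define NR_CORES (SOCKETS * CORES)
#define NR_CPUS  (NR_CORES * THREADS)
#define FANOUT   16

static int cpu_to_socket(int cpu)
{
        return (cpu % NR_CORES) / CORES;        /* thread-0s first, then thread-1s */
}

int main(void)
{
        int leaf;

        for (leaf = 0; leaf * FANOUT < NR_CPUS; leaf++) {
                int lo = leaf * FANOUT;
                int hi = lo + FANOUT - 1;
                unsigned int mask = 0;
                int cpu, nsockets = 0;

                if (hi >= NR_CPUS)
                        hi = NR_CPUS - 1;
                for (cpu = lo; cpu <= hi; cpu++)
                        mask |= 1u << cpu_to_socket(cpu);
                for (; mask; mask &= mask - 1)  /* popcount */
                        nsockets++;
                printf("leaf %d: cpus %2d-%2d span %d socket(s)%s\n",
                       leaf, lo, hi, nsockets,
                       nsockets > 1 ? "  <-- crosses socket/cache boundary" : "");
        }
        return 0;
}
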