Message-ID: <20150730162202.GN25159@twins.programming.kicks-ass.net>
Date: Thu, 30 Jul 2015 18:22:02 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com,
Alexander Gordeev <agordeev@...hat.com>
Subject: Re: [PATCH tip/core/rcu 02/12] rcu: Panic if RCU tree can not accommodate all CPUs

On Thu, Jul 30, 2015 at 08:54:54AM -0700, Paul E. McKenney wrote:
> Good point, and it already does, and I clearly was confused, apologies.
>
> So the real way to make this happen is (for example) to build
> with CONFIG_RCU_FANOUT=2 and CONFIG_RCU_FANOUT_LEAF=16 (the
> default), which could accommodate up to 128 CPUs. Then boot with
> rcutree.rcu_fanout_leaf=2 on a system with more than 16 CPUs, with
> rcutree.rcu_fanout_leaf=3 on a system with more than 24 CPUs, and so on.
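
For reference, the capacity arithmetic spelled out, as a purely
illustrative userspace snippet assuming the four-level rcu_node tree
implied by the numbers above (this is not kernel code):

#include <stdio.h>

/*
 * Capacity of a four-level rcu_node tree: the leaf fanout applies at
 * the bottom level, CONFIG_RCU_FANOUT at each of the three levels above.
 */
static unsigned long rcu_tree_capacity(unsigned long fanout,
				       unsigned long fanout_leaf)
{
	return fanout_leaf * fanout * fanout * fanout;
}

int main(void)
{
	printf("%lu\n", rcu_tree_capacity(2, 16)); /* 128: default build above   */
	printf("%lu\n", rcu_tree_capacity(2, 2));  /*  16: rcu_fanout_leaf=2 boot */
	printf("%lu\n", rcu_tree_capacity(2, 3));  /*  24: rcu_fanout_leaf=3 boot */
	return 0;
}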
Ah, runtime overrides and operator error, but then we can WARN(), reset
the arguments and try again, right? No need to panic the machine and
fail to boot.
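
Roughly what I mean, as an untested sketch; rcu_geometry_fits() is a
made-up stand-in for whatever capacity check trips the panic today, and
the fallback is simply the compile-time CONFIG_RCU_FANOUT_LEAF:

/* Untested sketch: validate the boot-time override and fall back to the
 * compile-time geometry with a WARN instead of panicking the box. */
static void rcu_validate_fanout_leaf(void)
{
	if (rcu_fanout_leaf == CONFIG_RCU_FANOUT_LEAF)
		return;	/* No override to second-guess. */

	if (rcu_fanout_leaf < 2 ||
	    !rcu_geometry_fits(rcu_fanout_leaf, nr_cpu_ids)) {
		WARN_ONCE(1, "rcu_fanout_leaf=%d cannot cover %u CPUs, using %d\n",
			  rcu_fanout_leaf, nr_cpu_ids, CONFIG_RCU_FANOUT_LEAF);
		rcu_fanout_leaf = CONFIG_RCU_FANOUT_LEAF;
	}
}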
> Of course, the truly macho way to get this error message is to build
> with CONFIG_RCU_FANOUT=64 and CONFIG_RCU_FANOUT_LEAF=64, then boot with
> rcutree.rcu_fanout_leaf=63 on a system with more than 16,515,072 CPUs.
> Of course, you get serious style points if the system manages to stay
> up for more than 24 hours without a hardware failure. ;-)
Yes, I'll go power up the nuclear reactor in the basement first :-)
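
(For anyone checking the figure: assuming the same four-level rcu_node
tree as above, that scenario's capacity works out to 63 * 64 * 64 * 64 =
16,515,072 CPUs, so the number quoted is exact.)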