Message-ID: <20091202200955.GA12950@linux.vnet.ibm.com>
Date: Wed, 2 Dec 2009 12:09:55 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
josh@...htriplett.org, dvhltc@...ibm.com, niv@...ibm.com,
tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: [PATCH tip/core/rcu 0/4] rcu: preemptible expedited grace periods
	and cleanups

This patchset includes some cleanups, improved diagnostics, and an
implementation of expedited preemptible RCU grace periods that is
actually expedited:
1.  Rename the "quiet" functions.  The name rcu_quiet_cpu() was
    clear enough, but now that we have four flavors of quietness
    with more on the way, we need more meaningful names.

2.  Enable a fourth level of the rcu_node hierarchy.  No, we really
    don't need the ability to run on million-CPU SMP systems at the
    moment, but the additional level allows more vigorous
    stress-testing on 16-CPU systems (the fanout arithmetic is
    sketched below).

3.  Add an implementation of synchronize_rcu_expedited() that
    actually expedites preemptible-RCU grace periods (a usage
    sketch appears below).

4.  Make RCU_CPU_STALL_DETECTOR default to on (see the Kconfig
    sketch below).  If this works well, the #ifdefs will eventually
    be removed, reducing the number of configurations that need
    testing.

This patchset is intended for 2.6.34, but has passed sufficient testing
that it could safely be included in 2.6.33, if desired.
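
For item #2, the capacity arithmetic is easy to check.  The sketch
below is plain user-space C, not kernel code; the fanout of 64 matches
the CONFIG_RCU_FANOUT default on 64-bit builds, and each additional
level of the rcu_node hierarchy multiplies the maximum CPU count by
the fanout:

	#include <stdio.h>

	int main(void)
	{
		const long fanout = 64;	/* CONFIG_RCU_FANOUT, 64-bit default */
		long cpus = 1;
		int level;

		for (level = 1; level <= 4; level++) {
			cpus *= fanout;
			printf("%d level(s): up to %ld CPUs\n", level, cpus);
		}
		return 0;	/* four levels: 64^4 = 16,777,216 CPUs */
	}

Conversely, building with CONFIG_RCU_FANOUT=2 requires all four levels
to cover a 16-CPU machine (2^4 = 16), which is what makes the extra
level handy for stress-testing on small systems.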
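
For item #3, the call signature is identical to synchronize_rcu();
only the grace-period latency differs.  A minimal updater sketch,
with a made-up struct foo purely for illustration:

	#include <linux/rculist.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {			/* hypothetical RCU-protected element */
		struct list_head list;
		int data;
	};

	/* Caller holds the update-side lock protecting the list. */
	static void remove_foo(struct foo *p)
	{
		list_del_rcu(&p->list);		/* readers might still hold p */
		synchronize_rcu_expedited();	/* wait for them, in a hurry */
		kfree(p);			/* safe: no reader can reach p */
	}

As with the sched and bh flavors, the expedited primitive trades CPU
overhead for latency, so it is intended for rare, latency-sensitive
updates rather than as a drop-in replacement for synchronize_rcu().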
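
For item #4, the change amounts to flipping a Kconfig default in
lib/Kconfig.debug, roughly as follows (paraphrased; see the patch
itself for the exact dependencies and help text):

	config RCU_CPU_STALL_DETECTOR
		bool "Check for stalled CPUs delaying RCU grace periods"
		depends on TREE_RCU || TREE_PREEMPT_RCU
		default y
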
b/kernel/rcutorture.c | 34 +++++--
b/kernel/rcutree.c | 66 ++++++++-------
b/kernel/rcutree.h | 3
b/kernel/rcutree_plugin.h | 13 +--
b/kernel/rcutree_trace.c | 11 +-
b/lib/Kconfig.debug | 3
kernel/rcutree.c | 16 ++-
kernel/rcutree.h | 51 ++++++++++-
kernel/rcutree_plugin.h | 198 +++++++++++++++++++++++++++++++++++++++++++---
9 files changed, 324 insertions(+), 71 deletions(-)