Message-ID: <20130105174844.GA14172@linux.vnet.ibm.com>
Date: Sat, 5 Jan 2013 09:48:44 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
dhowells@...hat.com, edumazet@...gle.com, darren@...art.com,
fweisbec@...il.com, sbw@....edu, patches@...aro.org
Subject: [PATCH tip/core/rcu 0/14] RCU idle/no-CB changes for 3.9

Hello!

This series contains changes to RCU_FAST_NO_HZ idle entry/exit and also
removes restrictions on no-CBs CPUs. Some of these commits are still
rather experimental, so you should avoid using these patches unless you
would like to help debug them. ;-)

1.	Tag callback lists with the grace-period number that they are
	waiting for. This change enables a number of optimizations
	for RCU_FAST_NO_HZ, and though it adds a bit of code, it greatly
	simplifies RCU's callback handling. (A toy sketch of this
	numbering appears after this list.)
2. Trace callback acceleration (which is when RCU notices that a
group of callbacks doesn't actually need to wait as long as it
previously thought).
3. Remove restrictions on no-CBs CPUs. This patch is probably the
highest-risk of the group.
4. Allow some control of no-CBs CPUs at kernel-build time. The option
of most interest is probably the one that makes -all- CPUs be
no-CBs CPUs.
5. Distinguish the no-CBs kthreads for the various RCU flavors.
Without this patch, CPU 0 would have up to three kthreads all
named "rcuo0", which is less than optimal.
6. Export RCU_FAST_NO_HZ parameters to sysfs to allow run-time
adjustment.
7. Re-introduce callback acceleration during grace-period cleanup.
Now that the callbacks are associated with specific grace periods,
such acceleration is idempotent, and it is now safe to accelerate
more than needed. (In contrast, in the past, too-frequent callback
acceleration resulted in infrequent RCU failures.)
8. Use the newly numbered callbacks to greatly reduce the CPU overhead
incurred at idle entry by RCU_FAST_NO_HZ. The fact that the
callbacks are now numbered means that instead of repeatedly
cranking the RCU state machine to try to get all callbacks
invoked, we can instead rely on the numbering so that the CPU
can take full advantage of any grace periods that elapse while
it is asleep. CPUs with callbacks still have limited sleep times,
especially if they have at least one non-lazy callback queued.
9-14.	Allow CPUs to make known their need for future grace periods,
	a capability that is also used to reduce the need for frenetic
	RCU state-machine cranking upon RCU_FAST_NO_HZ entry to idle.
	(A toy sketch of this bookkeeping also appears after this list.)
9.	Move the release of the root rcu_node structure's ->lock
	to the end of rcu_start_gp().
10.	Repurpose the no-CBs CPUs' grace-period event tracing to cover
	future grace periods, which share the no-CBs grace-period
	mechanism.
11. Move the release of the root rcu_node structure's ->lock
to rcu_start_gp()'s callers.
12. Rename the rcu_node ->n_nocb_gp_requests field to
->need_future_gp.
13.	Abstract rcu_start_future_gp() from rcu_nocb_wait_gp()
	so that RCU_FAST_NO_HZ can use the no-CBs CPUs' mechanism
	for allowing a CPU to record its need for future grace
	periods.
14.	Make rcu_accelerate_cbs() note the need for future
	grace periods, thus avoiding the delays in starting grace
	periods that currently occur because the CPUs needing
	those grace periods are out of action when the previous
	grace period ends.
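
As an aside for readers new to this area, here is a minimal userspace
sketch (NOT kernel code) of the callback-numbering idea behind #1, #2,
#7, and #8. Every identifier in it (toy_cb, toy_call_rcu, toy_accelerate,
toy_advance, gp_started, gp_completed) is invented for illustration only.
The point is that once each callback carries the number of the grace
period it must wait for, "acceleration" is just lowering that number to
the earliest legal value (hence idempotent), and a CPU waking from idle
can invoke everything whose number has already been reached without
repeatedly cranking the state machine.

/* Toy model only: not the kernel's data structures or API. */
#include <stdio.h>
#include <stdlib.h>

struct toy_cb {
	unsigned long wait_gp;		/* grace period this callback waits for */
	void (*func)(void *);
	void *arg;
	struct toy_cb *next;
};

static struct toy_cb *pending;		/* callbacks not yet ready to invoke */
static unsigned long gp_started;	/* number of grace periods started */
static unsigned long gp_completed;	/* number of grace periods completed */

static void toy_call_rcu(void (*func)(void *), void *arg)
{
	struct toy_cb *cb = malloc(sizeof(*cb));

	cb->func = func;
	cb->arg = arg;
	/* Conservative initial tag: safe even if a GP is already in flight. */
	cb->wait_gp = gp_started + 2;
	cb->next = pending;
	pending = cb;
}

/* Lower each tag to the earliest legal value; running this twice is a no-op. */
static void toy_accelerate(void)
{
	struct toy_cb *cb;

	for (cb = pending; cb; cb = cb->next)
		if (cb->wait_gp > gp_started + 1)
			cb->wait_gp = gp_started + 1;
}

/* On wakeup, invoke whatever the grace periods that elapsed have made ready. */
static void toy_advance(void)
{
	struct toy_cb **cbp = &pending;

	while (*cbp) {
		struct toy_cb *cb = *cbp;

		if (cb->wait_gp <= gp_completed) {
			*cbp = cb->next;
			cb->func(cb->arg);
			free(cb);
		} else {
			cbp = &cb->next;
		}
	}
}

static void show(void *arg)
{
	(void)arg;
	printf("callback invoked after grace period %lu\n", gp_completed);
}

int main(void)
{
	toy_call_rcu(show, NULL);	/* queued with conservative tag 2 */
	toy_accelerate();		/* tag lowered to 1, the earliest legal GP */
	gp_started = gp_completed = 1;	/* one full grace period elapses */
	toy_advance();			/* tag 1 <= completed 1, so it is invoked */
	return 0;
}

(Compile with "gcc -Wall toy.c" and run to watch the single callback fire
once the completed grace-period count reaches its tag.)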
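
Along the same lines, here is an equally small and equally hypothetical
sketch of the future-grace-period bookkeeping behind #9-#14: a CPU that
is about to go idle records which grace period it will need, and
grace-period cleanup consults that record and starts the next grace
period immediately rather than waiting for the sleeping CPU to come back
and ask for it. The two-slot array and all names below are inventions of
this sketch, not the kernel's layout (the real version is what
rcu_start_future_gp() and the rcu_node ->need_future_gp field provide).

/* Toy model only: not the kernel's data structures or API. */
#include <stdbool.h>
#include <stdio.h>

static unsigned long gp_completed;	/* last finished grace period */
static unsigned long gp_current;	/* grace period in progress, 0 if none */
static bool need_future_gp[2];		/* toy record of needed GPs, indexed by gp & 0x1 */

/* Called by a CPU on its way to idle: "I will need grace period gp". */
static void toy_record_future_need(unsigned long gp)
{
	need_future_gp[gp & 0x1] = true;
}

/* Grace-period cleanup: retire the current GP and start the next if needed. */
static void toy_gp_cleanup(void)
{
	unsigned long next;

	gp_completed = gp_current;
	gp_current = 0;
	next = gp_completed + 1;
	if (need_future_gp[next & 0x1]) {
		need_future_gp[next & 0x1] = false;
		gp_current = next;	/* no need to wake the idle CPU to ask */
	}
}

int main(void)
{
	gp_current = 1;			/* grace period 1 is in flight */
	toy_record_future_need(2);	/* an idling CPU needs grace period 2 */
	toy_gp_cleanup();		/* GP 1 ends and GP 2 starts right away */
	printf("grace period now in progress: %lu\n", gp_current);
	return 0;
}
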
Thanx, Paul
b/include/linux/rcupdate.h | 1
b/include/trace/events/rcu.h | 77 +++
b/init/Kconfig | 17
b/kernel/rcutree.c | 475 ++++++++++++++++++-----
b/kernel/rcutree.h | 39 -
b/kernel/rcutree_plugin.h | 859 ++++++++++++++++++-------------------------
b/kernel/rcutree_trace.c | 2
7 files changed, 848 insertions(+), 622 deletions(-)
--