Message-ID: <1332748484.16159.61.camel@twins>
Date: Mon, 26 Mar 2012 09:54:44 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: paulmck@...ux.vnet.ibm.com
Cc: linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
dipankar@...ibm.com, akpm@...ux-foundation.org,
mathieu.desnoyers@...icios.com, josh@...htriplett.org,
niv@...ibm.com, tglx@...utronix.de, rostedt@...dmis.org,
Valdis.Kletnieks@...edu, dhowells@...hat.com,
eric.dumazet@...il.com, darren@...art.com, fweisbec@...il.com,
patches@...aro.org, torvalds@...ux-foundation.org
Subject: Re: [PATCH RFC] rcu: Make __rcu_read_lock() inlinable
On Sun, 2012-03-25 at 13:52 -0700, Paul E. McKenney wrote:
> The preemptible-RCU implementations of __rcu_read_lock() have not been
> inlinable due to task_struct references that ran afoul of include-file
> dependencies. Fix this (as suggested by Linus) by moving the task_struct
> ->rcu_read_lock_nesting field to a per-CPU variable that is saved and
> restored at context-switch time. With this change, __rcu_read_lock()
> references only that new per-CPU variable, and so may be safely
> inlined. This change also allows some code that was duplicated in
> kernel/rcutree_plugin.h and kernel/rcutiny_plugin.h to be merged into
> include/linux/rcupdate.h.
>
> This same approach unfortunately cannot be used on __rcu_read_unlock()
> because it also references the task_struct ->rcu_read_unlock_special
> field, to which cross-task access is required by rcu_boost(). This
> function will be handled in a separate commit, if need be.
>
> The TREE_RCU variant passes modest rcutorture runs, while TINY_RCU still
> has a few bugs. Peter Zijlstra might have some thoughts on hooking into
> the scheduler. This approach disallows use of RCU from within the
> architecture-specific switch_to() function, which probably runs afoul of
> tracing for at least some architectures. There are probably still a few
> other bugs as well.
>
> TREE_RCU should be OK for experimental usage.
Right, so I really dislike adding this cache miss to the context-switch
path. That said, rcu_read_lock() is used often enough that the savings
there might just come out in favour of this... but it would be very nice
to have some numbers.
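For reference, the win Paul is after is that the preemptible-RCU
rcu_read_lock() fast path should then reduce to a single per-cpu
increment, something like this (a sketch going by the patch description,
not the actual patch):

static inline void __rcu_read_lock(void)
{
	__this_cpu_inc(rcu_read_lock_nesting);
	barrier();	/* keep the critical section after the increment */
}

instead of an out-of-line call that dereferences current.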
Also,
> /*
> + * Save the outgoing task's value for rcu_read_lock_nesting just before
> + * a context switch. There can be no process-state RCU read-side
> + * critical sections between the call to rcu_switch_from() and to
> + * rcu_switch_to(). Interrupt-level RCU read-side critical sections are
> + * OK because rcu_read_unlock_special() takes early exits when called
> + * at interrupt level.
> + */
> +void rcu_switch_from(void)
> +{
> + current->rcu_read_lock_nesting_save =
> + __this_cpu_read(rcu_read_lock_nesting);
> + barrier();
> + __this_cpu_write(rcu_read_lock_nesting, 0);
> +}
Since rcu_switch_to() will again write rcu_read_lock_nesting, what's the
point of setting it to zero?
Also, that barrier(): there's a clear dependency between the operations,
since the write targets the same per-cpu variable the read just loaded
from, so how can the compiler mess that up?
> +/*
> + * Restore the incoming task's value for rcu_read_lock_nesting at the
> + * end of a context switch.
> */
> +void rcu_switch_to(void)
> {
> + __this_cpu_write(rcu_read_lock_nesting,
> + current->rcu_read_lock_nesting_save);
> + barrier();
> + current->rcu_read_lock_nesting_save = 0;
> }
Similarly, a future rcu_switch_from() will again overwrite
current->rcu_read_lock_nesting_save, so what's the point of clearing it?
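IOW, as far as I can see the pair reduces to (sketch, with the clears
and the barrier()s questioned above dropped):

void rcu_switch_from(void)
{
	current->rcu_read_lock_nesting_save =
		__this_cpu_read(rcu_read_lock_nesting);
}

void rcu_switch_to(void)
{
	__this_cpu_write(rcu_read_lock_nesting,
			 current->rcu_read_lock_nesting_save);
}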
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2051,7 +2051,9 @@ context_switch(struct rq *rq, struct task_struct *prev,
> #endif
>
> /* Here we just switch the register state and the stack. */
> + rcu_switch_from();
> switch_to(prev, next, prev);
> + rcu_switch_to();
>
> barrier();
> /*
So why not save one call and do:
switch_to(prev, next, prev);
rcu_switch_to(prev, next);
and have
void rcu_switch_to(struct task_struct *prev, struct task_struct *next)
{
	prev->rcu_read_lock_nesting_save =
		__this_cpu_read(rcu_read_lock_nesting);
	__this_cpu_write(rcu_read_lock_nesting,
			 next->rcu_read_lock_nesting_save);
}
preferably as an inline function so we can avoid all calls.