Message-ID: <20101106193456.GA14197@Krystal>
Date: Sat, 6 Nov 2010 15:34:56 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Joe Korty <joe.korty@...r.com>, fweisbec@...il.com,
dhowells@...hat.com, loic.minier@...aro.org,
dhaval.giani@...il.com, tglx@...utronix.de, peterz@...radead.org,
linux-kernel@...r.kernel.org, josh@...htriplett.org
Subject: Re: [PATCH] a local-timer-free version of RCU
* Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> On Fri, Nov 05, 2010 at 05:00:59PM -0400, Joe Korty wrote:
[...]
> > + *
> > + * RCU read-side critical sections may be nested. Any deferred actions
> > + * will be deferred until the outermost RCU read-side critical section
> > + * completes.
> > + *
> > + * It is illegal to block while in an RCU read-side critical section.
> > + */
> > +void __rcu_read_lock(void)
> > +{
> > + struct rcu_data *r;
> > +
> > + r = &per_cpu(rcu_data, smp_processor_id());
> > + if (r->nest_count++ == 0)
> > + /*
> > + * Set the flags value to show that we are in
> > + * a read side critical section. The code starting
> > + * a batch uses this to determine if a processor
> > + * needs to participate in the batch. Including
> > + * a sequence allows the remote processor to tell
> > + * that a critical section has completed and another
> > + * has begun.
> > + */
> > + r->flags = IN_RCU_READ_LOCK | (r->sequence++ << 2);
>
> It seems to me that we need a memory barrier here -- what am I missing?
Agreed, I spotted it too. One more is needed; see below.
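Concretely, the missing barrier would sit right after the flags store. Here is a user-space sketch only, not the actual kernel patch: per_cpu() is replaced by a single static slot, smp_mb() by a C11 seq_cst fence, and the value of IN_RCU_READ_LOCK is assumed.

```c
#include <stdatomic.h>
#include <assert.h>

#define IN_RCU_READ_LOCK 1	/* assumed flag value, low 2 bits are flags */

struct rcu_data {
	int nest_count;
	int flags;
	int sequence;
};

/* Stand-in for per_cpu(rcu_data, smp_processor_id()): one fake CPU slot. */
static struct rcu_data rcu_data_cpu0;

static void sketch_rcu_read_lock(void)
{
	struct rcu_data *r = &rcu_data_cpu0;

	if (r->nest_count++ == 0) {
		/* Publish "in a read-side C.S." plus the sequence number. */
		r->flags = IN_RCU_READ_LOCK | (r->sequence++ << 2);
		/*
		 * Proposed fix: full barrier so the flags store is
		 * ordered before the critical section's memory
		 * accesses (smp_mb() in the kernel).
		 */
		atomic_thread_fence(memory_order_seq_cst);
	}
}
```

Nested calls only bump nest_count; the flags word is written (and the barrier issued) on the outermost lock only.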
>
> > +}
> > +EXPORT_SYMBOL(__rcu_read_lock);
> > +
> > +/**
> > + * rcu_read_unlock - marks the end of an RCU read-side critical section.
> > + * Check if an RCU batch was started while we were in the critical
> > + * section. If so, call rcu_quiescent() to join the rendezvous.
> > + *
> > + * See rcu_read_lock() for more information.
> > + */
> > +void __rcu_read_unlock(void)
> > +{
> > + struct rcu_data *r;
> > + int cpu, flags;
> > +
Another memory barrier would be needed here to ensure that the memory accesses
performed within the critical section are not reordered with respect to the
nest_count decrement.
> > + cpu = smp_processor_id();
> > + r = &per_cpu(rcu_data, cpu);
> > + if (--r->nest_count == 0) {
> > + flags = xchg(&r->flags, 0);
> > + if (flags & DO_RCU_COMPLETION)
> > + rcu_quiescent(cpu);
> > + }
> > +}
> > +EXPORT_SYMBOL(__rcu_read_unlock);
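The unlock-side barrier would go just before the nest_count decrement. Again a self-contained user-space sketch, not the kernel patch: per_cpu() becomes a static slot, smp_mb() a C11 fence, xchg() an atomic_exchange, rcu_quiescent() a counting stub, and the flag values are assumed.

```c
#include <stdatomic.h>
#include <assert.h>

#define IN_RCU_READ_LOCK  1	/* assumed flag values, low 2 bits are flags */
#define DO_RCU_COMPLETION 2

struct rcu_data {
	int nest_count;
	_Atomic int flags;	/* atomic so xchg() maps to atomic_exchange */
	int sequence;
};

/* Stand-in for per_cpu(rcu_data, cpu): one fake CPU slot. */
static struct rcu_data rcu_data_cpu0;
static int quiescent_calls;	/* counts rcu_quiescent() invocations */

static void sketch_rcu_quiescent(void)
{
	quiescent_calls++;
}

static void sketch_rcu_read_unlock(void)
{
	struct rcu_data *r = &rcu_data_cpu0;
	int flags;

	/*
	 * Proposed fix: full barrier so the critical section's memory
	 * accesses cannot be reordered past the nest_count decrement
	 * (smp_mb() in the kernel).
	 */
	atomic_thread_fence(memory_order_seq_cst);
	if (--r->nest_count == 0) {
		flags = atomic_exchange(&r->flags, 0);	/* kernel xchg() */
		if (flags & DO_RCU_COMPLETION)
			sketch_rcu_quiescent();
	}
}
```

Only the outermost unlock clears the flags word; if a batch set DO_RCU_COMPLETION while we were inside the critical section, the exchange observes it exactly once and reports the quiescent state.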
Thanks,
Mathieu
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com