Message-ID: <20101106194219.GA24135@Krystal>
Date: Sat, 6 Nov 2010 15:42:19 -0400
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Joe Korty <joe.korty@...r.com>, fweisbec@...il.com,
dhowells@...hat.com, loic.minier@...aro.org,
dhaval.giani@...il.com, tglx@...utronix.de, peterz@...radead.org,
linux-kernel@...r.kernel.org, josh@...htriplett.org
Subject: Re: [PATCH] a local-timer-free version of RCU
* Mathieu Desnoyers (mathieu.desnoyers@...icios.com) wrote:
> > > +/**
> > > + * rcu_read_unlock - marks the end of an RCU read-side critical section.
> > > + * Check whether an RCU batch was started while we were in the critical
> > > + * section. If so, call rcu_quiescent() to join the rendezvous.
> > > + *
> > > + * See rcu_read_lock() for more information.
> > > + */
> > > +void __rcu_read_unlock(void)
> > > +{
> > > + struct rcu_data *r;
> > > + int cpu, flags;
> > > +
>
> Another memory barrier would be needed here to ensure that the memory accesses
> performed within the C.S. are not reordered wrt nest_count decrement.
Never mind. xchg() acts as a full memory barrier, and nest_count is only ever
touched by the local CPU, so no extra memory barrier is needed here.
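
To illustrate the point, here is a minimal userspace sketch of the unlock
path using C11 atomics. The names (rcu_data, DO_RCU_COMPLETION,
rcu_quiescent) mirror the quoted patch, but the single-CPU globals and the
atomic_exchange() stand-in for the kernel's xchg() are simplifications for
illustration, not the actual kernel code:

```c
/* Sketch only: models the unlock-path ordering argument above.
 * atomic_exchange() with the default seq_cst ordering is a full
 * barrier, like kernel xchg(), so accesses made inside the critical
 * section cannot be reordered past the flag handoff. */
#include <assert.h>
#include <stdatomic.h>

#define DO_RCU_COMPLETION 1

struct rcu_data {
	int nest_count;       /* only ever touched by the owning CPU */
	atomic_int flags;     /* may be set remotely to request a report */
};

static struct rcu_data rd;
static int quiescent_calls;   /* counts rcu_quiescent() invocations */

static void rcu_quiescent(void)
{
	quiescent_calls++;
}

static void rcu_read_lock_sketch(void)
{
	rd.nest_count++;      /* no barrier needed: CPU-local counter */
}

static void rcu_read_unlock_sketch(void)
{
	if (--rd.nest_count == 0) {
		/* Full-barrier exchange: clears the flags word and
		 * returns the old value atomically. */
		int flags = atomic_exchange(&rd.flags, 0);

		if (flags & DO_RCU_COMPLETION)
			rcu_quiescent();
	}
}
```

Usage: if a grace-period machinery were to set DO_RCU_COMPLETION in
rd.flags while a reader is inside the critical section, the outermost
unlock observes it via the exchange and reports a quiescent state exactly
once; nested unlocks skip the exchange entirely.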
Thanks,
Mathieu
>
> > > + cpu = smp_processor_id();
> > > + r = &per_cpu(rcu_data, cpu);
> > > + if (--r->nest_count == 0) {
> > > + flags = xchg(&r->flags, 0);
> > > + if (flags & DO_RCU_COMPLETION)
> > > + rcu_quiescent(cpu);
> > > + }
> > > +}
> > > +EXPORT_SYMBOL(__rcu_read_unlock);
>
> Thanks,
>
> Mathieu
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com