Message-Id: <20170116112239.GL5238@linux.vnet.ibm.com>
Date: Mon, 16 Jan 2017 03:22:39 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Josh Triplett <josh@...htriplett.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH tip/core/rcu 1/6] rcu: Abstract the dynticks
momentary-idle operation

On Sun, Jan 15, 2017 at 11:39:51PM -0800, Josh Triplett wrote:
> On Sat, Jan 14, 2017 at 12:54:40AM -0800, Paul E. McKenney wrote:
> > This commit is the first step towards full abstraction of all accesses to
> > the ->dynticks counter, implementing the previously open-coded atomic add
> > of two in a new rcu_dynticks_momentary_idle() function. This abstraction
> > will ease changes to the ->dynticks counter operation.
> >
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>
> This change has an additional effect not documented in the commit
> message: it eliminates the smp_mb__before_atomic and
> smp_mb__after_atomic calls. Can you please document that in the commit
> message, and explain why that doesn't cause a problem?
The trick is that the old code used the non-value-returning atomic_add(),
which does not imply ordering, hence the smp_mb__before_atomic() and
smp_mb__after_atomic() calls. The new code uses atomic_add_return(),
which does return a value, and therefore implies full ordering in and
of itself.
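
To make the ordering difference concrete, here is an illustrative
side-by-side sketch (not part of the patch; variable names follow the
quoted code):

	/* Old idiom: atomic_add() implies no ordering, so explicit
	 * barriers are required on both sides. */
	smp_mb__before_atomic();	/* Order earlier accesses before QS. */
	atomic_add(2, &rdtp->dynticks);	/* Non-value-returning: unordered. */
	smp_mb__after_atomic();		/* Order later accesses after QS. */

	/* New idiom: value-returning atomic RMW operations such as
	 * atomic_add_return() imply a full smp_mb() both before and
	 * after the operation, per Documentation/memory-barriers.txt,
	 * so no explicit barriers are needed. */
	special = atomic_add_return(2, &rdtp->dynticks);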
How would you like me to proceed?
							Thanx, Paul
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -281,6 +281,19 @@ static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {
> > #endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
> > };
> >
> > +/*
> > + * Do a double-increment of the ->dynticks counter to emulate a
> > + * momentary idle-CPU quiescent state.
> > + */
> > +static void rcu_dynticks_momentary_idle(void)
> > +{
> > + struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
> > + int special = atomic_add_return(2, &rdtp->dynticks);
> > +
> > + /* It is illegal to call this from idle state. */
> > + WARN_ON_ONCE(!(special & 0x1));
> > +}
> > +
> > DEFINE_PER_CPU_SHARED_ALIGNED(unsigned long, rcu_qs_ctr);
> > EXPORT_PER_CPU_SYMBOL_GPL(rcu_qs_ctr);
> >
> > @@ -300,7 +313,6 @@ EXPORT_PER_CPU_SYMBOL_GPL(rcu_qs_ctr);
> > static void rcu_momentary_dyntick_idle(void)
> > {
> > struct rcu_data *rdp;
> > - struct rcu_dynticks *rdtp;
> > int resched_mask;
> > struct rcu_state *rsp;
> >
> > @@ -327,10 +339,7 @@ static void rcu_momentary_dyntick_idle(void)
> > * quiescent state, with no need for this CPU to do anything
> > * further.
> > */
> > - rdtp = this_cpu_ptr(&rcu_dynticks);
> > - smp_mb__before_atomic(); /* Earlier stuff before QS. */
> > - atomic_add(2, &rdtp->dynticks); /* QS. */
> > - smp_mb__after_atomic(); /* Later stuff after QS. */
> > + rcu_dynticks_momentary_idle();
> > break;
> > }
> > }
> > --
> > 2.5.2
> >
>