Message-ID: <20101220165118.GI2143@linux.vnet.ibm.com>
Date: Mon, 20 Dec 2010 08:51:18 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Lai Jiangshan <laijs@...fujitsu.com>
Cc: linux-kernel@...r.kernel.org, mingo@...e.hu, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
dhowells@...hat.com, eric.dumazet@...il.com, darren@...art.com,
Frederic Weisbecker <fweisbec@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH RFC tip/core/rcu 15/20] rcu: Keep gpnum and completed
fields synchronized
On Mon, Dec 20, 2010 at 10:13:35AM +0800, Lai Jiangshan wrote:
> On 12/18/2010 04:54 AM, Paul E. McKenney wrote:
> > From: Frederic Weisbecker <fweisbec@...il.com>
> >
> > When a CPU that was in an extended quiescent state wakes
> > up and catches up with grace periods that remote CPUs
> > completed on its behalf, we update the completed field
> > but not the gpnum field, which then keeps a stale value
> > from an earlier grace period.
> >
> > Later, note_new_gpnum() will interpret the mismatch between
> > the local CPU's and the node's grace period IDs as a new grace
> > period to handle and will then start hunting for a quiescent state.
> >
> > But if every grace period has already been completed, this
> > interpretation is wrong, and we'll be stuck in clusters
> > of spurious softirqs because rcu_report_qs_rdp() will drive
> > this broken state into an infinite loop.
> >
> > The solution, as suggested by Lai Jiangshan, is to ensure that
> > the gpnum and completed fields are kept synchronized when we catch
> > up with grace periods completed on our behalf by other CPUs.
> > This way we won't start noting spurious new grace periods.
> >
> > Suggested-by: Lai Jiangshan <laijs@...fujitsu.com>
> > Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
> > Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > Cc: Ingo Molnar <mingo@...e.hu>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> > Cc: Steven Rostedt <rostedt@...dmis.org>
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > ---
> > kernel/rcutree.c | 9 +++++++++
> > 1 files changed, 9 insertions(+), 0 deletions(-)
> >
> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index 916f42b..8105271 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -680,6 +680,15 @@ __rcu_process_gp_end(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_dat
> > rdp->completed = rnp->completed;
> >
> > /*
> > + * If we were in an extended quiescent state, we may have
> > + * missed some grace periods that other CPUs took care of
> > + * on our behalf. Catch up with this state to avoid noting
> > + * spurious new grace periods.
> > + */
> > + if (rdp->completed > rdp->gpnum)
> > + rdp->gpnum = rdp->completed;
>
> Need to use ULONG_CMP_LT(rdp->gpnum, rdp->completed) instead.
You are quite correct! And the next patch in this series made exactly
that change.
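
Roughly, the follow-up presumably just swaps in the wrap-safe macro,
something like this (a sketch, not the literal next patch):

	/*
	 * If we were in an extended quiescent state, we may have
	 * missed some grace periods that other CPUs took care of
	 * on our behalf.  Catch up with this state to avoid noting
	 * spurious new grace periods.
	 */
	if (ULONG_CMP_LT(rdp->gpnum, rdp->completed))
		rdp->gpnum = rdp->completed;
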
Thanx, Paul
> > +
> > + /*
> > * If another CPU handled our extended quiescent states and
> > * we have no more grace period to complete yet, then stop
> > * chasing quiescent states.
>
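
For anyone reading along, the reason the plain ">" is unsafe is that
->gpnum and ->completed are free-running unsigned counters, so the
comparison inverts once they straddle a wrap.  A minimal userspace
sketch (the ULONG_CMP_LT() definition below is assumed to match the
one in rcupdate.h):

	#include <limits.h>
	#include <stdio.h>

	/* Wrap-safe "a is before b" test for free-running counters. */
	#define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))

	int main(void)
	{
		unsigned long completed = 5;		/* wrapped past 0 */
		unsigned long gpnum = ULONG_MAX - 2;	/* stale, pre-wrap */

		/* Plain ">" misses the catch-up across the wrap. */
		printf("completed > gpnum:              %d\n",
		       completed > gpnum);

		/* Modular comparison still sees gpnum lagging behind. */
		printf("ULONG_CMP_LT(gpnum, completed): %d\n",
		       ULONG_CMP_LT(gpnum, completed));
		return 0;
	}

Here the plain comparison prints 0 while the modular one prints 1,
which is exactly the catch-up case the check needs to handle.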