Message-Id: <20161025173946.GA32045@linux.vnet.ibm.com>
Date: Tue, 25 Oct 2016 10:39:46 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: riel@...hat.com
Cc: linux-kernel@...r.kernel.org
Subject: Re: Untested patch to recheck idle state for expedited grace periods
On Tue, Oct 11, 2016 at 09:15:40AM -0700, Paul E. McKenney wrote:
> On Tue, Oct 11, 2016 at 06:28:49AM -0700, Paul E. McKenney wrote:
> > Hello, Rik,
> >
> > And it turns out that I did not in fact do the recheck at IPI time.
> > The (untested) patch below is an alleged fix. Thoughts?
>
> And it passes modest rcutorture testing, for whatever that might be
> worth.
How does it do at your end? If I don't hear otherwise, I will assume
that it doesn't help and will mothball it.
Thanx, Paul
> > ------------------------------------------------------------------------
> >
> > commit e53e0b3e7b3c783962f9461bcb9aa8bc3e3a8688
> > Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > Date: Tue Oct 11 06:09:59 2016 -0700
> >
> > rcu: Make expedited grace periods recheck dyntick idle state
> >
> > Expedited grace periods check dyntick-idle state, and avoid sending
> > IPIs to idle CPUs, including those running guest OSes, and, on NOHZ_FULL
> > kernels, nohz_full CPUs. However, the kernel has been observed checking
> > a CPU while it was non-idle, but sending the IPI after it has gone
> > idle. This commit therefore rechecks idle state immediately before
> > sending the IPI, refraining from IPIing CPUs that have since gone idle.
> >
> > Reported-by: Rik van Riel <riel@...hat.com>
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> >
> > diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> > index e99a5234d9ed..fe98dd24adf8 100644
> > --- a/kernel/rcu/tree.h
> > +++ b/kernel/rcu/tree.h
> > @@ -404,6 +404,7 @@ struct rcu_data {
> >  	atomic_long_t exp_workdone1;	/* # done by others #1. */
> >  	atomic_long_t exp_workdone2;	/* # done by others #2. */
> >  	atomic_long_t exp_workdone3;	/* # done by others #3. */
> > +	int exp_dynticks_snap;		/* Double-check need for IPI. */
> >  
> >  	/* 7) Callback offloading. */
> >  #ifdef CONFIG_RCU_NOCB_CPU
> > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > index 24343eb87b58..d3053e99fdb6 100644
> > --- a/kernel/rcu/tree_exp.h
> > +++ b/kernel/rcu/tree_exp.h
> > @@ -358,8 +358,10 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> >  			struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
> >  			struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
> >  
> > +			rdp->exp_dynticks_snap =
> > +				atomic_add_return(0, &rdtp->dynticks);
> >  			if (raw_smp_processor_id() == cpu ||
> > -			    !(atomic_add_return(0, &rdtp->dynticks) & 0x1) ||
> > +			    !(rdp->exp_dynticks_snap & 0x1) ||
> >  			    !(rnp->qsmaskinitnext & rdp->grpmask))
> >  				mask_ofl_test |= rdp->grpmask;
> >  		}
> > @@ -377,9 +379,17 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> >  		/* IPI the remaining CPUs for expedited quiescent state. */
> >  		for_each_leaf_node_possible_cpu(rnp, cpu) {
> >  			unsigned long mask = leaf_node_cpu_bit(rnp, cpu);
> > +			struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
> > +			struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
> > +
> >  			if (!(mask_ofl_ipi & mask))
> >  				continue;
> >  retry_ipi:
> > +			if (atomic_add_return(0, &rdtp->dynticks) !=
> > +			    rdp->exp_dynticks_snap) {
> > +				mask_ofl_test |= mask;
> > +				continue;
> > +			}
> >  			ret = smp_call_function_single(cpu, func, rsp, 0);
> >  			if (!ret) {
> >  				mask_ofl_ipi &= ~mask;
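
For illustration only, the snapshot-and-recheck pattern the patch applies can be
sketched in userspace C, with a C11 atomic counter standing in for the kernel's
per-CPU ->dynticks counter (even value: idle, odd value: non-idle).  The names
cpu_state, needs_ipi, and maybe_send_ipi are invented for this sketch and are
not kernel APIs:

	/* Minimal userspace model of the snapshot-and-recheck logic. */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct cpu_state {
		atomic_int dynticks;	/* Even: idle; odd: non-idle. */
		int exp_dynticks_snap;	/* Snapshot from the selection pass. */
	};

	/* Selection pass: snapshot the counter; IPI only non-idle CPUs.
	 * (The kernel uses atomic_add_return(0, ...) for full ordering;
	 * a plain load suffices for this single-threaded illustration.) */
	static bool needs_ipi(struct cpu_state *cs)
	{
		cs->exp_dynticks_snap = atomic_load(&cs->dynticks);
		return cs->exp_dynticks_snap & 0x1;
	}

	/* IPI pass: recheck immediately before sending the IPI. */
	static bool maybe_send_ipi(struct cpu_state *cs)
	{
		if (atomic_load(&cs->dynticks) != cs->exp_dynticks_snap) {
			/* Counter changed, so the CPU has passed through
			 * idle since the snapshot: no IPI needed. */
			return false;
		}
		printf("sending IPI\n");
		return true;
	}

	int main(void)
	{
		struct cpu_state cs = { .dynticks = 1 };	/* Non-idle at snapshot. */

		if (needs_ipi(&cs)) {
			atomic_fetch_add(&cs.dynticks, 1);	/* CPU goes idle in the window. */
			if (!maybe_send_ipi(&cs))
				printf("CPU went idle, IPI skipped\n");
		}
		return 0;
	}

The second load can differ from the snapshot only if the CPU has gone through an
idle period since the selection pass, which is itself a quiescent state, so the
IPI can safely be skipped and the CPU reported via mask_ofl_test instead.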