Message-ID: <20191009142508.GE143258@google.com>
Date: Wed, 9 Oct 2019 10:25:08 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Marco Elver <elver@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>, rcu@...r.kernel.org
Subject: Re: [PATCH] rcu: Avoid to modify mask_ofl_ipi in
sync_rcu_exp_select_node_cpus()
On Wed, Oct 09, 2019 at 10:20:17AM +0800, Boqun Feng wrote:
[snip]
> > Boqun, are you going to post another patch which just uses mask_ofl_ipi
> > in the for_each(..) loop (without using _snap), as Paul suggested?
> >
>
> IIUC, Paul already has this fix, along with the other ->expmask fixes, queued
> in his dev branch:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git/commit/?h=dev&id=4e4fefe0630dcf7415d62e6d9171c8f209444376
>
> , and with the proper "Reported-by" tag to give syzbot credit.
Yes, I see it now. So Marco, you should be good ;)
thanks!
- Joel
>
> Regards,
> Boqun
>
> > Paul mentioned other places where rnp->expmask is locklessly accessed, so I
> > think those may be fixed separately (such as the stall-warning code). Paul,
> > were you planning on fixing all such accesses together (other than this code),
> > or should I look into it more? I guess for the stall case, stalls would have
> > to be triggered for KCSAN to see those issues.
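
For reference, the usual fix for those lockless ->expmask readers is to mark
the accesses with READ_ONCE()/WRITE_ONCE() so KCSAN knows the data race is
intentional. A rough user-space sketch of that pattern (made-up names, not the
actual tree_exp.h / tree_stall.h code) looks something like this:

/*
 * Minimal user-space sketch of the "mark the lockless access" pattern.
 * The names (exp_state, exp_clear_bit, ...) are hypothetical stand-ins,
 * not the real RCU code.
 */
#include <pthread.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
#define READ_ONCE(x)		(*(volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

struct exp_state {
	pthread_mutex_t lock;	/* plays the role of the rnp lock */
	unsigned long expmask;	/* updated under ->lock, read locklessly */
};

/*
 * Updater: clears a bit while holding the lock, but marks the store so
 * lockless readers (and KCSAN) see a single, intentional plain write.
 */
static void exp_clear_bit(struct exp_state *s, unsigned long mask)
{
	pthread_mutex_lock(&s->lock);
	WRITE_ONCE(s->expmask, s->expmask & ~mask);
	pthread_mutex_unlock(&s->lock);
}

/*
 * Reader: e.g. a stall-warning-style path that only wants a snapshot,
 * so it reads without the lock and marks the access with READ_ONCE().
 */
static int exp_cpu_still_blocking(struct exp_state *s, unsigned long mask)
{
	return !!(READ_ONCE(s->expmask) & mask);
}

int main(void)
{
	struct exp_state s = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.expmask = 0x3,
	};

	exp_clear_bit(&s, 0x1);
	printf("cpu0 blocking: %d, cpu1 blocking: %d\n",
	       exp_cpu_still_blocking(&s, 0x1),
	       exp_cpu_still_blocking(&s, 0x2));
	return 0;
}
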
> >
> > thanks,
> >
> > - Joel
> >
> > >
> > > Thanks!
> > > -- Marco
> > >
> > > > > kernel/rcu/tree_exp.h | 13 ++++++-------
> > > > > 1 file changed, 6 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > > > > index 69c5aa64fcfd..212470018752 100644
> > > > > --- a/kernel/rcu/tree_exp.h
> > > > > +++ b/kernel/rcu/tree_exp.h
> > > > > @@ -387,10 +387,10 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
> > > > >  		}
> > > > >  		ret = smp_call_function_single(cpu, rcu_exp_handler, NULL, 0);
> > > > >  		put_cpu();
> > > > > -		if (!ret) {
> > > > > -			mask_ofl_ipi &= ~mask;
> > > > > +		/* The CPU responds to the IPI, and will report its QS itself. */
> > > > > +		if (!ret)
> > > > >  			continue;
> > > > > -		}
> > > > > +
> > > > >  		/* Failed, raced with CPU hotplug operation. */
> > > > >  		raw_spin_lock_irqsave_rcu_node(rnp, flags);
> > > > >  		if ((rnp->qsmaskinitnext & mask) &&
> > > > > @@ -401,13 +401,12 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
> > > > >  			schedule_timeout_uninterruptible(1);
> > > > >  			goto retry_ipi;
> > > > >  		}
> > > > > -		/* CPU really is offline, so we can ignore it. */
> > > > > -		if (!(rnp->expmask & mask))
> > > > > -			mask_ofl_ipi &= ~mask;
> > > > > +		/* CPU really is offline, and we need its QS for the GP to complete. */
> > > > > +		if (rnp->expmask & mask)
> > > > > +			mask_ofl_test |= mask;
> > > > >  		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> > > > >  	}
> > > > >  	/* Report quiescent states for those that went offline. */
> > > > > -	mask_ofl_test |= mask_ofl_ipi;
> > > > >  	if (mask_ofl_test)
> > > > >  		rcu_report_exp_cpu_mult(rnp, mask_ofl_test, false);
> > > > >  }
> > > > > --
> > > > > 2.23.0
> > > > >