Message-ID: <Z9MKc7hlZFgLr_jM@localhost.localdomain>
Date: Thu, 13 Mar 2025 17:40:19 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Boqun Feng <boqun.feng@...il.com>,
Joel Fernandes <joel@...lfernandes.org>,
Neeraj Upadhyay <neeraj.upadhyay@....com>,
Uladzislau Rezki <urezki@...il.com>,
Zqiang <qiang.zhang1211@...il.com>, rcu <rcu@...r.kernel.org>
Subject: Re: [PATCH 1/3] rcu/exp: Protect against early QS report

On Fri, Feb 14, 2025 at 01:10:43AM -0800, Paul E. McKenney wrote:
> On Fri, Feb 14, 2025 at 12:25:57AM +0100, Frederic Weisbecker wrote:
> > When a grace period is started, the ->expmask of each node is set up
> > by sync_exp_reset_tree(). Only later does each leaf node also initialize
> > its ->exp_tasks pointer.
> >
> > This means that the initialization of a node's quiescent state mask and
> > the initialization of its blocked tasks list happen in two separate
> > node-locked sections, with an unlocked gap in-between.
> >
> > This happens to be fine because nothing is expected to report an exp
> > quiescent state within this gap: no IPIs have been issued yet and
> > every rdp's ->cpu_no_qs.b.exp should be false.
> >
> > However, if it were to happen by accident, the quiescent state could be
> > reported and propagated while ignoring tasks that blocked _before_ the
> > start of the grace period.
> >
> > Prevent such trouble from happening in the future by initializing both
> > the quiescent state mask to report and the blocked tasks list head
> > within the same node-locked block.
> >
> > Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
>
> Thank you for looking into this!
>
> One question: What happens if a CPU has tasks pending during the
> call to sync_exp_reset_tree(), but all of these tasks complete
> their RCU read-side critical sections before execution reaches
> __sync_rcu_exp_select_node_cpus()?
>
> (My guess is that all is well, but even if so, it would be good to record
> why in the commit log.)

All is (expected to be) well because the QS won't be reported yet:
rdp->cpu_no_qs.b.exp is still false, therefore the rdps' bits will
still be set in rnp->expmask.

!PREEMPT_RCU is different because sync_sched_exp_online_cleanup()
can report the QS earlier. But this patch is a PREEMPT_RCU concern
only.
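
To illustrate the reasoning, here is a small standalone sketch of my own
(not kernel code; the struct and field names below are simplified
stand-ins for rnp->expmask and rdp->cpu_no_qs.b.exp): the report path is
gated on the per-CPU flag that only the exp IPI handler sets, so nothing
can clear the node's bit during the gap.

/*
 * Standalone model, not the kernel code path: it only mirrors the
 * argument that no exp QS can be reported before the IPIs are sent.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_rdp {
	bool cpu_no_qs_exp;	/* set only by the exp IPI handler */
};

struct model_rnp {
	unsigned long expmask;	/* one bit per CPU below this node */
};

/* Report path: refuses to clear the CPU's bit until the IPI flagged it. */
static void try_report_exp_qs(struct model_rnp *rnp, struct model_rdp *rdp,
			      unsigned long mask)
{
	if (!rdp->cpu_no_qs_exp)
		return;
	rnp->expmask &= ~mask;
}

int main(void)
{
	struct model_rnp rnp = { .expmask = 0x1 };
	struct model_rdp rdp = { .cpu_no_qs_exp = false };

	/*
	 * Gap between sync_exp_reset_tree() and
	 * __sync_rcu_exp_select_node_cpus(): no IPI yet, so even a stray
	 * report attempt leaves the bit set and the exp GP keeps waiting.
	 */
	try_report_exp_qs(&rnp, &rdp, 0x1);
	printf("expmask during the gap: %#lx\n", rnp.expmask);
	return 0;
}

Compiled and run, it just prints that the bit is still set, i.e. the
expedited GP keeps waiting for the later IPI/blocked-tasks handling.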

I'll add a note to the changelog.

Thanks.
>
> Thanx, Paul
>
> > ---
> > kernel/rcu/tree_exp.h | 14 +++++++-------
> > 1 file changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > index 8d4895c854c5..caff16e441d1 100644
> > --- a/kernel/rcu/tree_exp.h
> > +++ b/kernel/rcu/tree_exp.h
> > @@ -141,6 +141,13 @@ static void __maybe_unused sync_exp_reset_tree(void)
> >  		raw_spin_lock_irqsave_rcu_node(rnp, flags);
> >  		WARN_ON_ONCE(rnp->expmask);
> >  		WRITE_ONCE(rnp->expmask, rnp->expmaskinit);
> > +		/*
> > +		 * Need to wait for any blocked tasks as well. Note that
> > +		 * additional blocking tasks will also block the expedited GP
> > +		 * until such time as the ->expmask bits are cleared.
> > +		 */
> > +		if (rcu_is_leaf_node(rnp) && rcu_preempt_has_tasks(rnp))
> > +			WRITE_ONCE(rnp->exp_tasks, rnp->blkd_tasks.next);
> >  		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> >  	}
> >  }
> > @@ -393,13 +400,6 @@ static void __sync_rcu_exp_select_node_cpus(struct rcu_exp_work *rewp)
> >  	}
> >  	mask_ofl_ipi = rnp->expmask & ~mask_ofl_test;
> >
> > -	/*
> > -	 * Need to wait for any blocked tasks as well. Note that
> > -	 * additional blocking tasks will also block the expedited GP
> > -	 * until such time as the ->expmask bits are cleared.
> > -	 */
> > -	if (rcu_preempt_has_tasks(rnp))
> > -		WRITE_ONCE(rnp->exp_tasks, rnp->blkd_tasks.next);
> >  	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> >
> >  	/* IPI the remaining CPUs for expedited quiescent state. */
> > --
> > 2.46.0
> >