Message-ID: <20220330175719.GH4285@paulmck-ThinkPad-P17-Gen-1>
Date: Wed, 30 Mar 2022 10:57:19 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Uladzislau Rezki <uladzislau.rezki@...y.com>,
Boqun Feng <boqun.feng@...il.com>,
Neeraj Upadhyay <quic_neeraju@...cinc.com>,
Joel Fernandes <joel@...lfernandes.org>
Subject: Re: [PATCH 2/4] rcu: No need to reset the poll request flag before
completion
On Wed, Mar 30, 2022 at 01:27:52PM +0200, Frederic Weisbecker wrote:
> On Wed, Mar 16, 2022 at 03:42:53PM +0100, Frederic Weisbecker wrote:
> > The flag that allows the polling work to be requeued is reset before the
> > polling even starts. However, there is no point in having two competing
> > polls of the same grace period. Instead, reset the flag only once the
> > grace period has completed.
> >
> > Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
> > Cc: Neeraj Upadhyay <quic_neeraju@...cinc.com>
> > Cc: Boqun Feng <boqun.feng@...il.com>
> > Cc: Uladzislau Rezki <uladzislau.rezki@...y.com>
> > Cc: Joel Fernandes <joel@...lfernandes.org>
> > ---
> > kernel/rcu/tree_exp.h | 1 -
> > 1 file changed, 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > index b6fd857f34ba..763ec35546ed 100644
> > --- a/kernel/rcu/tree_exp.h
> > +++ b/kernel/rcu/tree_exp.h
> > @@ -911,7 +911,6 @@ static void sync_rcu_do_polled_gp(struct work_struct *wp)
> >
> > raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
> > s = rnp->exp_seq_poll_rq;
> > - rnp->exp_seq_poll_rq |= 0x1;
>
> On a second (or actually twentieth) thought, this patch and those following
> it make wrapping issues more likely:
>
> * Before this patch, a wrap occurring *after* the 0x1 is set at the
>   beginning of the workqueue handler is fine. The only remaining vulnerable
>   scenario is a wrap that happens before the workqueue handler runs and
>   sets the 0x1, in which case the work might never be queued.
>
>
> * After this patch, a wrap occurring *before* the GP completes in the
>   workqueue handler will be ignored and thus fail. Still unlikely, but less
>   unlikely than before this patch.
>
> So please revert this series. Only the first patch "rcu: Remove needless polling
> work requeue for further waiter" still seems to make sense.
I know that twentieth-thought feeling!
I reverted the following commits, and will remove the original and the
reversion of each on my next rebase:
26632dde0c40 ("rcu: No need to reset the poll request flag before completion")
b889e463d447 ("rcu: Perform early sequence fetch for polling locklessly")
11eccc01200f ("rcu: Name internal polling flag")
Would it make sense to apply rcu_seq_done_exact(), perhaps as follows?
Or is there some reason this would cause problems?
Thanx, Paul
------------------------------------------------------------------------
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index b6fd857f34ba..bd47fce0e08c 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -992,7 +992,7 @@ bool poll_state_synchronize_rcu_expedited(unsigned long oldstate)
WARN_ON_ONCE(!(oldstate & RCU_GET_STATE_FROM_EXPEDITED));
if (oldstate & RCU_GET_STATE_USE_NORMAL)
return poll_state_synchronize_rcu(oldstate & ~RCU_GET_STATE_BAD_FOR_NORMAL);
- if (!rcu_exp_gp_seq_done(oldstate & ~RCU_SEQ_STATE_MASK))
+ if (!rcu_seq_done_exact(&rcu_state.expedited_sequence, oldstate & ~RCU_SEQ_STATE_MASK))
return false;
smp_mb(); /* Ensure GP ends before subsequent accesses. */
return true;