Message-ID: <f0a02cfe-7fc2-494c-8734-e5583f42a8f7@paulmck-laptop>
Date: Thu, 9 May 2024 20:59:28 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Oleg Nesterov <oleg@...hat.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@...il.com>, RCU <rcu@...r.kernel.org>,
Neeraj upadhyay <Neeraj.Upadhyay@....com>,
Boqun Feng <boqun.feng@...il.com>, Hillf Danton <hdanton@...a.com>,
Joel Fernandes <joel@...lfernandes.org>,
LKML <linux-kernel@...r.kernel.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
Frederic Weisbecker <frederic@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 25/48] rcu: Mark writes to rcu_sync ->gp_count field
On Thu, May 09, 2024 at 05:13:12PM +0200, Oleg Nesterov wrote:
> On 05/07, Paul E. McKenney wrote:
> >
> > On Tue, May 07, 2024 at 10:54:41AM -0400, Oleg Nesterov wrote:
> > > Hello,
> > >
> > > I feel I don't really like this patch but I am travelling without my working
> > > laptop, can't read the source code ;) Quite possibly I am wrong, I'll return
> > > to this when I get back on May 10.
> >
> > By the stricter data-race rules used in RCU code [1], this is a bug that
> > needs to be fixed.
>
> Now that I can read the code... Sorry, still can't understand.
>
> > which is read locklessly,
>
> Where???
>
> OK, OK, we have
>
> // rcu_sync_exit()
> 	WARN_ON_ONCE(READ_ONCE(rsp->gp_count) == 0);
>
> and
>
> // rcu_sync_dtor()
> WARN_ON_ONCE(READ_ONCE(rsp->gp_count));
>
> other than that ->gp_count is always accessed under ->rss_lock.
>
> And yes, at least WARN_ON_ONCE() in rcu_sync_exit() can obviously race with
> rcu_sync_enter/exit which update gp_count. I think this is fine correctness-wise.
>
> But OK, we need to please KCSAN (or is there another problem I missed ???)
>
> We can move these WARN_ON()'s into the ->rss_lock protected section.
>
> Or perhaps we can use data_race(rsp->gp_count) ? To be honest I thought
> that READ_ONCE() should be enough...
>
> Or we can simply kill these WARN_ON_ONCE()'s.
Or we could move those WARN_ON_ONCE() calls under the lock. If that
posed a lock-contention issue, we could condition them on something like
IS_ENABLED(CONFIG_PROVE_RCU). Then all accesses to those variables would
always be protected by the lock, and the WRITE_ONCE() and READ_ONCE()
calls could be dropped. (Or am I missing another lockless access?)
Which would have the further advantage that if anyone accessed these
without holding the lock, KCSAN would complain.
> I don't understand why we should add more WRITE_ONCE()'s into the critical
> section protected by ->rss_lock.
There are indeed several ways to fix this. Which would you prefer?
> Help! ;)
;-) ;-) ;-)
Thanx, Paul
> Oleg.
>
>
> > which in turn results in a data race. The fix is to mark
> > the updates (as below) with WRITE_ONCE().
> >
> > Or is there something in one or the other of these updates to ->gp_count
> > that excludes lockless readers? (I am not seeing it, but you know this
> > code way better than I do!)
> >
> > Thanx, Paul
> >
> > [1] https://docs.google.com/document/d/1FwZaXSg3A55ivVoWffA9iMuhJ3_Gmj_E494dLYjjyLQ/edit?usp=sharing
> >
> > > Oleg.
> > >
> > > On 05/07, Uladzislau Rezki (Sony) wrote:
> > > >
> > > > From: "Paul E. McKenney" <paulmck@...nel.org>
> > > >
> > > > The rcu_sync structure's ->gp_count field is updated under the protection
> > > > of ->rss_lock, but read locklessly, and KCSAN noted the data race.
> > > > This commit therefore uses WRITE_ONCE() to do this update to clearly
> > > > document its racy nature.
> > > >
> > > > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > > > Cc: Oleg Nesterov <oleg@...hat.com>
> > > > Cc: Peter Zijlstra <peterz@...radead.org>
> > > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
> > > > ---
> > > > kernel/rcu/sync.c | 8 ++++++--
> > > > 1 file changed, 6 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/kernel/rcu/sync.c b/kernel/rcu/sync.c
> > > > index 86df878a2fee..6c2bd9001adc 100644
> > > > --- a/kernel/rcu/sync.c
> > > > +++ b/kernel/rcu/sync.c
> > > > @@ -122,7 +122,7 @@ void rcu_sync_enter(struct rcu_sync *rsp)
> > > > * we are called at early boot time but this shouldn't happen.
> > > > */
> > > > }
> > > > - rsp->gp_count++;
> > > > + WRITE_ONCE(rsp->gp_count, rsp->gp_count + 1);
> > > > spin_unlock_irq(&rsp->rss_lock);
> > > >
> > > > if (gp_state == GP_IDLE) {
> > > > @@ -151,11 +151,15 @@ void rcu_sync_enter(struct rcu_sync *rsp)
> > > > */
> > > > void rcu_sync_exit(struct rcu_sync *rsp)
> > > > {
> > > > + int gpc;
> > > > +
> > > > WARN_ON_ONCE(READ_ONCE(rsp->gp_state) == GP_IDLE);
> > > > WARN_ON_ONCE(READ_ONCE(rsp->gp_count) == 0);
> > > >
> > > > spin_lock_irq(&rsp->rss_lock);
> > > > - if (!--rsp->gp_count) {
> > > > + gpc = rsp->gp_count - 1;
> > > > + WRITE_ONCE(rsp->gp_count, gpc);
> > > > + if (!gpc) {
> > > > if (rsp->gp_state == GP_PASSED) {
> > > > WRITE_ONCE(rsp->gp_state, GP_EXIT);
> > > > rcu_sync_call(rsp);
> > > > --
> > > > 2.39.2
> > > >
> > >
> >
>