Message-ID: <20210729140317.GU4397@paulmck-ThinkPad-P17-Gen-1>
Date: Thu, 29 Jul 2021 07:03:17 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com, mingo@...nel.org, jiangshanlai@...il.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH rcu 11/18] rcu: Mark lockless ->qsmask read in
rcu_check_boost_fail()
On Thu, Jul 29, 2021 at 04:54:30PM +0800, Boqun Feng wrote:
> On Wed, Jul 21, 2021 at 01:21:19PM -0700, Paul E. McKenney wrote:
> > Accesses to ->qsmask are normally protected by ->lock, but there is an
> > exception in the diagnostic code in rcu_check_boost_fail(). This commit
> > therefore applies data_race() to this access to avoid KCSAN complaining
> > about the C-language writes protected by ->lock.
> >
> > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > ---
> > kernel/rcu/tree_stall.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
> > index 42847caa3909b..6dd6c9aa3f757 100644
> > --- a/kernel/rcu/tree_stall.h
> > +++ b/kernel/rcu/tree_stall.h
> > @@ -766,7 +766,7 @@ bool rcu_check_boost_fail(unsigned long gp_state, int *cpup)
> >
> >  	rcu_for_each_leaf_node(rnp) {
> >  		if (!cpup) {
> > -			if (READ_ONCE(rnp->qsmask)) {
> > +			if (data_race(READ_ONCE(rnp->qsmask))) {
>
> If the write sides allow normal writes, i.e. allow store tearing, the
> READ_ONCE() here could read incomplete writes, which could basically be
> anything? And we get the same result if we remove the READ_ONCE(),
> don't we? Yes, I know, without the READ_ONCE(), compilers can do
> anything to optimize the access to rnp->qsmask, but the end result is
> the same as, or similar to, reading incomplete writes (which is also a
> result of compiler optimization). So if we mark something as
> data_race(), **in theory**, it makes no difference with or without the
> READ_ONCE(), so I think maybe we can remove the READ_ONCE() here?
In this particular case, perhaps. But there is also the possibility
of ASSERT_EXCLUSIVE_WRITER() in conjunction with WRITE_ONCE(), and
data_race(READ_ONCE()) handles all such cases properly.
Again, the point here is to prevent the compiler from messing with
the load in strange and unpredictable ways while also telling KCSAN
that this particular read should not be considered to be part of the
concurrent algorithm.
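
To make that concrete, here is a minimal sketch of the pattern (not the
actual tree_stall.h code; the foo_mask, foo_lock, and foo_update names
are made up for illustration), assuming the kernel's READ_ONCE(),
WRITE_ONCE(), data_race(), and ASSERT_EXCLUSIVE_WRITER() helpers:

	/* Write side: updates to foo_mask are serialized by foo_lock. */
	static DEFINE_RAW_SPINLOCK(foo_lock);
	static unsigned long foo_mask;

	static void foo_update(unsigned long newmask)
	{
		raw_spin_lock(&foo_lock);
		ASSERT_EXCLUSIVE_WRITER(foo_mask); /* KCSAN checks writer exclusion. */
		WRITE_ONCE(foo_mask, newmask);     /* Avoid store tearing. */
		raw_spin_unlock(&foo_lock);
	}

	/* Lockless diagnostic read, not part of the concurrent algorithm. */
	static bool foo_mask_nonzero(void)
	{
		/*
		 * READ_ONCE() keeps the compiler from tearing or refetching
		 * the load; data_race() tells KCSAN not to report it against
		 * the writes in foo_update().
		 */
		return !!data_race(READ_ONCE(foo_mask));
	}

A plain read wrapped in data_race() would also keep KCSAN quiet, but then
the compiler would again be free to mangle the load, which is exactly what
the data_race(READ_ONCE()) combination is meant to rule out.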
Thanx, Paul
> Regards,
> Boqun
>
> >  				return false;
> >  			} else {
> >  				if (READ_ONCE(rnp->gp_tasks))
> > --
> > 2.31.1.189.g2e36527f23
> >