Message-ID: <YQNj6v2o5hG/MoXZ@boqun-archlinux>
Date:   Fri, 30 Jul 2021 10:28:58 +0800
From:   Boqun Feng <boqun.feng@...il.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
        kernel-team@...com, mingo@...nel.org, jiangshanlai@...il.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
        rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
        fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH rcu 11/18] rcu: Mark lockless ->qsmask read in
 rcu_check_boost_fail()

On Thu, Jul 29, 2021 at 07:03:17AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 29, 2021 at 04:54:30PM +0800, Boqun Feng wrote:
> > On Wed, Jul 21, 2021 at 01:21:19PM -0700, Paul E. McKenney wrote:
> > > Accesses to ->qsmask are normally protected by ->lock, but there is an
> > > exception in the diagnostic code in rcu_check_boost_fail().  This commit
> > > therefore applies data_race() to this access to avoid KCSAN complaining
> > > about the C-language writes protected by ->lock.
> > > 
> > > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> > > ---
> > >  kernel/rcu/tree_stall.h | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
> > > index 42847caa3909b..6dd6c9aa3f757 100644
> > > --- a/kernel/rcu/tree_stall.h
> > > +++ b/kernel/rcu/tree_stall.h
> > > @@ -766,7 +766,7 @@ bool rcu_check_boost_fail(unsigned long gp_state, int *cpup)
> > >  
> > >  	rcu_for_each_leaf_node(rnp) {
> > >  		if (!cpup) {
> > > -			if (READ_ONCE(rnp->qsmask)) {
> > > +			if (data_race(READ_ONCE(rnp->qsmask))) {
> > 
> > If the write side allows normal writes, i.e. allows store tearing, the
> > READ_ONCE() here could observe an incomplete write, which could be
> > basically anything. And we would get the same result if we removed the
> > READ_ONCE(), wouldn't we? Yes, I know that without the READ_ONCE() the
> > compiler can do anything it likes to the rnp->qsmask access, but the
> > end result is the same as or similar to reading an incomplete write
> > (which is also a result of compiler optimization). So once something is
> > marked data_race(), **in theory** it makes no difference whether the
> > READ_ONCE() is there or not, so I think maybe we can remove the
> > READ_ONCE() here?
> 
> In this particular case, perhaps.  But there is also the possibility
> of ASSERT_EXCLUSIVE_WRITER() in conjunction with WRITE_ONCE(), and
> data_race(READ_ONCE()) handles all such cases properly.
> 
> Again, the point here is to prevent the compiler from messing with
> the load in strange and unpredictable ways while also telling KCSAN
> that this particular read should not be considered to be part of the
> concurrent algorithm.
> 
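
Just to make sure I follow, here is a minimal sketch of the pattern you
describe; the struct, field and helper names below are made up for
illustration and are not the actual RCU code:

#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE(), data_race() */
#include <linux/kcsan-checks.h>	/* ASSERT_EXCLUSIVE_WRITER() */
#include <linux/spinlock.h>

struct foo {
	raw_spinlock_t lock;
	unsigned long qsmask;
};

/* Write side, called with ->lock held. */
static void foo_set_mask(struct foo *fp, unsigned long mask)
{
	lockdep_assert_held(&fp->lock);
	/* Ask KCSAN to flag any concurrent writer, marked or plain. */
	ASSERT_EXCLUSIVE_WRITER(fp->qsmask);
	/* WRITE_ONCE() rules out store tearing by the compiler. */
	WRITE_ONCE(fp->qsmask, fp->qsmask | mask);
}

/*
 * Lockless diagnostic read: KCSAN ignores it, and the compiler can
 * neither tear nor refetch the load.
 */
static bool foo_mask_nonzero(struct foo *fp)
{
	return !!data_race(READ_ONCE(fp->qsmask));
}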

Thanks for explaining this ;-)

I guess I'm just a little concerned that things may end up using
data_race(READ_ONCE()) everywhere instead of data_race(), because who
doesn't want their racy code to be 1) not reported by KCSAN (thanks to
data_race()) and 2) less racy (thanks to READ_ONCE())? ;-)
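
To spell out the distinction I have in mind, roughly (illustrative only;
"x" below is a hypothetical shared variable, not anything in the RCU code):

#include <linux/compiler.h>	/* READ_ONCE(), data_race() */

static unsigned long x;		/* hypothetical shared variable */

static void annotation_zoo(void)
{
	unsigned long a, b, c, d;

	a = x;                       /* plain: compiler may tear/refetch; KCSAN may report */
	b = READ_ONCE(x);            /* one untorn load; KCSAN sees a marked access */
	c = data_race(x);            /* KCSAN ignores it, but the compiler may still mangle it */
	d = data_race(READ_ONCE(x)); /* KCSAN ignores it and the compiler cannot mangle it */
	(void)a; (void)b; (void)c; (void)d;
}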

Regards,
Boqun

> 							Thanx, Paul
> 
> > Regards,
> > Boqun
> > 
> > >  				return false;
> > >  			} else {
> > >  				if (READ_ONCE(rnp->gp_tasks))
> > > -- 
> > > 2.31.1.189.g2e36527f23
> > > 
