Message-ID: <20201015172318.GA3705@paulmck-ThinkPad-P72>
Date:   Thu, 15 Oct 2020 10:23:18 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Boqun Feng <boqun.feng@...il.com>, Qian Cai <cai@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...nel.org>, x86 <x86@...nel.org>,
        linux-kernel@...r.kernel.org, linux-tip-commits@...r.kernel.org,
        Linux Next Mailing List <linux-next@...r.kernel.org>,
        Stephen Rothwell <sfr@...b.auug.org.au>
Subject: Re: [tip: locking/core] lockdep: Fix lockdep recursion

On Thu, Oct 15, 2020 at 09:15:01AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 15, 2020 at 11:49:26AM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 14, 2020 at 08:41:28PM -0700, Paul E. McKenney wrote:

[ . . . ]

> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -1764,8 +1764,7 @@ static bool rcu_gp_init(void)
> >  		smp_mb(); // Pair with barriers used when updating ->ofl_seq to odd values.
> >  		firstseq = READ_ONCE(rnp->ofl_seq);
> >  		if (firstseq & 0x1)
> > -			while (firstseq == smp_load_acquire(&rnp->ofl_seq))
> > -				schedule_timeout_idle(1);  // Can't wake unless RCU is watching.
> > +			smp_cond_load_relaxed(&rnp->ofl_seq, VAL == firstseq);
> >  		smp_mb(); // Pair with barriers used when updating ->ofl_seq to even values.
> >  		raw_spin_lock(&rcu_state.ofl_lock);
> >  		raw_spin_lock_irq_rcu_node(rnp);
> 
> This would work, and would be absolutely necessary if grace periods
> took only (say) 500 nanoseconds to complete.  But given that they take
> multiple milliseconds at best, and given that this race is extremely
> unlikely, and given the heavy use of virtualization, I have to stick
> with the schedule_timeout_idle().
> 
> In fact, I have on my list to force this race to happen on the grounds
> that if it ain't tested, it don't work...

And it only took about 1000 seconds of TREE03 to make this happen, so we
should be good just relying on rcutorture.  ;-)
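
(For anyone wanting to reproduce this with the in-tree torture scripts, the
TREE03 scenario can be run through the kvm.sh wrapper; something like the
following should be close, though check the current script for the exact
flags -- --duration here is in minutes:

	tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 16 \
		--configs TREE03 --duration 20
)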

							Thanx, Paul
