Message-ID: <Y8GdYgSBtyKwf/qj@boqun-archlinux>
Date:   Fri, 13 Jan 2023 10:05:22 -0800
From:   Boqun Feng <boqun.feng@...il.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
        kvm@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Waiman Long <longman@...hat.com>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        Josh Triplett <josh@...htriplett.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        David Woodhouse <dwmw2@...radead.org>,
        Paolo Bonzini <pbonzini@...hat.com>, seanjc@...gle.com,
        Joel Fernandes <joel@...lfernandes.org>,
        Matthew Wilcox <willy@...radead.org>,
        Michal Luczaj <mhal@...x.co>
Subject: Re: [PATCH 2/3] rcu: Equip sleepable RCU with lockdep dependency
 graph checks

On Fri, Jan 13, 2023 at 03:29:49AM -0800, Paul E. McKenney wrote:
> On Thu, Jan 12, 2023 at 10:59:54PM -0800, Boqun Feng wrote:
> > Although all flavors of RCU are annotated correctly with lockdep as
> > recursive read locks, the 'check' parameter of their lock_acquire()
> > calls is unset. This means that RCU read locks are not added into the
> > lockdep dependency graph, so deadlock detection based on the dependency
> > graph won't catch deadlocks caused by RCU. This is fine for
> > "non-sleepable" RCU flavors, since wait-context detection and other
> > context-based detection can catch those deadlocks. However, for
> > sleepable RCU, such detection is limited.
> > 
> > We can, however, detect the deadlocks caused by SRCU by 1) making
> > srcu_read_lock() a 'check'ed recursive read lock and 2) making
> > synchronize_srcu() an empty write-lock critical section. Even better,
> > with the newly introduced lock_sync(), we can avoid false positives
> > about irq-unsafe/safe. So do it.
> > 
> > Note that NMI-safe SRCU read-side critical sections are currently not
> > annotated, since a step-by-step approach helps us deal with false
> > positives. Those may be annotated in the future.
> > 
> > Signed-off-by: Boqun Feng <boqun.feng@...il.com>
> 
> Nice, thank you!!!
> 
> Acked-by: Paul E. McKenney <paulmck@...nel.org>
> 
> Or if you would prefer that I take the series through -rcu, please just
> let me know.
> 

I prefer that the first two patches go through your tree, because that
reduces the synchronization needed among the locking, rcu and KVM trees
to synchronization between the rcu and KVM trees.

As for patch #3, it's not really ready yet, so I don't know; but I guess
that when it's finished, it would probably be better for it to go
through -rcu as well.
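
To illustrate, here is a minimal sketch (with made-up names, not taken
from the patch) of the sort of deadlock these annotations let lockdep
report: thread 1 takes my_mutex inside an SRCU read-side critical
section, while thread 2 calls synchronize_srcu() with my_mutex held.
With srcu_read_lock() 'check'ed and synchronize_srcu() annotated via
srcu_lock_sync(), lockdep records the my_srcu -> my_mutex and
my_mutex -> my_srcu dependencies and flags the cycle:

	DEFINE_STATIC_SRCU(my_srcu);
	static DEFINE_MUTEX(my_mutex);

	/* Thread 1: my_mutex nested inside my_srcu (my_srcu -> my_mutex). */
	static void thread1(void)
	{
		int idx = srcu_read_lock(&my_srcu);

		mutex_lock(&my_mutex);
		/* ... */
		mutex_unlock(&my_mutex);
		srcu_read_unlock(&my_srcu, idx);
	}

	/*
	 * Thread 2: grace-period wait with my_mutex held
	 * (my_mutex -> my_srcu).  If thread 1 sits in its read-side
	 * critical section blocked on my_mutex, this never returns.
	 */
	static void thread2(void)
	{
		mutex_lock(&my_mutex);
		synchronize_srcu(&my_srcu);
		mutex_unlock(&my_mutex);
	}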

Regards,
Boqun

> 							Thanx, Paul
> 
> > ---
> >  include/linux/srcu.h  | 23 +++++++++++++++++++++--
> >  kernel/rcu/srcutiny.c |  2 ++
> >  kernel/rcu/srcutree.c |  2 ++
> >  3 files changed, 25 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > index 9b9d0bbf1d3c..a1595f8c5155 100644
> > --- a/include/linux/srcu.h
> > +++ b/include/linux/srcu.h
> > @@ -102,6 +102,21 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
> >  	return lock_is_held(&ssp->dep_map);
> >  }
> >  
> > +static inline void srcu_lock_acquire(struct lockdep_map *map)
> > +{
> > +	lock_map_acquire_read(map);
> > +}
> > +
> > +static inline void srcu_lock_release(struct lockdep_map *map)
> > +{
> > +	lock_map_release(map);
> > +}
> > +
> > +static inline void srcu_lock_sync(struct lockdep_map *map)
> > +{
> > +	lock_map_sync(map);
> > +}
> > +
> >  #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> >  
> >  static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
> > @@ -109,6 +124,10 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
> >  	return 1;
> >  }
> >  
> > +#define srcu_lock_acquire(m) do { } while (0)
> > +#define srcu_lock_release(m) do { } while (0)
> > +#define srcu_lock_sync(m) do { } while (0)
> > +
> >  #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
> >  
> >  #define SRCU_NMI_UNKNOWN	0x0
> > @@ -182,7 +201,7 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
> >  
> >  	srcu_check_nmi_safety(ssp, false);
> >  	retval = __srcu_read_lock(ssp);
> > -	rcu_lock_acquire(&(ssp)->dep_map);
> > +	srcu_lock_acquire(&(ssp)->dep_map);
> >  	return retval;
> >  }
> >  
> > @@ -226,7 +245,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
> >  {
> >  	WARN_ON_ONCE(idx & ~0x1);
> >  	srcu_check_nmi_safety(ssp, false);
> > -	rcu_lock_release(&(ssp)->dep_map);
> > +	srcu_lock_release(&(ssp)->dep_map);
> >  	__srcu_read_unlock(ssp, idx);
> >  }
> >  
> > diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
> > index b12fb0cec44d..336af24e0fe3 100644
> > --- a/kernel/rcu/srcutiny.c
> > +++ b/kernel/rcu/srcutiny.c
> > @@ -197,6 +197,8 @@ void synchronize_srcu(struct srcu_struct *ssp)
> >  {
> >  	struct rcu_synchronize rs;
> >  
> > +	srcu_lock_sync(&ssp->dep_map);
> > +
> >  	RCU_LOCKDEP_WARN(lockdep_is_held(ssp) ||
> >  			lock_is_held(&rcu_bh_lock_map) ||
> >  			lock_is_held(&rcu_lock_map) ||
> > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > index ca4b5dcec675..408088c73e0e 100644
> > --- a/kernel/rcu/srcutree.c
> > +++ b/kernel/rcu/srcutree.c
> > @@ -1267,6 +1267,8 @@ static void __synchronize_srcu(struct srcu_struct *ssp, bool do_norm)
> >  {
> >  	struct rcu_synchronize rcu;
> >  
> > +	srcu_lock_sync(&ssp->dep_map);
> > +
> >  	RCU_LOCKDEP_WARN(lockdep_is_held(ssp) ||
> >  			 lock_is_held(&rcu_bh_lock_map) ||
> >  			 lock_is_held(&rcu_lock_map) ||
> > -- 
> > 2.38.1
> > 
