Message-ID: <ac5c75d4-5d21-68a5-a428-3ef449595836@redhat.com>
Date: Mon, 16 Jan 2023 17:01:02 -0500
From: Waiman Long <longman@...hat.com>
To: Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org,
rcu@...r.kernel.org, kvm@...r.kernel.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
David Woodhouse <dwmw2@...radead.org>,
Paolo Bonzini <pbonzini@...hat.com>, seanjc@...gle.com,
Joel Fernandes <joel@...lfernandes.org>,
Matthew Wilcox <willy@...radead.org>,
Michal Luczaj <mhal@...x.co>
Subject: Re: [PATCH 2/3] rcu: Equip sleepable RCU with lockdep dependency
graph checks
On 1/13/23 01:59, Boqun Feng wrote:
> Although all flavors of RCU are correctly annotated with lockdep as
> recursive read locks, the 'check' parameter of their lock_acquire()
> calls is unset. This means that RCU read locks are not added to the
> lockdep dependency graph, so dependency-graph-based deadlock detection
> cannot catch deadlocks involving RCU. This is fine for "non-sleepable"
> RCU flavors, since wait-context detection and other context-based
> detection can catch those deadlocks. For sleepable RCU, however, such
> detection is limited.
>
> We can actually detect the deadlocks caused by SRCU by 1) making
> srcu_read_lock() a 'check'ed recursive read lock and 2) making
> synchronize_srcu() an empty write lock critical section. Even better,
> with the newly introduced lock_sync(), we can avoid false positives
> about irq-unsafe/safe ordering. So do it.
>
> Note that NMI-safe SRCU read-side critical sections are currently not
> annotated, since a step-by-step approach makes it easier to deal with
> false positives. These may be annotated in the future.
>
> Signed-off-by: Boqun Feng <boqun.feng@...il.com>
> ---
> include/linux/srcu.h | 23 +++++++++++++++++++++--
> kernel/rcu/srcutiny.c | 2 ++
> kernel/rcu/srcutree.c | 2 ++
> 3 files changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> index 9b9d0bbf1d3c..a1595f8c5155 100644
> --- a/include/linux/srcu.h
> +++ b/include/linux/srcu.h
> @@ -102,6 +102,21 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
> return lock_is_held(&ssp->dep_map);
> }
>
> +static inline void srcu_lock_acquire(struct lockdep_map *map)
> +{
> + lock_map_acquire_read(map);
> +}
> +
> +static inline void srcu_lock_release(struct lockdep_map *map)
> +{
> + lock_map_release(map);
> +}
> +
> +static inline void srcu_lock_sync(struct lockdep_map *map)
> +{
> + lock_map_sync(map);
> +}
> +
> #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
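If I understand the approach correctly, the net effect is that lockdep
should now flag the classic SRCU deadlock pattern, something like the
sketch below (my_mutex, my_srcu, reader() and updater() are made-up
names for illustration only):

#include <linux/mutex.h>
#include <linux/srcu.h>

static DEFINE_MUTEX(my_mutex);
DEFINE_STATIC_SRCU(my_srcu);

/* Reader: takes my_mutex inside an SRCU read-side critical section. */
static void reader(void)
{
        int idx;

        idx = srcu_read_lock(&my_srcu);
        mutex_lock(&my_mutex);
        /* ... */
        mutex_unlock(&my_mutex);
        srcu_read_unlock(&my_srcu, idx);
}

/* Updater: calls synchronize_srcu() while holding my_mutex. */
static void updater(void)
{
        mutex_lock(&my_mutex);
        synchronize_srcu(&my_srcu);     /* may wait forever for reader() */
        mutex_unlock(&my_mutex);
}

With the 'check'ed recursive read lock, the reader records the
my_srcu -> my_mutex dependency, and with lock_sync() the updater records
my_mutex -> my_srcu, so the dependency graph now contains the cycle.
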
It would be nice if a comment were added to document the difference
between the two sets of rcu_lock_*() and srcu_lock_*() functions. The
difference is described in the patch description, but it is not easy to
figure out just by looking at the source files.
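Something along the lines of the following above srcu_lock_acquire()
(just a rough sketch, feel free to reword) would do:

/*
 * Unlike the rcu_lock_*() helpers in rcupdate.h, which leave the 'check'
 * parameter of lock_acquire() unset and therefore keep plain RCU read-side
 * critical sections out of the lockdep dependency graph, the srcu_lock_*()
 * helpers below use the checked annotations (lock_map_acquire_read(),
 * lock_map_release() and lock_map_sync()), so SRCU read-side critical
 * sections and synchronize_srcu() do take part in dependency-graph-based
 * deadlock detection.
 */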
Cheers,
Longman