Message-ID: <20200217163527.GB145700@google.com>
Date: Mon, 17 Feb 2020 11:35:27 -0500
From: Joel Fernandes <joel@...lfernandes.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Amol Grover <frextrite@...il.com>, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>, linux-kernel@...r.kernel.org,
linux-kernel-mentees@...ts.linuxfoundation.org,
Madhuparna Bhowmik <madhuparnabhowmik04@...il.com>,
"Paul E . McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH RESEND] lockdep: Pass lockdep expression to RCU lists
On Mon, Feb 17, 2020 at 04:12:46PM +0100, Peter Zijlstra wrote:
> On Sun, Feb 16, 2020 at 01:16:36PM +0530, Amol Grover wrote:
> > Data is traversed using hlist_for_each_entry_rcu outside an
> > RCU read-side critical section, but under the protection of
> > either lockdep_lock or disabled irqs.
> >
> > Hence, add the corresponding lockdep expressions to silence
> > false-positive lockdep warnings and harden the RCU lists. Also add
> > a macro for the corresponding lockdep expression.
> >
> > Two things to note:
> > - RCU traversals protected by both disabled irqs and the
> > graph lock have both checks in the lockdep expression.
> > - RCU traversals under the protection of just disabled irqs
> > don't have a corresponding lockdep expression, as that condition
> > is checked for implicitly.
> >
> > Signed-off-by: Amol Grover <frextrite@...il.com>
> > ---
> > kernel/locking/lockdep.c | 21 +++++++++++++--------
> > 1 file changed, 13 insertions(+), 8 deletions(-)
> >
> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index 32282e7112d3..696ad5d4daed 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -85,6 +85,8 @@ module_param(lock_stat, int, 0644);
> > * code to recurse back into the lockdep code...
> > */
> > static arch_spinlock_t lockdep_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
> > +#define graph_lock_held() \
> > + arch_spin_is_locked(&lockdep_lock)
> > static struct task_struct *lockdep_selftest_task_struct;
> >
> > static int graph_lock(void)
> > @@ -1009,7 +1011,7 @@ static bool __check_data_structures(void)
> > /* Check the chain_key of all lock chains. */
> > for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
> > head = chainhash_table + i;
> > - hlist_for_each_entry_rcu(chain, head, entry) {
> > + hlist_for_each_entry_rcu(chain, head, entry, graph_lock_held()) {
> > if (!check_lock_chain_key(chain))
> > return false;
> > }
>
> URGH.. this patch combines two horribles to create a horrific :/
>
> - spin_is_locked() is an abomination
> - this RCU list stuff is just plain annoying
>
> I'm tempted to do something like:
>
> #define STFU (true)
>
> hlist_for_each_entry_rcu(chain, head, entry, STFU) {
>
> Paul, are we going a little over-board with this stuff? Do we really
> have to annotate all of this?
Could it use hlist_for_each_entry_rcu_notrace() if that's better for this
code? That one does not need the additional condition passed. Though I find
the rcu_dereference_raw_nocheck() in that macro a bit odd since it does sparse
checking, whereas the rcu_dereference_raw() in hlist_for_each_entry() does
nothing.
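
Untested, and only a sketch of what I mean -- the hunk above would then
read something like:

	hlist_for_each_entry_rcu_notrace(chain, head, entry) {
		if (!check_lock_chain_key(chain))
			return false;
	}

and the graph_lock_held() macro would not be needed for that hunk.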
And perf can do the same thing if it is too annoying, like the tracing code
does.
This came up mainly because list_for_each_entry_rcu() does some checking of
whether it is in a reader section, but its checking is helpless when a lock
is held.
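
That optional condition argument is what makes the check useful there. As
an illustration (obj_list, obj_lock and do_something() are made-up names),
a reader that may legally run with a lock held instead of rcu_read_lock()
tells lockdep about the lock explicitly:

	/* obj_list may be traversed under obj_lock instead of an RCU
	 * read-side critical section; lockdep_is_held() covers that. */
	list_for_each_entry_rcu(obj, &obj_list, node,
				lockdep_is_held(&obj_lock)) {
		do_something(obj);
	}

The graph_lock case is awkward only because lockdep_lock is an
arch_spinlock_t, which lockdep itself cannot track -- hence the
arch_spin_is_locked() check in the patch.
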
thanks,
- Joel