Date: Mon, 17 Feb 2020 16:12:46 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Amol Grover <frextrite@...il.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
	linux-kernel@...r.kernel.org,
	linux-kernel-mentees@...ts.linuxfoundation.org,
	Joel Fernandes <joel@...lfernandes.org>,
	Madhuparna Bhowmik <madhuparnabhowmik04@...il.com>,
	"Paul E . McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH RESEND] lockdep: Pass lockdep expression to RCU lists

On Sun, Feb 16, 2020 at 01:16:36PM +0530, Amol Grover wrote:
> Data is traversed using hlist_for_each_entry_rcu outside an
> RCU read-side critical section but under the protection
> of either lockdep_lock or with irqs disabled.
>
> Hence, add corresponding lockdep expression to silence false-positive
> lockdep warnings, and harden RCU lists. Also add macro for
> corresponding lockdep expression.
>
> Two things to note:
> - RCU traversals protected under both, irqs disabled and
>   graph lock, have both the checks in the lockdep expression.
> - RCU traversals under the protection of just disabled irqs
>   don't have a corresponding lockdep expression as it is implicitly
>   checked for.
>
> Signed-off-by: Amol Grover <frextrite@...il.com>
> ---
>  kernel/locking/lockdep.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 32282e7112d3..696ad5d4daed 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -85,6 +85,8 @@ module_param(lock_stat, int, 0644);
>   * code to recurse back into the lockdep code...
>   */
>  static arch_spinlock_t lockdep_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
> +#define graph_lock_held() \
> +	arch_spin_is_locked(&lockdep_lock)
>  static struct task_struct *lockdep_selftest_task_struct;
>
>  static int graph_lock(void)
> @@ -1009,7 +1011,7 @@ static bool __check_data_structures(void)
>  	/* Check the chain_key of all lock chains. */
>  	for (i = 0; i < ARRAY_SIZE(chainhash_table); i++) {
>  		head = chainhash_table + i;
> -		hlist_for_each_entry_rcu(chain, head, entry) {
> +		hlist_for_each_entry_rcu(chain, head, entry, graph_lock_held()) {
>  			if (!check_lock_chain_key(chain))
>  				return false;
>  		}

URGH.. this patch combines two horribles to create a horrific :/

 - spin_is_locked() is an abomination
 - this RCU list stuff is just plain annoying

I'm tempted to do something like:

#define STFU	(true)

	hlist_for_each_entry_rcu(chain, head, entry, STFU) {

Paul, are we going a little over-board with this stuff? Do we really
have to annotate all of this?
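
For readers unfamiliar with the API being argued about, here is a minimal
sketch (not from the thread) of the pattern the patch applies:
hlist_for_each_entry_rcu() accepts an optional lockdep expression as its
fourth argument, so a traversal done under the update-side lock rather than
under rcu_read_lock() can tell RCU-lockdep why it is safe.  The names
struct my_entry, my_table, my_lock and my_lookup() below are hypothetical,
not taken from lockdep.c:

	#include <linux/rculist.h>
	#include <linux/spinlock.h>
	#include <linux/lockdep.h>

	struct my_entry {
		int key;
		struct hlist_node node;
	};

	static DEFINE_SPINLOCK(my_lock);
	static HLIST_HEAD(my_table);

	/* Called with my_lock held, but not necessarily inside rcu_read_lock(). */
	static struct my_entry *my_lookup(int key)
	{
		struct my_entry *e;

		/*
		 * Without the fourth argument, CONFIG_PROVE_RCU_LIST warns that
		 * an RCU list is being traversed outside a read-side critical
		 * section.  lockdep_is_held(&my_lock) tells RCU-lockdep that the
		 * update-side lock protects this traversal, silencing the false
		 * positive.
		 */
		hlist_for_each_entry_rcu(e, &my_table, node,
					 lockdep_is_held(&my_lock)) {
			if (e->key == key)
				return e;
		}
		return NULL;
	}

(Part of Peter's complaint is that a spin_is_locked()-style check, as used by
the patch's graph_lock_held(), only says that somebody holds the lock, not
that the current context does, which is a weaker claim than the
lockdep_is_held()-style condition the annotation is normally given.)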