Message-Id: <20200903113241.6770aa3b531509153fde3673@kernel.org>
Date:   Thu, 3 Sep 2020 11:32:41 +0900
From:   Masami Hiramatsu <mhiramat@...nel.org>
To:     peterz@...radead.org
Cc:     mingo@...nel.org, Will Deacon <will@...nel.org>,
        linux-kernel@...r.kernel.org,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Paul McKenney <paulmck@...nel.org>,
        Boqun Feng <boqun.feng@...il.com>, frederic@...nel.org,
        joel@...lfernandes.org
Subject: Re: [PATCH] lockdep: Fix "USED" <- "IN-NMI" inversions

On Wed, 2 Sep 2020 18:03:23 +0200
peterz@...radead.org wrote:

> 
> During the LPC RCU BoF Paul asked how come the "USED" <- "IN-NMI"
> detector doesn't trip over rcu_read_lock()'s lockdep annotation.
> 
> Looking into this I found a very embarrassing typo in
> verify_lock_unused():
> 
> -	if (!(class->usage_mask & LOCK_USED))
> +	if (!(class->usage_mask & LOCKF_USED))
> 
> fixing that will indeed cause rcu_read_lock() to insta-splat :/
> 
> The above typo means that instead of testing for 0x100 (1 << LOCK_USED),
> we test for 8 (LOCK_USED), which corresponds to (1 << LOCK_ENABLED_HARDIRQ_READ).
> 
> So instead of testing for _any_ used lock, it will only match locks that
> were at some point read-acquired with hardirqs enabled.
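
A minimal standalone sketch of that bit-vs-mask mixup (constants per the
pre-patch bit layout described above; illustration only, not kernel code):

#include <stdio.h>

/* Pre-patch layout: two IRQ states x four variants occupy bits 0..7,
 * so LOCK_USED is enum value 8 and its mask is 1 << 8 == 0x100.
 */
enum { LOCK_USED = 8 };			/* a bit number */
#define LOCKF_USED (1 << LOCK_USED)	/* the corresponding mask */

int main(void)
{
	unsigned int usage_mask = LOCKF_USED;	/* lock marked used */

	/* Buggy test: the bit number used as a mask (8 == 1 << 3). */
	printf("buggy: %d\n", !!(usage_mask & LOCK_USED));	/* 0, missed */
	/* Fixed test: the mask itself. */
	printf("fixed: %d\n", !!(usage_mask & LOCKF_USED));	/* 1, caught */
	return 0;
}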
> 
> The rcu_read_lock() annotation uses .check=0, which means it will not
> set any of the interrupt bits and will thus never match.
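
For reference, the annotation in question is rcu_lock_acquire() in
include/linux/rcupdate.h, which, if I read the header right, passes read=2
(recursive read) and check=0 to lock_acquire():

static inline void rcu_lock_acquire(struct lockdep_map *map)
{
	/* subclass=0, trylock=0, read=2 (recursive read), check=0 */
	lock_acquire(map, 0, 0, 2, 0, NULL, _THIS_IP_);
}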
> 
> In order to properly fix the situation and allow rcu_read_lock() to
> correctly work, split LOCK_USED into LOCK_USED and LOCK_USED_READ and by
> having .read users set USED_READ and test USED, pure read-recursive
> locks are permitted.
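
In other words, a write acquisition from NMI now conflicts with any prior
use (USED or USED_READ), while a read acquisition conflicts only with prior
write use (USED). A hypothetical standalone model of that rule, using the
post-patch bit numbers (the helper name is made up, not from the patch):

#define LOCKF_USED	(1 << 8)	/* LOCK_USED */
#define LOCKF_USED_READ	(1 << 9)	/* LOCK_USED_READ */

/* Models verify_lock_unused() below: nonzero means acquiring this lock
 * from NMI context should splat, given its prior usage_mask.
 */
static int nmi_acquire_conflicts(unsigned int usage_mask, int read)
{
	unsigned int mask = LOCKF_USED;

	if (!read)		/* writes also conflict with read use */
		mask |= LOCKF_USED_READ;

	return !!(usage_mask & mask);
}

So a lock only ever taken via recursive read, like rcu_read_lock()'s dep_map,
never trips the detector.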

Confirmed: this makes lockdep warn on kretprobe spinlocks correctly.

Tested-by: Masami Hiramatsu <mhiramat@...nel.org>

Note: if we backport this, we need to backport the kretprobe lockless series too.

Thank you,

> 
> Fixes: f6f48e180404 ("lockdep: Teach lockdep about "USED" <- "IN-NMI" inversions")
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index cccf4bc759c6..454355c033d2 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -4324,13 +4324,18 @@ static int separate_irq_context(struct task_struct *curr,
>  static int mark_lock(struct task_struct *curr, struct held_lock *this,
>  			     enum lock_usage_bit new_bit)
>  {
> -	unsigned int new_mask = 1 << new_bit, ret = 1;
> +	unsigned int old_mask, new_mask, ret = 1;
>  
>  	if (new_bit >= LOCK_USAGE_STATES) {
>  		DEBUG_LOCKS_WARN_ON(1);
>  		return 0;
>  	}
>  
> +	if (new_bit == LOCK_USED && this->read)
> +		new_bit = LOCK_USED_READ;
> +
> +	new_mask = 1 << new_bit;
> +
>  	/*
>  	 * If already set then do not dirty the cacheline,
>  	 * nor do any checks:
> @@ -4343,13 +4348,22 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
>  	/*
>  	 * Make sure we didn't race:
>  	 */
> -	if (unlikely(hlock_class(this)->usage_mask & new_mask)) {
> -		graph_unlock();
> -		return 1;
> -	}
> +	if (unlikely(hlock_class(this)->usage_mask & new_mask))
> +		goto unlock;
>  
> +	old_mask = hlock_class(this)->usage_mask;
>  	hlock_class(this)->usage_mask |= new_mask;
>  
> +	/*
> +	 * Save one usage_traces[] entry and map both LOCK_USED and
> +	 * LOCK_USED_READ onto the same entry.
> +	 */
> +	if (new_bit == LOCK_USED || new_bit == LOCK_USED_READ) {
> +		if (old_mask & (LOCKF_USED | LOCKF_USED_READ))
> +			goto unlock;
> +		new_bit = LOCK_USED;
> +	}
> +
>  	if (!(hlock_class(this)->usage_traces[new_bit] = save_trace()))
>  		return 0;
>  
> @@ -4363,6 +4377,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
>  			return 0;
>  	}
>  
> +unlock:
>  	graph_unlock();
>  
>  	/*
> @@ -5297,12 +5312,20 @@ static void verify_lock_unused(struct lockdep_map *lock, struct held_lock *hlock
>  {
>  #ifdef CONFIG_PROVE_LOCKING
>  	struct lock_class *class = look_up_lock_class(lock, subclass);
> +	unsigned long mask = LOCKF_USED;
>  
>  	/* if it doesn't have a class (yet), it certainly hasn't been used yet */
>  	if (!class)
>  		return;
>  
> -	if (!(class->usage_mask & LOCK_USED))
> +	/*
> +	 * READ locks only conflict with USED, such that if we only ever use
> +	 * READ locks, there is no deadlock possible -- RCU.
> +	 */
> +	if (!hlock->read)
> +		mask |= LOCKF_USED_READ;
> +
> +	if (!(class->usage_mask & mask))
>  		return;
>  
>  	hlock->class_idx = class - lock_classes;
> diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
> index baca699b94e9..b0be1560ed17 100644
> --- a/kernel/locking/lockdep_internals.h
> +++ b/kernel/locking/lockdep_internals.h
> @@ -19,6 +19,7 @@ enum lock_usage_bit {
>  #include "lockdep_states.h"
>  #undef LOCKDEP_STATE
>  	LOCK_USED,
> +	LOCK_USED_READ,
>  	LOCK_USAGE_STATES
>  };
>  
> @@ -40,6 +41,7 @@ enum {
>  #include "lockdep_states.h"
>  #undef LOCKDEP_STATE
>  	__LOCKF(USED)
> +	__LOCKF(USED_READ)
>  };
>  
>  #define LOCKDEP_STATE(__STATE)	LOCKF_ENABLED_##__STATE |


-- 
Masami Hiramatsu <mhiramat@...nel.org>
