Date:	Tue, 8 Nov 2011 10:58:47 +0800
From:	Yong Zhang <yong.zhang0@...il.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Vegard Nossum <vegard.nossum@...il.com>,
	linux-kernel@...r.kernel.org, sergey.senozhatsky@...il.com,
	bp@...en8.de, Ingo Molnar <mingo@...e.hu>,
	Tejun Heo <tj@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	casteyde.christian@...e.fr
Subject: Re: [PATCH 1/4] lockdep: lock_set_subclass() fix

On Mon, Nov 07, 2011 at 05:10:29PM +0100, Peter Zijlstra wrote:
> On Mon, 2011-11-07 at 16:28 +0100, Vegard Nossum wrote:
> > 1. Initialise the thing completely before doing the copy, or
> > 2. Ignore the warning.
> > 
> > The memset() patch (f59de8992aa6dc85e81aadc26b0f69e17809721d) attempts
> > to do the first, i.e. to clear the whole struct in lockdep_init_map().
> > 
> > I think nr. 1 is the best way to go in principle, but I don't know
> > what it takes for this to work properly. The blanket-clear memset()
> > presumably doesn't work because it clears out something that was
> > already initialised by the caller (right?).
> > 
> > Yong Zhang, can you think of a way to avoid the race you described,
> > perhaps by memset()ing only the right/relevant parts of struct
> > lockdep_map in lockdep_init_map()?
> 
> We could move the key and name pointer to the start of the structure and
> memset everything after that, however wouldn't that leave kmemcheck with
> the same problem? It wouldn't know those two pointers would be
> initialized properly.
> 
> > Peter Zijlstra, if you prefer, we can also just tell kmemcheck that
> > this particular copy is fine, but it means that kmemcheck will not be
> > able to detect any real bugs in this code. It can be done with
> > something like:

We should treat ->class_cache more carefully, because if we memset() it
unconditionally we throw away the cached class with no chance to set it
again. The performance benefit brought by ->class_cache would then be gone.

1) for lock_set_subclass(): we can't initialize ->class_cache because
   it's still valid and we need it.
2) for lock_set_class(): we have to initialize ->class_cache because
   it's no longer valid.

Maybe we could unconditionally set it when look_up_lock_class() finds
the class?

> 
> Something like this, although it would be best to come up with a nicer
> way to write it..
> 
> ---
>  include/linux/lockdep.h |    2 +-
>  kernel/lockdep.c        |    3 ++-
>  2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index b6a56e3..7d66268 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -148,9 +148,9 @@ void clear_lock_stats(struct lock_class *class);
>   * This is embedded into specific lock instances:
>   */
>  struct lockdep_map {
> +	const char			*name;
>  	struct lock_class_key		*key;
>  	struct lock_class		*class_cache[NR_LOCKDEP_CACHING_CLASSES];
> -	const char			*name;
>  #ifdef CONFIG_LOCK_STAT
>  	int				cpu;
>  	unsigned long			ip;
> diff --git a/kernel/lockdep.c b/kernel/lockdep.c
> index e69434b..81855cf 100644
> --- a/kernel/lockdep.c
> +++ b/kernel/lockdep.c
> @@ -2948,7 +2948,8 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
>  void lockdep_init_map(struct lockdep_map *lock, const char *name,
>  		      struct lock_class_key *key, int subclass)
>  {
> -	memset(lock, 0, sizeof(*lock));
> +	kmemcheck_mark_initialized(lock, 2*sizeof(void *));
> +	memset(&lock->class_cache[0], 0, sizeof(*lock)-2*sizeof(void *));

That means ->key has a chance to be 0 at some point, right? Then I think
it'll lead to another false positive warning like the one Borislav reported:
http://marc.info/?l=linux-kernel&m=132039877026653

The reason is that some rq->lock could carry a wrong key for a period of time.

	CPU A				CPU B
  lock_set_subclass(lockA)
    __lock_set_class(lockA)
      lockdep_init_map(lockA)
        memset() /* ->key = NULL */
      				__lock_acquire(lockA)
				  register_lock_class(lockA)
				    look_up_lock_class(lockA)
				      if (unlikely(!lock->key))
				              lock->key = (void *)lock;
	->key = key;
				/* lockA may carry the wrong class
				 * later on, due to ->class_cache
				 */


Then when another lock_set_subclass() comes:
        CPU A                           CPU B
  lock_set_subclass(lockA);
    lock_set_class(lockA);
      				__lock_acquire(lockA)
                                  /* lockA->class_cache[] is not set,
				   * different subclass */
                                  register_lock_class(lockA);
                                     look_up_lock_class(lockA); /* return NULL */
      lockdep_init_map(lockA);
        memset(lockA); /* ->key = NULL */
                                  if (!static_obj(lock->key))
                                  /* we get warning here */


So maybe the simplest way is just to annotate the whole lock like this:
	kmemcheck_mark_initialized(lock, sizeof(*lock));

Thanks,
Yong
