Date:	Fri, 7 Feb 2014 12:46:19 -0800 (PST)
From:	David Rientjes <rientjes@...gle.com>
To:	Gautham R Shenoy <ego@...ux.vnet.ibm.com>
cc:	linux-kernel@...r.kernel.org, peterz@...radead.org,
	penberg@...nel.org
Subject: Re: [PATCH] slub: Hold list_lock unconditionally before the call to
 add_full.

On Sat, 8 Feb 2014, Gautham R Shenoy wrote:

> Hi,
> 
> From the lockdep annotation and the comment that existed before the
> lockdep annotations were introduced, 
> mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
> held.
> 
> However, there's a call path in deactivate_slab() when
> 
> 	 (new.inuse || n->nr_partial <= s->min_partial) &&
> 	 !(new.freelist) &&
> 	 !(kmem_cache_debug(s))
> 
> which ends up calling add_full() without holding
> n->list_lock.
> 
> This was discovered while onlining/offlining cpus in 3.14-rc1 due to
> the lockdep annotations added by commit
> c65c1877bd6826ce0d9713d76e30a7bed8e49f38.
> 
> Fix this by unconditionally taking the lock
> irrespective of the state of kmem_cache_debug(s).
> 
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Pekka Enberg <penberg@...nel.org>
> Signed-off-by: Gautham R. Shenoy <ego@...ux.vnet.ibm.com>
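
For reference, the deactivate_slab() branch being described above has roughly
this shape (condensed and paraphrased, not the exact 3.14-rc1 source):

	new.frozen = 0;

	if (!new.inuse && n->nr_partial > s->min_partial)
		m = M_FREE;
	else if (new.freelist) {
		m = M_PARTIAL;
		if (!lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	} else {
		m = M_FULL;
		/* the lock is only taken here when debugging is enabled */
		if (kmem_cache_debug(s) && !lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	}

	if (l != m) {
		...
		if (m == M_PARTIAL) {
			add_partial(n, page, tail);
			...
		} else if (m == M_FULL) {
			stat(s, DEACTIVATE_FULL);
			/* reached without n->list_lock when !kmem_cache_debug(s) */
			add_full(s, n, page);
		}
	}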

No, the lock is not needed there unless kmem_cache_debug(s) is actually set,
specifically when s->flags & SLAB_STORE_USER is set.
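
For context, the only thing add_full() does that needs n->list_lock is the
n->full list update, and that update is guarded by SLAB_STORE_USER; roughly
(a paraphrase of the intended behaviour, not the exact mm/slub.c source):

	static void add_full(struct kmem_cache *s,
		struct kmem_cache_node *n, struct page *page)
	{
		if (!(s->flags & SLAB_STORE_USER))
			return;		/* no full-list tracking, nothing to protect */

		lockdep_assert_held(&n->list_lock);
		list_add(&page->lru, &n->full);
	}

So taking n->list_lock unconditionally in deactivate_slab() would only add
overhead to the common !kmem_cache_debug(s) path for no benefit.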

You want the patch at http://marc.info/?l=linux-kernel&m=139147105027693 
instead, which is already in -mm and linux-next.
