Message-ID: <20060729042632.GA3840@localhost.localdomain>
Date:	Fri, 28 Jul 2006 21:26:32 -0700
From:	Ravikiran G Thirumalai <kiran@...lex86.org>
To:	Christoph Lameter <clameter@....com>, Andrew Morton <akpm@...l.org>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	Pekka Enberg <penberg@...helsinki.fi>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Arjan van de Ven <arjan@...radead.org>,
	alokk@...softinc.com
Subject: Re: [BUG] Lockdep recursive locking in kmem_cache_free

On Fri, Jul 28, 2006 at 02:26:16PM -0700, Christoph Lameter wrote:
> On Fri, 28 Jul 2006, Ravikiran G Thirumalai wrote:
> 
> > Why should there be any problem taking the remote l3 lock?  If the remote
> > node does not have a cpu, that does not mean we cannot take its lock from
> > the local node!!!
> > 
> > I think current git does not teach lockdep to ignore recursion for
> > array_cache->lock when the array_cache->locks are from different caches.
> > As Arjan pointed out, I can see that l3->list_lock is special-cased, but I
> > cannot find where array_cache->lock is taken care of.
> 
> Ok.
>  
> > Again, if this were indeed a problem (real recursion), the machine should
> > not even boot when compiled without lockdep.  tglx, can you please verify
> > this?
> 
> We seem to be fine on that level.
> 

Since false positives due to off-slab slab management seem to have come up
often of late, how about adding some comments to slab.c?
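
For context, the standard way to teach lockdep that two locks of the same
class may legitimately nest -- say a local and a remote array_cache lock --
is a nesting annotation.  A minimal sketch only, not the actual slab.c code;
move_objects() and both lock parameters here are hypothetical:

#include <linux/spinlock.h>

/*
 * local_ac and remote_ac are two distinct locks sharing one lock class.
 * Plain spin_lock() on both would look like recursion to lockdep;
 * spin_lock_nested() with SINGLE_DEPTH_NESTING tells it the second
 * acquisition is a different instance, nested at most one level deep.
 */
static void move_objects(spinlock_t *local_ac, spinlock_t *remote_ac)
{
	spin_lock(local_ac);
	spin_lock_nested(remote_ac, SINGLE_DEPTH_NESTING);
	/* ... move objects between the two array caches ... */
	spin_unlock(remote_ac);
	spin_unlock(local_ac);
}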

Thanks,
Kiran

---

Adds some comments to slab.c.

Also, check that we get a valid slabp_cache for off-slab slab descriptors.
We should always get one; if we don't, we would have to disable off-slab
descriptors for this cache and redo the calculations.  Since this is a rare
case, just add a BUG_ON for now.
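
For reference, the reasoning hinges on the off-slab decision and the
malloc_sizes lookup; the snippets below are paraphrased from 2.6.18-era
slab.c, so the exact code may differ slightly:

	/* In kmem_cache_create(): large objects get off-slab management. */
	if ((size >= (PAGE_SIZE >> 3)) && !slab_early_init)
		flags |= CFLGS_OFF_SLAB;

	/*
	 * kmem_find_general_cachep() walks malloc_sizes[] in ascending
	 * size order.  A slab descriptor is far smaller than PAGE_SIZE/8,
	 * so a matching, fully initialized general cache must already
	 * exist by the time any off-slab cache is created.
	 */
	while (size > csizep->cs_size)
		csizep++;
	return csizep->cs_cachep;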


Signed-off-by: Alok N Kataria <alok.kataria@...softinc.com>
Signed-off-by: Ravikiran Thirumalai <kiran@...lex86.org>
Signed-off-by: Shai Fultheim <shai@...lex86.org>

Index: linux-2.6.18-rc2.git/mm/slab.c
===================================================================
--- linux-2.6.18-rc2.git.orig/mm/slab.c	2006-07-28 17:43:04.000000000 -0700
+++ linux-2.6.18-rc2.git/mm/slab.c	2006-07-28 18:13:31.000000000 -0700
@@ -2200,8 +2200,17 @@ kmem_cache_create (const char *name, siz
 		cachep->gfpflags |= GFP_DMA;
 	cachep->buffer_size = size;
 
-	if (flags & CFLGS_OFF_SLAB)
+	if (flags & CFLGS_OFF_SLAB) {
 		cachep->slabp_cache = kmem_find_general_cachep(slab_size, 0u);
+		/*
+		 * A NULL slabp_cache is possible only for one of the
+		 * malloc_sizes caches.  But since we go off-slab only for
+		 * object sizes greater than PAGE_SIZE/8, and the malloc_sizes
+		 * caches are created in ascending size order, this should
+		 * never happen.  Leave a BUG_ON for some lucky dude.
+		 */
+		BUG_ON(!cachep->slabp_cache);
+	}
 	cachep->ctor = ctor;
 	cachep->dtor = dtor;
 	cachep->name = name;
@@ -2435,7 +2444,17 @@ int kmem_cache_destroy(struct kmem_cache
 }
 EXPORT_SYMBOL(kmem_cache_destroy);
 
-/* Get the memory for a slab management obj. */
+/*
+ * Get the memory for a slab management obj.
+ * For a slab cache whose slab descriptors are off-slab, the descriptors
+ * always come from one of the malloc_sizes caches.  A slab descriptor
+ * can never come from the cache that is being created itself:
+ * when we search for an appropriate cache for these descriptors in
+ * kmem_cache_create, we search through the malloc_sizes array, and a
+ * malloc_sizes cache that is still being created is not yet visible to
+ * kmem_find_general_cachep until its initialization is complete.
+ * Hence slabp_cache can never be the same cache as the one being created.
+ */
 static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
 				   int colour_off, gfp_t local_flags,
 				   int nodeid)
@@ -3119,6 +3138,12 @@ static void free_block(struct kmem_cache
 		if (slabp->inuse == 0) {
 			if (l3->free_objects > l3->free_limit) {
 				l3->free_objects -= cachep->num;
+				/* No need to drop any previously held
+				 * lock here; even if we have an off-slab slab
+				 * descriptor, it is guaranteed to come from
+				 * a different cache (see the comment before
+				 * alloc_slabmgmt).
+				 */
 				slab_destroy(cachep, slabp);
 			} else {
 				list_add(&slabp->list, &l3->slabs_free);
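
For readers without the tree handy, the off-slab branch these comments refer
to looks roughly like this in 2.6.18-era slab.c -- paraphrased, so the exact
code may differ:

static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
				   int colour_off, gfp_t local_flags,
				   int nodeid)
{
	struct slab *slabp;

	if (OFF_SLAB(cachep)) {
		/*
		 * The descriptor comes from cachep->slabp_cache, never
		 * from cachep itself, so slab_destroy() can free it
		 * without re-taking any lock belonging to cachep.
		 */
		slabp = kmem_cache_alloc_node(cachep->slabp_cache,
					      local_flags, nodeid);
		if (!slabp)
			return NULL;
	} else {
		/* On-slab: the descriptor sits at the head of the slab. */
		slabp = objp + colour_off;
		colour_off += cachep->slab_size;
	}
	slabp->inuse = 0;
	slabp->colouroff = colour_off;
	slabp->s_mem = objp + colour_off;
	return slabp;
}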