Message-ID: <alpine.DEB.2.00.1201171620590.14697@router.home>
Date:	Tue, 17 Jan 2012 16:22:09 -0600 (CST)
From:	Christoph Lameter <cl@...ux.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
cc:	Sasha Levin <levinsasha928@...il.com>,
	Dave Jones <davej@...hat.com>, davem <davem@...emloft.net>,
	Pekka Enberg <penberg@...nel.org>,
	Matt Mackall <mpm@...enic.com>, kaber@...sh.net,
	pablo@...filter.org, linux-kernel <linux-kernel@...r.kernel.org>,
	linux-mm <linux-mm@...ck.org>, netfilter-devel@...r.kernel.org,
	netdev <netdev@...r.kernel.org>
Subject: Re: Hung task when calling clone() due to netfilter/slab

Here is another version, which drops the slub lock for both
invocations of sysfs functions from kmem_cache_create(). The
invocation from slab_sysfs_init() is not a problem since user space
is not active at that point.


Subject: slub: Do not take the slub lock while calling into sysfs

This patch avoids holding the slub_lock during kmem_cache_create()
when calling into sysfs. This is possible because kmem_cache_create()
allocates the kmem_cache object and is therefore the only context
that can access the newly created object, so the slub_lock can be
dropped early. We defer adding the new kmem_cache to the slab_caches
list until the end of processing, because the new kmem_cache
structure would otherwise be reachable via scans over the slab
caches. This allows sysfs_slab_add() to run without holding any
locks.
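
A minimal sketch of that ordering, condensed from the second hunk of
the diff below (error paths and the kfree() cleanup are elided): the
new kmem_cache is set up and registered with sysfs while still
private to this context, and only published on the slab_caches list,
under slub_lock, once sysfs registration has succeeded.

	s = kmalloc(kmem_size, GFP_KERNEL);
	if (s && kmem_cache_open(s, n, size, align, flags, ctor)) {
		/* s is not yet reachable: no lock needed for sysfs */
		if (sysfs_slab_add(s) == 0) {
			/* publish only after sysfs registration */
			down_write(&slub_lock);
			list_add(&s->list, &slab_caches);
			up_write(&slub_lock);
			return s;
		}
	}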

The case is different if we are creating an alias instead of a new
kmem_cache structure. In that case we can also drop the slub lock
early because we have taken a refcount on the kmem_cache structure:
it cannot vanish from under us. But if the sysfs_slab_alias() call
fails, we can no longer simply decrement the refcount, since the
other references may have gone away in the meantime. Instead, call
kmem_cache_destroy(), which decrements the refcount and frees the
kmem_cache structure if all references are gone.
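
Condensed, the alias error path after this patch (from the first
hunk of the diff below; the refcount was taken earlier, while
slub_lock was still held) becomes:

	up_write(&slub_lock);
	if (sysfs_slab_alias(s, name)) {
		/*
		 * s->refcount-- would be wrong here: with the lock
		 * dropped, other references may already be gone.
		 * kmem_cache_destroy() drops our reference and frees
		 * s only if it was the last one.
		 */
		kmem_cache_destroy(s);
		goto err;
	}
	return s;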

Signed-off-by: Christoph Lameter <cl@...ux.com>


---
 mm/slub.c |   25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2012-01-17 09:53:26.599505365 -0600
+++ linux-2.6/mm/slub.c	2012-01-17 09:59:57.131497273 -0600
@@ -3912,13 +3912,14 @@ struct kmem_cache *kmem_cache_create(con
 		s->objsize = max(s->objsize, (int)size);
 		s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));

+		up_write(&slub_lock);
 		if (sysfs_slab_alias(s, name)) {
-			s->refcount--;
+			kmem_cache_destroy(s);
 			goto err;
 		}
-		up_write(&slub_lock);
 		return s;
 	}
+	up_write(&slub_lock);

 	n = kstrdup(name, GFP_KERNEL);
 	if (!n)
@@ -3928,27 +3929,23 @@ struct kmem_cache *kmem_cache_create(con
 	if (s) {
 		if (kmem_cache_open(s, n,
 				size, align, flags, ctor)) {
-			list_add(&s->list, &slab_caches);
-			if (sysfs_slab_add(s)) {
-				list_del(&s->list);
-				kfree(n);
-				kfree(s);
-				goto err;
+
+			if (sysfs_slab_add(s) == 0) {
+				down_write(&slub_lock);
+				list_add(&s->list, &slab_caches);
+				up_write(&slub_lock);
+				return s;
 			}
-			up_write(&slub_lock);
-			return s;
 		}
 		kfree(n);
 		kfree(s);
 	}
 err:
-	up_write(&slub_lock);

 	if (flags & SLAB_PANIC)
 		panic("Cannot create slabcache %s\n", name);
-	else
-		s = NULL;
-	return s;
+
+	return NULL;
 }
 EXPORT_SYMBOL(kmem_cache_create);

--
