Message-ID: <Pine.LNX.4.64.0706111041160.17264@schroedinger.engr.sgi.com>
Date:	Mon, 11 Jun 2007 10:52:32 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Håvard Skinnemoen <hskinnemoen@...il.com>
cc:	Haavard Skinnemoen <hskinnemoen@...el.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: kernel BUG at mm/slub.c:3689!

On Mon, 11 Jun 2007, Håvard Skinnemoen wrote:

> On 6/11/07, Christoph Lameter <clameter@....com> wrote:
> > Ahhh... I see it's the same phenomenon as before but triggered by
> > a different cause.
> > 
> > If you set the alignment to 32, then the first kmalloc slabs of size
> > 
> > 8
> > 16
> > 32
> > 
> > are all of the same size, which leads to duplicate files in sysfs.
> 
> Yes, that seems to be the problem.
> 
> > Does this patch fix it?
> 
> Unfortunately, no. But I get a different error message; see below...
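
To make the collision concrete, here is a minimal standalone sketch
(userspace C, not the kernel sources; the ALIGN() macro mirrors the
kernel's definition, the rest is purely illustrative) of how a 32-byte
minimum alignment rounds the 8, 16 and 32 byte object sizes up to the
same value, which is what produces the duplicate sysfs entries:

/*
 * Illustration only, not kernel code: with a 32-byte minimum
 * alignment the three smallest kmalloc caches end up with the
 * same effective object size, so they would try to register the
 * same name in sysfs.
 */
#include <stdio.h>

#define ALIGN(x, a)     (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned long sizes[] = { 8, 16, 32 };
        unsigned long align = 32;       /* e.g. a 32-byte cacheline */
        int i;

        for (i = 0; i < 3; i++)
                printf("requested %2lu -> object size %2lu\n",
                       sizes[i], ALIGN(sizes[i], align));
        /* All three print 32. */
        return 0;
}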

Hmmmm... Yeah, slabs with different user object sizes may coexist.

Argh!

Ok, drop that patch and use this one instead. It avoids creating the
smaller slabs that cause the conflict: the smallest kmalloc slab will be
32 bytes instead of 8.

Note that I do not see why you would align the objects to 32 bytes.
Increasing the smallest cache size wastes a lot of memory, and it is
usually advantageous to have multiple related objects in the same
cacheline unless you have heavy SMP contention.
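
For reference, here is a similar standalone sketch (again userspace C,
mirroring the kmalloc_index() logic this patch touches; the return
values for the 96 and 192 byte caches follow the kernel source, and the
trailing loop is a simplification of the real chain of size checks)
showing where requests land once KMALLOC_SHIFT_LOW is 5:

/*
 * Illustration only: a simplified userspace copy of the patched
 * kmalloc_index() logic. The final loop stands in for the remaining
 * power-of-two checks of the real function.
 */
#include <stdio.h>

#define KMALLOC_SHIFT_LOW 5             /* was 3 before this patch */

static int kmalloc_index(unsigned long size)
{
        int i;

        if (size == 0)
                return 0;
        if (size <= (1UL << KMALLOC_SHIFT_LOW))
                return KMALLOC_SHIFT_LOW;
        if (size > 64 && size <= 96)
                return 1;
        if (size > 128 && size <= 192)
                return 2;
        for (i = KMALLOC_SHIFT_LOW + 1; i <= 25; i++)
                if (size <= (1UL << i))
                        return i;
        return -1;      /* beyond the largest general cache */
}

int main(void)
{
        unsigned long sizes[] = { 8, 16, 32, 64, 96, 128, 192, 256 };
        int i;

        for (i = 0; i < 8; i++)
                printf("kmalloc(%4lu) -> cache index %d\n",
                       sizes[i], kmalloc_index(sizes[i]));
        return 0;
}

Sizes 8, 16 and 32 all report index 5, so only one cache (and one sysfs
entry) is created for them, and the 8 and 16 byte caches that collided
before never come into existence.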

---
 include/linux/slub_def.h |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Index: linux-2.6/include/linux/slub_def.h
===================================================================
--- linux-2.6.orig/include/linux/slub_def.h	2007-06-11 10:50:09.000000000 -0700
+++ linux-2.6/include/linux/slub_def.h	2007-06-11 10:50:58.000000000 -0700
@@ -56,7 +56,8 @@ struct kmem_cache {
 /*
  * Kmalloc subsystem.
  */
-#define KMALLOC_SHIFT_LOW 3
+#define KMALLOC_SHIFT_LOW 5
+
 
 /*
  * We keep the general caches in an array of slab caches that are used for
@@ -76,6 +77,9 @@ static inline int kmalloc_index(size_t s
 	if (size > KMALLOC_MAX_SIZE)
 		return -1;
 
+	if (size <= (1 << KMALLOC_SHIFT_LOW))
+		return KMALLOC_SHIFT_LOW;
+
 	if (size > 64 && size <= 96)
 		return 1;
 	if (size > 128 && size <= 192)
