Message-ID: <20060726085028.GC9592@osiris.boeblingen.de.ibm.com>
Date:	Wed, 26 Jul 2006 10:50:28 +0200
From:	Heiko Carstens <heiko.carstens@...ibm.com>
To:	Christoph Lameter <clameter@....com>
Cc:	Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org,
	Pekka Enberg <penberg@...helsinki.fi>, linux-mm@...ck.org,
	Martin Schwidefsky <schwidefsky@...ibm.com>
Subject: [patch 1/2] slab: always consider caller mandated alignment

From: Heiko Carstens <heiko.carstens@...ibm.com>

In case of CONFIG_DEBUG_SLAB, kmem_cache_create() creates caches with an
alignment smaller than ARCH_KMALLOC_MINALIGN. This breaks s390 (32-bit),
which needs an eight-byte alignment. It also doesn't behave as described
in mm/slab.c:

 * Enforce a minimum alignment for the kmalloc caches.
 * Usually, the kmalloc caches are cache_line_size() aligned, except when
 * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
 * Some archs want to perform DMA into kmalloc caches and need a guaranteed
 * alignment larger than BYTES_PER_WORD. ARCH_KMALLOC_MINALIGN allows that.
 * Note that this flag disables some debug features.

For example, the following can happen if kmem_cache_create() is called
with size 64, align 8, and the flags SLAB_HWCACHE_ALIGN, SLAB_RED_ZONE
and SLAB_STORE_USER set. The steps below are numbered as in
kmem_cache_create(), where 5) is just after the
"if (flags & SLAB_RED_ZONE)" statement:

1) align: 8 ralign 64 
2) align: 8 ralign 64 
3) align: 8 ralign 64 
4) align: 64 ralign 64
5) align: 4 ralign 64 

Note that in this case the flags SLAB_RED_ZONE and SLAB_STORE_USER do
not get masked out in step 3), which causes a BYTES_PER_WORD alignment
in step 5) and breaks s390.
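
To make this concrete, below is a minimal user-space simulation of the
pre-patch sequence. It is a sketch only: the flag values,
cache_line_size() and ARCH_SLAB_MINALIGN are stand-ins rather than the
kernel's definitions, and step 5) is reduced to the single align
assignment that matters here. Compiled and run, it reproduces the trace
above.

/*
 * Minimal user-space simulation of the pre-patch alignment calculation
 * in kmem_cache_create(). Flag values, cache_line_size() and
 * ARCH_SLAB_MINALIGN are stand-ins, not the kernel's definitions.
 */
#include <stdio.h>

#define BYTES_PER_WORD		4UL	/* 32-bit word size */
#define ARCH_SLAB_MINALIGN	8UL	/* stand-in: s390 needs 8 bytes */
#define cache_line_size()	64UL	/* stand-in */

#define SLAB_HWCACHE_ALIGN	0x1UL	/* stand-in flag values */
#define SLAB_RED_ZONE		0x2UL
#define SLAB_STORE_USER		0x4UL

int main(void)
{
	unsigned long size = 64, align = 8, ralign;
	unsigned long flags = SLAB_HWCACHE_ALIGN | SLAB_RED_ZONE |
			      SLAB_STORE_USER;

	/* 1) arch recommendation: can be overridden for debug */
	if (flags & SLAB_HWCACHE_ALIGN) {
		ralign = cache_line_size();
		while (size <= ralign / 2)
			ralign /= 2;
	} else {
		ralign = BYTES_PER_WORD;
	}
	printf("1) align: %lu ralign %lu\n", align, ralign);

	/* 2) arch mandated alignment: disables debug if necessary */
	if (ralign < ARCH_SLAB_MINALIGN) {
		ralign = ARCH_SLAB_MINALIGN;
		if (ralign > BYTES_PER_WORD)
			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	}
	printf("2) align: %lu ralign %lu\n", align, ralign);

	/* 3) caller mandated alignment: ralign (64) already exceeds
	 *    align (8), so the branch is not taken and the debug flags
	 *    survive -- the problem this patch addresses. */
	if (ralign < align) {
		ralign = align;
		if (ralign > BYTES_PER_WORD)
			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	}
	printf("3) align: %lu ralign %lu\n", align, ralign);

	/* 4) Store it. */
	align = ralign;
	printf("4) align: %lu ralign %lu\n", align, ralign);

	/* 5) debug code: red zoning only works with word-aligned
	 *    caches, so the surviving SLAB_RED_ZONE clobbers align. */
	if (flags & SLAB_RED_ZONE)
		align = BYTES_PER_WORD;
	printf("5) align: %lu ralign %lu\n", align, ralign);

	return 0;
}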

Cc: Christoph Lameter <clameter@....com>
Cc: Pekka Enberg <penberg@...helsinki.fi>
Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>
---

 mm/slab.c |    3 +++
 1 file changed, 3 insertions(+)

Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2006-07-24 09:41:36.000000000 +0200
+++ linux-2.6/mm/slab.c	2006-07-26 09:55:54.000000000 +0200
@@ -2109,6 +2109,9 @@
 		if (ralign > BYTES_PER_WORD)
 			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
 	}
+	if (align > BYTES_PER_WORD)
+		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
+
 	/*
 	 * 4) Store it. Note that the debug code below can reduce
 	 *    the alignment to BYTES_PER_WORD.
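
With the additional check in place, align at that point still holds the
caller-mandated value (8 in the example above), which is larger than
BYTES_PER_WORD (4) on 32-bit. SLAB_RED_ZONE and SLAB_STORE_USER
therefore get masked out before step 4), and the debug code in step 5)
can no longer reduce the alignment below what the caller asked for.
Inserting the same three lines before step 4) of the simulation above
makes step 5) print "align: 64 ralign 64".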