Message-ID: <20060726085113.GD9592@osiris.boeblingen.de.ibm.com>
Date:	Wed, 26 Jul 2006 10:51:13 +0200
From:	Heiko Carstens <heiko.carstens@...ibm.com>
To:	Christoph Lameter <clameter@....com>
Cc:	Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org,
	Pekka Enberg <penberg@...helsinki.fi>, linux-mm@...ck.org,
	Martin Schwidefsky <schwidefsky@...ibm.com>
Subject: [patch 2/2] slab: always consider arch mandated alignment

From: Heiko Carstens <heiko.carstens@...ibm.com>

Since ARCH_KMALLOC_MINALIGN didn't work on s390 I tried ARCH_SLAB_MINALIGN
instead, only to find out that it didn't work either.
With CONFIG_DEBUG_SLAB, kmem_cache_create() creates caches with an
alignment less than ARCH_SLAB_MINALIGN, which it shouldn't according to
this comment in mm/slab.c:

 * Enforce a minimum alignment for all caches.
 * Intended for archs that get misalignment faults even for BYTES_PER_WORD
 * aligned buffers. Includes ARCH_KMALLOC_MINALIGN.
 * If possible: Do not enable this flag for CONFIG_DEBUG_SLAB, it disables
 * some debug features.
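
For reference, ARCH_KMALLOC_MINALIGN and ARCH_SLAB_MINALIGN are per-arch
macros that give the architecture's mandated minimum alignment. A minimal
sketch of how an architecture header might provide them is below; the value
of 8 matches the example further down, while the file name and exact
definitions are illustrative only, not the real s390 ones:

	/* hypothetical include/asm-<arch>/cache.h excerpt (illustrative) */
	#define ARCH_KMALLOC_MINALIGN	8	/* minimum alignment of kmalloc() objects */
	#define ARCH_SLAB_MINALIGN	8	/* minimum alignment of all slab cache objects */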

For example, the following can happen if kmem_cache_create() gets called
with size: 64, align: 0, and flags with SLAB_HWCACHE_ALIGN, SLAB_RED_ZONE and
SLAB_STORE_USER set, while ARCH_SLAB_MINALIGN is 8.
These are the steps as numbered in kmem_cache_create(), where 5) is after the
"if (flags & SLAB_RED_ZONE)" statement:

1) align: 0 ralign 64
2) align: 0 ralign 64
3) align: 0 ralign 64
4) align: 64 ralign 64
5) align: 4 ralign 64

Note that in this case the flags SLAB_RED_ZONE and SLAB_STORE_USER don't get
masked out in step 2), which causes a BYTES_PER_WORD alignment in step 5)
that is less than ARCH_SLAB_MINALIGN.
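
To make the walk-through reproducible, here is a standalone sketch of the
five steps. It is a simplified re-implementation for illustration, not the
code from mm/slab.c; BYTES_PER_WORD of 4 and ARCH_SLAB_MINALIGN of 8 are
taken from the example above, while the cache line size of 256 and the flag
bit values are assumed:

/*
 * Simplified, standalone re-implementation of the five alignment steps
 * above -- for illustration only, not the code from mm/slab.c.
 */
#include <stdio.h>

#define BYTES_PER_WORD		4
#define ARCH_SLAB_MINALIGN	8
#define CACHE_LINE_SIZE		256	/* assumed for the example */

#define SLAB_HWCACHE_ALIGN	0x1UL	/* assumed bit values, sketch only */
#define SLAB_RED_ZONE		0x2UL
#define SLAB_STORE_USER		0x4UL

static void trace_align(size_t size, size_t align, unsigned long flags)
{
	size_t ralign;

	/* 1) arch recommendation: can be overridden for debug */
	if (flags & SLAB_HWCACHE_ALIGN) {
		ralign = CACHE_LINE_SIZE;
		while (size <= ralign / 2)
			ralign /= 2;
	} else {
		ralign = BYTES_PER_WORD;
	}
	printf("1) align: %zu ralign %zu\n", align, ralign);

	/* 2) arch mandated alignment: disables debug if necessary */
	if (ralign < ARCH_SLAB_MINALIGN) {
		ralign = ARCH_SLAB_MINALIGN;
		if (ralign > BYTES_PER_WORD)
			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	}
	printf("2) align: %zu ralign %zu\n", align, ralign);

	/* 3) caller mandated alignment: disables debug if necessary */
	if (ralign < align) {
		ralign = align;
		if (ralign > BYTES_PER_WORD)
			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	}
	printf("3) align: %zu ralign %zu\n", align, ralign);

	/* 4) store the computed alignment */
	align = ralign;
	printf("4) align: %zu ralign %zu\n", align, ralign);

	/*
	 * 5) debug code: red zoning only works with word aligned caches,
	 * so the alignment drops to BYTES_PER_WORD here -- which is less
	 * than ARCH_SLAB_MINALIGN, since the flags were never masked out.
	 */
	if (flags & SLAB_RED_ZONE)
		align = BYTES_PER_WORD;
	printf("5) align: %zu ralign %zu\n", align, ralign);
}

int main(void)
{
	/* size 64, no caller alignment, hwcache align + red zone + store user */
	trace_align(64, 0, SLAB_HWCACHE_ALIGN | SLAB_RED_ZONE | SLAB_STORE_USER);
	return 0;
}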

Cc: Christoph Lameter <clameter@....com>
Cc: Pekka Enberg <penberg@...helsinki.fi>
Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>
---

 mm/slab.c |    3 +++
 1 files changed, 3 insertions(+)

Index: linux-2.6/mm/slab.c
===================================================================
--- linux-2.6.orig/mm/slab.c	2006-07-26 09:55:54.000000000 +0200
+++ linux-2.6/mm/slab.c	2006-07-26 09:57:07.000000000 +0200
@@ -2103,6 +2103,9 @@
 		if (ralign > BYTES_PER_WORD)
 			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
 	}
+	if (BYTES_PER_WORD < ARCH_SLAB_MINALIGN)
+		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
+
 	/* 3) caller mandated alignment: disables debug if necessary */
 	if (ralign < align) {
 		ralign = align;
