Date:	Fri, 13 Feb 2009 03:56:41 +0900
From:	Paul Mundt <lethal@...ux-sh.org>
To:	Giuseppe CAVALLARO <peppe.cavallaro@...com>
Cc:	linux-kernel@...r.kernel.org, linux-sh@...r.kernel.org,
	linux-mm@...r.kernel.org
Subject:	Re: [PATCH] slab: fix slab flags for archs that use alignment larger than 64-bit

On Thu, Feb 12, 2009 at 06:51:13PM +0100, Giuseppe CAVALLARO wrote:
> I think this fix is necessary for all the architectures that want to
> perform DMA into kmalloc caches and need a guaranteed alignment
> larger than the alignment of a 64-bit integer.
> An example is the sh architecture, where ARCH_KMALLOC_MINALIGN is L1_CACHE_BYTES.
> 
> As a side effect, these kinds of objects are not visible
> in the /proc/slab_allocators file.
> 
> Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@...com>
> ---
>  mm/slab.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/mm/slab.c b/mm/slab.c
> index ddc41f3..031d785 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2262,7 +2262,7 @@ kmem_cache_create (const char *name, size_t size, size_t align,
>  		ralign = align;
>  	}
>  	/* disable debug if necessary */
> -	if (ralign > __alignof__(unsigned long long))
> +	if (ralign > ARCH_KMALLOC_MINALIGN)
>  		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
>  	/*
>  	 * 4) Store it.
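
To illustrate the DMA requirement the description refers to: on a cache-incoherent architecture, a kmalloc() buffer that is only word-aligned can share an L1 cache line with a neighbouring object, and invalidating that line around a DMA transfer then throws away CPU writes to the neighbour. Below is a small standalone user-space sketch of that situation; it is not kernel code, and the 32-byte line size and the addresses are made up for illustration.

/*
 * Standalone user-space sketch (not kernel code) of why DMA into
 * kmalloc() memory wants ARCH_KMALLOC_MINALIGN == L1_CACHE_BYTES on a
 * cache-incoherent architecture: if a DMA buffer shares an L1 line
 * with a neighbouring object, invalidating that line around the
 * transfer also discards CPU writes to the neighbour.  The 32-byte
 * line size and the addresses are assumptions for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define L1_CACHE_BYTES	32	/* assumed line size */

static int shares_cache_line(uintptr_t buf, size_t len, uintptr_t next)
{
	uintptr_t last_line = (buf + len - 1) & ~(uintptr_t)(L1_CACHE_BYTES - 1);
	uintptr_t next_line = next & ~(uintptr_t)(L1_CACHE_BYTES - 1);

	return last_line == next_line;
}

int main(void)
{
	uintptr_t dma = 0x1000;

	/* 8-byte minalign: a 24-byte DMA buffer packed next to another object. */
	printf("8-byte minalign:    shares a line with its neighbour: %s\n",
	       shares_cache_line(dma, 24, dma + 24) ? "yes (unsafe)" : "no");

	/* Cache-line minalign: start and size both padded to a full line. */
	printf("line-size minalign: shares a line with its neighbour: %s\n",
	       shares_cache_line(dma, 32, dma + 32) ? "yes (unsafe)" : "no");
	return 0;
}

Raising the minimum kmalloc alignment to the cache line size, as sh does with ARCH_KMALLOC_MINALIGN, is what prevents that sharing.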

No, this change in itself is not sufficient. Both the red-zone marker
placement and the user-store placement need to know about the minalign
before slab debug can work correctly.
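
To make the layout problem concrete, here is a rough standalone sketch of how the debug fields wrap an object and why the leading red-zone word would have to be padded out to the minalign, otherwise the pointer handed out for the object loses the architecture-mandated alignment. This is not mm/slab.c itself; the 32-byte minalign and the pad-to-minalign rule are assumptions showing what a minalign-aware layout would have to do.

/*
 * Rough sketch of the slab debug layout issue -- not mm/slab.c itself.
 * With SLAB_RED_ZONE the object is bracketed by red-zone words and
 * SLAB_STORE_USER appends the caller's address.  If the leading red
 * zone is only one word, the object no longer starts on the
 * architecture-mandated minimum alignment.  The 32-byte minalign and
 * the padding rule are assumptions for illustration.
 */
#include <stdio.h>

#define REDZONE_WORD		sizeof(unsigned long long)
#define ARCH_KMALLOC_MINALIGN	32	/* e.g. L1_CACHE_BYTES on sh */

static size_t align_up(size_t x, size_t a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	size_t obj_size = 64;

	/* One leading red-zone word: the object starts at offset 8. */
	size_t naive_offset = REDZONE_WORD;
	printf("one-word red zone: obj_offset = %2zu, minalign honoured: %s\n",
	       naive_offset,
	       naive_offset % ARCH_KMALLOC_MINALIGN ? "no" : "yes");

	/* Minalign-aware layout: pad the leading red zone to minalign and
	 * round the object size up before the trailing red zone / caller. */
	size_t fixed_offset = align_up(REDZONE_WORD, ARCH_KMALLOC_MINALIGN);
	size_t total = fixed_offset
		       + align_up(obj_size, ARCH_KMALLOC_MINALIGN)
		       + REDZONE_WORD		/* trailing red zone */
		       + sizeof(void *);	/* caller address (user store) */

	printf("minalign-aware:    obj_offset = %2zu, per-object size = %zu\n",
	       fixed_offset, total);
	return 0;
}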

I last looked at this when introducing ARCH_SLAB_MINALIGN:

http://article.gmane.org/gmane.linux.kernel/262528

But it would need some rework for the current slab code.

Note that the ARCH_KMALLOC_MINALIGN value has no meaning here, as this
relates to slab caches in general, of which kmalloc just happens to have
a few. This is also why the rest of the kmem_cache_create() code
references ARCH_SLAB_MINALIGN in the first place. But that in itself is
irrelevant since for the kmalloc slab caches, ARCH_KMALLOC_MINALIGN is
already passed in as the align value for kmem_cache_create(), so ralign
is already set to L1_CACHE_BYTES immediately before that check.
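
A condensed, standalone rendering of that alignment negotiation, for reference; this is a sketch rather than the literal mm/slab.c code, and the 8-byte ARCH_SLAB_MINALIGN and 32-byte line size are example values:

/*
 * Condensed, standalone sketch of the alignment negotiation in
 * kmem_cache_create() -- not the literal mm/slab.c code.
 * ARCH_SLAB_MINALIGN and L1_CACHE_BYTES are example values.
 */
#include <stdio.h>

#define ARCH_SLAB_MINALIGN	8		/* example arch mandated minimum */
#define L1_CACHE_BYTES		32		/* example sh cache line size    */
#define ARCH_KMALLOC_MINALIGN	L1_CACHE_BYTES	/* passed as align for kmalloc caches */

static size_t effective_align(size_t align)
{
	size_t ralign = sizeof(void *);		/* 1) default word alignment    */

	if (ralign < ARCH_SLAB_MINALIGN)	/* 2) arch mandated alignment   */
		ralign = ARCH_SLAB_MINALIGN;
	if (ralign < align)			/* 3) caller mandated alignment */
		ralign = align;
	return ralign;
}

int main(void)
{
	size_t ralign = effective_align(ARCH_KMALLOC_MINALIGN);

	/* The check the patch touches: with the original constant, any
	 * cache whose alignment exceeds that of unsigned long long loses
	 * SLAB_RED_ZONE and SLAB_STORE_USER. */
	printf("kmalloc cache: ralign = %zu, debug disabled = %d\n",
	       ralign, ralign > __alignof__(unsigned long long));
	return 0;
}

With the original constant this clears SLAB_RED_ZONE and SLAB_STORE_USER for every kmalloc cache on sh; the patch would keep them set, which is exactly where the red-zone and user-store placement problem above comes in.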

What exactly are you having problems with that made you come up with this
patch? It would be helpful to know precisely what your issues are, as
this change in itself is only related to slab debug, and not general
operation.
