Message-ID: <alpine.DEB.2.00.1203200910030.19333@router.home>
Date:	Tue, 20 Mar 2012 09:14:44 -0500 (CDT)
From:	Christoph Lameter <cl@...ux.com>
To:	Lai Jiangshan <laijs@...fujitsu.com>
cc:	Pekka Enberg <penberg@...nel.org>, Matt Mackall <mpm@...enic.com>,
	Tejun Heo <tj@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH 2/6] slub: add kmalloc_align()

On Tue, 20 Mar 2012, Lai Jiangshan wrote:

> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index a32bcfd..67ac6b4 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -280,6 +280,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
>  	return __kmalloc(size, flags);
>  }
>
> +static __always_inline
> +void *kmalloc_align(size_t size, gfp_t flags, size_t align)
> +{
> +	return kmalloc(ALIGN(size, align), flags);
> +}

This assumes that kmalloc allocates aligned memory, which it does only
in special cases (power-of-two cache sizes with slab debugging off).

>  #ifdef CONFIG_NUMA
>  void *__kmalloc_node(size_t size, gfp_t flags, int node);
>  void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);
> diff --git a/mm/slub.c b/mm/slub.c
> index 4907563..01cf99d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3238,7 +3238,7 @@ static struct kmem_cache *__init create_kmalloc_cache(const char *name,
>  	 * This function is called with IRQs disabled during early-boot on
>  	 * single CPU so there's no need to take slub_lock here.
>  	 */
> -	if (!kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
> +	if (!kmem_cache_open(s, name, size, ALIGN_OF_LAST_BIT(size),
>  								flags, NULL))
>  		goto panic;

Why does the alignment of struct kmem_cache change? I'd rather have
__alignof__(struct kmem_cache) here, with the alignment specified on
the struct definition.
