Date:	Wed, 19 May 2010 20:03:00 +0200
From:	Manfred Spraul <manfred@...orfullife.com>
To:	Johannes Stezenbach <js@...21.net>
CC:	David Woodhouse <dwmw2@...radead.org>,
	Herbert Xu <herbert@...dor.hengli.com.au>,
	David Miller <davem@...emloft.net>, penberg@...helsinki.fi,
	mpm@...enic.com, ken@...elabs.ch, geert@...ux-m68k.org,
	michael-dev@...i-braun.de, linux-kernel@...r.kernel.org,
	linux-crypto@...r.kernel.org, anemo@....ocn.ne.jp
Subject: Re: [PATCH 1/4] mm: Move ARCH_SLAB_MINALIGN and ARCH_KMALLOC_MINALIGN
 to <linux/slab_def.h>

On 05/19/2010 03:30 PM, Johannes Stezenbach wrote:
> Hi,
>
> I have some comments/questions, I hope it's not too silly:
>
> On Wed, May 19, 2010 at 12:01:42PM +0100, David Woodhouse wrote:
>    
>> +#ifndef ARCH_KMALLOC_MINALIGN
>> +/*
>> + * Enforce a minimum alignment for the kmalloc caches.
>> + * Usually, the kmalloc caches are cache_line_size() aligned, except when
>> + * DEBUG and FORCED_DEBUG are enabled, then they are BYTES_PER_WORD aligned.
>> + * Some archs want to perform DMA into kmalloc caches and need a guaranteed
>> + * alignment larger than the alignment of a 64-bit integer.
>> + * ARCH_KMALLOC_MINALIGN allows that.
>> + * Note that increasing this value may disable some debug features.
>> + */
>> +#define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
>> +#endif
>>      
> I think the comment is confusing.  IIRC kmalloc() API guarantees that
> the allocated buffer is suitable for DMA, so if cache coherence is not
> handled by hardware the arch might need to set this to the cache line size,
> and that's what ARCH_KMALLOC_MINALIGN is about. Nothing else.
>    
Is this text better?

/*
 * Enforce a minimum alignment for the kmalloc caches.
 * kmalloc allocations are guaranteed to be BYTES_PER_WORD (sizeof(void *))
 * aligned.
 * If an arch needs a larger guarantee (e.g. cache_line_size() due to DMA),
 * then it must use ARCH_KMALLOC_MINALIGN to enforce that.
 * Note: do not set ARCH_KMALLOC_MINALIGN for performance reasons.
 * Unless debug options are enabled, the kernel uses cache_line_size()
 * automatically.
 */
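
For illustration, a minimal sketch of the kind of override an arch with
non-coherent DMA might use; the header path and cache-line size here are
assumptions for the example, not taken from this patch:

/* Hypothetical arch header, e.g. arch/foo/include/asm/cache.h:
 * force kmalloc buffers to be cache-line aligned so that a DMA target
 * never shares a cache line with unrelated data.
 */
#define L1_CACHE_SHIFT		5
#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
#define ARCH_KMALLOC_MINALIGN	L1_CACHE_BYTES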

>> +#ifndef ARCH_SLAB_MINALIGN
>> +/*
>> + * Enforce a minimum alignment for all caches.
>> + * Intended for archs that get misalignment faults even for BYTES_PER_WORD
>> + * aligned buffers. Includes ARCH_KMALLOC_MINALIGN.
>> + * If possible: Do not enable this flag for CONFIG_DEBUG_SLAB, it disables
>> + * some debug features.
>> + */
>> +#define ARCH_SLAB_MINALIGN 0
>> +#endif
>>      
> Why is this needed at all?  If code calls kmem_cache_create()
> with wrong align parameter, or has wrong expectations wrt kmalloc()
> alignment guarantees, this code needs to be fixed?
> I mean, portable code cannot assume that unaligned accesses work?
>    
ARM uses 8 bytes; I don't know why.
Perhaps because it is a 32-bit arch where unaligned 64-bit loads are not supported?
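
To make the failure mode concrete, a minimal sketch (the struct and cache
name are made up for the example) of a slab object that would trip over a
too-small ARCH_SLAB_MINALIGN on such an arch:

#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical slab object starting with a 64-bit field.  If the
 * effective alignment is only 4 bytes, obj->counter can land on a
 * 4-byte boundary and reading it becomes an unaligned 64-bit load,
 * which faults on archs that cannot do those.
 */
struct foo_obj {
	u64	counter;
	int	state;
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
	/* align == 0: fall back to the arch minimum (ARCH_SLAB_MINALIGN) */
	foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo_obj),
				      0, 0, NULL);
	return foo_cache ? 0 : -ENOMEM;
}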

--
     Manfred
