Message-ID: <AANLkTinqgNKYLo2Ux_5Bsx=K=cgUB4gT7Djpnfbp6TGh@mail.gmail.com>
Date:	Tue, 7 Dec 2010 11:12:37 +0000
From:	Will Newton <will.newton@...il.com>
To:	Linux Kernel list <linux-kernel@...r.kernel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH] include/linux/unaligned: Pack the whole struct rather
 than just the field.

On Wed, Dec 1, 2010 at 10:11 PM, Will Newton <will.newton@...il.com> wrote:
> The current packed struct implementation of unaligned access adds
> the packed attribute only to the field within the unaligned struct
> rather than to the struct as a whole. This is not sufficient to
> enforce proper behaviour on architectures with a default struct
> alignment of more than one byte.
>
> For example, the current implementation of __get_unaligned_cpu16,
> when compiled for arm with gcc -O1 -mstructure-size-boundary=32,
> assumes the struct is on a 4-byte boundary and so performs the load
> of the 16-bit packed field as if it were on a 4-byte boundary:
>
> __get_unaligned_cpu16:
>        ldrh    r0, [r0, #0]
>        bx      lr
>
> Moving the packed attribute to the struct rather than the field
> causes the proper unaligned access code to be generated:
>
> __get_unaligned_cpu16:
>        ldrb    r3, [r0, #0]    @ zero_extendqisi2
>        ldrb    r0, [r0, #1]    @ zero_extendqisi2
>        orr     r0, r3, r0, asl #8
>        bx      lr
>
> Signed-off-by: Will Newton <will.newton@...il.com>

I don't know of a designated maintainer for this code, so I have added
a couple of CCs for people who might be interested.

This change doesn't alter behaviour on any current architecture, but it
is more correct and may allow other architectures to adopt the packed
struct implementation of unaligned access, which in turn would let us
move more architectures over to the asm-generic implementation.
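
For reference, here is a minimal userspace sketch of the packed-struct
technique (illustration only, not the kernel code; the struct and helper
names below are made up):

  #include <stdint.h>
  #include <stdio.h>

  /* Struct-level packed: the compiler must assume the object may sit at
   * any byte offset, so it emits an alignment-safe load. */
  struct una_u16 { uint16_t x; } __attribute__((packed));

  static inline uint16_t get_unaligned_u16(const void *p)
  {
          const struct una_u16 *ptr = p;
          return ptr->x;
  }

  int main(void)
  {
          unsigned char buf[] = { 0xaa, 0x34, 0x12, 0x00 };

          /* buf + 1 is misaligned for a 16-bit load; on a strict-alignment
           * target this compiles to byte loads plus an orr, as in the
           * second assembly listing above. */
          printf("0x%04x\n", get_unaligned_u16(buf + 1));
          return 0;
  }

On a little-endian machine this prints 0x1234.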

> ---
>  include/linux/unaligned/packed_struct.h |    6 +++---
>  1 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/unaligned/packed_struct.h b/include/linux/unaligned/packed_struct.h
> index 2498bb9..c9a6abd 100644
> --- a/include/linux/unaligned/packed_struct.h
> +++ b/include/linux/unaligned/packed_struct.h
> @@ -3,9 +3,9 @@
>
>  #include <linux/kernel.h>
>
> -struct __una_u16 { u16 x __attribute__((packed)); };
> -struct __una_u32 { u32 x __attribute__((packed)); };
> -struct __una_u64 { u64 x __attribute__((packed)); };
> +struct __una_u16 { u16 x; } __attribute__((packed));
> +struct __una_u32 { u32 x; } __attribute__((packed));
> +struct __una_u64 { u64 x; } __attribute__((packed));
>
>  static inline u16 __get_unaligned_cpu16(const void *p)
>  {
> --
> 1.7.0.4
>
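
A quick way to see why attaching the attribute to the struct matters is
to compare what the compiler assumes about the two variants. This is
again just a standalone sketch rather than part of the patch, with
made-up struct names:

  #include <stdint.h>
  #include <stdio.h>

  /* Old style: only the field carries the attribute. */
  struct field_packed  { uint16_t x __attribute__((packed)); };

  /* New style: the whole struct carries it. */
  struct struct_packed { uint16_t x; } __attribute__((packed));

  int main(void)
  {
          /* On targets where plain structs are padded or aligned to more
           * than one byte (e.g. arm with -mstructure-size-boundary=32),
           * only the struct-level attribute forces the compiler to treat
           * the object as 1-byte aligned. */
          printf("field-level:  align %zu, size %zu\n",
                 __alignof__(struct field_packed),
                 sizeof(struct field_packed));
          printf("struct-level: align %zu, size %zu\n",
                 __alignof__(struct struct_packed),
                 sizeof(struct struct_packed));
          return 0;
  }

Building it for arm with the -mstructure-size-boundary=32 flag mentioned
in the changelog makes the contrast most visible.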
