Message-ID: <20180128090250.3gxq2uoebiwh4who@gmail.com>
Date: Sun, 28 Jan 2018 10:02:50 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Dan Williams <dan.j.williams@...el.com>
Cc: tglx@...utronix.de, linux-arch@...r.kernel.org,
kernel-hardening@...ts.openwall.com, gregkh@...uxfoundation.org,
x86@...nel.org, Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, torvalds@...ux-foundation.org,
alan@...ux.intel.com
Subject: Re: [PATCH v5 03/12] x86: implement array_idx_mask
* Dan Williams <dan.j.williams@...el.com> wrote:
> 'array_idx' uses a mask to sanitize user-controllable array indexes,
> i.e. it generates a 0 mask if idx >= sz, and a ~0 mask otherwise. The
> default 'array_idx_mask' handles the carry-bit from the (index - size)
> result in software, while the x86 'array_idx_mask' does the same but
> handles the carry-bit via the processor's CF flag, without conditional
> instructions in the control flow.
Same style comments apply as for patch 02.
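
As an aside, the branch-free mask described in the changelog can be
illustrated with a small stand-alone user-space sketch. The function name,
expression and test values below are illustrative only and are not taken
from the generic implementation in the series:

        /*
         * Stand-alone user-space sketch of the branch-free mask described
         * in the changelog above: ~0UL when idx < sz, 0 otherwise. The
         * expression assumes idx and sz are below LONG_MAX and relies on
         * arithmetic right shift of negative values (gcc/clang behaviour).
         */
        #include <assert.h>
        #include <limits.h>

        static unsigned long demo_idx_mask(unsigned long idx, unsigned long sz)
        {
                /* (sz - 1 - idx) wraps and sets the sign bit when idx >= sz */
                return ~(long)(idx | (sz - 1 - idx)) >> (sizeof(long) * CHAR_BIT - 1);
        }

        int main(void)
        {
                assert(demo_idx_mask(0, 4) == ~0UL);    /* in bounds   -> all ones */
                assert(demo_idx_mask(3, 4) == ~0UL);    /* last valid  -> all ones */
                assert(demo_idx_mask(4, 4) == 0);       /* idx == sz   -> zero     */
                assert(demo_idx_mask(99, 4) == 0);      /* well past   -> zero     */
                return 0;
        }
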
> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: x86@...nel.org
> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
> ---
> arch/x86/include/asm/barrier.h | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
> index 01727dbc294a..30419b674ebd 100644
> --- a/arch/x86/include/asm/barrier.h
> +++ b/arch/x86/include/asm/barrier.h
> @@ -24,6 +24,28 @@
> #define wmb() asm volatile("sfence" ::: "memory")
> #endif
>
> +/**
> + * array_idx_mask - generate a mask for array_idx() that is ~0UL when
> + * the bounds check succeeds and 0 otherwise
> + *
> + * mask = 0 - (idx < sz);
> + */
> +#define array_idx_mask array_idx_mask
> +static inline unsigned long array_idx_mask(unsigned long idx, unsigned long sz)
Please put an extra newline between the two definitions (even if they are
closely related, as these are).
> +{
> + unsigned long mask;
> +
> +#ifdef CONFIG_X86_32
> + asm ("cmpl %1,%2; sbbl %0,%0;"
> +#else
> + asm ("cmpq %1,%2; sbbq %0,%0;"
> +#endif
Wouldn't this suffice:

        asm ("cmp %1,%2; sbb %0,%0;"

... as the word width should automatically be 32 bits on 32-bit kernels and
64 bits on 64-bit kernels?
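
Something along these lines would then cover both word widths with a single
asm statement. This is only a stand-alone sketch; the operand constraints
are illustrative, since the quoted hunk ends before the patch's own
constraint list:

        /*
         * Suffix-less variant: cmp sets CF when idx < sz (unsigned borrow),
         * and sbb of a register with itself then yields ~0UL with CF set
         * and 0 otherwise. With register operands the assembler picks the
         * 32- vs 64-bit operand size from the registers themselves, so no
         * l/q suffix and no #ifdef is needed.
         */
        static inline unsigned long demo_mask_cmp_sbb(unsigned long idx,
                                                      unsigned long sz)
        {
                unsigned long mask;

                asm ("cmp %1,%2; sbb %0,%0;"
                        : "=r" (mask)
                        : "r" (sz), "r" (idx)
                        : "cc");
                return mask;
        }

Building that at -O2 for both -m32 and -m64 and inspecting the disassembly
should show cmpl/sbbl and cmpq/sbbq respectively.
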
Thanks,
Ingo