Message-ID: <CAKv+Gu-PLsWZLPM-OfXHLGKs7PojRB4jFsyp+X_5OW6ryi7gRQ@mail.gmail.com>
Date: Wed, 11 Jul 2018 09:20:03 +0200
From: Ard Biesheuvel <ard.biesheuvel@...aro.org>
To: "Yandong.Zhao" <yandong77520@...il.com>,
Dave Martin <Dave.Martin@....com>
Cc: linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Will Deacon <will.deacon@....com>,
Catalin Marinas <catalin.marinas@....com>,
zhaoyd@...ndersoft.com, zhaoxb@...ndersoft.com,
fanlc0801@...ndersoft.com
Subject: Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On 11 July 2018 at 03:09, Yandong.Zhao <yandong77520@...il.com> wrote:
> From: Yandong Zhao <yandong77520@...il.com>
>
> It does not matter if the caller of may_use_simd() migrates to
> another cpu after the call, but it is still important that the
> kernel_neon_busy percpu instance that is read matches the cpu the
> task is running on at the time of the read.
>
> This means that raw_cpu_read() is not sufficient. kernel_neon_busy
> may appear true if the caller migrates during the execution of
> raw_cpu_read() and the next task to be scheduled in on the initial
> cpu calls kernel_neon_begin().
>
> This patch replaces raw_cpu_read() with this_cpu_read() to protect
> against this race.
>
> Signed-off-by: Yandong Zhao <yandong77520@...il.com>
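The race is easiest to see spelled out against the old code (a rough
sketch -- the interleaving comments are mine, the symbols are the real
ones):

	static __must_check inline bool may_use_simd(void)
	{
		/*
		 * With raw_cpu_read(), nothing stops this sequence:
		 *
		 *  1. we start the per-cpu access on CPU0;
		 *  2. we are preempted and migrated to CPU1;
		 *  3. the next task on CPU0 calls kernel_neon_begin(),
		 *     setting CPU0's kernel_neon_busy;
		 *  4. our load completes against CPU0's copy, so we
		 *     return false even though kernel-mode NEON is
		 *     perfectly usable on CPU1, where we now run.
		 */
		return !in_irq() && !irqs_disabled() && !in_nmi() &&
		       !raw_cpu_read(kernel_neon_busy);
	}

With this_cpu_read() the access itself is performed with preemption
disabled, so the value we test always belongs to the CPU the task is
running on at that instant.
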
I had a bit of trouble disentangling the per-cpu spaghetti to decide
whether this may trigger warnings when CONFIG_DEBUG_PREEMPT=y, but I
don't think so. So assuming this is *not* the case:
Acked-by: Ard Biesheuvel <ard.biesheuvel@...aro.org>
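
For the record, the reason I don't expect a splat: only the
__this_cpu_*() accessors call __this_cpu_preempt_check(), while
this_cpu_read() is the preemption-safe variant. Paraphrasing the
generic fallback from include/asm-generic/percpu.h from memory (arm64
carries its own implementation, but with the same
preempt_disable_notrace() bracketing):

	#define this_cpu_generic_read(pcp)				\
	({								\
		typeof(pcp) __ret;					\
		preempt_disable_notrace();				\
		__ret = raw_cpu_generic_read(pcp);			\
		preempt_enable_notrace();				\
		__ret;							\
	})

so calling it with preemption enabled is exactly what it is meant for,
and CONFIG_DEBUG_PREEMPT should have nothing to complain about.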
> ---
> arch/arm64/include/asm/simd.h | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> index fa8b3fe..784a8c2 100644
> --- a/arch/arm64/include/asm/simd.h
> +++ b/arch/arm64/include/asm/simd.h
> @@ -29,7 +29,8 @@
> static __must_check inline bool may_use_simd(void)
> {
> /*
> - * The raw_cpu_read() is racy if called with preemption enabled.
> + * The this_cpu_read() is racy if called with preemption enabled,
> + * since the task may subsequently migrate to another CPU.
> * This is not a bug: kernel_neon_busy is only set when
> * preemption is disabled, so we cannot migrate to another CPU
> * while it is set, nor can we migrate to a CPU where it is set.
> @@ -42,7 +43,7 @@ static __must_check inline bool may_use_simd(void)
> * false.
> */
> return !in_irq() && !irqs_disabled() && !in_nmi() &&
> - !raw_cpu_read(kernel_neon_busy);
> + !this_cpu_read(kernel_neon_busy);
> }
>
> #else /* ! CONFIG_KERNEL_MODE_NEON */
> --
> 1.9.1
>
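As a reminder of why the stale value matters to callers (a made-up
example, not code from the tree -- only may_use_simd(),
kernel_neon_begin() and kernel_neon_end() are the real API; struct foo
and the do_work_*() helpers are hypothetical):

	#include <asm/neon.h>
	#include <asm/simd.h>

	static void do_work(struct foo *f)	/* hypothetical caller */
	{
		if (may_use_simd()) {
			kernel_neon_begin();
			do_work_neon(f);	/* NEON-accelerated path */
			kernel_neon_end();
		} else {
			do_work_scalar(f);	/* fallback, e.g. hardirq context */
		}
	}

A spuriously true kernel_neon_busy only sends such a caller down the
scalar fallback, so nothing is corrupted -- it just defeats the point
of having the NEON path, which is what this patch addresses.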