Date:   Tue, 15 Aug 2023 09:12:48 -0700
From:   Vineet Gupta <vgupta@...nel.org>
To:     Pavel.Kozlov@...opsys.com, linux-snps-arc@...ts.infradead.org
Cc:     linux-kernel@...r.kernel.org, vgupta@...nel.org,
        stable@...r.kernel.org
Subject: Re: [PATCH] ARC: avoid unwanted gcc optimizations in atomic operations



On 8/15/23 08:11, Pavel.Kozlov@...opsys.com wrote:
> From: Pavel Kozlov <pavel.kozlov@...opsys.com>
>
> Notify the compiler about write operations and prevent unwanted
> optimizations. Add the "memory" clobber to the clobber list.
>
> An obvious problem with unwanted compiler optimizations appeared after
> the cpumask optimization commit 596ff4a09b89 ("cpumask: re-introduce
> constant-sized cpumask optimizations").
>
> After this commit, SMP kernels for ARC no longer boot because of a
> failed assertion in the percpu allocator initialization routine:
>
> percpu: BUG: failure at mm/percpu.c:2981/pcpu_build_alloc_info()!
>
> The write performed by the scond instruction in the atomic inline asm
> is not visible to the compiler. As a result, the compiler
> mis-optimizes a nested loop that iterates over the cpumask in the
> pcpu_build_alloc_info() function.
>
> Add the "memory" clobber to fix this.
>
> Link: https://github.com/foss-for-synopsys-dwc-arc-processors/linux/issues/135
> Cc: <stable@...r.kernel.org> # v6.3+
> Signed-off-by: Pavel Kozlov <pavel.kozlov@...opsys.com>
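
For illustration, the fix amounts to adding "memory" to the clobber
list of the llock/scond loops. A sketch modeled on the ARC atomic op
pattern -- simplified, names illustrative, not the exact hunk from the
patch:

static inline void atomic_add_sketch(int i, atomic_t *v)
{
	unsigned int val;

	__asm__ __volatile__(
	"1:	llock   %[val], [%[ctr]]	\n"
	"	add     %[val], %[val], %[i]	\n"
	"	scond   %[val], [%[ctr]]	\n"
	"	bnz     1b			\n"
	: [val] "=&r" (val)		/* early clobber */
	: [ctr] "r"   (&v->counter),
	  [i]   "ir"  (i)
	: "cc", "memory");	/* "memory": the asm writes v->counter, so the
				 * compiler must not keep stale copies of
				 * memory in registers across this statement */
}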


Acked-by: Vineet Gupta <vgupta@...nel.org>

Fixes: b64be6836993c431e ("ARC: atomics: implement relaxed variants")

Before that commit, the atomic ops could elide the "memory" clobber
because the trailing smp_mb() provided it anyway.
However, after that commit the relaxed variants no longer end in an
smp_mb(), so the inline asm itself needs the clobber.
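
To make that concrete (a sketch of the usual fully-ordered-on-top-of-
relaxed construction; the function names here are illustrative, and it
assumes ARC's smp_mb() is an asm barrier carrying its own "memory"
clobber, roughly asm volatile("dmb 3" ::: "memory")):

static inline int atomic_add_return_sketch(int i, atomic_t *v)
{
	int ret;

	smp_mb();				/* compiler + HW barrier */
	ret = atomic_add_return_relaxed(i, v);	/* llock/scond asm */
	smp_mb();				/* its "memory" clobber is what
						 * hid the missing clobber in
						 * the asm itself */
	return ret;
}

A _relaxed op used on its own has no surrounding smp_mb(), so its
inline asm must carry the "memory" clobber itself.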
