Date: Wed, 20 Mar 2024 18:27:26 +0800
From: Xi Ruoyao <xry111@...111.site>
To: Huacai Chen <chenhuacai@...ngson.cn>, Arnd Bergmann <arnd@...db.de>, 
 Huacai Chen <chenhuacai@...nel.org>
Cc: loongarch@...ts.linux.dev, linux-arch@...r.kernel.org, Xuefeng Li
	 <lixuefeng@...ngson.cn>, Guo Ren <guoren@...nel.org>, Xuerui Wang
	 <kernel@...0n.name>, Jiaxun Yang <jiaxun.yang@...goat.com>, 
	linux-kernel@...r.kernel.org, loongson-kernel@...ts.loongnix.cn, 
	stable@...r.kernel.org, Xiaotian Wu <wuxiaotian@...ngson.cn>, Miao Wang
	 <shankerwangmiao@...il.com>, Xing Li <lixing@...ngson.cn>, Hongchen Zhang
	 <zhanghongchen@...ngson.cn>, Rui Wang <wangrui@...ngson.cn>
Subject: Re: [PATCH] LoongArch: Change __my_cpu_offset definition to avoid
 mis-optimization

On Fri, 2024-03-15 at 10:45 +0800, Huacai Chen wrote:
> From GCC commit 3f13154553f8546a ("df-scan: remove ad-hoc handling of
> global regs in asms"), global registers are no longer forcibly added
> to the def-use chain. As a result, current_thread_info(),
> current_stack_pointer and __my_cpu_offset may be lifted out of loops
> because they are no longer treated as "volatile variables".

Oops...  I wonder why this issue hasn't blown up our systems before.
The referenced GCC commit far predates the tape-out of LoongArch CPUs.

> This optimization is still correct for the current_thread_info() and
> current_stack_pointer usages because they are associated with a thread.
> However, it is wrong for __my_cpu_offset because it is associated with
> a CPU rather than a thread: if the thread migrates to a different CPU
> within the loop, __my_cpu_offset must change accordingly.
> 
> Change __my_cpu_offset definition to treat it as a "volatile variable",
> in order to avoid such a mis-optimization.
> 
> Cc: stable@...r.kernel.org

I suppose we should add Fixes: 5b0b14e550a0 ("LoongArch: Add
atomic/locking header") here.
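For readers following along, the hazard can be sketched in userspace C
(hypothetical names; the real __my_cpu_offset lives in a fixed register,
which is exactly what lets recent GCC treat the read as loop-invariant
and hoist it):

```c
#include <assert.h>

/* Simulated per-CPU data: one counter slot per "CPU". */
static unsigned long percpu_counter[2];
/* Simulated per-CPU base offset; on LoongArch this lives in $r21. */
static unsigned long cpu_offset;

/* Hypothetical accessor.  If the compiler hoists the cpu_offset read
 * out of a loop, a migration that changes cpu_offset mid-loop would
 * go unnoticed and the wrong CPU's slot would be touched. */
static unsigned long *this_cpu_slot(void)
{
	return &percpu_counter[cpu_offset];
}

/* Drives the scenario: increment on CPU 0, "migrate", increment on
 * CPU 1.  Returns 1 iff each CPU's slot saw exactly one increment. */
static int migration_scenario(void)
{
	cpu_offset = 0;			/* running on CPU 0 */
	(*this_cpu_slot())++;
	cpu_offset = 1;			/* thread migrates to CPU 1 */
	(*this_cpu_slot())++;		/* must hit CPU 1's slot */
	return percpu_counter[0] == 1 && percpu_counter[1] == 1;
}
```

With a plain global like this the compiler must reload anyway; with a
fixed-register variable, only an explicit barrier guarantees it.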

> Reported-by: Xiaotian Wu <wuxiaotian@...ngson.cn>
> Reported-by: Miao Wang <shankerwangmiao@...il.com>
> Signed-off-by: Xing Li <lixing@...ngson.cn>
> Signed-off-by: Hongchen Zhang <zhanghongchen@...ngson.cn>
> Signed-off-by: Rui Wang <wangrui@...ngson.cn>
> Signed-off-by: Huacai Chen <chenhuacai@...ngson.cn>
> ---
>  arch/loongarch/include/asm/percpu.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/loongarch/include/asm/percpu.h b/arch/loongarch/include/asm/percpu.h
> index 9b36ac003f89..03b98491d301 100644
> --- a/arch/loongarch/include/asm/percpu.h
> +++ b/arch/loongarch/include/asm/percpu.h
> @@ -29,7 +29,12 @@ static inline void set_my_cpu_offset(unsigned long off)
>  	__my_cpu_offset = off;
>  	csr_write64(off, PERCPU_BASE_KS);
>  }
> -#define __my_cpu_offset __my_cpu_offset
> +
> +#define __my_cpu_offset					\
> +({							\
> +	__asm__ __volatile__("":"+r"(__my_cpu_offset));	\
> +	__my_cpu_offset;				\
> +})
>  
>  #define PERCPU_OP(op, asm_op, c_op)					\
>  static __always_inline unsigned long __percpu_##op(void *ptr,		\

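The empty-asm idiom in the patch can be exercised standalone.  A rough
equivalent with an ordinary global follows (the real code applies "+r"
to a fixed-register variable, which is what makes the barrier necessary
in the first place):

```c
#include <assert.h>

static unsigned long my_cpu_offset;

/* Mirrors the patched __my_cpu_offset: the empty asm with a "+r"
 * operand is an opaque use-and-def of the value, so GCC must
 * re-materialize it at every expansion instead of hoisting a single
 * read out of a loop. */
#define read_my_cpu_offset()					\
({								\
	__asm__ __volatile__("" : "+r"(my_cpu_offset));		\
	my_cpu_offset;						\
})

/* Returns 1 iff each read observes the value current at that point,
 * i.e. the second read is not satisfied from the first. */
static int barrier_reload_works(void)
{
	unsigned long a, b;

	my_cpu_offset = 0x100;
	a = read_my_cpu_offset();
	my_cpu_offset = 0x200;		/* e.g. after a migration */
	b = read_my_cpu_offset();
	return a == 0x100 && b == 0x200;
}
```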
-- 
Xi Ruoyao <xry111@...111.site>
School of Aerospace Science and Technology, Xidian University
