Message-ID: <c491ab26-dc44-4bc9-b481-29b4ba62f658@linux.dev>
Date: Fri, 23 May 2025 14:43:23 +0800
From: Yanteng Si <si.yanteng@...ux.dev>
To: Huacai Chen <chenhuacai@...ngson.cn>, Huacai Chen <chenhuacai@...nel.org>
Cc: loongarch@...ts.linux.dev, Xuefeng Li <lixuefeng@...ngson.cn>,
Guo Ren <guoren@...nel.org>, Xuerui Wang <kernel@...0n.name>,
Jiaxun Yang <jiaxun.yang@...goat.com>, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, WANG Rui <wangrui@...ngson.cn>
Subject: Re: [PATCH] LoongArch: Avoid using $r0/$r1 as "mask" for csrxchg
On 5/22/25 8:50 PM, Huacai Chen wrote:
> When building the kernel with LLVM, errors like the following occasionally
> appear:
>
> In file included from ./include/linux/spinlock.h:59:
> In file included from ./include/linux/irqflags.h:17:
> arch/loongarch/include/asm/irqflags.h:38:3: error: must not be $r0 or $r1
> 38 | "csrxchg %[val], %[mask], %[reg]\n\t"
> | ^
> <inline asm>:1:16: note: instantiated into assembly here
> 1 | csrxchg $a1, $ra, 0
> | ^
>
> The "mask" operand of the csrxchg instruction must not be $r0 or $r1, but
> the compiler currently cannot be prevented from allocating those registers
> for it. So force the mask into t0 in the inline asm, in order to avoid
> $r0/$r1.
>
> Cc: stable@...r.kernel.org
> Suggested-by: WANG Rui <wangrui@...ngson.cn>
> Signed-off-by: Huacai Chen <chenhuacai@...ngson.cn>
Reviewed-by: Yanteng Si <si.yanteng@...ux.dev>
Thanks,
Yanteng
> ---
> arch/loongarch/include/asm/irqflags.h | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/arch/loongarch/include/asm/irqflags.h b/arch/loongarch/include/asm/irqflags.h
> index 319a8c616f1f..003172b8406b 100644
> --- a/arch/loongarch/include/asm/irqflags.h
> +++ b/arch/loongarch/include/asm/irqflags.h
> @@ -14,40 +14,48 @@
> static inline void arch_local_irq_enable(void)
> {
> u32 flags = CSR_CRMD_IE;
> + register u32 mask asm("t0") = CSR_CRMD_IE;
> +
> __asm__ __volatile__(
> "csrxchg %[val], %[mask], %[reg]\n\t"
> : [val] "+r" (flags)
> - : [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
> + : [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
> : "memory");
> }
>
> static inline void arch_local_irq_disable(void)
> {
> u32 flags = 0;
> + register u32 mask asm("t0") = CSR_CRMD_IE;
> +
> __asm__ __volatile__(
> "csrxchg %[val], %[mask], %[reg]\n\t"
> : [val] "+r" (flags)
> - : [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
> + : [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
> : "memory");
> }
>
> static inline unsigned long arch_local_irq_save(void)
> {
> u32 flags = 0;
> + register u32 mask asm("t0") = CSR_CRMD_IE;
> +
> __asm__ __volatile__(
> "csrxchg %[val], %[mask], %[reg]\n\t"
> : [val] "+r" (flags)
> - : [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
> + : [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
> : "memory");
> return flags;
> }
>
> static inline void arch_local_irq_restore(unsigned long flags)
> {
> + register u32 mask asm("t0") = CSR_CRMD_IE;
> +
> __asm__ __volatile__(
> "csrxchg %[val], %[mask], %[reg]\n\t"
> : [val] "+r" (flags)
> - : [mask] "r" (CSR_CRMD_IE), [reg] "i" (LOONGARCH_CSR_CRMD)
> + : [mask] "r" (mask), [reg] "i" (LOONGARCH_CSR_CRMD)
> : "memory");
> }
>