Message-ID: <20241217-aba8759654f30826104e5da0@orel>
Date: Tue, 17 Dec 2024 14:03:44 +0100
From: Andrew Jones <ajones@...tanamicro.com>
To: guoren@...nel.org
Cc: paul.walmsley@...ive.com, palmer@...belt.com, bjorn@...osinc.com,
conor@...nel.org, leobras@...hat.com, alexghiti@...osinc.com,
christoph.muellner@...ll.eu, linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
parri.andrea@...il.com, ericchancf@...gle.com, Guo Ren <guoren@...ux.alibaba.com>
Subject: Re: [PATCH V2] riscv: Implement smp_cond_load8/16() with Zawrs
On Mon, Dec 16, 2024 at 08:39:10PM -0500, guoren@...nel.org wrote:
> From: Guo Ren <guoren@...ux.alibaba.com>
>
> RISC-V uses the queued spinlock implementation, which calls the
> smp_cond_load_acquire() macro on a single byte. So, complement
> __cmpwait() with byte and halfword versions.
>
> Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@...nel.org>
> Cc: Andrew Jones <ajones@...tanamicro.com>
> ---
> Changes in V2:
> - Fixup mask typo (0xf -> 0xff, 0xff -> 0xffff) by Andrew.
> ---
> arch/riscv/include/asm/cmpxchg.h | 38 +++++++++++++++++++++++++++++---
> 1 file changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> index 4cadc56220fe..aa4410beb065 100644
> --- a/arch/riscv/include/asm/cmpxchg.h
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -365,16 +365,48 @@ static __always_inline void __cmpwait(volatile void *ptr,
> {
> unsigned long tmp;
>
> + u32 *__ptr32b;
> + ulong __s, __val, __mask;
> +
> asm goto(ALTERNATIVE("j %l[no_zawrs]", "nop",
> 0, RISCV_ISA_EXT_ZAWRS, 1)
> : : : : no_zawrs);
>
> switch (size) {
> case 1:
> - fallthrough;
> + __ptr32b = (u32 *)((ulong)(ptr) & ~0x3);
> + __s = ((ulong)(ptr) & 0x3) * BITS_PER_BYTE;
> + __val = val << __s;
> + __mask = 0xff << __s;
> +
> + asm volatile(
> + " lr.w %0, %1\n"
> + " and %0, %0, %3\n"
> + " xor %0, %0, %2\n"
> + " bnez %0, 1f\n"
> + ZAWRS_WRS_NTO "\n"
> + "1:"
> + : "=&r" (tmp), "+A" (*(__ptr32b))
> + : "r" (__val), "r" (__mask)
> + : "memory");
> + break;
> case 2:
> - /* RISC-V doesn't have lr instructions on byte and half-word. */
> - goto no_zawrs;
> + __ptr32b = (u32 *)((ulong)(ptr) & ~0x3);
> + __s = ((ulong)(ptr) & 0x2) * BITS_PER_BYTE;
> + __val = val << __s;
> + __mask = 0xffff << __s;
> +
> + asm volatile(
> + " lr.w %0, %1\n"
> + " and %0, %0, %3\n"
> + " xor %0, %0, %2\n"
> + " bnez %0, 1f\n"
> + ZAWRS_WRS_NTO "\n"
> + "1:"
> + : "=&r" (tmp), "+A" (*(__ptr32b))
> + : "r" (__val), "r" (__mask)
> + : "memory");
> + break;
> case 4:
> asm volatile(
> " lr.w %0, %1\n"
> --
> 2.40.1
>
Reviewed-by: Andrew Jones <ajones@...tanamicro.com>