[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <aMLpMBWtHDI9sPHK@willie-the-truck>
Date: Thu, 11 Sep 2025 16:22:24 +0100
From: Will Deacon <will@...nel.org>
To: Yeoreum Yun <yeoreum.yun@....com>
Cc: catalin.marinas@....com, broonie@...nel.org, maz@...nel.org,
oliver.upton@...ux.dev, joey.gouly@....com, james.morse@....com,
ardb@...nel.org, scott@...amperecomputing.com,
suzuki.poulose@....com, yuzenghui@...wei.com, mark.rutland@....com,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND v7 6/6] arm64: futex: support futex with FEAT_LSUI
On Sat, Aug 16, 2025 at 04:19:29PM +0100, Yeoreum Yun wrote:
> Current futex atomic operations are implemented with ll/sc instructions
> and clearing PSTATE.PAN.
>
> Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
> also atomic operation for user memory access in kernel it doesn't need
> to clear PSTATE.PAN bit anymore.
>
> With theses instructions some of futex atomic operations don't need to
> be implmented with ldxr/stlxr pair instead can be implmented with
> one atomic operation supplied by FEAT_LSUI.
>
> However, some of futex atomic operations still need to use ll/sc way
> via ldtxr/stltxr supplied by FEAT_LSUI since there is no correspondant
> atomic instruction or doesn't support word size operation.
> (i.e) eor, cas{mb}t
>
> But It's good to work without clearing PSTATE.PAN bit.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@....com>
> ---
> arch/arm64/include/asm/futex.h | 130 ++++++++++++++++++++++++++++++++-
> 1 file changed, 129 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
> index 22a6301a9f3d..ece35ca9b5d9 100644
> --- a/arch/arm64/include/asm/futex.h
> +++ b/arch/arm64/include/asm/futex.h
> @@ -9,6 +9,8 @@
> #include <linux/uaccess.h>
> #include <linux/stringify.h>
>
> +#include <asm/alternative.h>
> +#include <asm/alternative-macros.h>
> #include <asm/errno.h>
>
> #define LLSC_MAX_LOOPS 128 /* What's the largest number you can think of? */
> @@ -115,11 +117,137 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> return ret;
> }
>
> +#ifdef CONFIG_AS_HAS_LSUI
> +
> +#define __LSUI_PREAMBLE ".arch_extension lsui\n"
> +
> +#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb) \
> +static __always_inline int \
> +__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
> +{ \
> + int ret = 0; \
> + int oldval; \
> + \
> + uaccess_ttbr0_enable(); \
> + asm volatile("// __lsui_futex_atomic_" #op "\n" \
> + __LSUI_PREAMBLE \
> +"1: " #asm_op #mb " %w3, %w2, %1\n" \
> +"2:\n" \
> + _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
> + : "+r" (ret), "+Q" (*uaddr), "=r" (oldval) \
> + : "r" (oparg) \
> + : "memory"); \
> + uaccess_ttbr0_disable(); \
> + \
> + if (!ret) \
> + *oval = oldval; \
> + \
> + return ret; \
> +}
> +
> +LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
> +LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> +LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> +LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> +
> +static __always_inline int
> +__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
> +{
> + return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
> +}
> +
> +static __always_inline int
> +__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
> +{
> + unsigned int loops = LLSC_MAX_LOOPS;
> + int ret, oldval, tmp;
> +
> + uaccess_ttbr0_enable();
> + /*
> + * there are no ldteor/stteor instructions...
> + */
*sigh*
Were these new instructions not added with futex in mind?
I wonder whether CAS would be better than exclusives for xor...
> +static __always_inline int
> +__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> + int ret = 0;
> + unsigned int loops = LLSC_MAX_LOOPS;
> + u32 val, tmp;
> +
> + uaccess_ttbr0_enable();
> + /*
> + * cas{al}t doesn't support word size...
> + */
What about just aligning down and doing a 64-bit cas in that case?
Will
Powered by blists - more mailing lists