Message-ID: <aMqwPYb53L+OwRfw@e129823.arm.com>
Date: Wed, 17 Sep 2025 13:57:33 +0100
From: Yeoreum Yun <yeoreum.yun@....com>
To: catalin.marinas@....com, will@...nel.org, broonie@...nel.org,
maz@...nel.org, oliver.upton@...ux.dev, joey.gouly@....com,
james.morse@....com, ardb@...nel.org, scott@...amperecomputing.com,
suzuki.poulose@....com, yuzenghui@...wei.com, mark.rutland@....com
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
Hi,
> +LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
> +LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
> +LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
> +LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
> +
> +static __always_inline int
> +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> +{
> +	int ret = 0;
> +
> +	asm volatile("// __lsui_cmpxchg64\n"
> +	__LSUI_PREAMBLE
> +"1:	casalt	%x2, %x3, %1\n"
> +"2:\n"
> +	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> +	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> +	: "r" (newval)
> +	: "memory");
> +
> +	return ret;
> +}
> +
> +static __always_inline int
> +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> +{
> +	u64 __user *uaddr_al;
> +	u64 oval64, nval64, tmp;
> +	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
> +		GENMASK_U64(63, 32) : GENMASK_U64(31, 0);
> +	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
> +	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
> +
> +	uaddr_al = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> +	if (get_user(oval64, uaddr_al))
> +		return -EFAULT;
> +
> +	if ((u32 __user *)uaddr_al != uaddr) {
> +		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
> +		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
> +	} else {
> +		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
> +		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
> +	}
> +
> +	tmp = oval64;
> +
> +	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> +		return -EFAULT;
> +
> +	if (tmp != oval64)
> +		return -EAGAIN;
> +
> +	*oval = oldval;
> +
> +	return 0;
> +}
> +
While reviewing the code, I couldn't shake off a suspicion because of
the questions below:
1. Suppose there is a structure:

     struct s_test {
             u32 futex;
             u32 others;
     };

   Before CPU0 executes the casalt on futex, CPU1 executes
   store32_rel() on others. Can CPU0 observe CPU1's store32_rel(),
   given that the casalt operates on &futex while CPU1 operates on
   &others?

     CPU0                                    CPU1
     ...                                     store32_rel(&s_test->others);
     // can this see CPU1's modification?
     casalt(..., ..., &s_test->futex);
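
   In case it helps to pin the question down, here is a rough
   userspace C11 rendering of scenario 1 (purely illustrative; the
   names and the mapping of casalt to an acq_rel CAS are my
   assumptions, not the patch's code):

     #include <stdatomic.h>
     #include <stdint.h>

     struct s_test {
             _Atomic uint32_t futex;
             _Atomic uint32_t others;
     };

     /* CPU1: store to the field adjacent to the futex word. */
     void cpu1(struct s_test *s)
     {
             atomic_store_explicit(&s->others, 1, memory_order_release);
     }

     /* CPU0: acquire-release CAS on the futex word; the question is
      * whether this is guaranteed to observe CPU1's store above,
      * given that the two accesses target different words. */
     void cpu0(struct s_test *s)
     {
             uint32_t expected = 0;

             atomic_compare_exchange_strong_explicit(&s->futex, &expected, 1,
                             memory_order_acq_rel, memory_order_acquire);
     }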
2. Suppose there is a structure:

     struct s_test {
             u32 others;
             u32 futex;
     };

   Then, can the ldtr below be reordered after the casalt?

     ldtr(&s_test->futex);
     ...
     casalt(..., ..., &s_test->others);
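
   Again, a rough C11 rendering of scenario 2 (illustrative names
   only; I am modelling the unprivileged ldtr as a relaxed load,
   which is an assumption on my side): can the earlier load of
   ->futex be reordered past the later CAS on ->others?

     #include <stdatomic.h>
     #include <stdint.h>

     struct s_test2 {
             _Atomic uint32_t others;
             _Atomic uint32_t futex;
     };

     void cpu0(struct s_test2 *s)
     {
             /* ldtr(&s_test->futex) */
             uint32_t v = atomic_load_explicit(&s->futex,
                                               memory_order_relaxed);
             uint32_t expected = 0;

             /* casalt(..., ..., &s_test->others) */
             atomic_compare_exchange_strong_explicit(&s->others, &expected, v,
                             memory_order_acq_rel, memory_order_acquire);
     }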
I think both cases can unintentionally break memory consistency from
the user's point of view...
Well, a dmb ish before the casalt could solve the above problem;
however, it seems much better to go back to the former LL/SC
method...?
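
For concreteness, this is roughly the barrier placement I have in
mind, based on the quoted __lsui_cmpxchg64() above (purely
illustrative, not proposed patch code):

	asm volatile("// __lsui_cmpxchg64\n"
	__LSUI_PREAMBLE
"	dmb	ish\n"			/* order prior accesses before the CAS */
"1:	casalt	%x2, %x3, %1\n"
"2:\n"
	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
	: "r" (newval)
	: "memory");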
Thanks!
--
Sincerely,
Yeoreum Yun