Message-ID: <aMq5DbqsXj6vP7Xe@e129823.arm.com>
Date: Wed, 17 Sep 2025 14:35:09 +0100
From: Yeoreum Yun <yeoreum.yun@....com>
To: Mark Rutland <mark.rutland@....com>
Cc: catalin.marinas@....com, will@...nel.org, broonie@...nel.org,
maz@...nel.org, oliver.upton@...ux.dev, joey.gouly@....com,
james.morse@....com, ardb@...nel.org, scott@...amperecomputing.com,
suzuki.poulose@....com, yuzenghui@...wei.com,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
Hi Mark,
> On Wed, Sep 17, 2025 at 12:08:38PM +0100, Yeoreum Yun wrote:
> > +static __always_inline int
> > +__lsui_cmpxchg64(u64 __user *uaddr, u64 *oldval, u64 newval)
> > +{
> > +	int ret = 0;
> > +
> > +	asm volatile("// __lsui_cmpxchg64\n"
> > +	__LSUI_PREAMBLE
> > +"1:	casalt	%x2, %x3, %1\n"
> > +"2:\n"
> > +	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
> > +	: "+r" (ret), "+Q" (*uaddr), "+r" (*oldval)
> > +	: "r" (newval)
> > +	: "memory");
> > +
> > +	return ret;
> > +}
> > +
> > +static __always_inline int
> > +__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> > +{
> > +	u64 __user *uaddr_al;
>
> Please use 'uaddr64' to match the other 64-bit variables.
>
> I assume that the '_al' suffix is meant to be short for 'aligned', but I
> think using '64' is more consistent and clearer.
>
> > +	u64 oval64, nval64, tmp;
>
> Likewise, 'orig64' would be clearer than 'tmp' here.
Thanks for your suggestion.
>
> > +	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
> > +				   GENMASK_U64(63, 32) : GENMASK_U64(31, 0);
> > +	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
> > +	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
> > +
> > +	uaddr_al = (u64 __user *) PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > +	if (get_user(oval64, uaddr_al))
> > +		return -EFAULT;
> > +
> > +	if ((u32 __user *)uaddr_al != uaddr) {
> > +		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
> > +		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
> > +	} else {
> > +		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
> > +		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
> > +	}
> > +
> > +	tmp = oval64;
> > +
> > +	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> > +		return -EFAULT;
> > +
> > +	if (tmp != oval64)
> > +		return -EAGAIN;
>
> This means that we'll immediately return -EAGAIN upon a spurious failure
> (where the adjacent 4 bytes have changed), whereas the LL/SC ops would
> retry FUTEX_MAX_LOOPS before returning -EGAIN.
>
> I suspect we want to retry here (or in the immediate caller).
Right. I did consider retrying, but for now the code returns -EAGAIN
immediately. Let's wait for other people's comments before deciding.
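
If we do end up retrying, maybe something like the sketch below in the
immediate caller, bounded by FUTEX_MAX_LOOPS as the LL/SC ops are
(illustrative only, untested; __lsui_cmpxchg32_retry is just a made-up name):

static __always_inline int
__lsui_cmpxchg32_retry(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
{
	int ret, loops = FUTEX_MAX_LOOPS;

	/* retry -EAGAIN (e.g. a racing store to the adjacent 4 bytes) */
	do {
		ret = __lsui_cmpxchg32(uaddr, oldval, newval, oval);
	} while (ret == -EAGAIN && --loops);

	return ret;
}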
>
> > +
> > +	*oval = oldval;
> > +
> > +	return 0;
> > +}
>
> Aside from the retry issue, I *think* you can simplify this to something
> like:
>
> static __always_inline int
> __lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
> {
> 	u64 __user *uaddr64 = (u64 __user *)PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> 	u64 oval64, nval64, orig64;
>
> 	if (get_user(oval64, uaddr64))
> 		return -EFAULT;
>
> 	nval64 = oval64;
>
> 	if (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
> 	    IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN)) {
> 		FIELD_MODIFY(GENMASK_U64(31, 0), &oval64, oldval);
> 		FIELD_MODIFY(GENMASK_U64(31, 0), &nval64, newval);
> 	} else {
> 		FIELD_MODIFY(GENMASK_U64(63, 32), &oval64, oldval);
> 		FIELD_MODIFY(GENMASK_U64(63, 32), &nval64, newval);
> 	}
> 	orig64 = oval64;
>
> 	if (__lsui_cmpxchg64(uaddr64, &oval64, nval64))
> 		return -EFAULT;
>
> 	if (oval64 != orig64)
> 		return -EAGAIN;
>
> 	*oval = oldval;
> 	return 0;
> }
Hmm, I think this wouldn't cover the case below when big-endianness is used:
struct {
	u32 others = 0x55667788;
	u32 futex = 0x11223344;
};
In this case, the memory layout would be:
	55 66 77 88 11 22 33 44
so the fetched oval64 is 0x5566778811223344.
The futex value lives in the GENMASK_U64(31, 0) field, so that is the field
which should be modified, but this code tries to modify the
GENMASK_U64(63, 32) field instead.
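
Just to illustrate the layout I mean, here is a quick user-space sketch
(not kernel code, only for illustration; the struct/field names are made up):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* 8-byte aligned pair: 'others' at offset 0, futex at offset 4 */
	struct { uint32_t others, futex; } s
		__attribute__((aligned(8))) = { 0x55667788, 0x11223344 };
	uint64_t v;

	/* emulate the aligned 64-bit load done on uaddr_al/uaddr64 */
	memcpy(&v, &s, sizeof(v));

	/*
	 * big endian:    v == 0x5566778811223344, futex in bits [31:0]
	 * little endian: v == 0x1122334455667788, futex in bits [63:32]
	 */
	printf("%016llx\n", (unsigned long long)v);
	return 0;
}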
Thanks!
[...]
--
Sincerely,
Yeoreum Yun