Message-ID: <aMvMucuo7BS2y87S@e129823.arm.com>
Date: Thu, 18 Sep 2025 10:11:21 +0100
From: Yeoreum Yun <yeoreum.yun@....com>
To: Mark Rutland <mark.rutland@....com>
Cc: catalin.marinas@....com, will@...nel.org, broonie@...nel.org,
maz@...nel.org, oliver.upton@...ux.dev, joey.gouly@....com,
james.morse@....com, ardb@...nel.org, scott@...amperecomputing.com,
suzuki.poulose@....com, yuzenghui@...wei.com,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 5/5] arm64: futex: support futex with FEAT_LSUI
Hi Mark,
[...]
> > > +	static const u64 hi_mask = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ?
> > > +			GENMASK_U64(63, 32) : GENMASK_U64(31, 0);
> > > +	static const u8 hi_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 32 : 0;
> > > +	static const u8 lo_shift = IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? 0 : 32;
> > > +
> > > +	uaddr_al = (u64 __user *)PTR_ALIGN_DOWN(uaddr, sizeof(u64));
> > > +	if (get_user(oval64, uaddr_al))
> > > +		return -EFAULT;
> > > +
> > > +	if ((u32 __user *)uaddr_al != uaddr) {
> > > +		nval64 = ((oval64 & ~hi_mask) | ((u64)newval << hi_shift));
> > > +		oval64 = ((oval64 & ~hi_mask) | ((u64)oldval << hi_shift));
> > > +	} else {
> > > +		nval64 = ((oval64 & hi_mask) | ((u64)newval << lo_shift));
> > > +		oval64 = ((oval64 & hi_mask) | ((u64)oldval << lo_shift));
> > > +	}
> > > +
> > > +	tmp = oval64;
> > > +
> > > +	if (__lsui_cmpxchg64(uaddr_al, &oval64, nval64))
> > > +		return -EFAULT;
> > > +
> > > +	if (tmp != oval64)
> > > +		return -EAGAIN;
> >
> > This means that we'll immediately return -EAGAIN upon a spurious failure
> > (where the adjacent 4 bytes have changed), whereas the LL/SC ops would
> > retry FUTEX_MAX_LOOPS before returning -EAGAIN.
> >
> > I suspect we want to retry here (or in the immediate caller).
>
> Right. I thought about it, but at the time of writing I chose to
> return -EAGAIN immediately. Let's wait for other people's comments.
When I stepped back, I found my thought was wrong, as you pointed out.
So, what about this?
static __always_inline int
__lsui_cmpxchg32(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
{
	u64 __user *uaddr64;
	bool futex_on_lo;
	int ret = -EAGAIN, i;
	u32 cur, other, orig_other;
	union {
		struct futex_on_lo {
			u32 val;
			u32 other;
		} lo_futex;
		struct futex_on_hi {
			u32 other;
			u32 val;
		} hi_futex;
		u64 raw;
	} oval64, orig64, nval64;

	uaddr64 = (u64 __user *)PTR_ALIGN_DOWN(uaddr, sizeof(u64));
	futex_on_lo = (IS_ALIGNED((unsigned long)uaddr, sizeof(u64)) ==
		       IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN));

	for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
		if (get_user(oval64.raw, uaddr64))
			return -EFAULT;

		nval64.raw = oval64.raw;
		if (futex_on_lo) {
			oval64.lo_futex.val = oldval;
			nval64.lo_futex.val = newval;
		} else {
			oval64.hi_futex.val = oldval;
			nval64.hi_futex.val = newval;
		}

		orig64.raw = oval64.raw;

		if (__lsui_cmpxchg64(uaddr64, &oval64.raw, nval64.raw))
			return -EFAULT;

		if (futex_on_lo) {
			cur = oval64.lo_futex.val;
			other = oval64.lo_futex.other;
			orig_other = orig64.lo_futex.other;
		} else {
			cur = oval64.hi_futex.val;
			other = oval64.hi_futex.other;
			orig_other = orig64.hi_futex.other;
		}

		/* Success, or a genuine mismatch on the futex word: done. */
		if (other == orig_other) {
			ret = 0;
			break;
		}

		/*
		 * Only the adjacent word changed: retry with the caller's
		 * original oldval. Feeding the observed value back as the
		 * expected value could let the cmpxchg succeed against a
		 * value the caller never asserted.
		 */
	}

	if (!ret)
		*oval = cur;

	return ret;
}
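For reference, the containing-word scheme can be exercised in userspace
with a sketch along these lines. GCC __atomic builtins stand in for
__lsui_cmpxchg64 (on failure, __atomic_compare_exchange_n likewise writes
the current memory value back into the "expected" argument); the names
cmpxchg32_emulated and the FUTEX_MAX_LOOPS value are illustrative, and a
little-endian host is assumed:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative retry bound; the kernel's FUTEX_MAX_LOOPS differs. */
#define FUTEX_MAX_LOOPS 128

/*
 * Userspace model of a 32-bit cmpxchg emulated with a 64-bit CAS on the
 * naturally aligned containing word (little-endian host assumed).
 */
static int cmpxchg32_emulated(uint32_t *uaddr, uint32_t oldval,
			      uint32_t newval, uint32_t *oval)
{
	uint64_t *uaddr64 = (uint64_t *)((uintptr_t)uaddr & ~(uintptr_t)7);
	/* On little-endian, an 8-byte-aligned futex sits in the low half. */
	bool futex_on_lo = ((uintptr_t)uaddr & 7) == 0;
	union {
		struct { uint32_t val; uint32_t other; } lo;
		struct { uint32_t other; uint32_t val; } hi;
		uint64_t raw;
	} oval64, orig64, nval64;
	uint32_t cur = 0;
	bool other_changed;
	int i, ret = -EAGAIN;

	for (i = 0; i < FUTEX_MAX_LOOPS; i++) {
		oval64.raw = __atomic_load_n(uaddr64, __ATOMIC_RELAXED);

		nval64.raw = oval64.raw;
		if (futex_on_lo) {
			oval64.lo.val = oldval;
			nval64.lo.val = newval;
		} else {
			oval64.hi.val = oldval;
			nval64.hi.val = newval;
		}
		orig64.raw = oval64.raw;

		/* On failure, oval64.raw is updated to the current value. */
		__atomic_compare_exchange_n(uaddr64, &oval64.raw, nval64.raw,
					    false, __ATOMIC_SEQ_CST,
					    __ATOMIC_SEQ_CST);

		if (futex_on_lo) {
			cur = oval64.lo.val;
			other_changed = oval64.lo.other != orig64.lo.other;
		} else {
			cur = oval64.hi.val;
			other_changed = oval64.hi.other != orig64.hi.other;
		}

		/* Success, or a genuine mismatch on the futex word: done. */
		if (!other_changed) {
			ret = 0;
			break;
		}
		/* Only the adjacent word moved: spurious failure, retry. */
	}

	if (!ret)
		*oval = cur;
	return ret;
}
```

On a genuine mismatch of the futex word itself this returns 0 with *oval
holding the observed value, so the caller can tell success from contention
by comparing *oval against its expected oldval.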
Unfortunately, if there is heavy contention on the adjacent "other" word,
I think returning -EAGAIN after FUTEX_MAX_LOOPS retries is the best effort
we can make. Am I missing something?
Thanks.
--
Sincerely,
Yeoreum Yun