Message-ID: <Y7RVkjDC3EjQUCzM@FVFF77S0Q05N>
Date: Tue, 3 Jan 2023 16:19:30 +0000
From: Mark Rutland <mark.rutland@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Boqun Feng <boqun.feng@...il.com>, torvalds@...ux-foundation.org,
corbet@....net, will@...nel.org, catalin.marinas@....com,
dennis@...nel.org, tj@...nel.org, cl@...ux.com, hca@...ux.ibm.com,
gor@...ux.ibm.com, agordeev@...ux.ibm.com,
borntraeger@...ux.ibm.com, svens@...ux.ibm.com,
Herbert Xu <herbert@...dor.apana.org.au>, davem@...emloft.net,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
joro@...tes.org, suravee.suthikulpanit@....com,
robin.murphy@....com, dwmw2@...radead.org,
baolu.lu@...ux.intel.com, Arnd Bergmann <arnd@...db.de>,
penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
Andrew Morton <akpm@...ux-foundation.org>, vbabka@...e.cz,
roman.gushchin@...ux.dev, 42.hyeyoo@...il.com,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-s390@...r.kernel.org,
linux-crypto@...r.kernel.org, iommu@...ts.linux.dev,
linux-arch@...r.kernel.org
Subject: Re: [RFC][PATCH 05/12] arch: Introduce
arch_{,try_}_cmpxchg128{,_local}()
On Tue, Jan 03, 2023 at 02:03:37PM +0000, Mark Rutland wrote:
> On Tue, Jan 03, 2023 at 01:25:35PM +0000, Mark Rutland wrote:
> > On Tue, Dec 20, 2022 at 12:08:16PM +0100, Peter Zijlstra wrote:
> > > On Mon, Dec 19, 2022 at 12:07:25PM -0800, Boqun Feng wrote:
> > > > On Mon, Dec 19, 2022 at 04:35:30PM +0100, Peter Zijlstra wrote:
> > > > > For all architectures that currently support cmpxchg_double()
> > > > > implement the cmpxchg128() family of functions that is basically the
> > > > > same but with a saner interface.
> > > > >
> > > > > Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> > > > > ---
> > > > > arch/arm64/include/asm/atomic_ll_sc.h | 38 +++++++++++++++++++++++
> > > > > arch/arm64/include/asm/atomic_lse.h | 33 +++++++++++++++++++-
> > > > > arch/arm64/include/asm/cmpxchg.h | 26 ++++++++++++++++
> > > > > arch/s390/include/asm/cmpxchg.h | 33 ++++++++++++++++++++
> > > > > arch/x86/include/asm/cmpxchg_32.h | 3 +
> > > > > arch/x86/include/asm/cmpxchg_64.h | 55 +++++++++++++++++++++++++++++++++-
> > > > > 6 files changed, 185 insertions(+), 3 deletions(-)
> > > > >
> > > > > --- a/arch/arm64/include/asm/atomic_ll_sc.h
> > > > > +++ b/arch/arm64/include/asm/atomic_ll_sc.h
> > > > > @@ -326,6 +326,44 @@ __CMPXCHG_DBL( , , , )
> > > > > __CMPXCHG_DBL(_mb, dmb ish, l, "memory")
> > > > >
> > > > > #undef __CMPXCHG_DBL
> > > > > +
> > > > > +union __u128_halves {
> > > > > + u128 full;
> > > > > + struct {
> > > > > + u64 low, high;
> > > > > + };
> > > > > +};
> > > > > +
> > > > > +#define __CMPXCHG128(name, mb, rel, cl) \
> > > > > +static __always_inline u128 \
> > > > > +__ll_sc__cmpxchg128##name(volatile u128 *ptr, u128 old, u128 new) \
> > > > > +{ \
> > > > > + union __u128_halves r, o = { .full = (old) }, \
> > > > > + n = { .full = (new) }; \
> > > > > + \
> > > > > + asm volatile("// __cmpxchg128" #name "\n" \
> > > > > + " prfm pstl1strm, %2\n" \
> > > > > + "1: ldxp %0, %1, %2\n" \
> > > > > + " eor %3, %0, %3\n" \
> > > > > + " eor %4, %1, %4\n" \
> > > > > + " orr %3, %4, %3\n" \
> > > > > + " cbnz %3, 2f\n" \
> > > > > + " st" #rel "xp %w3, %5, %6, %2\n" \
> > > > > + " cbnz %w3, 1b\n" \
> > > > > + " " #mb "\n" \
> > > > > + "2:" \
> > > > > + : "=&r" (r.low), "=&r" (r.high), "+Q" (*(unsigned long *)ptr) \
> > > >
> > > > I wonder whether we should use "(*(u128 *)ptr)" instead of "(*(unsigned
> > > > long *) ptr)"? Because compilers may think only 64bit value pointed by
> > > > "ptr" gets modified, and they are allowed to do "useful" optimization.
> > >
> > > In this I've copied the existing cmpxchg_double() code; I'll have to let
> > > the arch folks speak here, I've no clue.
> >
> > We definitely need to ensure the compiler sees we poke the whole thing, or it
> > can get this horribly wrong, so that is a latent bug.
> >
> > See commit:
> >
> > fee960bed5e857eb ("arm64: xchg: hazard against entire exchange variable")
> >
> > ... for examples of GCC being clever, where I overlooked the *_double() cases.
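To illustrate the hazard with a stand-alone (hypothetical, user-space arm64)
example -- not the kernel code: when the constraints only cover the low 64
bits, the compiler is entitled to assume the high half is unchanged, and can
elide reloads of it:

	#include <stdint.h>

	typedef unsigned __int128 u128;

	static inline void poke_low_only(u128 *ptr, uint64_t val)
	{
		/* The constraints only claim to modify the low 64 bits */
		asm volatile("str %1, %0"
			     : "+Q" (*(uint64_t *)ptr)
			     : "r" (val));
	}

	uint64_t high_half(u128 *p)
	{
		uint64_t before = (uint64_t)(*p >> 64);
		poke_low_only(p, 0);
		/* GCC may fold this reload to 'before', since per the
		 * constraints the high half cannot have changed. */
		return (uint64_t)(*p >> 64);
	}
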
> Using __uint128_t instead, e.g.
>
> diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
> index 0890e4f568fb7..cbb3d961123b1 100644
> --- a/arch/arm64/include/asm/atomic_ll_sc.h
> +++ b/arch/arm64/include/asm/atomic_ll_sc.h
> @@ -315,7 +315,7 @@ __ll_sc__cmpxchg_double##name(unsigned long old1, \
> " cbnz %w0, 1b\n" \
> " " #mb "\n" \
> "2:" \
> - : "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr) \
> + : "=&r" (tmp), "=&r" (ret), "+Q" (*(__uint128_t *)ptr) \
> : "r" (old1), "r" (old2), "r" (new1), "r" (new2) \
> : cl); \
> \
> diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
> index 52075e93de6c0..a94d6dacc0292 100644
> --- a/arch/arm64/include/asm/atomic_lse.h
> +++ b/arch/arm64/include/asm/atomic_lse.h
> @@ -311,7 +311,7 @@ __lse__cmpxchg_double##name(unsigned long old1, \
> " eor %[old2], %[old2], %[oldval2]\n" \
> " orr %[old1], %[old1], %[old2]" \
> : [old1] "+&r" (x0), [old2] "+&r" (x1), \
> - [v] "+Q" (*(unsigned long *)ptr) \
> + [v] "+Q" (*(__uint128_t *)ptr) \
> : [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4), \
> [oldval1] "r" (oldval1), [oldval2] "r" (oldval2) \
> : cl); \
>
> ... makes GCC much happier:
> ... I'll go check whether clang is happy with that, and how far back that can
> go, otherwise we'll need to blat the high half with a separate constraint that
> (ideally) doesn't end up allocating a pointless address register.
Hmm... from the commit history it looks like GCC prior to 5.1 might not be
happy with that, but that *might* just be if we actually do arithmetic on the
value, and we might be ok just using it for memory effects. I can't currently
get such an old GCC to run on my machines so I haven't been able to check.
I'll dig into this a bit more tomorrow, but it looks like the options (for a
backport-suitable fix) will be:
(a) use a __uint128_t input+output, as above, if we're lucky
(b) introduce a second 64-bit input+output for the high half (likely a "+o"; sketched below)
(c) use a full memory clobber for ancient compilers.
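For reference, a stand-alone (hypothetical, user-space arm64) sketch of what
(b) might look like, hazarding each half with a separate 64-bit memory
operand:

	#include <stdint.h>

	typedef unsigned __int128 u128;

	static inline void poke_both_halves(u128 *ptr, uint64_t lo, uint64_t hi)
	{
		/* %0 hazards the low half, %1 the high half; %1 is never
		 * referenced in the template, it only tells the compiler
		 * that the high 64 bits may change. The "+o" may still
		 * cost a pointless address register. */
		asm volatile("stp %2, %3, %0"
			     : "+Q" (*(uint64_t *)ptr),
			       "+o" (*((uint64_t *)ptr + 1))
			     : "r" (lo), "r" (hi));
	}

... and (c) would just mean keeping the existing 64-bit "+Q" hazard but
unconditionally putting "memory" in the clobber list, at the cost of the
compiler discarding everything it has cached around the operation.
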
Mark.