Message-ID: <Zgw27hoEwKxAkyjh@FVFF77S0Q05N>
Date: Tue, 2 Apr 2024 17:48:46 +0100
From: Mark Rutland <mark.rutland@....com>
To: Haris Okanovic <harisokn@...zon.com>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
linux-assembly@...r.kernel.org, peterz@...radead.org
Subject: Re: [PATCH 2/3] arm64: add __READ_ONCE_EX()
On Mon, Apr 01, 2024 at 08:47:05PM -0500, Haris Okanovic wrote:
> Perform an exclusive load, which atomically loads a word and arms the
> exclusive monitor to enable wfe() polling of an address.
>
> Add this macro in preparation for an arm64 cpuidle driver that
> supports a wfe()-based polling state.
>
> https://developer.arm.com/documentation/dht0008/a/arm-synchronization-primitives/exclusive-accesses/exclusive-monitors
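>
> A rough sketch of the intended polling pattern (illustrative only;
> "flag" is a hypothetical address, and the real driver comes later in
> this series):
>
> 	/*
> 	 * The exclusive load arms the monitor; a store to the address
> 	 * from another CPU clears it and completes the wfe().
> 	 */
> 	while (!__READ_ONCE_EX(*flag))
> 		wfe();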
>
> Signed-off-by: Haris Okanovic <harisokn@...zon.com>
> ---
> arch/arm64/include/asm/readex.h | 46 +++++++++++++++++++++++++++++++++
> 1 file changed, 46 insertions(+)
> create mode 100644 arch/arm64/include/asm/readex.h
>
> diff --git a/arch/arm64/include/asm/readex.h b/arch/arm64/include/asm/readex.h
> new file mode 100644
> index 000000000000..51963c3107e1
> --- /dev/null
> +++ b/arch/arm64/include/asm/readex.h
> @@ -0,0 +1,46 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Based on arch/arm64/include/asm/rwonce.h
> + *
> + * Copyright (C) 2020 Google LLC.
> + * Copyright (C) 2024 Amazon.com, Inc. or its affiliates.
> + */
> +
> +#ifndef __ASM_READEX_H
> +#define __ASM_READEX_H
> +
> +#define __LOAD_EX(sfx, regs...) "ldaxr" #sfx "\t" #regs
> +
> +#define __READ_ONCE_EX(x) \
> +({ \
> + typeof(&(x)) __x = &(x); \
> + int atomic = 1; \
> + union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u; \
> + switch (sizeof(x)) { \
> + case 1: \
> + asm volatile(__LOAD_EX(b, %w0, %1) \
> + : "=r" (*(__u8 *)__u.__c) \
> + : "Q" (*__x) : "memory"); \
> + break; \
> + case 2: \
> + asm volatile(__LOAD_EX(h, %w0, %1) \
> + : "=r" (*(__u16 *)__u.__c) \
> + : "Q" (*__x) : "memory"); \
> + break; \
> + case 4: \
> + asm volatile(__LOAD_EX(, %w0, %1) \
> + : "=r" (*(__u32 *)__u.__c) \
> + : "Q" (*__x) : "memory"); \
> + break; \
> + case 8: \
> + asm volatile(__LOAD_EX(, %0, %1) \
> + : "=r" (*(__u64 *)__u.__c) \
> + : "Q" (*__x) : "memory"); \
> + break; \
> + default: \
> + atomic = 0; \
> + } \
> + atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(__x))__x);\
> +})
Why can't you use the existing smp_cond_load_relaxed() or
smp_cond_load_acquire()?
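
For example, waiting for a hypothetical flag to become non-zero is just
(a sketch, using the standard VAL token):

	u32 val = smp_cond_load_relaxed(&flag, VAL != 0);

On arm64 that already compiles down to the ldxr/wfe pattern via
__cmpwait_relaxed(), so the polling loop comes for free.
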
I don't believe this is necessary.
Mark.