Message-ID: <20241106113953.GA13801@willie-the-truck>
Date: Wed, 6 Nov 2024 11:39:54 +0000
From: Will Deacon <will@...nel.org>
To: Haris Okanovic <harisokn@...zon.com>
Cc: ankur.a.arora@...cle.com, catalin.marinas@....com,
linux-pm@...r.kernel.org, kvm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
pbonzini@...hat.com, wanpengli@...cent.com, vkuznets@...hat.com,
rafael@...nel.org, daniel.lezcano@...aro.org, peterz@...radead.org,
arnd@...db.de, lenb@...nel.org, mark.rutland@....com,
mtosatti@...hat.com, sudeep.holla@....com, cl@...two.org,
misono.tomohiro@...itsu.com, maobibo@...ngson.cn,
joao.m.martins@...cle.com, boris.ostrovsky@...cle.com,
konrad.wilk@...cle.com
Subject: Re: [PATCH 1/5] asm-generic: add smp_vcond_load_relaxed()

On Tue, Nov 05, 2024 at 12:30:37PM -0600, Haris Okanovic wrote:
> Relaxed poll until the desired mask/value is observed at the specified
> address or the timeout expires.
>
> This macro is a specialization of the generic smp_cond_load_relaxed(),
> which takes a simple mask/value condition (vcond) instead of an
> arbitrary expression. It allows architectures to better specialize the
> implementation, e.g. to enable wfe() polling of the address on arm.

This doesn't make sense to me. The existing smp_cond_load() functions
already use wfe on arm64 and I don't see why we need a special helper
just to do a mask.

> Signed-off-by: Haris Okanovic <harisokn@...zon.com>
> ---
> include/asm-generic/barrier.h | 25 +++++++++++++++++++++++++
> 1 file changed, 25 insertions(+)
>
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index d4f581c1e21d..112027eabbfc 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -256,6 +256,31 @@ do { \
> })
> #endif
>
> +/**
> + * smp_vcond_load_relaxed() - (Spin) wait until an expected value is
> + * observed at an address, with no ordering guarantees. Spins until
> + * `(*addr & mask) == val` or `nsecs` elapse, and returns the last
> + * observed `*addr` value.
> + *
> + * @nsecs: timeout in nanoseconds
> + * @addr: pointer to an integer
> + * @mask: a bit mask applied to read values
> + * @val: expected value after @mask is applied
> + */
> +#ifndef smp_vcond_load_relaxed

I know naming is hard, but "vcond" is especially terrible.
Perhaps smp_cond_load_timeout()?
Will