[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20241210135052.GB15607@willie-the-truck>
Date: Tue, 10 Dec 2024 13:50:52 +0000
From: Will Deacon <will@...nel.org>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: linux-pm@...r.kernel.org, kvm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org, catalin.marinas@....com,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
pbonzini@...hat.com, vkuznets@...hat.com, rafael@...nel.org,
daniel.lezcano@...aro.org, peterz@...radead.org, arnd@...db.de,
lenb@...nel.org, mark.rutland@....com, harisokn@...zon.com,
mtosatti@...hat.com, sudeep.holla@....com, cl@...two.org,
maz@...nel.org, misono.tomohiro@...itsu.com, maobibo@...ngson.cn,
zhenglifeng1@...wei.com, joao.m.martins@...cle.com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com
Subject: Re: [PATCH v9 05/15] arm64: barrier: add support for
smp_cond_relaxed_timeout()
On Thu, Nov 07, 2024 at 11:08:08AM -0800, Ankur Arora wrote:
> Support a waited variant of polling on a conditional variable
> via smp_cond_relaxed_timeout().
>
> This uses the __cmpwait_relaxed() primitive to do the actual
> waiting, when the wait can be guaranteed to not block forever
> (in case there are no stores to the waited for cacheline.)
> For this we depend on the availability of the event-stream.
>
> For cases when the event-stream is unavailable, we fallback to
> a spin-waited implementation which is identical to the generic
> variant.
>
> Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
> ---
> arch/arm64/include/asm/barrier.h | 54 ++++++++++++++++++++++++++++++++
> 1 file changed, 54 insertions(+)
>
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 1ca947d5c939..ab2515ecd6ca 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -216,6 +216,60 @@ do { \
> (typeof(*ptr))VAL; \
> })
>
> +#define __smp_cond_load_timeout_spin(ptr, cond_expr, \
> + time_expr_ns, time_limit_ns) \
> +({ \
> + typeof(ptr) __PTR = (ptr); \
> + __unqual_scalar_typeof(*ptr) VAL; \
> + unsigned int __count = 0; \
> + for (;;) { \
> + VAL = READ_ONCE(*__PTR); \
> + if (cond_expr) \
> + break; \
> + cpu_relax(); \
> + if (__count++ < smp_cond_time_check_count) \
> + continue; \
> + if ((time_expr_ns) >= time_limit_ns) \
> + break; \
> + __count = 0; \
> + } \
> + (typeof(*ptr))VAL; \
> +})
This is a carbon-copy of the asm-generic timeout implementation. Please
can you avoid duplicating that in the arch code?
Will
Powered by blists - more mailing lists