Message-ID: <87zf6nnoiv.fsf@oracle.com>
Date: Fri, 09 Jan 2026 01:06:32 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: Will Deacon <will@...nel.org>
Cc: Ankur Arora <ankur.a.arora@...cle.com>, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-pm@...r.kernel.org, bpf@...r.kernel.org, arnd@...db.de,
catalin.marinas@....com, peterz@...radead.org,
akpm@...ux-foundation.org, mark.rutland@....com, harisokn@...zon.com,
cl@...two.org, ast@...nel.org, rafael@...nel.org,
daniel.lezcano@...aro.org, memxor@...il.com, zhenglifeng1@...wei.com,
xueshuai@...ux.alibaba.com, joao.m.martins@...cle.com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com
Subject: Re: [PATCH v8 02/12] arm64: barrier: Support
smp_cond_load_relaxed_timeout()

Will Deacon <will@...nel.org> writes:

> On Sun, Dec 14, 2025 at 08:49:09PM -0800, Ankur Arora wrote:
>> Support waiting in smp_cond_load_relaxed_timeout() via
>> __cmpwait_relaxed(). To ensure that we wake from waiting in
>> WFE periodically and don't block forever if there are no stores
>> to ptr, this path is only used when the event-stream is enabled.
>>
>> Note that when using __cmpwait_relaxed() we ignore the timeout
>> value, allowing an overshoot of up to one event-stream period.
>> And, in the unlikely event that the event-stream is unavailable,
>> fall back to spin-waiting.
>>
>> Also set SMP_TIMEOUT_POLL_COUNT to 1 so we do the time-check in
>> each iteration of smp_cond_load_relaxed_timeout().
>>
>> Cc: Arnd Bergmann <arnd@...db.de>
>> Cc: Will Deacon <will@...nel.org>
>> Cc: Catalin Marinas <catalin.marinas@....com>
>> Cc: linux-arm-kernel@...ts.infradead.org
>> Suggested-by: Will Deacon <will@...nel.org>
>> Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
>> ---
>>
>> Notes:
>> - cpu_poll_relax() now takes an additional parameter.
>>
>> - added a comment detailing why we define SMP_TIMEOUT_POLL_COUNT=1 and
>> how it ties in with smp_cond_load_relaxed_timeout().
>>
>> - explicitly include <asm/vdso/processor.h> for cpu_relax().
>>
>> arch/arm64/include/asm/barrier.h | 21 +++++++++++++++++++++
>> 1 file changed, 21 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
>> index 9495c4441a46..6190e178db51 100644
>> --- a/arch/arm64/include/asm/barrier.h
>> +++ b/arch/arm64/include/asm/barrier.h
>> @@ -12,6 +12,7 @@
>> #include <linux/kasan-checks.h>
>>
>> #include <asm/alternative-macros.h>
>> +#include <asm/vdso/processor.h>
>>
>> #define __nops(n) ".rept " #n "\nnop\n.endr\n"
>> #define nops(n) asm volatile(__nops(n))
>> @@ -219,6 +220,26 @@ do { \
>> (typeof(*ptr))VAL; \
>> })
>>
>> +/* Re-declared here to avoid include dependency. */
>> +extern bool arch_timer_evtstrm_available(void);
>> +
>> +/*
>> + * In the common case, cpu_poll_relax() sits waiting in __cmpwait_relaxed()
>> + * for the ptr value to change.
>> + *
>> + * Since this period is reasonably long, choose SMP_TIMEOUT_POLL_COUNT
>> + * to be 1, so smp_cond_load_{relaxed,acquire}_timeout() does a
>> + * time-check in each iteration.
>> + */
>> +#define SMP_TIMEOUT_POLL_COUNT 1
>> +
>> +#define cpu_poll_relax(ptr, val, timeout_ns) do { \
>> + if (arch_timer_evtstrm_available()) \
>> + __cmpwait_relaxed(ptr, val); \
>> + else \
>> + cpu_relax(); \
>> +} while (0)
>
> Acked-by: Will Deacon <will@...nel.org>

Thanks!
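
For anyone following along, here is a rough sketch of what a caller-side
wait loop built on these hooks could look like. To be clear, this is not
the asm-generic smp_cond_load_relaxed_timeout() from the series: the
function name, the absolute-deadline convention and the use of
ktime_get_mono_fast_ns() for the time check are assumptions made purely
for illustration.

#include <linux/compiler.h>     /* READ_ONCE() */
#include <linux/ktime.h>        /* ktime_get_mono_fast_ns() */
#include <linux/types.h>
#include <asm/barrier.h>        /* cpu_poll_relax(), SMP_TIMEOUT_POLL_COUNT */

/* Wait for *ptr to become @expect, giving up at @deadline_ns. */
static inline u32 wait_val_or_deadline(u32 *ptr, u32 expect, u64 deadline_ns)
{
        u32 val, spins = 0;

        for (;;) {
                val = READ_ONCE(*ptr);
                if (val == expect)
                        break;

                /* With the event stream available, this sits in WFE. */
                cpu_poll_relax(ptr, val, deadline_ns);

                /*
                 * SMP_TIMEOUT_POLL_COUNT == 1: check the clock on every
                 * wakeup, so the overshoot is bounded by one event-stream
                 * period.
                 */
                if (++spins >= SMP_TIMEOUT_POLL_COUNT) {
                        spins = 0;
                        if (ktime_get_mono_fast_ns() >= deadline_ns)
                                break;
                }
        }

        return val;
}

The point of SMP_TIMEOUT_POLL_COUNT == 1 shows up in the inner check:
since each cpu_poll_relax() can block for up to an event-stream period,
deferring the time check over multiple iterations would only add to the
overshoot.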
--
ankur