Message-ID: <87jz04anq1.fsf@oracle.com>
Date: Wed, 05 Nov 2025 00:27:18 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: Ankur Arora <ankur.a.arora@...cle.com>, Arnd Bergmann <arnd@...db.de>,
	linux-kernel@...r.kernel.org, Linux-Arch <linux-arch@...r.kernel.org>,
	linux-arm-kernel@...ts.infradead.org, linux-pm@...r.kernel.org,
	bpf@...r.kernel.org, Will Deacon <will@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mark Rutland <mark.rutland@....com>,
	"Haris Okanovic" <harisokn@...zon.com>,
	"Christoph Lameter (Ampere)" <cl@...two.org>,
	Alexei Starovoitov <ast@...nel.org>,
	"Rafael J. Wysocki" <rafael@...nel.org>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	"Kumar Kartikeya Dwivedi" <memxor@...il.com>, zhenglifeng1@...wei.com,
	xueshuai@...ux.alibaba.com, Joao Martins <joao.m.martins@...cle.com>,
	"Boris Ostrovsky" <boris.ostrovsky@...cle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: Re: [RESEND PATCH v7 2/7] arm64: barrier: Support smp_cond_load_relaxed_timeout()

Catalin Marinas <catalin.marinas@....com> writes:

> On Mon, Nov 03, 2025 at 01:00:33PM -0800, Ankur Arora wrote:
>> /**
>> * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
>> * guarantees until a timeout expires.
>> * @ptr: pointer to the variable to wait on
>> * @cond: boolean expression to wait for
>> * @time_expr: time expression in caller's preferred clock
>> * @time_end: end time in nanoseconds (compared against time_expr;
>> * might also be used for setting up a future event.)
>> *
>> * Equivalent to using READ_ONCE() on the condition variable.
>> *
>> * Note that the expiration of the timeout might have an architecture-specific
>> * delay.
>> */
>> #ifndef smp_cond_load_relaxed_timeout
>> #define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_expr, time_end_ns) \
>> ({ \
>> typeof(ptr) __PTR = (ptr); \
>> __unqual_scalar_typeof(*ptr) VAL; \
>> u32 __n = 0, __spin = SMP_TIMEOUT_POLL_COUNT; \
>> u64 __time_end_ns = (time_end_ns); \
>> \
>> for (;;) { \
>> VAL = READ_ONCE(*__PTR); \
>> if (cond_expr) \
>> break; \
>> cpu_poll_relax(__PTR, VAL, __time_end_ns); \
>
> With time_end_ns being passed to cpu_poll_relax(), we assume that this
> is always the absolute time. Do we still need time_expr in this case?
> It works for WFET as long as we can map this time_end_ns onto the
> hardware CNTVCT.

So I like this idea. Given that we only promise a coarse granularity, we
should be able to get by with a coarse clock of our choosing.

However, maybe some callers need a globally consistent clock, just in
case they could migrate and do something stateful in the cond_expr?
(For instance, rqspinlock wants ktime_mono, though I don't think those
callers can migrate.)
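
Such a caller would just supply that clock as time_expr. A minimal,
untested sketch (the lock type and the timeout constant are made up
here; rqspinlock's real call site differs):

	/* Poll until lock->val reads 0 or the deadline passes. */
	u64 time_end_ns = ktime_get_mono_fast_ns() + LOCK_TIMEOUT_NS;

	smp_cond_load_relaxed_timeout(&lock->val, VAL == 0,
				      ktime_get_mono_fast_ns(),
				      time_end_ns);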
> Alternatively, we could pass something like remaining_ns, though not
> sure how smp_cond_load_relaxed_timeout() can decide to spin before
> checking time_expr again (we probably went over this in the past two
> years ;)).

I'm sure it is in there somewhere :).

This one?: https://lore.kernel.org/lkml/aJy414YufthzC1nv@arm.com/
Though the whole wait_policy thing confused the issue somewhat there.

That problem exists for both remaining_ns and time_end_ns with WFE,
but I think we are fine so long as SMP_TIMEOUT_POLL_COUNT is defined
to be 1.
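
(On arm64 that would look loosely like the sketch below. Illustrative
only: it hand-waves the WFET path, which would also consume
time_end_ns.)

	/*
	 * One poll per timeout check: WFE wakeups are bounded by the
	 * event stream, so re-check the clock on every wakeup.
	 */
	#define SMP_TIMEOUT_POLL_COUNT	1

	#define cpu_poll_relax(ptr, val, time_end_ns)			\
	do {								\
		if (arch_timer_evtstrm_available())			\
			__cmpwait_relaxed(ptr, val);			\
		else							\
			cpu_relax();					\
	} while (0)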

For now, I think it makes sense to always pass the absolute deadline
down to cpu_poll_relax(), even if the caller passes a relative
remaining_ns. So:

#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_expr, remaining_ns)	\
({										\
	typeof(ptr) __PTR = (ptr);						\
	__unqual_scalar_typeof(*ptr) VAL;					\
	u32 __n = 0, __spin = SMP_TIMEOUT_POLL_COUNT;				\
	u64 __time_start_ns = (time_expr);					\
	u64 __time_end_ns = __time_start_ns + (remaining_ns);			\
										\
	for (;;) {								\
		VAL = READ_ONCE(*__PTR);					\
		if (cond_expr)							\
			break;							\
		cpu_poll_relax(__PTR, VAL, __time_end_ns);			\
		if (++__n < __spin)						\
			continue;						\
		if ((time_expr) >= __time_end_ns) {				\
			VAL = READ_ONCE(*__PTR);				\
			break;							\
		}								\
		__n = 0;							\
	}									\
	(typeof(*ptr))VAL;							\
})
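
With that, a relative-timeout caller stays simple. Untested, e.g.:

	/* Wait up to 1ms for *addr to go non-zero, on the local clock. */
	val = smp_cond_load_relaxed_timeout(addr, VAL != 0,
					    local_clock(), NSEC_PER_MSEC);
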
--
ankur