Message-Id: <20241107190818.522639-2-ankur.a.arora@oracle.com>
Date: Thu, 7 Nov 2024 11:08:04 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-pm@...r.kernel.org, kvm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org
Cc: catalin.marinas@....com, will@...nel.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, pbonzini@...hat.com,
vkuznets@...hat.com, rafael@...nel.org, daniel.lezcano@...aro.org,
peterz@...radead.org, arnd@...db.de, lenb@...nel.org,
mark.rutland@....com, harisokn@...zon.com, mtosatti@...hat.com,
sudeep.holla@....com, cl@...two.org, maz@...nel.org,
misono.tomohiro@...itsu.com, maobibo@...ngson.cn,
zhenglifeng1@...wei.com, joao.m.martins@...cle.com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com
Subject: [PATCH v9 01/15] asm-generic: add barrier smp_cond_load_relaxed_timeout()

Add a timed variant of smp_cond_load_relaxed().

This is useful because arm64 supports polling on a condition variable
by waiting directly on the cacheline instead of spin-waiting for the
condition to change.
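
For reference, arm64's existing smp_cond_load_relaxed() waits on the
cacheline roughly as follows (a sketch of the variant in
arch/arm64/include/asm/barrier.h; __cmpwait_relaxed() parks the CPU
via LDXR/WFE until the exclusive monitor observes a store to the line):

  #define smp_cond_load_relaxed(ptr, cond_expr)		\
  ({								\
  	typeof(ptr) __PTR = (ptr);				\
  	__unqual_scalar_typeof(*ptr) VAL;			\
  	for (;;) {						\
  		VAL = READ_ONCE(*__PTR);			\
  		if (cond_expr)					\
  			break;					\
  		/* wait for a store to the line, don't spin */	\
  		__cmpwait_relaxed(__PTR, VAL);			\
  	}							\
  	(typeof(*ptr))VAL;					\
  })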

However, an implementation like this has the problem that it can block
forever -- unless there is an explicit timeout or some other out-of-band
mechanism that lets it come out of the wait state periodically.

smp_cond_load_relaxed_timeout() supports these semantics by taking a
time-check expression and an associated time limit.
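
For example, a caller polling for need-resched with a 100us budget
might look like this (hypothetical: the ti_flags pointer and the
budget are illustrative only):

  	/* VAL names the freshly loaded value of *ti_flags inside
  	 * the condition expression. */
  	u64 deadline = local_clock() + 100 * NSEC_PER_USEC;
  	unsigned long flags;

  	flags = smp_cond_load_relaxed_timeout(ti_flags,
  					      VAL & _TIF_NEED_RESCHED,
  					      local_clock(),
  					      deadline);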

However, note that for the generic spin-wait implementation we want to
minimize the number of instructions executed in each iteration. So,
limit how often we evaluate the time-check expression by doing so only
once every smp_cond_time_check_count iterations.

The inner loop in poll_idle() has a structure and constraints
substantially similar to those of smp_cond_load_relaxed_timeout(), so
define smp_cond_time_check_count to the same value that poll_idle()
uses.
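
For comparison, the inner loop in poll_idle()
(drivers/cpuidle/poll_state.c) currently looks roughly like this, with
POLL_IDLE_RELAX_COUNT defined as 200 (a sketch, elided to the relevant
parts):

  	u64 time_start = local_clock_noinstr();

  	while (!need_resched()) {
  		cpu_relax();
  		if (loop_count++ < POLL_IDLE_RELAX_COUNT)
  			continue;

  		loop_count = 0;
  		/* the time check is amortized over 200 iterations */
  		if (local_clock_noinstr() - time_start > limit) {
  			dev->poll_time_limit = true;
  			break;
  		}
  	}
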
Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
---
include/asm-generic/barrier.h | 42 +++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index d4f581c1e21d..77726ef807e4 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -273,6 +273,48 @@ do { \
 })
 #endif
 
+#ifndef smp_cond_time_check_count
+/*
+ * Limit how often smp_cond_load_relaxed_timeout() evaluates time_expr_ns.
+ * This helps reduce the number of instructions executed while spin-waiting.
+ */
+#define smp_cond_time_check_count 200
+#endif
+
+/**
+ * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering
+ * guarantees until a timeout expires.
+ * @ptr: pointer to the variable to wait on
+ * @cond_expr: boolean expression to wait for
+ * @time_expr_ns: expression that evaluates to the current time (in ns)
+ * @time_limit_ns: time limit, compared against @time_expr_ns
+ *
+ * Equivalent to using READ_ONCE() on the condition variable.
+ *
+ * Due to C lacking lambda expressions we load the value of *ptr into a
+ * pre-named variable @VAL to be used in @cond_expr.
+ */
+#ifndef smp_cond_load_relaxed_timeout
+#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_expr_ns,	\
+				      time_limit_ns) ({			\
+	typeof(ptr) __PTR = (ptr);					\
+	__unqual_scalar_typeof(*ptr) VAL;				\
+	unsigned int __count = 0;					\
+	for (;;) {							\
+		VAL = READ_ONCE(*__PTR);				\
+		if (cond_expr)						\
+			break;						\
+		cpu_relax();						\
+		if (__count++ < smp_cond_time_check_count)		\
+			continue;					\
+		if ((time_expr_ns) >= (time_limit_ns))			\
+			break;						\
+		__count = 0;						\
+	}								\
+	(typeof(*ptr))VAL;						\
+})
+#endif
+
 /*
  * pmem_wmb() ensures that all stores for which the modification
  * are written to persistent storage by preceding instructions have
--
2.43.5