Message-ID: <20230809021311.1390578-6-leobras@redhat.com>
Date: Tue, 8 Aug 2023 23:13:09 -0300
From: Leonardo Bras <leobras@...hat.com>
To: Will Deacon <will@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Mark Rutland <mark.rutland@....com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Leonardo Bras <leobras@...hat.com>,
Andrea Parri <parri.andrea@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Andrzej Hajda <andrzej.hajda@...el.com>,
Palmer Dabbelt <palmer@...osinc.com>,
Guo Ren <guoren@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org
Subject: [RFC PATCH v4 4/5] riscv/cmpxchg: Implement cmpxchg for variables of size 1 and 2
cmpxchg for variables of size 1 byte and 2 bytes is not yet available on
riscv, even though it is present in other architectures such as arm64 and
x86. This could prevent some locking mechanisms from being implemented,
or require rework to make them work properly.

Implement 1-byte and 2-byte cmpxchg in order to achieve parity with other
architectures.
Signed-off-by: Leonardo Bras <leobras@...hat.com>
---
arch/riscv/include/asm/cmpxchg.h | 34 ++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 5a07646fae65..cfada8a7cfd2 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -72,6 +72,35 @@
* indicated by comparing RETURN with OLD.
*/
+#define __arch_cmpxchg_masked(sc_sfx, prepend, append, r, p, o, n) \
+({ \
+ u32 *__ptr32b = (u32 *)((ulong)(p) & ~0x3); \
+ ulong __s = ((ulong)(p) & (0x4 - sizeof(*p))) * BITS_PER_BYTE; \
+ ulong __mask = GENMASK(((sizeof(*p)) * BITS_PER_BYTE) - 1, 0) \
+ << __s; \
+ ulong __newx = (ulong)(n) << __s; \
+ ulong __oldx = (ulong)(o) << __s; \
+ ulong __retx; \
+ ulong __rc; \
+ \
+ __asm__ __volatile__ ( \
+ prepend \
+ "0: lr.w %0, %2\n" \
+ " and %1, %0, %z5\n" \
+ " bne %1, %z3, 1f\n" \
+ " and %1, %0, %z6\n" \
+ " or %1, %1, %z4\n" \
+ " sc.w" sc_sfx " %1, %1, %2\n" \
+ " bnez %1, 0b\n" \
+ append \
+ "1:\n" \
+ : "=&r" (__retx), "=&r" (__rc), "+A" (*(__ptr32b)) \
+ : "rJ" ((long)__oldx), "rJ" (__newx), \
+ "rJ" (__mask), "rJ" (~__mask) \
+ : "memory"); \
+ \
+ r = (__typeof__(*(p)))((__retx & __mask) >> __s); \
+})
#define __arch_cmpxchg(lr_sfx, sc_sfx, prepend, append, r, p, co, o, n) \
({ \
@@ -98,6 +127,11 @@
__typeof__(*(ptr)) __ret; \
\
switch (sizeof(*__ptr)) { \
+ case 1: \
+ case 2: \
+ __arch_cmpxchg_masked(sc_sfx, prepend, append, \
+ __ret, __ptr, __old, __new); \
+ break; \
case 4: \
__arch_cmpxchg(".w", ".w" sc_sfx, prepend, append, \
__ret, __ptr, (long), __old, __new); \
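For readers of the archive: the masked variant above emulates a narrow
cmpxchg with a word-wide LR/SC sequence. The pointer is rounded down to
its containing 32-bit word, the field's bit offset (__s) and mask
(__mask) are derived from the low address bits, and the loop then loads
the word with lr.w, branches out to label 1 if the masked field does
not match the shifted old value, splices the shifted new value into the
untouched neighbour bytes, and retries sc.w until the store succeeds.
For a u8 at byte offset 2, for instance, __s is 16 and __mask is
0x00ff0000, so only that byte takes part in the comparison.

The shift/mask arithmetic is easy to sanity-check in userspace with a
throwaway program such as the one below (GENMASK0 is a local stand-in
for the kernel's GENMASK(h, 0)):

  #include <stdio.h>

  #define BITS_PER_BYTE	8
  /* GENMASK(h, 0) equivalent, good enough for small h. */
  #define GENMASK0(h)	((1UL << ((h) + 1)) - 1)

  int main(void)
  {
  	for (unsigned long size = 1; size <= 2; size++) {
  		for (unsigned long addr = 0; addr < 4; addr += size) {
  			unsigned long s = (addr & (0x4 - size)) *
  					  BITS_PER_BYTE;
  			unsigned long mask =
  				GENMASK0(size * BITS_PER_BYTE - 1) << s;

  			printf("size=%lu offset=%lu shift=%2lu mask=0x%08lx\n",
  			       size, addr, s, mask);
  		}
  	}
  	return 0;
  }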
--
2.41.0