Message-ID: <172122712402.2215.3878920766724866170.tip-bot2@tip-bot2>
Date: Wed, 17 Jul 2024 14:38:44 -0000
From: "tip-bot2 for Uros Bizjak" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Uros Bizjak <ubizjak@...il.com>, Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: locking/core] locking/atomic/x86: Introduce the
read64_nonatomic macro to x86_32 with cx8
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 6e30a7c98a9fda2f894e970e9cd637657f39c59d
Gitweb: https://git.kernel.org/tip/6e30a7c98a9fda2f894e970e9cd637657f39c59d
Author: Uros Bizjak <ubizjak@...il.com>
AuthorDate: Wed, 05 Jun 2024 20:13:15 +02:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Wed, 17 Jul 2024 16:28:11 +02:00
locking/atomic/x86: Introduce the read64_nonatomic macro to x86_32 with cx8
As described in commit:
e73c4e34a0e9 ("locking/atomic/x86: Introduce arch_atomic64_read_nonatomic() to x86_32")
the value preload before the CMPXCHG loop does not need to be atomic.
Introduce the read64_nonatomic assembly macro to load the value from an
atomic_t location in a faster non-atomic way and use it in
atomic64_cx8_32.S.
Signed-off-by: Uros Bizjak <ubizjak@...il.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Link: https://lore.kernel.org/r/20240605181424.3228-1-ubizjak@gmail.com
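
[ Editor's note, not part of the patch: a minimal C sketch of the retry
  pattern this commit relies on, written with the GCC __atomic builtins;
  the *_sketch helper names are made up for illustration.  The point is
  that a torn 64-bit preload is harmless: if the initial read is
  inconsistent, the first compare-and-exchange fails and reloads a
  consistent value, so the loop converges regardless. ]

#include <stdbool.h>
#include <stdint.h>

static inline uint64_t read64_nonatomic_sketch(const volatile uint64_t *ptr)
{
	const volatile uint32_t *p = (const volatile uint32_t *)ptr;

	/* Two plain 32-bit loads; the combined value may be torn. */
	return (uint64_t)p[0] | ((uint64_t)p[1] << 32);
}

static inline uint64_t atomic64_add_return_sketch(uint64_t delta,
						  volatile uint64_t *ptr)
{
	/* Cheap, possibly torn preload -- the loop below corrects it. */
	uint64_t old = read64_nonatomic_sketch(ptr);
	uint64_t new;

	do {
		new = old + delta;
		/*
		 * On failure the builtin (CMPXCHG8B on x86-32) writes the
		 * current value back into 'old', so a torn preload only
		 * costs one extra iteration.
		 */
	} while (!__atomic_compare_exchange_n(ptr, &old, new, false,
					      __ATOMIC_SEQ_CST,
					      __ATOMIC_SEQ_CST));
	return new;
}

[ The read64_nonatomic assembly macro added below plays the role of the
  two 32-bit loads in read64_nonatomic_sketch(), avoiding the cmpxchg8b
  that the existing read64 macro uses just to obtain the initial value. ]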
---
arch/x86/lib/atomic64_cx8_32.S | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S
index 90afb48..b2eff07 100644
--- a/arch/x86/lib/atomic64_cx8_32.S
+++ b/arch/x86/lib/atomic64_cx8_32.S
@@ -16,6 +16,11 @@
cmpxchg8b (\reg)
.endm
+.macro read64_nonatomic reg
+ movl (\reg), %eax
+ movl 4(\reg), %edx
+.endm
+
SYM_FUNC_START(atomic64_read_cx8)
read64 %ecx
RET
@@ -51,7 +56,7 @@ SYM_FUNC_START(atomic64_\func\()_return_cx8)
movl %edx, %edi
movl %ecx, %ebp
- read64 %ecx
+ read64_nonatomic %ecx
1:
movl %eax, %ebx
movl %edx, %ecx
@@ -79,7 +84,7 @@ addsub_return sub sub sbb
SYM_FUNC_START(atomic64_\func\()_return_cx8)
pushl %ebx
- read64 %esi
+ read64_nonatomic %esi
1:
movl %eax, %ebx
movl %edx, %ecx