Message-ID: <169506886746.27769.14933126685918343806.tip-bot2@tip-bot2>
Date: Mon, 18 Sep 2023 20:27:47 -0000
From: "tip-bot2 for Uros Bizjak" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Uros Bizjak <ubizjak@...il.com>, Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: locking/core] locking/lockref/x86: Enable
ARCH_USE_CMPXCHG_LOCKREF for X86_CMPXCHG64
The following commit has been merged into the locking/core branch of tip:
Commit-ID: a432b7c0cf420dbf2448c6bda6a6697afbb153d5
Gitweb: https://git.kernel.org/tip/a432b7c0cf420dbf2448c6bda6a6697afbb153d5
Author: Uros Bizjak <ubizjak@...il.com>
AuthorDate: Mon, 18 Sep 2023 20:40:27 +02:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Mon, 18 Sep 2023 22:18:32 +02:00
locking/lockref/x86: Enable ARCH_USE_CMPXCHG_LOCKREF for X86_CMPXCHG64
The following commit:

  bc08b449ee14 ("lockref: implement lockless reference count updates using cmpxchg()")

enabled lockless reference count updates using cmpxchg() only for x86_64,
and left x86_32 behind due to the inability to detect support for
the cmpxchg8b instruction.
Nowadays, we can use CONFIG_X86_CMPXCHG64 for this purpose. Also,
by using try_cmpxchg64() instead of cmpxchg64() in the CMPXCHG_LOOP macro,
the compiler actually produces sane code, improving the
lockref_get_not_zero() main loop from:
eb: 8d 48 01 lea 0x1(%eax),%ecx
ee: 85 c0 test %eax,%eax
f0: 7e 2f jle 121 <lockref_get_not_zero+0x71>
f2: 8b 44 24 10 mov 0x10(%esp),%eax
f6: 8b 54 24 14 mov 0x14(%esp),%edx
fa: 8b 74 24 08 mov 0x8(%esp),%esi
fe: f0 0f c7 0e lock cmpxchg8b (%esi)
102: 8b 7c 24 14 mov 0x14(%esp),%edi
106: 89 c1 mov %eax,%ecx
108: 89 c3 mov %eax,%ebx
10a: 8b 74 24 10 mov 0x10(%esp),%esi
10e: 89 d0 mov %edx,%eax
110: 31 fa xor %edi,%edx
112: 31 ce xor %ecx,%esi
114: 09 f2 or %esi,%edx
116: 75 58 jne 170 <lockref_get_not_zero+0xc0>
to:
350: 8d 4f 01 lea 0x1(%edi),%ecx
353: 85 ff test %edi,%edi
355: 7e 79 jle 3d0 <lockref_get_not_zero+0xb0>
357: f0 0f c7 0e lock cmpxchg8b (%esi)
35b: 75 53 jne 3b0 <lockref_get_not_zero+0x90>
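The codegen win comes from the calling convention of the two primitives: cmpxchg64() returns the previous value, so the loop has to compare two 64-bit halves by hand (the xor/xor/or/jne sequence above), while try_cmpxchg64() returns a boolean and updates the expected value in place on failure, letting the compiler branch directly on the flag that cmpxchg8b already set. Below is a minimal user-space sketch of this pattern, assuming C11 atomics as a stand-in for the kernel primitive (atomic_compare_exchange_weak() has the same try_cmpxchg()-style semantics); the struct name and the count-in-high-32-bits layout are assumptions for the demo, not the kernel's definitions:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for struct lockref: low 32 bits model the
 * lock word, high 32 bits the reference count. */
struct lockref_model {
	_Atomic uint64_t lock_count;
};

static bool model_get_not_zero(struct lockref_model *lr)
{
	uint64_t old = atomic_load_explicit(&lr->lock_count,
					    memory_order_relaxed);
	for (;;) {
		int32_t count = (int32_t)(old >> 32);

		if (count <= 0)
			return false;	/* mirrors the 'jle' bail-out */

		uint64_t new = old + (1ULL << 32);	/* count++ */

		/* try_cmpxchg()-style: on failure 'old' is refreshed
		 * in place, so no separate reload and no manual 64-bit
		 * equality test are needed before retrying. */
		if (atomic_compare_exchange_weak(&lr->lock_count,
						 &old, new))
			return true;
	}
}
```

On x86-32 a compiler can lower the compare-exchange straight to lock cmpxchg8b followed by a conditional branch, which is essentially the five-instruction loop shown above.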
Signed-off-by: Uros Bizjak <ubizjak@...il.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Acked-by: Linus Torvalds <torvalds@...ux-foundation.org>
Link: https://lore.kernel.org/r/20230918184050.9180-1-ubizjak@gmail.com
---
arch/x86/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 66bfaba..1379603 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -28,7 +28,6 @@ config X86_64
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
 	select ARCH_SUPPORTS_PER_VMA_LOCK
-	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_SOFT_DIRTY
 	select MODULES_USE_ELF_RELA
 	select NEED_DMA_MAP_STATE
@@ -118,6 +117,7 @@ config X86
 	select ARCH_SUPPORTS_LTO_CLANG
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_USE_BUILTIN_BSWAP
+	select ARCH_USE_CMPXCHG_LOCKREF if X86_CMPXCHG64
 	select ARCH_USE_MEMTEST
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS