Message-ID: <169537034995.27769.6683093223501252262.tip-bot2@tip-bot2>
Date: Fri, 22 Sep 2023 08:12:29 -0000
From: "tip-bot2 for Ingo Molnar" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Nathan Chancellor <nathan@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, x86@...nel.org
Subject: [tip: x86/asm] x86/bitops: Remove unused __sw_hweight64() assembly implementation on x86-32

The following commit has been merged into the x86/asm branch of tip:

Commit-ID:     ad424743256b0119bd60a9248db4df5d998000a4
Gitweb:        https://git.kernel.org/tip/ad424743256b0119bd60a9248db4df5d998000a4
Author:        Ingo Molnar <mingo@...nel.org>
AuthorDate:    Sat, 22 Jan 2022 13:39:15 +01:00
Committer:     Ingo Molnar <mingo@...nel.org>
CommitterDate: Fri, 22 Sep 2023 09:34:50 +02:00

x86/bitops: Remove unused __sw_hweight64() assembly implementation on x86-32

Header cleanups in the fast-headers tree highlighted that we have an
unused assembly implementation for __sw_hweight64():

    WARNING: modpost: EXPORT symbol "__sw_hweight64" [vmlinux] version ...

__arch_hweight64() on x86-32 is defined in the
arch/x86/include/asm/arch_hweight.h header as an inline, using
__arch_hweight32():

  #ifdef CONFIG_X86_32
  static inline unsigned long __arch_hweight64(__u64 w)
  {
          return __arch_hweight32((u32)w) +
                 __arch_hweight32((u32)(w >> 32));
  }

*But* there's also a __sw_hweight64() assembly implementation:

  arch/x86/lib/hweight.S

  SYM_FUNC_START(__sw_hweight64)
  #ifdef CONFIG_X86_64
  ...
  #else /* CONFIG_X86_32 */
          /* We're getting an u64 arg in (%eax,%edx): unsigned long hweight64(__u64 w) */
          pushl   %ecx

          call    __sw_hweight32
          movl    %eax, %ecx      # stash away result
          movl    %edx, %eax      # second part of input
          call    __sw_hweight32
          addl    %ecx, %eax      # result

          popl    %ecx
          ret
  #endif

But this __sw_hweight64 assembly implementation is unused - and it's
essentially doing the same thing that the inline wrapper does.

Remove the assembly version and add a comment about it.

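The equivalence is easy to sanity-check from user space: the population
count of a 64-bit value always equals the sum of the population counts of
its two 32-bit halves. The snippet below is only an illustrative,
stand-alone check using compiler builtins - it is not kernel code and the
helper name is made up:

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Mirrors the inline wrapper: count each 32-bit half, then add. */
  static unsigned int hweight64_split(uint64_t w)
  {
          return __builtin_popcount((uint32_t)w) +
                 __builtin_popcount((uint32_t)(w >> 32));
  }

  int main(void)
  {
          const uint64_t samples[] = {
                  0, 1, 0xffffffffULL, 0x8000000000000001ULL,
                  0xdeadbeefcafef00dULL,
          };

          for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
                  assert(hweight64_split(samples[i]) ==
                         (unsigned int)__builtin_popcountll(samples[i]));

          printf("split count matches full 64-bit count for all samples\n");
          return 0;
  }
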
Reported-by: Nathan Chancellor <nathan@...nel.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org
---
 arch/x86/lib/hweight.S | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/arch/x86/lib/hweight.S b/arch/x86/lib/hweight.S
index 12c16c6..0a152e5 100644
--- a/arch/x86/lib/hweight.S
+++ b/arch/x86/lib/hweight.S
@@ -36,8 +36,12 @@ SYM_FUNC_START(__sw_hweight32)
 SYM_FUNC_END(__sw_hweight32)
 EXPORT_SYMBOL(__sw_hweight32)
 
-SYM_FUNC_START(__sw_hweight64)
+/*
+ * No 32-bit variant, because it's implemented as an inline wrapper
+ * on top of __arch_hweight32():
+ */
 #ifdef CONFIG_X86_64
+SYM_FUNC_START(__sw_hweight64)
 	pushq   %rdi
 	pushq   %rdx
 
@@ -66,18 +70,6 @@ SYM_FUNC_START(__sw_hweight64)
 	popq    %rdx
 	popq    %rdi
 	RET
-#else /* CONFIG_X86_32 */
-	/* We're getting an u64 arg in (%eax,%edx): unsigned long hweight64(__u64 w) */
-	pushl   %ecx
-
-	call    __sw_hweight32
-	movl    %eax, %ecx	# stash away result
-	movl    %edx, %eax	# second part of input
-	call    __sw_hweight32
-	addl    %ecx, %eax	# result
-
-	popl    %ecx
-	RET
-#endif
 SYM_FUNC_END(__sw_hweight64)
 EXPORT_SYMBOL(__sw_hweight64)
+#endif