Message-Id: <20200310221747.2848474-1-jesse.brandeburg@intel.com>
Date: Tue, 10 Mar 2020 15:17:46 -0700
From: Jesse Brandeburg <jesse.brandeburg@...el.com>
To: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de
Cc: Jesse Brandeburg <jesse.brandeburg@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, linux@...musvillemoes.dk,
andriy.shevchenko@...el.com, dan.j.williams@...el.com,
peterz@...radead.org
Subject: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

Fix many sparse warnings when building with C=1. These warnings are
useless noise from the bitops.h file, and getting rid of them helps
developers make more use of the tools and possibly find real bugs.

When the kernel is compiled with C=1, there are lots of messages like:

  arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)

CONST_MASK() uses a signed integer "1" to create the mask, which is
later cast to (u8) in order to yield an 8-bit value for the assembly
instructions to use. Simplify the expressions so that they clearly
work on 8-bit values only, which still keeps sparse happy without an
accidental promotion to a 32-bit integer.
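
To see the promotion outside the kernel, here is a small standalone
sketch (not part of the patch, and assuming CONST_MASK() keeps its
bitops.h definition of (1 << ((nr) & 7))) comparing the old cast with
the new expression:

#include <stdio.h>

/* Assumed to match the bitops.h definition. */
#define CONST_MASK(nr)  (1 << ((nr) & 7))

int main(void)
{
        /* "1" is a signed int, so all of the math below is 32-bit int math. */
        unsigned int inverted = ~CONST_MASK(7);                 /* 0xffffff7f */
        unsigned char old_form = (unsigned char)~CONST_MASK(7); /* truncated to 0x7f, what sparse warns about */
        unsigned int new_form = CONST_MASK(7) ^ 0xff;           /* 0x7f, with no truncating cast */

        printf("~CONST_MASK(7)       = 0x%x\n", inverted);
        printf("(u8)~CONST_MASK(7)   = 0x%x\n", old_form);
        printf("CONST_MASK(7) ^ 0xff = 0x%x\n", new_form);
        return 0;
}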

The warning was occurring because certain bitmasks that end with a bit
set next to a natural boundary like 7, 15, 23, or 31 end up with a mask
like 0x7f, which then results in sign extension due to the integer type
promotion rules[1]. It was really only clear_bit() that was having
problems, and only on the operations that resulted in a mask like
0xffffff7f being generated after the inversion.
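
For illustration, another standalone sketch (again not part of the
patch, same assumed CONST_MASK() definition) shows that the plain mask
always fits in 8 bits while the inverted mask always has its upper 24
bits set, which is why only the clear_bit() side was affected:

#include <stdio.h>

/* Assumed to match the bitops.h definition. */
#define CONST_MASK(nr)  (1 << ((nr) & 7))

int main(void)
{
        for (int nr = 0; nr < 8; nr++) {
                unsigned int set_mask = CONST_MASK(nr);    /* 0x01..0x80: always fits in a u8 */
                unsigned int clear_mask = ~CONST_MASK(nr); /* 0xffffff7f etc.: upper 24 bits set */

                printf("nr=%d  set=0x%02x  clear=0x%08x\n", nr, set_mask, clear_mask);
        }
        return 0;
}
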
Verified with a test module (see next patch) and assembly inspection
that the patch doesn't introduce any change in generated code.

[1] https://stackoverflow.com/questions/46073295/implicit-type-promotion-rules

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@...el.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@...el.com>
---
v6: reworded commit message, enhanced explanation
v5: changed code to use simple AND and XOR, updated commit message
v4: reversed argument order as suggested by David Laight, added reviewed-by
v3: cleaned up the header file changes as per peterz
v2: use correct CC: list
---
arch/x86/include/asm/bitops.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 062cdecb2f24..53f246e9df5a 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr))
+			: "iq" (CONST_MASK(nr) & 0xff)
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" (CONST_MASK(nr) ^ 0xff));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
base-commit: 8b614cb8f1dcac8ca77cf4dd85f46ef3055f8238
--
2.24.1