Message-Id: <20200222000214.2169531-1-jesse.brandeburg@intel.com>
Date: Fri, 21 Feb 2020 16:02:13 -0800
From: Jesse Brandeburg <jesse.brandeburg@...el.com>
To: tglx@...utronix.de, mingo@...hat.com, bp@...en8.de
Cc: Jesse Brandeburg <jesse.brandeburg@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, linux@...musvillemoes.dk,
andriy.shevchenko@...el.com, dan.j.williams@...el.com,
peterz@...radead.org
Subject: [PATCH v4 1/2] x86: fix bitops.h warning with a moved cast

Fix many sparse warnings when building with C=1.

When the kernel is compiled with C=1, there are lots of messages like:
  arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)
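
For reference (not part of the patch): C= is kbuild's source-checker
option, so the warnings above can be reproduced with, e.g.:

	make C=1 arch/x86/	# run sparse on files about to be rebuilt
	make C=2 arch/x86/	# run sparse on all source files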

CONST_MASK() uses the signed integer "1" to create the mask, which is
later cast to (u8) at each call site. Move the cast into the definition
and clean up the call sites so that sparse stops warning.

The warning fired because masks for bits at the top of a byte (nr = 7,
15, 23, 31, ...) expand to 0x80; inverting that with ~ operates on the
int-promoted value, yielding 0xffffff7f, and the subsequent (u8) cast
drops bits that are set, changing the constant's value (but I'm not a
compiler expert). It was really only clear_bit() that had the problem,
and only for bit positions next to a byte boundary (the top bit).
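
To make the promotion explicit (illustrative userspace snippet, not
part of the patch; the _OLD/_NEW macro names are made up here):

	#include <stdio.h>
	#include <stdint.h>

	typedef uint8_t u8;

	#define CONST_MASK_OLD(nr)	(1 << ((nr) & 7))
	#define CONST_MASK_NEW(nr)	((u8)1 << ((nr) & 7))

	int main(void)
	{
		/* '~' flips all 32 bits of the int-promoted mask */
		printf("%#x\n", (unsigned)~CONST_MASK_OLD(7));	/* 0xffffff7f */
		/* the (u8) cast then drops set bits -- what sparse flags */
		printf("%#x\n", (u8)~CONST_MASK_OLD(7));	/* 0x7f */
		/* xor with 0xff flips only the low byte: same value,
		 * nothing truncated */
		printf("%#x\n", CONST_MASK_NEW(7) ^ 0xff);	/* 0x7f */
		return 0;
	}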

Verified with a test module (see the next patch) and by inspecting the
assembly that the patch doesn't change the generated code.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@...el.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@...el.com>
---
v4: Reversed argument order as suggested by David Laight; added Reviewed-by
v3: Cleaned up the header file changes as per peterz
v2: Used the correct Cc: list
---
 arch/x86/include/asm/bitops.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 062cdecb2f24..fed152434ed0 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -46,7 +46,7 @@
  * a mask operation on a byte.
  */
 #define CONST_MASK_ADDR(nr, addr)	WBYTE_ADDR((void *)(addr) + ((nr)>>3))
-#define CONST_MASK(nr)			(1 << ((nr) & 7))
+#define CONST_MASK(nr)			((u8)1 << ((nr) & 7))
 
 static __always_inline void
 arch_set_bit(long nr, volatile unsigned long *addr)
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr))
+			: "iq" (CONST_MASK(nr))
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" (CONST_MASK(nr) ^ 0xff));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");

base-commit: ca7e1fd1026c5af6a533b4b5447e1d2f153e28f2
-- 
2.24.1