Message-ID: <20260121160912-cafca67d-bfc0-414a-adaa-80c863acd93a@linutronix.de>
Date: Wed, 21 Jan 2026 16:17:18 +0100
From: Thomas Weißschuh <thomas.weissschuh@...utronix.de>
To: david.laight.linux@...il.com
Cc: Nathan Chancellor <nathan@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Arnd Bergmann <arnd@...db.de>, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, Yury Norov <yury.norov@...il.com>,
Lucas De Marchi <lucas.demarchi@...el.com>, Jani Nikula <jani.nikula@...el.com>,
Vincent Mailhol <mailhol.vincent@...adoo.fr>, Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Kees Cook <keescook@...omium.org>, Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH next 12/14] bits: move the definitions of BIT() and
BIT_ULL() back to linux/bits.h
On Wed, Jan 21, 2026 at 02:57:29PM +0000, david.laight.linux@...il.com wrote:
> From: David Laight <david.laight.linux@...il.com>
>
> The definition of BIT() was moved from linux/bits.h to vdso/bits.h to
> isolate the vdso from 'normal' kernel headers.
> BIT_ULL() was then moved to be defined in the same place for consistency.
>
> Since then linux/bits.h has gained BIT_Unn() and it really makes sense
> for BIT() and BIT_ULL() to be defined in the same place.
>
> Move BIT_ULL() and make code that includes both headers use the
> definition of BIT() from linux/bits.h.
> Add BIT_U128() for completeness.
>
> This lets BIT() pick up the extra compile time checks for W=[1c] builds
> that detect errors like:
> long foo(void) { int x = 64; return BIT(x); }
> For which clang (silently) just generates a 'return' instruction.
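
For reference, the pattern behind those checks is roughly the sketch
below. bit_check_fail() and BIT_CHECKED() are illustrative names only;
the series itself uses BIT_INPUT_CHECK()/BIT_TYPE(), quoted further down.

	/*
	 * If 'nr' is a compile-time constant that is out of range, the
	 * call to the error-attributed function survives constant
	 * folding and the build fails; otherwise the ternary folds to
	 * 0 and only the shift remains.
	 */
	extern int bit_check_fail(void)
		__attribute__((error("Bit number out of range")));

	#define BIT_CHECKED(type, nr)						\
		((unsigned int)(__builtin_constant_p(nr) &&			\
				((nr) < 0 || (nr) >= 8 * sizeof(type)) ?	\
				bit_check_fail() : 0)				\
		 + ((type)1 << (nr)))
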
>
> Note that nothing in the x86-64 build relies on the definition in
> vdso/bits.h; linux/bits.h is always included.
>
> Signed-off-by: David Laight <david.laight.linux@...il.com>
> ---
> include/linux/bits.h | 7 ++++++-
> include/vdso/bits.h | 2 +-
> 2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/bits.h b/include/linux/bits.h
> index 0f559038981d..3dd32b9eef35 100644
> --- a/include/linux/bits.h
> +++ b/include/linux/bits.h
> @@ -2,7 +2,6 @@
> #ifndef __LINUX_BITS_H
> #define __LINUX_BITS_H
>
> -#include <vdso/bits.h>
> #include <uapi/linux/bits.h>
>
> #define BIT_MASK(nr) (UL(1) << ((nr) % BITS_PER_LONG))
> @@ -89,10 +88,16 @@ int BIT_INPUT_CHECK_FAIL(void) __compiletime_error("Bit number out of range");
> ((unsigned int)BIT_INPUT_CHECK(+(nr), BITS_PER_TYPE(type)) + ((type)1 << (nr)))
> #endif /* defined(__ASSEMBLY__) */
>
> +/* Prefer this definition of BIT() to the one in vdso/bits.h */
> +#undef BIT
> +#define __VDSO_BITS_H
This is ugly.
Why can't the vDSO code make use of those checks, too?
Or use _BITUL() from the UAPI in the vDSO and remove vdso/bits.h.
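
I.e. something along these lines at the (few) vDSO usage sites (sketch
only; VDSO_EXAMPLE_FLAG is a made-up placeholder, and vdso/const.h
already includes uapi/linux/const.h, so _BITUL() is in reach there):

	#include <vdso/const.h>

	/* was: BIT(3), pulled in via vdso/bits.h */
	#define VDSO_EXAMPLE_FLAG	_BITUL(3)
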
> +#define BIT(nr) BIT_TYPE(unsigned long, nr)
> +#define BIT_ULL(nr) BIT_TYPE(unsigned long long, nr)
> #define BIT_U8(nr) BIT_TYPE(u8, nr)
> #define BIT_U16(nr) BIT_TYPE(u16, nr)
> #define BIT_U32(nr) BIT_TYPE(u32, nr)
> #define BIT_U64(nr) BIT_TYPE(u64, nr)
> +#define BIT_U128(nr) BIT_TYPE(u128, nr)
>
> #if defined(__ASSEMBLY__)
>
> diff --git a/include/vdso/bits.h b/include/vdso/bits.h
> index 388b212088ea..a6ac1e6b637c 100644
> --- a/include/vdso/bits.h
> +++ b/include/vdso/bits.h
> @@ -4,7 +4,7 @@
> #include <vdso/const.h>
>
> +/* Most code picks up BIT() from linux/bits.h */
> #define BIT(nr) (UL(1) << (nr))
> -#define BIT_ULL(nr) (ULL(1) << (nr))
>
> #endif /* __VDSO_BITS_H */
> --
> 2.39.5
>
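
FWIW, with the checked BIT() the example from your commit message should
then fail W=1/W=c builds outright instead of being miscompiled silently
(sketch; the exact diagnostic depends on the compiler):

	long foo(void)
	{
		int x = 64;
		return BIT(x);	/* propagated constant, caught at build
				 * time: "Bit number out of range" */
	}

	long bar(int x)
	{
		return BIT(x);	/* truly runtime values are still
				 * undefined behaviour, as before */
	}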