Open Source and information security mailing list archives
 
Message-ID: <ZSQn4Mppz9aJgFib@yury-ThinkPad>
Date:   Mon, 9 Oct 2023 09:18:40 -0700
From:   Yury Norov <yury.norov@...il.com>
To:     Alexander Lobakin <aleksander.lobakin@...el.com>
Cc:     Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        Rasmus Villemoes <linux@...musvillemoes.dk>,
        Alexander Potapenko <glider@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Eric Dumazet <edumazet@...gle.com>,
        David Ahern <dsahern@...nel.org>,
        Przemek Kitszel <przemyslaw.kitszel@...el.com>,
        Simon Horman <simon.horman@...igine.com>,
        netdev@...r.kernel.org, linux-btrfs@...r.kernel.org,
        dm-devel@...hat.com, ntfs3@...ts.linux.dev,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 03/14] bitops: let the compiler optimize __assign_bit()

On Mon, Oct 09, 2023 at 05:10:15PM +0200, Alexander Lobakin wrote:
> Since commit b03fc1173c0c ("bitops: let optimize out non-atomic bitops
> on compile-time constants"), the compilers are able to expand inline
> bitmap operations to compile-time initializers when possible.
> However, during the round of replacing if-__set-else-__clear patterns
> with __assign_bit() as per Andy's advice, bloat-o-meter showed a +1024-byte
> difference in object code size for one module (even one function),
> where the pattern:
> 
> 	DECLARE_BITMAP(foo) = { }; // on the stack, zeroed
> 
> 	if (a)
> 		__set_bit(const_bit_num, foo);
> 	if (b)
> 		__set_bit(another_const_bit_num, foo);
> 	...
> 
> is heavily used, although there should be no difference: the bitmap is
> zeroed, so the second half of __assign_bit() should be compiled-out as
> a no-op.
> I either missed the fact that __assign_bit() has the bitmap pointer
> marked as `volatile` (as we usually do for bitmaps) or was hoping that the

No, we usually don't. Atomic ops on individual bits are a notable exception
for bitmaps, as the comment for generic_test_bit() says, for example:
         /*
          * Unlike the bitops with the '__' prefix above, this one *is* atomic,
          * so `volatile` must always stay here with no cast-aways. See
          * `Documentation/atomic_bitops.txt` for the details.
          */

For non-atomic single-bit operations and all multi-bit ops, volatile is
useless, and generic___test_and_set_bit() in the same file casts the
volatile addr pointer to a plain 'unsigned long *'.
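For reference, a simplified standalone sketch of that pattern, modeled on
the generic non-atomic helpers (BIT_MASK()/BIT_WORD() are inlined here so
the snippet is self-contained; the real kernel definitions live elsewhere):

```c
#include <stdbool.h>
#include <limits.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

/*
 * Simplified sketch of generic___test_and_set_bit(): the volatile
 * qualifier is cast away before the read-modify-write, since non-atomic
 * ops make no atomicity or ordering guarantees anyway, and keeping
 * `volatile` would only block compiler optimizations.
 */
static inline bool
generic___test_and_set_bit(unsigned long nr, volatile unsigned long *addr)
{
	unsigned long mask = BIT_MASK(nr);
	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
	unsigned long old = *p;

	*p = old | mask;
	return (old & mask) != 0;
}
```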

> compilers would at least try to look past the `volatile` for
> __always_inline functions. Anyhow, due to that attribute, the compilers
> were always compiling the whole expression and no mentioned compile-time
> optimizations were working.
> 
> Convert __assign_bit() to a macro since it's a very simple if-else and
> all of the checks are performed inside __set_bit() and __clear_bit(),
> thus that wrapper has to be as transparent as possible. After that
> change, despite it showing only a -20 bytes change for vmlinux (since
> __assign_bit() is still relatively unpopular), no drastic code size
> changes happen when replacing if-set-else-clear for onstack bitmaps with
> __assign_bit(), meaning the compiler now expands them to the actual
> operations with all the expected optimizations.
> 
> Cc: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@...el.com>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> ---
>  include/linux/bitops.h | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> index e0cd09eb91cd..f98f4fd1047f 100644
> --- a/include/linux/bitops.h
> +++ b/include/linux/bitops.h
> @@ -284,14 +284,8 @@ static __always_inline void assign_bit(long nr, volatile unsigned long *addr,
>  		clear_bit(nr, addr);
>  }
>  
> -static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
> -					 bool value)
> -{
> -	if (value)
> -		__set_bit(nr, addr);
> -	else
> -		__clear_bit(nr, addr);
> -}
> +#define __assign_bit(nr, addr, value)				\
> +	((value) ? __set_bit(nr, addr) : __clear_bit(nr, addr))

Can you protect nr and addr with parentheses just as well?
Can you convert the atomic version too, to keep them synchronized?
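One possible shape for what is being asked, as a standalone sketch: every
macro parameter is parenthesized at each expansion site, so arguments like
`base + off` or `a == b` expand safely. The __set_bit()/__clear_bit() below
are minimal single-word stand-ins for the kernel helpers, just to keep the
snippet compilable; they are not the real implementations:

```c
#include <stdbool.h>
#include <limits.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

/* Minimal stand-ins for the kernel's non-atomic bitops. */
static inline void __set_bit(long nr, unsigned long *addr)
{
	addr[BIT_WORD(nr)] |= BIT_MASK(nr);
}

static inline void __clear_bit(long nr, unsigned long *addr)
{
	addr[BIT_WORD(nr)] &= ~BIT_MASK(nr);
}

/*
 * All three parameters parenthesized: `nr` and `addr` are now protected
 * against operator-precedence surprises, not just `value`. A ternary over
 * two void expressions is valid C, so this stays a plain expression.
 */
#define __assign_bit(nr, addr, value)				\
	((value) ? __set_bit((nr), (addr)) : __clear_bit((nr), (addr)))
```

The atomic assign_bit() could be converted to the same macro shape with
set_bit()/clear_bit() in place of the underscored variants.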

>  
>  /**
>   * __ptr_set_bit - Set bit in a pointer's value
> -- 
> 2.41.0
