Message-Id: <20221019152140.2679055-1-alexandr.lobakin@intel.com>
Date:   Wed, 19 Oct 2022 17:21:40 +0200
From:   Alexander Lobakin <alexandr.lobakin@...el.com>
To:     Yury Norov <yury.norov@...il.com>
Cc:     Alexander Lobakin <alexandr.lobakin@...el.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        Rasmus Villemoes <linux@...musvillemoes.dk>,
        Michal Swiatkowski <michal.swiatkowski@...ux.intel.com>,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 net-next 1/6] bitmap: try to optimize arr32 <-> bitmap on 64-bit LEs

From: Yury Norov <yury.norov@...il.com>
Date: Tue, 18 Oct 2022 15:41:46 -0700

> On Tue, Oct 18, 2022 at 04:00:22PM +0200, Alexander Lobakin wrote:
> > Unlike bitmap_{from,to}_arr64(), where there can be no out-of-bounds
> > accesses (since u64 is never shorter than unsigned long), this can't
> > be guaranteed for arr32s, because on 64-bit platforms:
> > 
> > bits     BITS_TO_U32 * sizeof(u32)    BITS_TO_LONGS * sizeof(long)
> > 1-32     4                            8
> > 33-64    8                            8
> > 65-96    12                           16
> > 97-128   16                           16
> > 
> > and so on.
> > That is why bitmap_{from,to}_arr32() are always defined there as
> > externs. But quite often @nbits is a compile-time constant, which
> > means we could decide at compile time, based on the number of bits
> > (above), whether the conversion can be inlined or not.
> > 
> > So, try to determine that at compile time and, when both containers
> > have the same size in bytes, resolve the conversion to
> > bitmap_copy_clear_tail() on Little Endian. No changes here for
> > Big Endian or when the number of bits *really* is variable.
> 
> You're saying 'try to optimize', but don't show any numbers. What's
> the target of your optimization? Can you demonstrate how it performs
> in a test or in real workloads?

I had them somewhere, but given that you've suggested a different
approach to try, I'd better retest each variant and collect some
fresh numbers. If it's not worth it, I'll simply drop the patch from
the series; otherwise, I'll include the stats in the commit message.

>  
> > Signed-off-by: Alexander Lobakin <alexandr.lobakin@...el.com>
> > ---
> >  include/linux/bitmap.h | 51 ++++++++++++++++++++++++++++++------------
> >  lib/bitmap.c           | 12 +++++-----
> >  2 files changed, 43 insertions(+), 20 deletions(-)

[...]

> > +#if BITS_PER_LONG == 32
> > +#define bitmap_arr32_compat(nbits)		true
> > +#elif defined(__LITTLE_ENDIAN)
> > +#define bitmap_arr32_compat(nbits)		\
> 
> 'Compat' is reserved for a compatibility layer between kernel and
> user spaces running different ABIs. Can you pick some other word?

Yeah, sure. I was also thinking about this since it didn't sound good
to me either, but couldn't come up with anything better off the top of
my head and then forgot about it entirely.

> 
> > +	(__builtin_constant_p(nbits) &&		\
> > +	 BITS_TO_U32(nbits) * sizeof(u32) ==	\
> > +	 BITS_TO_LONGS(nbits) * sizeof(long))
> 
> I think it should be:
>         round_up(nbits, 32) == round_up(nbits, 64)

Ah, correct.

> 
> >  #else
> > -#define bitmap_from_arr32(bitmap, buf, nbits)			\
> > -	bitmap_copy_clear_tail((unsigned long *) (bitmap),	\
> > -			(const unsigned long *) (buf), (nbits))
> > -#define bitmap_to_arr32(buf, bitmap, nbits)			\
> > -	bitmap_copy_clear_tail((unsigned long *) (buf),		\
> > -			(const unsigned long *) (bitmap), (nbits))
> 
> Can you keep this part untouched? I'd like to have a clear meaning -
> on 32-bit arch, bitmap_to_arr32 is a simple copy.

Sure, why not, I don't have a strong preference here.

> 
> > +#define bitmap_arr32_compat(nbits)		false
> >  #endif
> >  
> > +void __bitmap_from_arr32(unsigned long *bitmap, const u32 *buf, unsigned int nbits);
> > +void __bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits);
> > +
> > +static inline void bitmap_from_arr32(unsigned long *bitmap, const u32 *buf,
> > +				     unsigned int nbits)
> > +{
> > +	const unsigned long *src = (const unsigned long *)buf;
> > +
> > +	if (bitmap_arr32_compat(nbits))
> > +		bitmap_copy_clear_tail(bitmap, src, nbits);
> > +	else
> > +		__bitmap_from_arr32(bitmap, buf, nbits);
> 
> If you would really want to optimize it, I'd suggest something like
> this:
>     #ifdef __LITTLE_ENDIAN
>         /*copy as many full 64-bit words as we can */
>         bitmap_copy(bitmap, src, round_down(nbits, BITS_PER_LONG)); 
> 
>         /* now copy part of last word per-byte */
>         ...
>     #else
> 	__bitmap_from_arr32(bitmap, buf, nbits);
>     #endif
> 
> This should be better because it uses the fast bitmap_copy() regardless
> of the number of bits. Assuming bitmap_copy() is significantly faster
> than bitmap_from_arr(), people would otherwise be surprised by the
> difference in the speed of copying, say, 2048- and 2049-bit bitmaps. Right?
> 
> But unless we'll see real numbers, it's questionable to me if that's
> worth the effort.

Nice suggestion, thanks! I'll retest all I have and then we'll see.

> 
> Thanks,
> Yury

Thanks,
Olek
