[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20190826183956.GF15933@bombadil.infradead.org>
Date: Mon, 26 Aug 2019 11:39:56 -0700
From: Matthew Wilcox <willy@...radead.org>
To: Denis Efremov <efremov@...ras.ru>
Cc: akpm@...ux-foundation.org, Akinobu Mita <akinobu.mita@...il.com>,
Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
Matthew Wilcox <matthew@....cx>, dm-devel@...hat.com,
linux-fsdevel@...r.kernel.org, linux-media@...r.kernel.org,
Erdem Tumurov <erdemus@...il.com>,
Vladimir Shelekhov <vshel@....nsk.su>
Subject: Re: [PATCH v2] lib/memweight.c: open codes bitmap_weight()
On Sun, Aug 25, 2019 at 02:39:47PM +0300, Denis Efremov wrote:
> On 25.08.2019 09:11, Matthew Wilcox wrote:
> > On Sat, Aug 24, 2019 at 01:01:02PM +0300, Denis Efremov wrote:
> >> This patch open codes the bitmap_weight() call. The direct
> >> invocation of hweight_long() allows to remove the BUG_ON and
> >> excessive "longs to bits, bits to longs" conversion.
> >
> > Honestly, that's not the problem with this function. Take a look
> > at https://danluu.com/assembly-intrinsics/ for a _benchmarked_
> > set of problems with popcnt.
> >
> >> BUG_ON was required to check that bitmap_weight() will return
> >> a correct value, i.e. the computed weight will fit the int type
> >> of the return value.
> >
> > What? No. Look at the _arguments_ of bitmap_weight():
> >
> > static __always_inline int bitmap_weight(const unsigned long *src, unsigned int nbits)
>
> I'm not sure why it is INT_MAX then? I would expect in case we care only about arguments
> something like:
>
> BUG_ON(longs >= UINT_MAX / BITS_PER_LONG);
People aren't always terribly consistent with INT_MAX vs UINT_MAX.
Also, bitmap_weight() should arguably return an unisnged int (it can't
legitimately return a negative value).
> >> With this patch memweight() controls the
> >> computation directly with size_t type everywhere. Thus, the BUG_ON
> >> becomes unnecessary.
> >
> > Why are you bothering? How are you allocating half a gigabyte of memory?
> > Why are you calling memweight() on half a gigabyte of memory?
> >
>
> No, we don't use such big arrays. However, it's possible to remove BUG_ON and make
> the code more "straight". Why do we need to "artificially" limit this function
> to arrays of a particular size if we can relatively simple omit this restriction?
You're not making a great case for changing the implementation of
memweight() here ...
> I don't know how the implementation of this optimization will look like in it's
> final shape, because of different hardware/compiler issues. It looks there are
> a number of different ways to do it https://arxiv.org/pdf/1611.07612.pdf,
> http://0x80.pl/articles/sse-popcount.html.
The problem with using XMM registers is that they have to be saved/restored.
Not to mention the thermal issues caused by heavy usage of AVX instructions.
> However, if it will be based on popcnt instruction I would expect that
> hweight_long will also contain this intrinsics. Since version 4.9.2
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=62011#c13 GCC knows of the
> false-dependency in popcnt and generates code to handle it
Ah! Glad to see GCC knows about this problem and has worked around it.
> (e.g. xor https://godbolt.org/z/Q7AW_d) Thus, I would expect that it's
> possible to use popcnt intrinsics in hweight_long that would be natively
> optimized in all loops like "for (...) { res += hweight_long() }" without
> requiring manual unrolling like in builtin_popcnt_unrolled_errata_manual
> example of Dan Luu's optimization.
That might be expecting rather more from our compiler than is reasonable ...
> >
> > Also, why does the trailer do this:
> >
> > for (; bytes > 0; bytes--, bitmap++)
> > ret += hweight8(*bitmap);
> >
> > instead of calling hweight_long on *bitmap & mask?
> >
>
> Do you mean something like this?
>
> longs = bytes;
> bytes = do_div(longs, sizeof(long));
> bitmap_long = (const unsigned long *)bitmap;
> if (longs) {
> for (; longs > 0; longs--, bitmap_long++)
> ret += hweight_long(*bitmap_long);
> }
> if (bytes) {
> ret += hweight_long(*bitmap_long &
> ((0x1 << bytes * BITS_PER_BYTE) - 1));
> }
>
> The *bitmap_long will lead to buffer overflow here.
No it won't. The CPU will access more bytes than the `bytes' argument
would seem to imply -- but it's going to have fetched that entire
cacheline anyway. It might confuse a very strict bounds checking library,
but usually those just check you're not accessing outside your object,
which is going to be a multiple of 'sizeof(long)' anyway.
If we do something like this, we'll need to use an 'inverse' of that mask
on big-endian machines. ie something more like:
if (bytes) {
unsigned long mask;
if (_BIG_ENDIAN)
mask = ~0UL >> (bytes * 8);
else
mask = ~0UL << (bytes * 8);
ret += hweight_long(*bitmap_long & ~mask);
}
Also we need a memweight() test to be sure we didn't get that wrong.
Powered by blists - more mailing lists