Message-ID: <ZTpYbCa0Qmry0HGH@smile.fi.intel.com>
Date: Thu, 26 Oct 2023 15:15:40 +0300
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To: Alexander Potapenko <glider@...gle.com>
Cc: catalin.marinas@....com, will@...nel.org, pcc@...gle.com,
andreyknvl@...il.com, aleksander.lobakin@...el.com,
linux@...musvillemoes.dk, yury.norov@...il.com,
alexandru.elisei@....com, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, eugenis@...gle.com,
syednwaris@...il.com, william.gray@...aro.org,
Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH v9 1/2] lib/bitmap: add bitmap_{read,write}()
On Wed, Oct 25, 2023 at 10:38:11AM +0200, Alexander Potapenko wrote:
> From: Syed Nayyar Waris <syednwaris@...il.com>
>
> The two new functions allow reading/writing values of length up to
> BITS_PER_LONG bits at arbitrary position in the bitmap.
>
> The code was taken from "bitops: Introduce the for_each_set_clump macro"
> by Syed Nayyar Waris with a number of changes and simplifications:
> - instead of using roundup(), which adds an unnecessary dependency
> on <linux/math.h>, we calculate space as BITS_PER_LONG-offset;
> - indentation is reduced by not using else-clauses (suggested by
> checkpatch for bitmap_get_value());
> - bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read()
> and bitmap_write();
> - some redundant computations are omitted.
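For the space calculation in the first point, a quick worked example of my own
(not from the patch): on a 64-bit platform, for start = 70

	index  = BIT_WORD(70)        = 1
	offset = 70 % BITS_PER_LONG  = 6
	space  = BITS_PER_LONG - 6   = 58

so any access of nbits <= 58 stays entirely within map[1], and a larger one
spills into map[2].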
...
> * bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst
> * bitmap_get_value8(map, start) Get 8bit value from map at start
> * bitmap_set_value8(map, value, start) Set 8bit value to map at start
> + * bitmap_read(map, start, nbits) Read an nbits-sized value from
> + * map at start
> + * bitmap_write(map, value, start, nbits) Write an nbits-sized value to
> + * map at start
I still didn't get the grouping you implied with this...
> * Note, bitmap_zero() and bitmap_fill() operate over the region of
> * unsigned longs, that is, bits behind bitmap till the unsigned long
...
> +/**
> + * bitmap_read - read a value of n-bits from the memory region
> + * @map: address to the bitmap memory region
> + * @start: bit offset of the n-bit value
> + * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
> + *
> + * Returns: value of nbits located at the @start bit offset within the @map
> + * memory region.
> + *
> + * Note: callers on 32-bit systems must be careful to not attempt reading more
> + * than sizeof(unsigned long).
sizeof() here is misleading: we talk about bits, i.e. BITS_PER_LONG (which is
32 on a 32-bit platform). It's better to be explicit that reading more than
32 bits at a time on a 32-bit platform will return 0. Actually what you need is to describe...
> + */
> +static inline unsigned long bitmap_read(const unsigned long *map,
> + unsigned long start,
> + unsigned long nbits)
> +{
> + size_t index = BIT_WORD(start);
> + unsigned long offset = start % BITS_PER_LONG;
> + unsigned long space = BITS_PER_LONG - offset;
> + unsigned long value_low, value_high;
> +
> + if (unlikely(!nbits || nbits > BITS_PER_LONG))
> + return 0;
...this return in the Return section.
> +
> + if (space >= nbits)
> + return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits);
> +
> + value_low = map[index] & BITMAP_FIRST_WORD_MASK(start);
> + value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits);
> + return (value_low >> offset) | (value_high << space);
> +}
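For reference, a minimal usage sketch of how I read the intended semantics
(assuming this patch is applied; the function name bitmap_rw_example() is mine,
purely illustrative):

#include <linux/bitmap.h>
#include <linux/bug.h>

/* Write a 16-bit value across a word boundary and read it back. */
static void bitmap_rw_example(void)
{
	DECLARE_BITMAP(map, 128);

	bitmap_zero(map, 128);
	/* On 64-bit, bits 60..75 straddle map[0] and map[1]. */
	bitmap_write(map, 0xABCD, 60, 16);
	WARN_ON(bitmap_read(map, 60, 16) != 0xABCD);
}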
...
> +/**
> + * bitmap_write - write n-bit value within a memory region
> + * @map: address to the bitmap memory region
> + * @value: value to write, clamped to nbits
> + * @start: bit offset of the n-bit value
> + * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG.
> + *
> + * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(),
> + * i.e. bits beyond @nbits are ignored:
> + *
> + * for (bit = 0; bit < nbits; bit++)
> + * __assign_bit(start + bit, bitmap, val & BIT(bit));
> + * Note: callers on 32-bit systems must be careful to not attempt writing more
> + * than sizeof(unsigned long).
Ditto.
> + */
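FWIW, the documented as-if behaviour could be spelled out as a reference helper
(the name bitmap_write_by_bit() is mine, only to illustrate the semantics, not
a suggested implementation):

#include <linux/bitops.h>

static void bitmap_write_by_bit(unsigned long *map, unsigned long value,
				unsigned long start, unsigned long nbits)
{
	unsigned long bit;

	/* Bits of @value at position @nbits and above are ignored. */
	for (bit = 0; bit < nbits; bit++)
		__assign_bit(start + bit, map, value & BIT(bit));
}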
--
With Best Regards,
Andy Shevchenko