Message-ID: <20231218124033.551770-2-glider@google.com>
Date: Mon, 18 Dec 2023 13:40:27 +0100
From: Alexander Potapenko <glider@...gle.com>
To: glider@...gle.com, catalin.marinas@....com, will@...nel.org,
pcc@...gle.com, andreyknvl@...il.com, andriy.shevchenko@...ux.intel.com,
aleksander.lobakin@...el.com, linux@...musvillemoes.dk, yury.norov@...il.com,
alexandru.elisei@....com
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
eugenis@...gle.com, syednwaris@...il.com, william.gray@...aro.org,
Arnd Bergmann <arnd@...db.de>
Subject: [PATCH v11-mte 1/7] lib/bitmap: add bitmap_{read,write}()

From: Syed Nayyar Waris <syednwaris@...il.com>

The two new functions allow reading/writing values of length up to
BITS_PER_LONG bits at an arbitrary position in the bitmap.

The code was taken from "bitops: Introduce the for_each_set_clump macro"
by Syed Nayyar Waris with a number of changes and simplifications:
- instead of using roundup(), which adds an unnecessary dependency
on <linux/math.h>, we calculate space as BITS_PER_LONG-offset;
- indentation is reduced by not using else-clauses (suggested by
checkpatch for bitmap_get_value());
- bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read()
and bitmap_write();
- some redundant computations are omitted.
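
As an illustration (not part of this patch), here is a minimal usage
sketch of the new helpers; the helper name, bitmap size, field offset and
value below are made up for the example:

  /* Example only: the helper name, bitmap size and field placement are
   * arbitrary.
   */
  static void bitmap_rw_example(void)
  {
          DECLARE_BITMAP(map, 128);
          unsigned long v;

          bitmap_zero(map, 128);
          /* Write a 12-bit value at bit offset 50; on 64-bit hosts it
           * straddles the boundary between map[0] and map[1].
           */
          bitmap_write(map, 0xabc, 50, 12);
          v = bitmap_read(map, 50, 12);   /* v == 0xabc */
          /* For nbits == 0 or nbits > BITS_PER_LONG, bitmap_write() is a
           * no-op and the bitmap_read() return value is undefined.
           */
  }

Bits of the value above @nbits are ignored by bitmap_write(), and a
read/write of up to BITS_PER_LONG bits touches at most two adjacent words.
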
Cc: Arnd Bergmann <arnd@...db.de>
Signed-off-by: Syed Nayyar Waris <syednwaris@...il.com>
Signed-off-by: William Breathitt Gray <william.gray@...aro.org>
Link: https://lore.kernel.org/lkml/fe12eedf3666f4af5138de0e70b67a07c7f40338.1592224129.git.syednwaris@gmail.com/
Suggested-by: Yury Norov <yury.norov@...il.com>
Co-developed-by: Alexander Potapenko <glider@...gle.com>
Signed-off-by: Alexander Potapenko <glider@...gle.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Acked-by: Yury Norov <yury.norov@...il.com>
---
v11-mte:
- add Yury's Acked-by:

v10-mte:
- send this patch together with the "Implement MTE tag compression for
  swapped pages"

Revisions v8-v12 of the bitmap patches were reviewed separately from the
"Implement MTE tag compression for swapped pages" series
(https://lore.kernel.org/lkml/20231109151106.2385155-1-glider@google.com/)

This patch was previously called "lib/bitmap: add
bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/20230720173956.3674987-2-glider@google.com/)

v11:
- rearrange whitespace as requested by Andy Shevchenko,
  add Reviewed-by:, update a comment

v10:
- update comments as requested by Andy Shevchenko

v8:
- as suggested by Andy Shevchenko, handle reads/writes of more than
  BITS_PER_LONG bits, add a note for 32-bit systems

v7:
- Address comments by Yury Norov, Andy Shevchenko, Rasmus Villemoes:
  - update code comments;
  - get rid of GENMASK();
  - s/assign_bit/__assign_bit;
  - more vertical whitespace for better readability;
  - more compact code for bitmap_write() (now for real)

v6:
- As suggested by Yury Norov, do not require bitmap_read(..., 0) to
  return 0.

v5:
- Address comments by Yury Norov:
  - updated code comments and patch title/description
  - replace GENMASK(nbits - 1, 0) with BITMAP_LAST_WORD_MASK(nbits)
  - more compact bitmap_write() implementation

v4:
- Address comments by Andy Shevchenko and Yury Norov:
  - prevent passing values >= 64 to GENMASK()
  - fix commit authorship
  - change comments
  - check for unlikely(nbits==0)
  - drop unnecessary const declarations
  - fix kernel-doc comments
  - rename bitmap_{get,set}_value() to bitmap_{read,write}()
---
include/linux/bitmap.h | 77 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 99451431e4d65..7ca0379be8c13 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -79,6 +79,10 @@ struct device;
* bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst
* bitmap_get_value8(map, start) Get 8bit value from map at start
* bitmap_set_value8(map, value, start) Set 8bit value to map at start
+ * bitmap_read(map, start, nbits) Read an nbits-sized value from
+ * map at start
+ * bitmap_write(map, value, start, nbits) Write an nbits-sized value to
+ * map at start
*
* Note, bitmap_zero() and bitmap_fill() operate over the region of
* unsigned longs, that is, bits behind bitmap till the unsigned long
@@ -636,6 +640,79 @@ static inline void bitmap_set_value8(unsigned long *map, unsigned long value,
map[index] |= value << offset;
}
+/**
+ * bitmap_read - read a value of n-bits from the memory region
+ * @map: address to the bitmap memory region
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
+ *
+ * Returns: value of @nbits bits located at the @start bit offset within the
+ * @map memory region. For @nbits = 0 and @nbits > BITS_PER_LONG the return
+ * value is undefined.
+ */
+static inline unsigned long bitmap_read(const unsigned long *map,
+ unsigned long start,
+ unsigned long nbits)
+{
+ size_t index = BIT_WORD(start);
+ unsigned long offset = start % BITS_PER_LONG;
+ unsigned long space = BITS_PER_LONG - offset;
+ unsigned long value_low, value_high;
+
+ if (unlikely(!nbits || nbits > BITS_PER_LONG))
+ return 0;
+
+ if (space >= nbits)
+ return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits);
+
+ value_low = map[index] & BITMAP_FIRST_WORD_MASK(start);
+ value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits);
+ return (value_low >> offset) | (value_high << space);
+}
+
+/**
+ * bitmap_write - write n-bit value within a memory region
+ * @map: address to the bitmap memory region
+ * @value: value to write, clamped to nbits
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG.
+ *
+ * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(),
+ * i.e. bits beyond @nbits are ignored:
+ *
+ * for (bit = 0; bit < nbits; bit++)
+ * __assign_bit(start + bit, bitmap, val & BIT(bit));
+ *
+ * For @nbits == 0 and @nbits > BITS_PER_LONG no writes are performed.
+ */
+static inline void bitmap_write(unsigned long *map, unsigned long value,
+ unsigned long start, unsigned long nbits)
+{
+ size_t index;
+ unsigned long offset;
+ unsigned long space;
+ unsigned long mask;
+ bool fit;
+
+ if (unlikely(!nbits || nbits > BITS_PER_LONG))
+ return;
+
+ mask = BITMAP_LAST_WORD_MASK(nbits);
+ value &= mask;
+ offset = start % BITS_PER_LONG;
+ space = BITS_PER_LONG - offset;
+ fit = space >= nbits;
+ index = BIT_WORD(start);
+
+ map[index] &= (fit ? (~(mask << offset)) : ~BITMAP_FIRST_WORD_MASK(start));
+ map[index] |= value << offset;
+ if (fit)
+ return;
+
+ map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
+ map[index + 1] |= (value >> space);
+}
+
#endif /* __ASSEMBLY__ */

#endif /* __LINUX_BITMAP_H */
--
2.43.0.472.g3155946c3a-goog
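
Illustration (not part of the patch): a small userspace sketch of the
cross-word path taken by bitmap_read() when a value straddles two words.
It assumes an LP64 host (BITS_PER_LONG == 64) and folds the masking into a
single step after combining the two halves, which is equivalent to the
per-word masking in the patch for values that actually cross a boundary:

  #include <assert.h>
  #include <stdio.h>

  #define BITS_PER_LONG 64UL

  /* The low part comes from the tail of map[index], the high part from the
   * head of map[index + 1], shifted up by the space left in the first word.
   * Only valid when the value crosses a word boundary (offset != 0 and
   * offset + nbits > BITS_PER_LONG), as in the kernel's cross-word path.
   */
  static unsigned long read_across(const unsigned long *map,
                                   unsigned long start, unsigned long nbits)
  {
          unsigned long index = start / BITS_PER_LONG;
          unsigned long offset = start % BITS_PER_LONG;
          unsigned long space = BITS_PER_LONG - offset;
          unsigned long low = map[index] >> offset;
          unsigned long high = map[index + 1] << space;
          unsigned long mask = (nbits == BITS_PER_LONG) ?
                               ~0UL : (1UL << nbits) - 1;

          return (low | high) & mask;
  }

  int main(void)
  {
          /* 0xabc stored at bit offset 60: the low 4 bits (0xc) land in
           * word 0, the remaining 8 bits (0xab) in word 1.
           */
          unsigned long map[2] = { 0xcUL << 60, 0xabUL };

          assert(read_across(map, 60, 12) == 0xabc);
          printf("read 0x%lx\n", read_across(map, 60, 12));
          return 0;
  }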