Message-ID: <20251028-binder-bitmap-v3-4-32822d4b3207@google.com>
Date: Tue, 28 Oct 2025 10:55:17 +0000
From: Alice Ryhl <aliceryhl@...gle.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Yury Norov <yury.norov@...il.com>
Cc: "Arve Hjønnevåg" <arve@...roid.com>, Todd Kjos <tkjos@...roid.com>, 
	Martijn Coenen <maco@...roid.com>, Joel Fernandes <joelagnelf@...dia.com>, 
	Christian Brauner <brauner@...nel.org>, Carlos Llamas <cmllamas@...gle.com>, 
	Suren Baghdasaryan <surenb@...gle.com>, Burak Emir <bqe@...gle.com>, Miguel Ojeda <ojeda@...nel.org>, 
	Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>, 
	"Björn Roy Baron" <bjorn3_gh@...tonmail.com>, Benno Lossin <lossin@...nel.org>, 
	Andreas Hindborg <a.hindborg@...nel.org>, Trevor Gross <tmgross@...ch.edu>, 
	Danilo Krummrich <dakr@...nel.org>, rust-for-linux@...r.kernel.org, 
	linux-kernel@...r.kernel.org, Alice Ryhl <aliceryhl@...gle.com>
Subject: [PATCH v3 4/5] rust: id_pool: do not immediately acquire new ids

When Rust Binder assigns a new ID, it performs various fallible
operations before it "commits" to actually using the new ID. To support
this pattern, replace acquire_next_id() with find_unused_id(), which
does not immediately call set_bit(), but instead returns an object that
can be used to call set_bit() later.

The UnusedId type holds an exclusive reference to the IdPool, so it's
guaranteed that nobody else can call find_unused_id() while the UnusedId
object is live.
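
For illustration, the intended call pattern looks roughly like the
sketch below. The register_handle() and assign_handle() functions are
made up for this example; they stand in for whatever fallible work the
caller does before committing to the id:

    use kernel::id_pool::IdPool;
    use kernel::prelude::*;

    /// Hypothetical fallible setup step done before the id is committed.
    fn register_handle(_handle: u32) -> Result {
        Ok(())
    }

    fn assign_handle(pool: &mut IdPool) -> Result<usize> {
        // Reserve a candidate id; the bitmap is not modified yet.
        let unused = pool.find_unused_id(0).ok_or(ENOSPC)?;

        // Do the fallible work first. On error the UnusedId is simply
        // dropped and no bit was ever set, so there is nothing to undo.
        register_handle(unused.as_u32())?;

        // Commit: set_bit() is called only now.
        Ok(unused.acquire())
    }

If find_unused_id() returns None, the caller goes through the existing
grow_request()/grow() path and retries, as in the documentation example.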

Signed-off-by: Alice Ryhl <aliceryhl@...gle.com>
---
 rust/kernel/id_pool.rs | 67 ++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 51 insertions(+), 16 deletions(-)

diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
index d53628a357ed84a6e00ef9dfd03a75e85a87532c..e5651162db084f5dc7c16a493aa69ee253fe4885 100644
--- a/rust/kernel/id_pool.rs
+++ b/rust/kernel/id_pool.rs
@@ -25,24 +25,24 @@
 /// Basic usage
 ///
 /// ```
-/// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
-/// use kernel::id_pool::IdPool;
+/// use kernel::alloc::AllocError;
+/// use kernel::id_pool::{IdPool, UnusedId};
 ///
 /// let mut pool = IdPool::new();
 /// let cap = pool.capacity();
 ///
 /// for i in 0..cap {
-///     assert_eq!(i, pool.acquire_next_id(i).ok_or(ENOSPC)?);
+///     assert_eq!(i, pool.find_unused_id(i).ok_or(ENOSPC)?.acquire());
 /// }
 ///
 /// pool.release_id(5);
-/// assert_eq!(5, pool.acquire_next_id(0).ok_or(ENOSPC)?);
+/// assert_eq!(5, pool.find_unused_id(0).ok_or(ENOSPC)?.acquire());
 ///
-/// assert_eq!(None, pool.acquire_next_id(0));  // time to realloc.
+/// assert!(pool.find_unused_id(0).is_none());  // time to realloc.
 /// let resizer = pool.grow_request().ok_or(ENOSPC)?.realloc(GFP_KERNEL)?;
 /// pool.grow(resizer);
 ///
-/// assert_eq!(pool.acquire_next_id(0), Some(cap));
+/// assert_eq!(pool.find_unused_id(0).ok_or(ENOSPC)?.acquire(), cap);
 /// # Ok::<(), Error>(())
 /// ```
 ///
@@ -56,8 +56,8 @@
 /// fn get_id_maybe_realloc(guarded_pool: &SpinLock<IdPool>) -> Result<usize, AllocError> {
 ///     let mut pool = guarded_pool.lock();
 ///     loop {
-///         match pool.acquire_next_id(0) {
-///             Some(index) => return Ok(index),
+///         match pool.find_unused_id(0) {
+///             Some(index) => return Ok(index.acquire()),
 ///             None => {
 ///                 let alloc_request = pool.grow_request();
 ///                 drop(pool);
@@ -187,18 +187,17 @@ pub fn grow(&mut self, mut resizer: PoolResizer) {
         self.map = resizer.new;
     }
 
-    /// Acquires a new ID by finding and setting the next zero bit in the
-    /// bitmap.
+    /// Finds an unused ID in the bitmap.
     ///
     /// Upon success, returns its index. Otherwise, returns [`None`]
     /// to indicate that a [`Self::grow_request`] is needed.
     #[inline]
-    pub fn acquire_next_id(&mut self, offset: usize) -> Option<usize> {
-        let next_zero_bit = self.map.next_zero_bit(offset);
-        if let Some(nr) = next_zero_bit {
-            self.map.set_bit(nr);
-        }
-        next_zero_bit
+    #[must_use]
+    pub fn find_unused_id(&mut self, offset: usize) -> Option<UnusedId<'_>> {
+        Some(UnusedId {
+            id: self.map.next_zero_bit(offset)?,
+            pool: self,
+        })
     }
 
     /// Releases an ID.
@@ -208,6 +207,42 @@ pub fn release_id(&mut self, id: usize) {
     }
 }
 
+/// Represents an unused id in an [`IdPool`].
+pub struct UnusedId<'pool> {
+    id: usize,
+    pool: &'pool mut IdPool,
+}
+
+impl<'pool> UnusedId<'pool> {
+    /// Get the unused id as a usize.
+    ///
+    /// Be aware that the id has not yet been acquired in the pool. The
+    /// [`Self::acquire`] method must be called to prevent others from taking the id.
+    #[inline]
+    #[must_use]
+    pub fn as_usize(&self) -> usize {
+        self.id
+    }
+
+    /// Get the unused id as a u32.
+    ///
+    /// Be aware that the id has not yet been acquired in the pool. The
+    /// [`Self::acquire`] method must be called to prevent others from taking the id.
+    #[inline]
+    #[must_use]
+    pub fn as_u32(&self) -> u32 {
+        // CAST: The maximum possible value in an IdPool is i32::MAX.
+        self.id as u32
+    }
+
+    /// Acquire the unused id.
+    #[inline]
+    pub fn acquire(self) -> usize {
+        self.pool.map.set_bit(self.id);
+        self.id
+    }
+}
+
 impl Default for IdPool {
     #[inline]
     fn default() -> Self {

-- 
2.51.1.838.g19442a804e-goog

