Message-ID: <aQEOhS8VVrAgae3C@yury>
Date: Tue, 28 Oct 2025 14:42:13 -0400
From: Yury Norov <yury.norov@...il.com>
To: Alice Ryhl <aliceryhl@...gle.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Arve Hjønnevåg <arve@...roid.com>,
	Todd Kjos <tkjos@...roid.com>, Martijn Coenen <maco@...roid.com>,
	Joel Fernandes <joelagnelf@...dia.com>,
	Christian Brauner <brauner@...nel.org>,
	Carlos Llamas <cmllamas@...gle.com>,
	Suren Baghdasaryan <surenb@...gle.com>, Burak Emir <bqe@...gle.com>,
	Miguel Ojeda <ojeda@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
	Gary Guo <gary@...yguo.net>,
	Björn Roy Baron <bjorn3_gh@...tonmail.com>,
	Benno Lossin <lossin@...nel.org>,
	Andreas Hindborg <a.hindborg@...nel.org>,
	Trevor Gross <tmgross@...ch.edu>,
	Danilo Krummrich <dakr@...nel.org>, rust-for-linux@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 4/5] rust: id_pool: do not immediately acquire new ids

On Tue, Oct 28, 2025 at 10:55:17AM +0000, Alice Ryhl wrote:
> When Rust Binder assigns a new ID, it performs various fallible
> operations before it "commits" to actually using the new ID. To support
> this pattern, change acquire_next_id() so that it does not immediately
> call set_bit(), but instead returns an object that may be used to call
> set_bit() later.
> 
> The UnusedId type holds an exclusive reference to the IdPool, so it's
> guaranteed that nobody else can call find_unused_id() while the UnusedId
> object is live.

Hi Alice,

I'm not sure about this change; it looks like a lock wrapped around
acquire_next_id().

If so, keep in mind that we don't protect functions with locks, we
protect data structures.

If the above is wrong, and this new UnusedId type instead serializes
all accesses to the bitmap (like a lock), or only write accesses (like
an rwlock), then it is still questionable.

Bitmaps are widely adopted as a lockless data structure throughout the
kernel. If you modify a bitmap with set_bit() and clear_bit() only,
then with some precautions you are race-proof. The kernel lacks an
atomic bit-acquire function, but you can implement one yourself.
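For illustration, here is a userspace sketch of such an atomic
bit-acquire (the helper names and the single-word map are made up for
this example, not an existing kernel API). The idea is that an atomic
fetch-or doubles as test_and_set_bit(): the first caller to flip a bit
owns it, with no lock around the search:

```c
#include <stdatomic.h>

#define NBITS 64UL

static _Atomic unsigned long map;	/* one word, zero-initialized */

/*
 * Hypothetical helper: atomically find and claim the first clear bit.
 * Returns the bit number, or -1 if the map is full.
 */
static long acquire_first_zero_bit(void)
{
	for (unsigned long bit = 0; bit < NBITS; bit++) {
		unsigned long mask = 1UL << bit;
		unsigned long old = atomic_fetch_or(&map, mask);

		if (!(old & mask))
			return bit;	/* we set it first; the bit is ours */
	}
	return -1;			/* no free bit: time to grow */
}

static void release_bit(long bit)
{
	atomic_fetch_and(&map, ~(1UL << bit));
}
```

If two threads race for the same bit, exactly one of them sees the bit
clear in the fetch-or return value, so no ID is ever handed out twice.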

I actually proposed atomic acquire API, but it was rejected:

https://lore.kernel.org/all/20240620175703.605111-2-yury.norov@gmail.com/

You can check the above series for a number of examples.

Bitmaps are widely used precisely because they make lockless data
access cheap with just set_bit() and clear_bit(). There's nothing
wrong with allocating a bit and releasing it shortly afterwards on
error, exactly because doing so is really cheap.
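The existing pattern the commit message wants to avoid can be sketched
like this (names are invented for illustration; fallible_setup() stands
in for whatever fallible work happens between reserving and committing
an ID):

```c
#include <stdbool.h>

#define NBITS 64UL

static unsigned long bitmap;	/* single word for simplicity */

static long take_first_zero(void)
{
	for (unsigned long i = 0; i < NBITS; i++) {
		if (!(bitmap & (1UL << i))) {
			bitmap |= 1UL << i;	/* set_bit() */
			return i;
		}
	}
	return -1;
}

static void put_bit(long i)
{
	bitmap &= ~(1UL << i);		/* clear_bit() */
}

/* Hypothetical fallible step; here the very first attempt fails. */
static int calls;
static bool fallible_setup(long id)
{
	(void)id;
	return ++calls > 1;
}

/* Allocate eagerly; on error, rollback is just clearing the bit. */
static long get_id(void)
{
	long id = take_first_zero();

	if (id < 0)
		return -1;
	if (!fallible_setup(id)) {
		put_bit(id);		/* cheap rollback */
		return -1;
	}
	return id;
}
```

The rollback path costs one clear_bit(), which is the point being made
above: briefly holding and releasing an ID on failure is inexpensive.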

So, with all the above said, I have nothing against this UnusedId as
such, but if you only need it to serialize access to the underlying
bitmap, can you explain in more detail what's wrong with the existing
pattern? If you have a performance impact in mind, can you share any
numbers?

Thanks,
Yury

> Signed-off-by: Alice Ryhl <aliceryhl@...gle.com>
> ---
>  rust/kernel/id_pool.rs | 67 ++++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 51 insertions(+), 16 deletions(-)
> 
> diff --git a/rust/kernel/id_pool.rs b/rust/kernel/id_pool.rs
> index d53628a357ed84a6e00ef9dfd03a75e85a87532c..e5651162db084f5dc7c16a493aa69ee253fe4885 100644
> --- a/rust/kernel/id_pool.rs
> +++ b/rust/kernel/id_pool.rs
> @@ -25,24 +25,24 @@
>  /// Basic usage
>  ///
>  /// ```
> -/// use kernel::alloc::{AllocError, flags::GFP_KERNEL};
> -/// use kernel::id_pool::IdPool;
> +/// use kernel::alloc::AllocError;
> +/// use kernel::id_pool::{IdPool, UnusedId};
>  ///
>  /// let mut pool = IdPool::new();
>  /// let cap = pool.capacity();
>  ///
>  /// for i in 0..cap {
> -///     assert_eq!(i, pool.acquire_next_id(i).ok_or(ENOSPC)?);
> +///     assert_eq!(i, pool.find_unused_id(i).ok_or(ENOSPC)?.acquire());
>  /// }
>  ///
>  /// pool.release_id(5);
> -/// assert_eq!(5, pool.acquire_next_id(0).ok_or(ENOSPC)?);
> +/// assert_eq!(5, pool.find_unused_id(0).ok_or(ENOSPC)?.acquire());
>  ///
> -/// assert_eq!(None, pool.acquire_next_id(0));  // time to realloc.
> +/// assert!(pool.find_unused_id(0).is_none());  // time to realloc.
>  /// let resizer = pool.grow_request().ok_or(ENOSPC)?.realloc(GFP_KERNEL)?;
>  /// pool.grow(resizer);
>  ///
> -/// assert_eq!(pool.acquire_next_id(0), Some(cap));
> +/// assert_eq!(pool.find_unused_id(0).ok_or(ENOSPC)?.acquire(), cap);
>  /// # Ok::<(), Error>(())
>  /// ```
>  ///
> @@ -56,8 +56,8 @@
>  /// fn get_id_maybe_realloc(guarded_pool: &SpinLock<IdPool>) -> Result<usize, AllocError> {
>  ///     let mut pool = guarded_pool.lock();
>  ///     loop {
> -///         match pool.acquire_next_id(0) {
> -///             Some(index) => return Ok(index),
> +///         match pool.find_unused_id(0) {
> +///             Some(index) => return Ok(index.acquire()),
>  ///             None => {
>  ///                 let alloc_request = pool.grow_request();
>  ///                 drop(pool);
> @@ -187,18 +187,17 @@ pub fn grow(&mut self, mut resizer: PoolResizer) {
>          self.map = resizer.new;
>      }
>  
> -    /// Acquires a new ID by finding and setting the next zero bit in the
> -    /// bitmap.
> +    /// Finds an unused ID in the bitmap.
>      ///
>      /// Upon success, returns its index. Otherwise, returns [`None`]
>      /// to indicate that a [`Self::grow_request`] is needed.
>      #[inline]
> -    pub fn acquire_next_id(&mut self, offset: usize) -> Option<usize> {
> -        let next_zero_bit = self.map.next_zero_bit(offset);
> -        if let Some(nr) = next_zero_bit {
> -            self.map.set_bit(nr);
> -        }
> -        next_zero_bit
> +    #[must_use]
> +    pub fn find_unused_id(&mut self, offset: usize) -> Option<UnusedId<'_>> {
> +        Some(UnusedId {
> +            id: self.map.next_zero_bit(offset)?,
> +            pool: self,
> +        })
>      }
>  
>      /// Releases an ID.
> @@ -208,6 +207,42 @@ pub fn release_id(&mut self, id: usize) {
>      }
>  }
>  
> +/// Represents an unused id in an [`IdPool`].
> +pub struct UnusedId<'pool> {
> +    id: usize,
> +    pool: &'pool mut IdPool,
> +}
> +
> +impl<'pool> UnusedId<'pool> {
> +    /// Get the unused id as a usize.
> +    ///
> +    /// Be aware that the id has not yet been acquired in the pool. The
> +    /// [`acquire`] method must be called to prevent others from taking the id.
> +    #[inline]
> +    #[must_use]
> +    pub fn as_usize(&self) -> usize {
> +        self.id
> +    }
> +
> +    /// Get the unused id as a u32.
> +    ///
> +    /// Be aware that the id has not yet been acquired in the pool. The
> +    /// [`acquire`] method must be called to prevent others from taking the id.
> +    #[inline]
> +    #[must_use]
> +    pub fn as_u32(&self) -> u32 {
> +        // CAST: The maximum possible value in an IdPool is i32::MAX.
> +        self.id as u32
> +    }
> +
> +    /// Acquire the unused id.
> +    #[inline]
> +    pub fn acquire(self) -> usize {
> +        self.pool.map.set_bit(self.id);
> +        self.id
> +    }
> +}
> +
>  impl Default for IdPool {
>      #[inline]
>      fn default() -> Self {
> 
> -- 
> 2.51.1.838.g19442a804e-goog
