Message-ID: <68b9f5e2.170a0220.1224d9.5d3d@mx.google.com>
Date: Thu, 4 Sep 2025 13:26:07 -0700
From: Mitchell Levy <levymitchell0@...il.com>
To: Yury Norov <yury.norov@...il.com>
Cc: Miguel Ojeda <ojeda@...nel.org>, Alex Gaynor <alex.gaynor@...il.com>,
	Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>,
	Björn Roy Baron <bjorn3_gh@...tonmail.com>,
	Andreas Hindborg <a.hindborg@...nel.org>,
	Alice Ryhl <aliceryhl@...gle.com>, Trevor Gross <tmgross@...ch.edu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
	Christoph Lameter <cl@...ux.com>,
	Danilo Krummrich <dakr@...nel.org>,
	Benno Lossin <lossin@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Tyler Hicks <code@...icks.com>, linux-kernel@...r.kernel.org,
	rust-for-linux@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v3 5/7] rust: percpu: Support non-zeroable types for
 DynamicPerCpu

On Wed, Sep 03, 2025 at 06:19:25PM -0400, Yury Norov wrote:
> On Thu, Aug 28, 2025 at 12:00:12PM -0700, Mitchell Levy wrote:
> > Add functionality to `PerCpuPtr` to compute pointers to per-CPU variable
> > slots on other CPUs. Use this facility to initialize per-CPU variables
> > on all possible CPUs when a dynamic per-CPU variable is created with a
> > non-zeroable type. Since `RefCell` and other `Cell`-like types fall into
> > this category, `impl CheckedPerCpu` on `DynamicPerCpu` for these
> > `InteriorMutable` types, as they can now be used. Add examples of
> > these usages to `samples/rust/rust_percpu.rs`.
> > 
> > Signed-off-by: Mitchell Levy <levymitchell0@...il.com>
> > ---
> >  rust/helpers/percpu.c         |  5 +++
> >  rust/kernel/percpu.rs         | 15 +++++++
> >  rust/kernel/percpu/dynamic.rs | 40 +++++++++++++++++
> >  samples/rust/rust_percpu.rs   | 99 ++++++++++++++++++++++++++++++++++++++++---
> >  4 files changed, 152 insertions(+), 7 deletions(-)
> > 
> > diff --git a/rust/helpers/percpu.c b/rust/helpers/percpu.c
> > index 8cc01d094752..8d83b6b86106 100644
> > --- a/rust/helpers/percpu.c
> > +++ b/rust/helpers/percpu.c
> > @@ -8,6 +8,11 @@ void __percpu *rust_helper_alloc_percpu(size_t sz, size_t align)
> >  	return __alloc_percpu(sz, align);
> >  }
> >  
> > +void *rust_helper_per_cpu_ptr(void __percpu *ptr, unsigned int cpu)
> > +{
> > +	return per_cpu_ptr(ptr, cpu);
> > +}
> > +
> >  void rust_helper_on_each_cpu(smp_call_func_t func, void *info, int wait)
> >  {
> >  	on_each_cpu(func, info, wait);
> > diff --git a/rust/kernel/percpu.rs b/rust/kernel/percpu.rs
> > index 35afcdba3ccd..c68c7520b67f 100644
> > --- a/rust/kernel/percpu.rs
> > +++ b/rust/kernel/percpu.rs
> > @@ -14,6 +14,7 @@
> >  use bindings::{alloc_percpu, free_percpu};
> >  
> >  use crate::alloc::Flags;
> > +use crate::cpu::CpuId;
> >  use crate::percpu::cpu_guard::CpuGuard;
> >  use crate::prelude::*;
> >  use crate::sync::Arc;
> > @@ -115,6 +116,20 @@ pub fn get_ptr(&self) -> *mut MaybeUninit<T> {
> >          // the invariant that self.0 is a valid offset into the per-CPU area.
> >          (this_cpu_area).wrapping_add(self.0 as usize).cast()
> >      }
> > +
> > +    /// Get a `*mut MaybeUninit<T>` to the per-CPU variable on the CPU represented by `cpu`. Note
> > +    /// that without some kind of synchronization, use of the returned pointer may cause a data
> > +    /// race. It is the caller's responsibility to use the returned pointer in a reasonable way.
> > +    ///
> > +    /// # Safety
> > +    /// - The returned pointer is valid only if `self` is (that is, it points to a live allocation
> > +    ///   correctly sized and aligned to hold a `T`)
> > +    /// - The returned pointer is valid only if the bit corresponding to `cpu` is set in
> > +    ///   `Cpumask::possible()`.
> 
> Instead of explaining those rules in comments, can you just enforce
> them in code? Not sure about the 1st rule, but the 2nd one looks like
> a trivial check.

I don't think the first one is possible to check (at least, certainly
not without peeking deeply into the per-CPU allocation management
system).

For the second, I will probably convert this function to a safe
function (see my reply to Miguel), with these conditions becoming
requirements for the returned pointer to be valid rather than
preconditions of the call itself, so panicking when they aren't met
would not be the expected behavior.
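
Roughly, the direction I have in mind looks something like this (only a
sketch of the shape, not the final patch):

impl<T> PerCpuPtr<T> {
    /// Compute the address of this variable's slot on `cpu`.
    ///
    /// Calling this is safe; the returned pointer is only valid (and so
    /// only safe to dereference, which still requires `unsafe`) if `self`
    /// points to a live allocation correctly sized and aligned to hold a
    /// `T` and the bit corresponding to `cpu` is set in
    /// `Cpumask::possible()`.
    pub fn get_remote_ptr(&self, cpu: CpuId) -> *mut MaybeUninit<T> {
        // SAFETY: `per_cpu_ptr()` is treated here as plain address
        // arithmetic; whether the result may be dereferenced is left to
        // the caller, per the doc comment above.
        unsafe { bindings::per_cpu_ptr(self.0.cast(), cpu.as_u32()) }.cast()
    }
}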

> > +    pub unsafe fn get_remote_ptr(&self, cpu: CpuId) -> *mut MaybeUninit<T> {
> > +        // SAFETY: The requirements of this function ensure this call is safe.
> > +        unsafe { bindings::per_cpu_ptr(self.0.cast(), cpu.as_u32()) }.cast()
> > +    }
> >  }
> >  
> >  // SAFETY: Sending a `PerCpuPtr<T>` to another thread is safe because as soon as it's sent, the
> > diff --git a/rust/kernel/percpu/dynamic.rs b/rust/kernel/percpu/dynamic.rs
> > index ce95e420f943..64f04cef3705 100644
> > --- a/rust/kernel/percpu/dynamic.rs
> > +++ b/rust/kernel/percpu/dynamic.rs
> > @@ -3,6 +3,8 @@
> >  
> >  use super::*;
> >  
> > +use crate::cpumask::Cpumask;
> > +
> >  /// Represents a dynamic allocation of a per-CPU variable via alloc_percpu. Calls free_percpu when
> >  /// dropped.
> >  pub struct PerCpuAllocation<T>(PerCpuPtr<T>);
> > @@ -74,6 +76,36 @@ pub fn new_zero(flags: Flags) -> Option<Self> {
> >      }
> >  }
> >  
> > +impl<T: Clone> DynamicPerCpu<T> {
> > +    /// Allocates a new per-CPU variable
> > +    ///
> > +    /// # Arguments
> > +    /// * `val` - The initial value of the per-CPU variable on all CPUs.
> > +    /// * `flags` - Flags used to allocate an `Arc` that keeps track of the underlying
> > +    ///   `PerCpuAllocation`.
> > +    pub fn new_with(val: T, flags: Flags) -> Option<Self> {
> > +        let alloc: PerCpuAllocation<T> = PerCpuAllocation::new_uninit()?;
> > +        let ptr = alloc.0;
> > +
> > +        for cpu in Cpumask::possible().iter() {
> 
> In C we've got 'for_each_possible_cpu()'. Is there any way to
> preserve those semantics in Rust? I really believe that having
> similar semantics at a higher level on both sides will help _a_lot_
> for those transitioning into the Rust world (like me).

I'm not sure I understand what you mean --- I believe the semantics
should be the same here (`cpu` takes on each value in
`cpu_possible_mask`). Could you please clarify?
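
In case it helps, here is the correspondence as I see it:

// C:  for_each_possible_cpu(cpu) { ... }
for cpu in Cpumask::possible().iter() {
    // `cpu` takes each CpuId set in cpu_possible_mask, i.e. exactly
    // the set of CPUs that for_each_possible_cpu() walks.
}

If a closer textual match to the C spelling is wanted, a thin wrapper
macro (purely hypothetical, not part of this series) could provide it
without changing the semantics:

macro_rules! for_each_possible_cpu {
    ($cpu:ident, $body:block) => {
        for $cpu in Cpumask::possible().iter() $body
    };
}

// Hypothetical usage:
// for_each_possible_cpu!(cpu, {
//     /* ... */
// });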

Thanks,
Mitchell

> Thanks,
> Yury
>  
> > +            // SAFETY: `ptr` is a valid allocation, and `cpu` appears in `Cpumask::possible()`
> > +            let remote_ptr = unsafe { ptr.get_remote_ptr(cpu) };
> > +            // SAFETY: Each CPU's slot corresponding to `ptr` is currently uninitialized, and no
> > +            // one else has a reference to it. Therefore, we can freely write to it without
> > +            // worrying about the need to drop what was there or whether we're racing with someone
> > +            // else. `PerCpuPtr::get_remote_ptr` guarantees that the pointer is valid since we
> > +            // derived it from a valid allocation and `cpu`.
> > +            unsafe {
> > +                (*remote_ptr).write(val.clone());
> > +            }
> > +        }
> > +
> > +        let arc = Arc::new(alloc, flags).ok()?;
> > +
> > +        Some(Self { alloc: arc })
> > +    }
> > +}
> > +
> >  impl<T> PerCpu<T> for DynamicPerCpu<T> {
> >      unsafe fn get_mut(&mut self, guard: CpuGuard) -> PerCpuToken<'_, T> {
> >          // SAFETY: The requirements of `PerCpu::get_mut` and this type's invariant ensure that the
> > @@ -81,3 +113,11 @@ unsafe fn get_mut(&mut self, guard: CpuGuard) -> PerCpuToken<'_, T> {
> >          unsafe { PerCpuToken::new(guard, &self.alloc.0) }
> >      }
> >  }
> > +
> > +impl<T: InteriorMutable> CheckedPerCpu<T> for DynamicPerCpu<T> {
> > +    fn get(&mut self, guard: CpuGuard) -> CheckedPerCpuToken<'_, T> {
> > +        // SAFETY: By the invariant of `DynamicPerCpu`, the memory location in each CPU's
> > +        // per-CPU area corresponding to this variable has been initialized.
> > +        unsafe { CheckedPerCpuToken::new(guard, &self.alloc.0) }
> > +    }
> > +}
> > diff --git a/samples/rust/rust_percpu.rs b/samples/rust/rust_percpu.rs
> > index 98ca1c781b6b..06b322019134 100644
> > --- a/samples/rust/rust_percpu.rs
> > +++ b/samples/rust/rust_percpu.rs
> > @@ -130,13 +130,72 @@ fn init(_module: &'static ThisModule) -> Result<Self, Error> {
> >  
> >          // SAFETY: No prerequisites for on_each_cpu.
> >          unsafe {
> > -            on_each_cpu(Some(inc_percpu), (&raw mut test).cast(), 0);
> > -            on_each_cpu(Some(inc_percpu), (&raw mut test).cast(), 0);
> > -            on_each_cpu(Some(inc_percpu), (&raw mut test).cast(), 0);
> > -            on_each_cpu(Some(inc_percpu), (&raw mut test).cast(), 1);
> > -            on_each_cpu(Some(check_percpu), (&raw mut test).cast(), 1);
> > +            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 0);
> > +            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 0);
> > +            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 0);
> > +            on_each_cpu(Some(inc_percpu_u64), (&raw mut test).cast(), 1);
> > +            on_each_cpu(Some(check_percpu_u64), (&raw mut test).cast(), 1);
> >          }
> >  
> > +        let mut checked: DynamicPerCpu<RefCell<u64>> =
> > +            DynamicPerCpu::new_with(RefCell::new(100), GFP_KERNEL).unwrap();
> > +
> > +        // SAFETY: No prerequisites for on_each_cpu.
> > +        unsafe {
> > +            on_each_cpu(Some(inc_percpu_refcell_u64), (&raw mut checked).cast(), 0);
> > +            on_each_cpu(Some(inc_percpu_refcell_u64), (&raw mut checked).cast(), 0);
> > +            on_each_cpu(Some(inc_percpu_refcell_u64), (&raw mut checked).cast(), 0);
> > +            on_each_cpu(Some(inc_percpu_refcell_u64), (&raw mut checked).cast(), 1);
> > +            on_each_cpu(Some(check_percpu_refcell_u64), (&raw mut checked).cast(), 1);
> > +        }
> > +
> > +        checked.get(CpuGuard::new()).with(|val: &RefCell<u64>| {
> > +            assert!(*val.borrow() == 104);
> > +
> > +            let mut checked_native = 0;
> > +            *val.borrow_mut() = 0;
> > +
> > +            checked_native += 1;
> > +            *val.borrow_mut() += 1;
> > +            pr_info!(
> > +                "Checked native: {}, *checked: {}\n",
> > +                checked_native,
> > +                val.borrow()
> > +            );
> > +            assert!(checked_native == *val.borrow() && checked_native == 1);
> > +
> > +            checked_native = checked_native.wrapping_add((-1i64) as u64);
> > +            val.replace_with(|old: &mut u64| old.wrapping_add((-1i64) as u64));
> > +            pr_info!(
> > +                "Checked native: {}, *checked: {}\n",
> > +                checked_native,
> > +                val.borrow()
> > +            );
> > +            assert!(checked_native == *val.borrow() && checked_native == 0);
> > +
> > +            checked_native = checked_native.wrapping_add((-1i64) as u64);
> > +            val.replace_with(|old: &mut u64| old.wrapping_add((-1i64) as u64));
> > +            pr_info!(
> > +                "Checked native: {}, *checked: {}\n",
> > +                checked_native,
> > +                val.borrow()
> > +            );
> > +            assert!(checked_native == *val.borrow() && checked_native == (-1i64) as u64);
> > +
> > +            checked_native = 0;
> > +            *val.borrow_mut() = 0;
> > +
> > +            checked_native = checked_native.wrapping_sub(1);
> > +            val.replace_with(|old: &mut u64| old.wrapping_sub(1));
> > +            pr_info!(
> > +                "Checked native: {}, *checked: {}\n",
> > +                checked_native,
> > +                val.borrow()
> > +            );
> > +            assert!(checked_native == *val.borrow() && checked_native == (-1i64) as u64);
> > +            assert!(checked_native == *val.borrow() && checked_native == u64::MAX);
> > +        });
> > +
> >          pr_info!("rust dynamic percpu test done\n");
> >  
> >          // Return Err to unload the module
> > @@ -144,7 +203,7 @@ fn init(_module: &'static ThisModule) -> Result<Self, Error> {
> >      }
> >  }
> >  
> > -extern "C" fn inc_percpu(info: *mut c_void) {
> > +extern "C" fn inc_percpu_u64(info: *mut c_void) {
> >      // SAFETY: We know that info is a void *const DynamicPerCpu<u64> and DynamicPerCpu<u64> is Send.
> >      let mut pcpu = unsafe { (*(info as *const DynamicPerCpu<u64>)).clone() };
> >      pr_info!("Incrementing on {}\n", CpuId::current().as_u32());
> > @@ -153,7 +212,7 @@ extern "C" fn inc_percpu(info: *mut c_void) {
> >      unsafe { pcpu.get_mut(CpuGuard::new()) }.with(|val: &mut u64| *val += 1);
> >  }
> >  
> > -extern "C" fn check_percpu(info: *mut c_void) {
> > +extern "C" fn check_percpu_u64(info: *mut c_void) {
> >      // SAFETY: We know that info is a void *const DynamicPerCpu<u64> and DynamicPerCpu<u64> is Send.
> >      let mut pcpu = unsafe { (*(info as *const DynamicPerCpu<u64>)).clone() };
> >      pr_info!("Asserting on {}\n", CpuId::current().as_u32());
> > @@ -161,3 +220,29 @@ extern "C" fn check_percpu(info: *mut c_void) {
> >      // SAFETY: We don't have multiple clones of pcpu in scope
> >      unsafe { pcpu.get_mut(CpuGuard::new()) }.with(|val: &mut u64| assert!(*val == 4));
> >  }
> > +
> > +extern "C" fn inc_percpu_refcell_u64(info: *mut c_void) {
> > +    // SAFETY: We know that info is a void *const DynamicPerCpu<RefCell<u64>> and
> > +    // DynamicPerCpu<RefCell<u64>> is Send.
> > +    let mut pcpu = unsafe { (*(info as *const DynamicPerCpu<RefCell<u64>>)).clone() };
> > +    // SAFETY: smp_processor_id has no preconditions
> > +    pr_info!("Incrementing on {}\n", CpuId::current().as_u32());
> > +
> > +    pcpu.get(CpuGuard::new()).with(|val: &RefCell<u64>| {
> > +        let mut val = val.borrow_mut();
> > +        *val += 1;
> > +    });
> > +}
> > +
> > +extern "C" fn check_percpu_refcell_u64(info: *mut c_void) {
> > +    // SAFETY: We know that info is a void *const DynamicPerCpu<RefCell<u64>> and
> > +    // DynamicPerCpu<RefCell<u64>> is Send.
> > +    let mut pcpu = unsafe { (*(info as *const DynamicPerCpu<RefCell<u64>>)).clone() };
> > +    // SAFETY: smp_processor_id has no preconditions
> > +    pr_info!("Asserting on {}\n", CpuId::current().as_u32());
> > +
> > +    pcpu.get(CpuGuard::new()).with(|val: &RefCell<u64>| {
> > +        let val = val.borrow();
> > +        assert!(*val == 104);
> > +    });
> > +}
> > 
> > -- 
> > 2.34.1
