Message-ID: <20250527222254.565881-9-lyude@redhat.com>
Date: Tue, 27 May 2025 18:21:49 -0400
From: Lyude Paul <lyude@...hat.com>
To: rust-for-linux@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Boqun Feng <boqun.feng@...il.com>,
linux-kernel@...r.kernel.org,
Daniel Almeida <daniel.almeida@...labora.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>,
Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
Benno Lossin <lossin@...nel.org>,
Andreas Hindborg <a.hindborg@...nel.org>,
Alice Ryhl <aliceryhl@...gle.com>,
Trevor Gross <tmgross@...ch.edu>,
Danilo Krummrich <dakr@...nel.org>
Subject: [RFC RESEND v10 08/14] rust: sync: lock: Add `Backend::BackendInContext`
From: Boqun Feng <boqun.feng@...il.com>
`SpinLock`'s backend can be used for `SpinLockIrq` if interrupts are already
disabled. Doing so actually provides a performance gain, since interrupts then
do not need to be disabled and re-enabled around the critical section. So add
`Backend::BackendInContext` to describe the case where one backend can be used
in place of another, and use it to implement `lock_with()` so that
`SpinLockIrq` can avoid disabling interrupts by reusing `SpinLock`'s backend.
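For illustration, the trait-level shape this introduces looks roughly like the
following (simplified from the `SpinLockIrqBackend` hunk in the diff below;
`LocalInterruptDisabled` comes from earlier in this series):

    unsafe impl Backend for SpinLockIrqBackend {
        type State = bindings::spinlock_t;
        type GuardState = ();
        // Proof that local processor interrupts are disabled.
        type Context<'a> = &'a LocalInterruptDisabled;
        // The cheaper backend that is safe to use under that context.
        type BackendInContext = SpinLockBackend;
        // init()/lock()/unlock()/try_lock() as before...
    }

With that in place, `lock_with()` casts the lock to `Lock<T, SpinLockBackend>`
and acquires it as a plain spinlock.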
Signed-off-by: Boqun Feng <boqun.feng@...il.com>
Co-authored-by: Lyude Paul <lyude@...hat.com>
---
V10:
* Fix typos - Dirk/Lyude
* Since we're adding support for context locks to GlobalLock as well, also
cover try_lock while we're at it and add try_lock_with() (see the usage
sketch below)
* Add a private function as_lock_in_context() for handling casting from a
Lock<T, B> to Lock<T, B::BackendInContext> so we don't have to duplicate
safety comments
Signed-off-by: Lyude Paul <lyude@...hat.com>
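Illustrative only (not part of the patch): a minimal sketch of how the new
try_lock_with() pairs with the interrupt-disabled token, reusing the `Example`
type from the `SpinLockIrq` doc example in the diff below:

    // `try_lock_with()` returns `None` instead of spinning if the lock is
    // contended; on success the guard comes from `SpinLockBackend`, so
    // dropping it leaves the interrupt state untouched.
    fn noirq_try_work(e: &Example, interrupt_disabled: &LocalInterruptDisabled) {
        if let Some(inner) = e.inner.try_lock_with(interrupt_disabled) {
            assert_eq!(inner.a, 20);
        }
    }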
---
rust/kernel/sync/lock.rs | 61 ++++++++++++++++++++++++++++++-
rust/kernel/sync/lock/mutex.rs | 1 +
rust/kernel/sync/lock/spinlock.rs | 41 +++++++++++++++++++++
3 files changed, 101 insertions(+), 2 deletions(-)
diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
index f94ed1a825f6d..64a7a78ea2dde 100644
--- a/rust/kernel/sync/lock.rs
+++ b/rust/kernel/sync/lock.rs
@@ -30,10 +30,15 @@
/// is owned, that is, between calls to [`lock`] and [`unlock`].
/// - Implementers must also ensure that [`relock`] uses the same locking method as the original
/// lock operation.
+/// - Implementers must ensure that if [`BackendInContext`] is a [`Backend`], it is safe to
+/// acquire the lock under the [`Context`]; the [`State`] of the two backends must be the same.
///
/// [`lock`]: Backend::lock
/// [`unlock`]: Backend::unlock
/// [`relock`]: Backend::relock
+/// [`BackendInContext`]: Backend::BackendInContext
+/// [`Context`]: Backend::Context
+/// [`State`]: Backend::State
pub unsafe trait Backend {
/// The state required by the lock.
type State;
@@ -47,6 +52,9 @@ pub unsafe trait Backend {
/// The context which can be provided to acquire the lock with a different backend.
type Context<'a>;
+ /// The alternative backend we can use if a [`Context`](Backend::Context) is provided.
+ type BackendInContext: Sized;
+
/// Initialises the lock.
///
/// # Safety
@@ -166,10 +174,59 @@ pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a Self {
}
impl<T: ?Sized, B: Backend> Lock<T, B> {
+ /// Casts the lock to a `Lock<T, B::BackendInContext>`.
+ fn as_lock_in_context<'a>(
+ &'a self,
+ _context: B::Context<'a>,
+ ) -> &'a Lock<T, B::BackendInContext>
+ where
+ B::BackendInContext: Backend,
+ {
+ // SAFETY:
+ // - Per the safety guarantee of `Backend`, `B::BackendInContext` and `B` must have the
+ // same `State`, so the layout of the lock is the same and it is safe to convert one
+ // into the other.
+ // - The caller provided `B::Context<'a>`, so it is safe to recast and return this lock.
+ unsafe { &*(self as *const _ as *const Lock<T, B::BackendInContext>) }
+ }
+
/// Acquires the lock with the given context and gives the caller access to the data protected
/// by it.
- pub fn lock_with<'a>(&'a self, _context: B::Context<'a>) -> Guard<'a, T, B> {
- todo!()
+ pub fn lock_with<'a>(&'a self, context: B::Context<'a>) -> Guard<'a, T, B::BackendInContext>
+ where
+ B::BackendInContext: Backend,
+ {
+ let lock = self.as_lock_in_context(context);
+
+ // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
+ // that `init` was called. In addition, the safety guarantee of `Backend` ensures that
+ // `B::State` is the same as `B::BackendInContext::State`, and calling the other backend is
+ // safe because a `B::Context<'a>` was provided.
+ let state = unsafe { B::BackendInContext::lock(lock.state.get()) };
+
+ // SAFETY: The lock was just acquired.
+ unsafe { Guard::new(lock, state) }
+ }
+
+ /// Tries to acquire the lock with the given context.
+ ///
+ /// Returns a guard that can be used to access the data protected by the lock if successful.
+ pub fn try_lock_with<'a>(
+ &'a self,
+ context: B::Context<'a>,
+ ) -> Option<Guard<'a, T, B::BackendInContext>>
+ where
+ B::BackendInContext: Backend,
+ {
+ let lock = self.as_lock_in_context(context);
+
+ // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
+ // that `init` was called. In addition, the safety guarantee of `Backend` ensures that
+ // `B::State` is the same as `B::BackendInContext::State`, and calling the other backend is
+ // safe because a `B::Context<'a>` was provided.
+ unsafe {
+ B::BackendInContext::try_lock(lock.state.get()).map(|state| Guard::new(lock, state))
+ }
}
/// Acquires the lock and gives the caller access to the data protected by it.
diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
index be1e2e18cf42d..662a530750703 100644
--- a/rust/kernel/sync/lock/mutex.rs
+++ b/rust/kernel/sync/lock/mutex.rs
@@ -102,6 +102,7 @@ unsafe impl super::Backend for MutexBackend {
type State = bindings::mutex;
type GuardState = ();
type Context<'a> = ();
+ type BackendInContext = ();
unsafe fn init(
ptr: *mut Self::State,
diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
index f3dac0931f6a2..a2d60d5da5e11 100644
--- a/rust/kernel/sync/lock/spinlock.rs
+++ b/rust/kernel/sync/lock/spinlock.rs
@@ -102,6 +102,7 @@ unsafe impl super::Backend for SpinLockBackend {
type State = bindings::spinlock_t;
type GuardState = ();
type Context<'a> = ();
+ type BackendInContext = ();
unsafe fn init(
ptr: *mut Self::State,
@@ -221,6 +222,45 @@ macro_rules! new_spinlock_irq {
/// # Ok::<(), Error>(())
/// ```
///
+/// The next example demonstrates locking a [`SpinLockIrq`] using [`lock_with()`] in a function
+/// which can only be called when local processor interrupts are already disabled.
+///
+/// ```
+/// use kernel::sync::{new_spinlock_irq, SpinLockIrq};
+/// use kernel::interrupt::*;
+///
+/// struct Inner {
+/// a: u32,
+/// }
+///
+/// #[pin_data]
+/// struct Example {
+/// #[pin]
+/// inner: SpinLockIrq<Inner>,
+/// }
+///
+/// impl Example {
+/// fn new() -> impl PinInit<Self> {
+/// pin_init!(Self {
+/// inner <- new_spinlock_irq!(Inner { a: 20 }),
+/// })
+/// }
+/// }
+///
+/// // Accessing an `Example` from a function that can only be called in no-interrupt contexts.
+/// fn noirq_work(e: &Example, interrupt_disabled: &LocalInterruptDisabled) {
+/// // Because `interrupt_disabled` proves that local interrupts are disabled, we can skip
+/// // toggling the interrupt state by using `lock_with()` and the provided token.
+/// assert_eq!(e.inner.lock_with(interrupt_disabled).a, 20);
+/// }
+///
+/// # let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
+/// # let interrupt_guard = local_interrupt_disable();
+/// # noirq_work(&e, &interrupt_guard);
+/// #
+/// # Ok::<(), Error>(())
+/// ```
+///
/// [`lock()`]: SpinLockIrq::lock
/// [`lock_with()`]: SpinLockIrq::lock_with
pub type SpinLockIrq<T> = super::Lock<T, SpinLockIrqBackend>;
@@ -245,6 +285,7 @@ unsafe impl super::Backend for SpinLockIrqBackend {
type State = bindings::spinlock_t;
type GuardState = ();
type Context<'a> = &'a LocalInterruptDisabled;
+ type BackendInContext = SpinLockBackend;
unsafe fn init(
ptr: *mut Self::State,
--
2.49.0