Message-ID: <20250225161625.7b868034@eugeo>
Date: Tue, 25 Feb 2025 16:16:25 +0000
From: Gary Guo <gary@...yguo.net>
To: Alice Ryhl <aliceryhl@...gle.com>
Cc: Miguel Ojeda <ojeda@...nel.org>, Matthew Wilcox <willy@...radead.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Vlastimil Babka
<vbabka@...e.cz>, John Hubbard <jhubbard@...dia.com>, "Liam R. Howlett"
<Liam.Howlett@...cle.com>, Andrew Morton <akpm@...ux-foundation.org>, Greg
Kroah-Hartman <gregkh@...uxfoundation.org>, Arnd Bergmann <arnd@...db.de>,
Jann Horn <jannh@...gle.com>, Suren Baghdasaryan <surenb@...gle.com>, Alex
Gaynor <alex.gaynor@...il.com>, Boqun Feng <boqun.feng@...il.com>,
"Björn Roy Baron" <bjorn3_gh@...tonmail.com>, Benno Lossin
<benno.lossin@...ton.me>, Andreas Hindborg <a.hindborg@...nel.org>, Trevor
Gross <tmgross@...ch.edu>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, rust-for-linux@...r.kernel.org
Subject: Re: [PATCH v14 5/8] mm: rust: add mmput_async support
On Thu, 13 Feb 2025 11:04:04 +0000
Alice Ryhl <aliceryhl@...gle.com> wrote:
> Adds an MmWithUserAsync type that uses mmput_async when dropped but is
> otherwise identical to MmWithUser. This has to be done with a separate
> type because what is being changed is the destructor, and Rust ties a
> type's destructor to the type itself.
>
> Rust Binder needs this to avoid a certain deadlock. See commit
> 9a9ab0d96362 ("binder: fix race between mmput() and do_exit()") for
> details. It's also needed in the shrinker to avoid cleaning up the mm in
> the shrinker's context.
>
> Reviewed-by: Andreas Hindborg <a.hindborg@...nel.org>
> Acked-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com> (for mm bits)
> Signed-off-by: Alice Ryhl <aliceryhl@...gle.com>
Reviewed-by: Gary Guo <gary@...yguo.net>
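
For anyone reaching for this later, a minimal usage sketch of the new
type. The `Registration` and `register` names below are hypothetical,
purely for illustration; only `into_mmput_async` and the two mm types
come from this patch:

    use kernel::mm::{MmWithUser, MmWithUserAsync};
    use kernel::types::ARef;

    // Hypothetical driver state that may be dropped from a context
    // that cannot sleep, as in the Binder race referenced above.
    struct Registration {
        // The final reference drops via `mmput_async`, so tearing
        // this down in atomic context is fine.
        mm: ARef<MmWithUserAsync>,
    }

    fn register(mm: ARef<MmWithUser>) -> Registration {
        // Swap the destructor from `mmput` to `mmput_async`; the
        // refcount itself is unchanged.
        Registration {
            mm: MmWithUser::into_mmput_async(mm),
        }
    }
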
> ---
> rust/kernel/mm.rs | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 49 insertions(+)
>
> diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
> index 618aa48e00a4..42decd311740 100644
> --- a/rust/kernel/mm.rs
> +++ b/rust/kernel/mm.rs
> @@ -110,6 +110,48 @@ fn deref(&self) -> &Mm {
> }
> }
>
> +/// A wrapper for the kernel's `struct mm_struct`.
> +///
> +/// This type is identical to `MmWithUser` except that it uses `mmput_async` when dropping a
> +/// refcount. This means that the destructor of `ARef<MmWithUserAsync>` is safe to call in atomic
> +/// context.
> +///
> +/// # Invariants
> +///
> +/// Values of this type are always refcounted using `mmget`. The value of `mm_users` is non-zero.
> +#[repr(transparent)]
> +pub struct MmWithUserAsync {
> + mm: MmWithUser,
> +}
> +
> +// SAFETY: It is safe to call `mmput_async` on a thread other than where `mmget` was called.
> +unsafe impl Send for MmWithUserAsync {}
> +// SAFETY: All methods on `MmWithUserAsync` can be called in parallel from several threads.
> +unsafe impl Sync for MmWithUserAsync {}
> +
> +// SAFETY: By the type invariants, this type is always refcounted.
> +unsafe impl AlwaysRefCounted for MmWithUserAsync {
> + fn inc_ref(&self) {
> + // SAFETY: The pointer is valid since self is a reference.
> + unsafe { bindings::mmget(self.as_raw()) };
> + }
> +
> + unsafe fn dec_ref(obj: NonNull<Self>) {
> + // SAFETY: The caller is giving up their refcount.
> + unsafe { bindings::mmput_async(obj.cast().as_ptr()) };
> + }
> +}
> +
> +// Make all `MmWithUser` methods available on `MmWithUserAsync`.
> +impl Deref for MmWithUserAsync {
> + type Target = MmWithUser;
> +
> + #[inline]
> + fn deref(&self) -> &MmWithUser {
> + &self.mm
> + }
> +}
> +
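
A nice property of this Deref chain: methods from MmWithUser, and
transitively from Mm, resolve directly on MmWithUserAsync. A tiny
hypothetical caller (the function name is made up):

    fn inspect(mm: &MmWithUserAsync) {
        // Resolves through two Deref steps to `Mm::as_raw()`,
        // defined further down in this file.
        let _raw = mm.as_raw();
    }
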
> // These methods are safe to call even if `mm_users` is zero.
> impl Mm {
> /// Returns a raw pointer to the inner `mm_struct`.
> @@ -161,6 +203,13 @@ pub unsafe fn from_raw<'a>(ptr: *const bindings::mm_struct) -> &'a MmWithUser {
> unsafe { &*ptr.cast() }
> }
>
> + /// Use `mmput_async` when dropping this refcount.
> + #[inline]
> + pub fn into_mmput_async(me: ARef<MmWithUser>) -> ARef<MmWithUserAsync> {
> + // SAFETY: The layouts and invariants are compatible.
> + unsafe { ARef::from_raw(ARef::into_raw(me).cast()) }
> + }
> +
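
The pointer cast here, like `obj.cast()` in dec_ref() above, relies on
`#[repr(transparent)]`: the wrapper has exactly the layout of the type
it wraps, so reinterpreting the pointer is sound. A standalone
userspace sketch of the same pattern, with made-up types:

    use core::ptr::NonNull;

    struct Inner(u32);

    // `#[repr(transparent)]` guarantees `Wrapper` has the same
    // layout as its single field, like `MmWithUserAsync` over
    // `MmWithUser`.
    #[repr(transparent)]
    struct Wrapper {
        inner: Inner,
    }

    fn main() {
        let w = Wrapper { inner: Inner(7) };
        let p: NonNull<Wrapper> = NonNull::from(&w);
        // The cast views the same allocation as the layout-
        // compatible inner type.
        let q: NonNull<Inner> = p.cast();
        // SAFETY: `q` points at `w.inner`, which is live and valid.
        assert_eq!(unsafe { q.as_ref() }.0, 7);
    }
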
> /// Attempt to access a vma using the vma read lock.
> ///
> /// This is an optimistic trylock operation, so it may fail if there is contention. In that
>