Message-ID: <CAG48ez28kzjrvMN66Yp9n+WziPzE5LU_Y320405Q=PoOzdzStg@mail.gmail.com>
Date: Tue, 26 Nov 2024 23:09:26 +0100
From: Jann Horn <jannh@...gle.com>
To: Alice Ryhl <aliceryhl@...gle.com>
Cc: Miguel Ojeda <ojeda@...nel.org>, Matthew Wilcox <willy@...radead.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Vlastimil Babka <vbabka@...e.cz>,
John Hubbard <jhubbard@...dia.com>, "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>, Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arnd Bergmann <arnd@...db.de>, Christian Brauner <brauner@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>, Alex Gaynor <alex.gaynor@...il.com>,
Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
Benno Lossin <benno.lossin@...ton.me>, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
rust-for-linux@...r.kernel.org, Andreas Hindborg <a.hindborg@...nel.org>
Subject: Re: [PATCH v9 2/8] mm: rust: add vm_area_struct methods that require
read access
On Fri, Nov 22, 2024 at 4:41 PM Alice Ryhl <aliceryhl@...gle.com> wrote:
> This adds a type called VmAreaRef which is used when referencing a vma
> that you have read access to. Here, read access means that you hold
> either the mmap read lock or the vma read lock (or stronger).
>
> Additionally, a vma_lookup method is added to the mmap read guard, which
> enables you to obtain a &VmAreaRef in safe Rust code.
>
> This patch only provides a way to lock the mmap read lock, but a
> follow-up patch also provides a way to just lock the vma read lock.
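(For anyone following along, this is roughly how I read the intended
usage; the `MmWithUser` type and `mmap_read_lock()` name here are my
assumptions based on this description, not quoted from the patch:

  fn inspect(mm: &MmWithUser, addr: usize) {
      // Taking the mmap read lock yields a guard; the guard hands out
      // `&VmAreaRef` borrows that cannot outlive the lock.
      let guard = mm.mmap_read_lock();
      if let Some(vma) = guard.vma_lookup(addr) {
          // read-side accessors on `vma` are usable here, in safe code
      }
      // guard dropped here, mmap read lock released
  }
)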
>
> Acked-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com> (for mm bits)
> Signed-off-by: Alice Ryhl <aliceryhl@...gle.com>
Reviewed-by: Jann Horn <jannh@...gle.com>
with one comment:
> + /// Zap pages in the given page range.
> + ///
> + /// This clears page table mappings for the range at the leaf level, leaving all other page
> + /// tables intact, and freeing any memory referenced by the VMA in this range. That is,
> + /// anonymous memory is completely freed, file-backed memory has its reference count on page
> + /// cache folios dropped, and any dirty data will still be written back to disk as usual.
> + #[inline]
> + pub fn zap_page_range_single(&self, address: usize, size: usize) {
> + // SAFETY: By the type invariants, the caller has read access to this VMA, which is
> + // sufficient for this method call. This method has no requirements on the vma flags. Any
> + // value of `address` and `size` is allowed.
If we really want to allow any address and size, we might want to add
an early bailout to zap_page_range_single(). The comment on top of
zap_page_range_single() currently says "The range must fit into one
VMA", and it looks like, by the point we would reach any existing
bailout, we could already have gone through an interval tree walk via
mmu_notifier_invalidate_range_start()->__mmu_notifier_invalidate_range_start()->mn_itree_invalidate()
for a range that ends before it starts; I don't know how safe that is.
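Alternatively (or additionally), the Rust wrapper could refuse
degenerate ranges before crossing into C. Rough sketch of that idea,
reusing the names from the quoted hunk; illustrative only, not
something this patch currently does:

  pub fn zap_page_range_single(&self, address: usize, size: usize) {
      // Bail out on an empty range or one whose end would wrap around,
      // so the C side never sees a range that ends before it starts.
      if size == 0 || address.checked_add(size).is_none() {
          return;
      }
      // SAFETY: as in the patch; additionally, the range is now known
      // to be non-empty and non-wrapping.
      unsafe {
          bindings::zap_page_range_single(
              self.as_ptr(),
              address as _,
              size as _,
              core::ptr::null_mut(),
          )
      };
  }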
> + unsafe {
> + bindings::zap_page_range_single(
> + self.as_ptr(),
> + address as _,
> + size as _,
> + core::ptr::null_mut(),
> + )
> + };
> + }
> +}
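For context, a call site would presumably look something like this
(the lock and lookup names are taken from the commit message above;
`addr` and `len` are placeholders):

  let guard = mm.mmap_read_lock();
  if let Some(vma) = guard.vma_lookup(addr) {
      // `vma` borrows from `guard`, so the read access required by the
      // type invariants is held for the duration of the call.
      vma.zap_page_range_single(addr, len);
  }
  // mmap read lock dropped here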