Message-ID: <CAH5fLggh7HeOm8wGZH=hnR+SyPoW8Hik3uy4RodibyjY1UT36w@mail.gmail.com>
Date: Wed, 27 Nov 2024 13:01:31 +0100
From: Alice Ryhl <aliceryhl@...gle.com>
To: Jann Horn <jannh@...gle.com>
Cc: Miguel Ojeda <ojeda@...nel.org>, Matthew Wilcox <willy@...radead.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Vlastimil Babka <vbabka@...e.cz>,
John Hubbard <jhubbard@...dia.com>, "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>, Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Arnd Bergmann <arnd@...db.de>, Christian Brauner <brauner@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>, Alex Gaynor <alex.gaynor@...il.com>,
Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
Benno Lossin <benno.lossin@...ton.me>, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
rust-for-linux@...r.kernel.org, Andreas Hindborg <a.hindborg@...nel.org>
Subject: Re: [PATCH v9 2/8] mm: rust: add vm_area_struct methods that require
read access
On Tue, Nov 26, 2024 at 11:10 PM Jann Horn <jannh@...gle.com> wrote:
>
> On Fri, Nov 22, 2024 at 4:41 PM Alice Ryhl <aliceryhl@...gle.com> wrote:
> > This adds a type called VmAreaRef which is used when referencing a vma
> > that you have read access to. Here, read access means that you hold
> > either the mmap read lock or the vma read lock (or stronger).
> >
> > Additionally, a vma_lookup method is added to the mmap read guard, which
> > enables you to obtain a &VmAreaRef in safe Rust code.
> >
> > This patch only provides a way to lock the mmap read lock, but a
> > follow-up patch also provides a way to just lock the vma read lock.
> >
> > Acked-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com> (for mm bits)
> > Signed-off-by: Alice Ryhl <aliceryhl@...gle.com>
>
> Reviewed-by: Jann Horn <jannh@...gle.com>
Thanks!
> with one comment:
>
> > + /// Zap pages in the given page range.
> > + ///
> > + /// This clears page table mappings for the range at the leaf level, leaving all other page
> > + /// tables intact, and freeing any memory referenced by the VMA in this range. That is,
> > + /// anonymous memory is completely freed, file-backed memory has its reference count on page
> > + /// cache folios dropped, and any dirty data will still be written back to disk as usual.
> > + #[inline]
> > + pub fn zap_page_range_single(&self, address: usize, size: usize) {
> > + // SAFETY: By the type invariants, the caller has read access to this VMA, which is
> > + // sufficient for this method call. This method has no requirements on the vma flags. Any
> > + // value of `address` and `size` is allowed.
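(Side note for readers following the series: the intended usage from safe code
is roughly the sketch below. vma_lookup() and zap_page_range_single() are from
this patch; MmWithUser and mmap_read_lock() come from other patches in the
series, so treat the exact paths and names as illustrative rather than final.)

use kernel::mm::MmWithUser;

/// Illustrative only: zap a range inside whatever VMA contains `addr`,
/// holding only the mmap read lock.
fn zap_range(mm: &MmWithUser, addr: usize, size: usize) {
    // The guard returned here proves read access to the VMAs of `mm`.
    let guard = mm.mmap_read_lock();

    // vma_lookup() yields a &VmAreaRef if some VMA contains `addr`.
    if let Some(vma) = guard.vma_lookup(addr) {
        // Read access is all that zap_page_range_single() requires.
        vma.zap_page_range_single(addr, size);
    }

    // The guard is dropped here, releasing the mmap read lock.
}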
>
> If we really want to allow any address and size, we might want to add
> an early bailout in zap_page_range_single(). The comment on top of
> zap_page_range_single() currently says "The range must fit into one
> VMA", and it looks like by the point we reach a bailout, we could have
> gone through an interval tree walk via
> mmu_notifier_invalidate_range_start()->__mmu_notifier_invalidate_range_start()->mn_itree_invalidate()
> for a range that ends before it starts; I don't know how safe that is.
I could change the comment on zap_page_range_single() to say:
"The range should be contained within a single VMA. Otherwise an error
is returned."
And then I can add an overflow check at the top of
zap_page_range_single(). Sounds ok?
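Concretely, I'm thinking of something like the following at the top of
zap_page_range_single() (just a sketch; whether to bail out silently or
actually propagate an error is up for discussion):

void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
			   unsigned long size, struct zap_details *details)
{
	const unsigned long end = address + size;

	/*
	 * Sketch: bail out before mmu_notifier_invalidate_range_start()
	 * ever sees the range if it wraps around or is not contained
	 * within the VMA.
	 */
	if (end < address || address < vma->vm_start || end > vma->vm_end)
		return;

	/* ... existing body unchanged ... */
}

The overflow check also covers the "range that ends before it starts" case you
mention, since end can only be below address if address + size wraps.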
Alice