Message-ID: <aSBSh39-ih3rk0Ab@kernel.org>
Date: Fri, 21 Nov 2025 13:52:39 +0200
From: Mike Rapoport <rppt@...nel.org>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: linux-mm@...ck.org, Andrea Arcangeli <aarcange@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Hugh Dickins <hughd@...gle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Michal Hocko <mhocko@...e.com>,
Nikita Kalyazin <kalyazin@...zon.com>,
Paolo Bonzini <pbonzini@...hat.com>, Peter Xu <peterx@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Shuah Khan <shuah@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [RFC PATCH 2/4] userfaultfd, shmem: use a VMA callback to handle
UFFDIO_CONTINUE
On Mon, Nov 17, 2025 at 06:08:57PM +0100, David Hildenbrand (Red Hat) wrote:
> On 17.11.25 12:46, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)" <rppt@...nel.org>
> >
> > When userspace resolves a page fault in a shmem VMA with UFFDIO_CONTINUE
> > it needs to get a folio that already exists in the pagecache backing
> > that VMA.
> >
> > Instead of using shmem_get_folio() for that, add a get_pagecache_folio()
> > method to 'struct vm_operations_struct' that will return a folio if it
> > exists in the VMA's pagecache at given pgoff.
> >
> > Implement get_pagecache_folio() method for shmem and slightly refactor
> > userfaultfd's mfill_atomic() and mfill_atomic_pte_continue() to support
> > this new API.
> >
> > Signed-off-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
> > ---
> >  include/linux/mm.h |  9 +++++++
> >  mm/shmem.c         | 20 ++++++++++++++++
> >  mm/userfaultfd.c   | 60 ++++++++++++++++++++++++++++++----------------
> >  3 files changed, 69 insertions(+), 20 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index d16b33bacc32..c35c1e1ac4dd 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -690,6 +690,15 @@ struct vm_operations_struct {
> >  	struct page *(*find_normal_page)(struct vm_area_struct *vma,
> >  					 unsigned long addr);
> >  #endif /* CONFIG_FIND_NORMAL_PAGE */
> > +#ifdef CONFIG_USERFAULTFD
> > +	/*
> > +	 * Called by userfault to resolve UFFDIO_CONTINUE request.
> > +	 * Should return the folio found at pgoff in the VMA's pagecache if it
> > +	 * exists or ERR_PTR otherwise.
> > +	 */
>
> What are the locking +refcount rules? Without looking at the code, I would
> assume we return with a folio reference held and the folio locked?
Right, will add it to the comment.
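
For example (exact wording is only a sketch, based on your description of
the locking and refcount rules):

	/*
	 * Called by userfaultfd to resolve a UFFDIO_CONTINUE request.
	 * Should return the folio found at pgoff in the VMA's pagecache if it
	 * exists, or an ERR_PTR otherwise. The folio is returned locked and
	 * with a reference held; the caller unlocks it and drops the
	 * reference.
	 */
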
> > +	struct folio *(*get_pagecache_folio)(struct vm_area_struct *vma,
> > +					     pgoff_t pgoff);
>
>
> The combination of VMA + pgoff looks weird at first. Would vma + addr or
> vma+vma_offset into vma be better?
Copied from map_pages() :)
> But it also makes me wonder if the callback would ever even require the VMA,
> or actually only vma->vm_file?
It's actually the inode, so I'm going to pass that instead of the vma.
> Thinking out loud, I wonder if one could just call that "get_folio" or
> "get_shared_folio" (IOW, never an anon folio in a MAP_PRIVATE mapping).
Naming is hard :)
get_shared_folio() sounds good to me, so unless there are other suggestions
I'll go with it.
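
So, assuming the inode really is all the implementations need, the hook
would end up roughly as:

	struct folio *(*get_shared_folio)(struct inode *inode, pgoff_t pgoff);
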
> > +#endif
> > };
> > #ifdef CONFIG_NUMA_BALANCING
...
> > +static __always_inline bool vma_can_mfill_atomic(struct vm_area_struct *vma,
> > +						 uffd_flags_t flags)
> > +{
> > +	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE)) {
> > +		if (vma->vm_ops && vma->vm_ops->get_pagecache_folio)
> > +			return true;
> > +		else
> > +			return false;
>
> Probably easier to read is
>
> return vma->vm_ops && vma->vm_ops->get_pagecache_folio;
>
> > +	}
> > +
> > +	if (vma_is_anonymous(vma) || vma_is_shmem(vma))
> > +		return true;
> > +
> > +	return false;
>
>
> Could also be simplified to:
>
> return vma_is_anonymous(vma) || vma_is_shmem(vma);
Agreed, for both of them.
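
With both simplifications the helper would collapse to something like
(modulo the rename discussed above):

static __always_inline bool vma_can_mfill_atomic(struct vm_area_struct *vma,
						 uffd_flags_t flags)
{
	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
		return vma->vm_ops && vma->vm_ops->get_pagecache_folio;

	return vma_is_anonymous(vma) || vma_is_shmem(vma);
}
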
> --
> Cheers
>
> David
--
Sincerely yours,
Mike.