Message-ID: <ulpftgdk6hgorwsrbtv2tv47b7usn3cow362knuxlzq2az2cl2@krwo6e3zxryn>
Date: Mon, 10 Nov 2025 11:34:59 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Mike Rapoport <rppt@...nel.org>
Cc: "David Hildenbrand (Red Hat)" <david@...nel.org>,
        Peter Xu <peterx@...hat.com>,
        Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
        David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, Muchun Song <muchun.song@...ux.dev>,
        Nikita Kalyazin <kalyazin@...zon.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        James Houghton <jthoughton@...gle.com>,
        Hugh Dickins <hughd@...gle.com>, Michal Hocko <mhocko@...e.com>,
        Ujwal Kundur <ujwal.kundur@...il.com>,
        Oscar Salvador <osalvador@...e.de>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Andrea Arcangeli <aarcange@...hat.com>, conduct@...nel.org
Subject: Re: [PATCH v4 0/4] mm/userfaultfd: modulize memory types

* Mike Rapoport <rppt@...nel.org> [251109 02:12]:
> Hi Liam,
> 
> On Thu, Nov 06, 2025 at 11:32:46AM -0500, Liam R. Howlett wrote:
> > * Mike Rapoport <rppt@...nel.org> [251104 02:22]:
> > > On Mon, Nov 03, 2025 at 10:27:05PM +0100, David Hildenbrand (Red Hat) wrote:
> > > > 
> > > > And maybe that's the main problem here: Liam talks about general uffd
> > > > cleanups while you are focused on supporting guest_memfd minor mode "as
> > > > simple as possible" (as you write below).
> > > 
> > > Hijacking the thread for the technical part for a moment ;-)
> > > 
> > > It seems that "as simple as possible" can even avoid data members in struct
> > > vm_uffd_ops, e.g. something along these lines:
> > 
> > I like this because it removes the flag.
> > 
> > If we don't want to return the folio, we could change
> > mfill_atomic_pte_continue() into a __mfill_atomic_pte_continue() that
> > takes a function pointer, and have the callers pass a different
> > get_folio() per memory type.  Each memory type (anon, shmem, and
> > guest_memfd) would have a small stub that would be set in its vm_ops.
> 
> I'm not sure I follow you here.
> What do you mean by "don't want to return the folio"? 

I didn't get this far in my prototyping, but if we have a way to service
the minor fault for each memory type, then we could use the function
pointer to change how the folio is obtained, rather than passing in a
pointer to get/set the folio.
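
Roughly, the shape I have in mind (untested sketch; the
__mfill_atomic_pte_continue() split and the stub names are made up):

typedef int (*uffd_get_folio_t)(struct inode *inode, pgoff_t pgoff,
				struct folio **folio);

/* Common helper: only the folio lookup differs per memory type. */
static int __mfill_atomic_pte_continue(pmd_t *dst_pmd,
				       struct vm_area_struct *dst_vma,
				       unsigned long dst_addr,
				       uffd_flags_t flags,
				       uffd_get_folio_t get_folio)
{
	struct inode *inode = file_inode(dst_vma->vm_file);
	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
	struct folio *folio;
	int ret;

	ret = get_folio(inode, pgoff, &folio);
	/* Our caller expects -EFAULT if we failed to find the folio */
	if (ret == -ENOENT)
		ret = -EFAULT;
	if (ret)
		return ret;

	/* ... install the PTE exactly as mfill_atomic_pte_continue()
	 * does today ... */
	return ret;
}

/* shmem's stub; guest_memfd would register its own via its vm_ops */
static int shmem_mfill_pte_continue(pmd_t *dst_pmd,
				    struct vm_area_struct *dst_vma,
				    unsigned long dst_addr, uffd_flags_t flags)
{
	return __mfill_atomic_pte_continue(dst_pmd, dst_vma, dst_addr, flags,
					   shmem_uffd_minor_get_folio);
}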

> 
> Isn't ->minor_get_folio() already a different get_folio() by memory
> type?

Yes.  If you are dead set on handing the folio to the module, then
this is what you do.

If you wanted to avoid leaking the **folio out, then we might be able to
do that by having a small section of code live in mm for guest_memfd.
Everyone seemed to have abandoned this idea, but I'm not sure why it's
not workable.  It seems like we have a viable decoupling method here.
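
If it helps, a minimal sketch of what I mean, assuming guest_memfd keeps
its folios in the inode's page cache (the function name and placement
are made up):

/*
 * Hypothetically built into mm/ (say mm/guest_memfd_uffd.c) rather than
 * living in module code, so struct folio ** never appears in the
 * module-facing interface.
 */
static int guest_memfd_mfill_pte_continue(pmd_t *dst_pmd,
					  struct vm_area_struct *dst_vma,
					  unsigned long dst_addr,
					  uffd_flags_t flags)
{
	struct inode *inode = file_inode(dst_vma->vm_file);
	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
	struct folio *folio;

	/* Locked lookup straight from the page cache */
	folio = filemap_lock_folio(inode->i_mapping, pgoff);
	if (IS_ERR(folio))
		return -EFAULT;

	/* The folio is resolved and consumed entirely inside mm: install
	 * the PTE as mfill_atomic_pte_continue() does, then unlock and
	 * put the folio. */
	return 0;
}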

> 
> > It also looks similar to vma_get_uffd_ops() in 1fa9377e57eb1
> > ("mm/userfaultfd: Introduce userfaultfd ops and use it for destination
> > validation") [1].  But I always returned a uffd ops, which passes all
> > uffd testing.  When would your NULL uffd ops be hit?  That is, when
> > would uffd_ops not be set and not be anon?
> 
> The patch is a prototype. Quite possibly you are right and there's no need
> to return NULL there.

I might be putting too much trust in the testing that exists as well.

Either way, this approach avoids growing more flags/middleware in uffd.

>  
> > [1].  https://git.infradead.org/?p=users/jedix/linux-maple.git;a=blobdiff;f=mm/userfaultfd.c;h=e2570e72242e5a350508f785119c5dee4d8176c1;hp=e8341a45e7e8d239c64f460afeb5b2b8b29ed853;hb=1fa9377e57eb16d7fa579ea7f8eb832164d209ac;hpb=2166e91882eb195677717ac2f8fbfc58171196ce
> > 
> > Thanks,
> > Liam
> > 
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index d16b33bacc32..840986780cb5 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -605,6 +605,8 @@ struct vm_fault {
> > >  					 */
> > >  };
> > >  
> > > +struct vm_uffd_ops;
> > > +
> > >  /*
> > >   * These are the virtual MM functions - opening of an area, closing and
> > >   * unmapping it (needed to keep files on disk up-to-date etc), pointer
> > > @@ -690,6 +692,9 @@ struct vm_operations_struct {
> > >  	struct page *(*find_normal_page)(struct vm_area_struct *vma,
> > >  					 unsigned long addr);
> > >  #endif /* CONFIG_FIND_NORMAL_PAGE */
> > > +#ifdef CONFIG_USERFAULTFD
> > > +	const struct vm_uffd_ops *uffd_ops;
> > > +#endif
> > >  };
> > >  
> > >  #ifdef CONFIG_NUMA_BALANCING
> > > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> > > index c0e716aec26a..aac7ac616636 100644
> > > --- a/include/linux/userfaultfd_k.h
> > > +++ b/include/linux/userfaultfd_k.h
> > > @@ -111,6 +111,11 @@ static inline uffd_flags_t uffd_flags_set_mode(uffd_flags_t flags, enum mfill_at
> > >  /* Flags controlling behavior. These behavior changes are mode-independent. */
> > >  #define MFILL_ATOMIC_WP MFILL_ATOMIC_FLAG(0)
> > >  
> > > +struct vm_uffd_ops {
> > > +	int (*minor_get_folio)(struct inode *inode, pgoff_t pgoff,
> > > +			       struct folio **folio);
> > > +};
> > > +
> > >  extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
> > >  				    struct vm_area_struct *dst_vma,
> > >  				    unsigned long dst_addr, struct page *page,
> > > diff --git a/mm/shmem.c b/mm/shmem.c
> > > index b9081b817d28..b4318ad3bdf9 100644
> > > --- a/mm/shmem.c
> > > +++ b/mm/shmem.c
> > > @@ -3260,6 +3260,17 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
> > >  	shmem_inode_unacct_blocks(inode, 1);
> > >  	return ret;
> > >  }
> > > +
> > > +static int shmem_uffd_minor_get_folio(struct inode *inode, pgoff_t pgoff,
> > > +				      struct folio **folio)
> > > +{
> > > +	return shmem_get_folio(inode, pgoff, 0, folio, SGP_NOALLOC);
> > > +}
> > > +
> > > +static const struct vm_uffd_ops shmem_uffd_ops = {
> > > +	.minor_get_folio = shmem_uffd_minor_get_folio,
> > > +};
> > > +
> > >  #endif /* CONFIG_USERFAULTFD */
> > >  
> > >  #ifdef CONFIG_TMPFS
> > > @@ -5292,6 +5303,9 @@ static const struct vm_operations_struct shmem_vm_ops = {
> > >  	.set_policy     = shmem_set_policy,
> > >  	.get_policy     = shmem_get_policy,
> > >  #endif
> > > +#ifdef CONFIG_USERFAULTFD
> > > +	.uffd_ops	= &shmem_uffd_ops,
> > > +#endif
> > >  };
> > >  
> > >  static const struct vm_operations_struct shmem_anon_vm_ops = {
> > > @@ -5301,6 +5315,9 @@ static const struct vm_operations_struct shmem_anon_vm_ops = {
> > >  	.set_policy     = shmem_set_policy,
> > >  	.get_policy     = shmem_get_policy,
> > >  #endif
> > > +#ifdef CONFIG_USERFAULTFD
> > > +	.uffd_ops	= &shmem_uffd_ops,
> > > +#endif
> > >  };
> > >  
> > >  int shmem_init_fs_context(struct fs_context *fc)
> > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > index af61b95c89e4..6b30a8f39f4d 100644
> > > --- a/mm/userfaultfd.c
> > > +++ b/mm/userfaultfd.c
> > > @@ -20,6 +20,20 @@
> > >  #include "internal.h"
> > >  #include "swap.h"
> > >  
> > > +static const struct vm_uffd_ops anon_uffd_ops = {
> > > +};
> > > +
> > > +static inline const struct vm_uffd_ops *vma_get_uffd_ops(struct vm_area_struct *vma)
> > > +{
> > > +	if (vma->vm_ops && vma->vm_ops->uffd_ops)
> > > +		return vma->vm_ops->uffd_ops;
> > > +
> > > +	if (vma_is_anonymous(vma))
> > > +		return &anon_uffd_ops;
> > > +
> > > +	return NULL;
> > > +}
> > > +
> > >  static __always_inline
> > >  bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
> > >  {
> > > @@ -382,13 +396,14 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
> > >  				     unsigned long dst_addr,
> > >  				     uffd_flags_t flags)
> > >  {
> > > +	const struct vm_uffd_ops *uffd_ops = vma_get_uffd_ops(dst_vma);
> > >  	struct inode *inode = file_inode(dst_vma->vm_file);
> > >  	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
> > >  	struct folio *folio;
> > >  	struct page *page;
> > >  	int ret;
> > >  
> > > -	ret = shmem_get_folio(inode, pgoff, 0, &folio, SGP_NOALLOC);
> > > +	ret = uffd_ops->minor_get_folio(inode, pgoff, &folio);
> > >  	/* Our caller expects us to return -EFAULT if we failed to find folio */
> > >  	if (ret == -ENOENT)
> > >  		ret = -EFAULT;
> > > @@ -707,6 +722,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > >  	unsigned long src_addr, dst_addr;
> > >  	long copied;
> > >  	struct folio *folio;
> > > +	const struct vm_uffd_ops *uffd_ops;
> > >  
> > >  	/*
> > >  	 * Sanitize the command parameters:
> > > @@ -766,10 +782,11 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> > >  		return  mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
> > >  					     src_start, len, flags);
> > >  
> > > -	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
> > > +	uffd_ops = vma_get_uffd_ops(dst_vma);
> > > +	if (!uffd_ops)
> > >  		goto out_unlock;
> > > -	if (!vma_is_shmem(dst_vma) &&
> > > -	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
> > > +	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE) &&
> > > +	    !uffd_ops->minor_get_folio)
> > >  		goto out_unlock;
> > >  
> > >  	while (src_addr < src_start + len) {
> > >  
> > > -- 
> > > Sincerely yours,
> > > Mike.
> 
> -- 
> Sincerely yours,
> Mike.
