Message-ID: <20250613231657.GO1174925@nvidia.com>
Date: Fri, 13 Jun 2025 20:16:57 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, kvm@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alex Williamson <alex.williamson@...hat.com>,
	Zi Yan <ziy@...dia.com>, Alex Mastro <amastro@...com>,
	David Hildenbrand <david@...hat.com>,
	Nico Pache <npache@...hat.com>
Subject: Re: [PATCH 5/5] vfio-pci: Best-effort huge pfnmaps with !MAP_FIXED
 mappings

On Fri, Jun 13, 2025 at 03:15:19PM -0400, Peter Xu wrote:
> > > > > +	if (phys_len >= PMD_SIZE) {
> > > > > +		ret = mm_get_unmapped_area_aligned(file, addr, len, phys_addr,
> > > > > +						   flags, PMD_SIZE, 0);
> > > > > +		if (ret)
> > > > > +			return ret;
> > > > > +	}
> > > > 
> > > > Hurm, we have contiguous pages now, so PMD_SIZE is not so great, eg on
> > > > 4k ARM we can have a 16*2M=32MB contiguity, and 16k ARM uses
> > > > contiguity to get a 32*32M=1GB option.
> > > > 
> > > > Forcing to only align to the PMD or PUD seems suboptimal..
> > > 
> > > Right, however the cont-pte / cont-pmd are still not supported in huge
> > > pfnmaps in general?  It'll definitely be nice if someone could look at
> > > that from an ARM perspective and then provide support for both in one shot.
> > 
> > Maybe leave behind a comment about this. I've been poking around to see
> > if someone would do the ARM PFNMAP support but can't report any commitment.
> 
> I wasn't sure of the best place to leave a note about the whole pfnmap
> effort, but I added one to the commit message of this patch:
> 
>         Note 2: Currently continuous pgtable entries (for example, cont-pte) are
>         not yet supported for huge pfnmaps in general.  They are also not
>         considered in this patch so far.  Separate work will be needed to enable
>         continuous pgtable entries on archs that support them.
> 
> > 
> > > > > +fallback:
> > > > > +	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
> > > > 
> > > > Why not put this into mm_get_unmapped_area_vmflags() and get rid of
> > > > thp_get_unmapped_area_vmflags() too?
> > > > 
> > > > Is there any reason the caller should have to do a retry?
> > > 
> > > We would still need thp_get_unmapped_area_vmflags() because that encodes
> > > PMD_SIZE for THPs; the generic helper needs the flexibility of taking any
> > > alignment size.
> > 
> > There is only one caller of thp_get_unmapped_area_vmflags(); just
> > open code PMD_SIZE there and thin this whole thing out. It reads
> > better like that anyhow:
> > 
> > 	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file
> > 		   && !addr /* no hint */
> > 		   && IS_ALIGNED(len, PMD_SIZE)) {
> > 		/* Ensures that larger anonymous mappings are THP aligned. */
> > 		addr = mm_get_unmapped_area_aligned(file, 0, len, pgoff,
> > 						    flags, vm_flags, PMD_SIZE);
> > 
> > > That would be ok, however it loses some flexibility when the caller wants
> > > to try different alignments, exactly like above: currently it makes a first
> > > attempt at a PUD mapping, then falls back to PMD if that fails.
> > 
> > Oh, that's a good point, I didn't notice that subtle bit.
> > 
> > But then maybe that is showing the API is just wrong and the core code
> > should be trying to find the best alignment, not the caller. Like we
> > could have those PUD/PMD size ifdefs inside the mm instead of in VFIO?
> > 
> > VFIO would just pass the BAR size, implying the best alignment, and
> > the core implementation would try to get the largest VMA alignment that
> > snaps to an arch-supported page contiguity, testing each of the arch's
> > page size possibilities in turn.
> > 
> > That sounds like a much better API than pushing this into drivers??
> 
> Yes, it would be nice if the core mm could evolve to make supporting this
> easier.  Though the question is how to pass the information over to the core mm.

I was just thinking of something simple: change how your new
mm_get_unmapped_area_aligned() works so that the caller is expected to
pass in the size of the biggest folio/pfn page as align.

mm_get_unmapped_area_aligned() then returns a vm address that will
result in large mappings.

pgoff works the same way: the assumption is that the biggest folio is at
pgoff 0 and is followed by another biggest folio, so the pgoff logic
tries to make the second folio map fully.

ie what a hugetlb fd or thp memfd would like.
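
Roughly, the math I have in mind is something like this (completely
untested sketch; the name and signature are just illustrative, not what
your series has to use, and mm_get_unmapped_area() is the existing
helper):

static unsigned long get_unmapped_area_aligned_sketch(struct file *file,
		unsigned long addr, unsigned long len, unsigned long pgoff,
		unsigned long flags, unsigned long align)
{
	/* Offset of the requested pgoff inside one "biggest folio" */
	unsigned long off = (pgoff << PAGE_SHIFT) & (align - 1);
	unsigned long ret;

	/*
	 * Over-allocate so there is room to slide forward into alignment.
	 * A real version needs to check len + align for overflow.
	 */
	ret = mm_get_unmapped_area(current->mm, file, addr,
				   len + align - PAGE_SIZE, pgoff, flags);
	if (IS_ERR_VALUE(ret))
		return ret;

	/* Slide so that (ret - pgoff * PAGE_SIZE) becomes align-aligned */
	ret += (off - ret) & (align - 1);
	return ret;
}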

Then you still hook the file operations and still figure out which BAR
is being mapped and so on, in order to call mm_get_unmapped_area_aligned()
with the correct align parameter.
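
On the vfio side that would then be a fairly thin hook, something along
these lines (untested, the function name is made up, and I'm only
assuming roughly the argument list your patch already uses for
mm_get_unmapped_area_aligned()):

static unsigned long vfio_pci_bar_get_unmapped_area(struct vfio_pci_core_device *vdev,
		struct file *file, unsigned long addr, unsigned long len,
		unsigned long pgoff, unsigned long flags)
{
	unsigned int index = pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
	resource_size_t bar_size = pci_resource_len(vdev->pdev, index);

	/*
	 * Pass the whole BAR size as the "biggest chunk" hint and let the
	 * core pick whatever alignment the arch can actually map.
	 */
	return mm_get_unmapped_area_aligned(file, addr, len, pgoff, flags,
					    bar_size, 0);
}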

mm_get_unmapped_area_aligned() goes through the page sizes the arch
supports and selects the best one for the indicated biggest folio.
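
ie something like this on the core side (untested sketch with made-up
names, reusing the alignment helper sketched above; the arch contiguous
sizes such as arm64's CONT_PMD_SIZE/CONT_PTE_SIZE would just become more
entries in the table):

static unsigned long mm_get_unmapped_area_best(struct file *file,
		unsigned long addr, unsigned long len, unsigned long pgoff,
		unsigned long flags, unsigned long biggest)
{
	/* Candidate alignments, largest first */
	static const unsigned long sizes[] = { PUD_SIZE, PMD_SIZE };
	unsigned long ret;
	int i;

	for (i = 0; i < ARRAY_SIZE(sizes); i++) {
		if (sizes[i] > biggest || len < sizes[i])
			continue;
		ret = get_unmapped_area_aligned_sketch(file, addr, len, pgoff,
						       flags, sizes[i]);
		if (!IS_ERR_VALUE(ret))
			return ret;
	}

	/* Nothing bigger worked out, fall back to the normal search */
	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
}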

If we would be happy writing this in vfio then it can work just as well
on the core mm side.

> It's similar to many other get_unmapped_area() users.  For example, see
> v4l2_m2m_get_unmapped_area(), which does something similar in at least
> knowing which part of the file is being mapped:
> 
> 	if (offset < DST_QUEUE_OFF_BASE) {
> 		vq = v4l2_m2m_get_src_vq(fh->m2m_ctx);
> 	} else {
> 		vq = v4l2_m2m_get_dst_vq(fh->m2m_ctx);
> 		pgoff -= (DST_QUEUE_OFF_BASE >> PAGE_SHIFT);
> 	}

Careful, that's only used for nommu :)

Jason
