Message-ID: <20250916170723.GO1086830@nvidia.com>
Date: Tue, 16 Sep 2025 14:07:23 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Jonathan Corbet <corbet@....net>,
Matthew Wilcox <willy@...radead.org>, Guo Ren <guoren@...nel.org>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>,
"David S . Miller" <davem@...emloft.net>,
Andreas Larsson <andreas@...sler.com>,
Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Dan Williams <dan.j.williams@...el.com>,
Vishal Verma <vishal.l.verma@...el.com>,
Dave Jiang <dave.jiang@...el.com>, Nicolas Pitre <nico@...xnic.net>,
Muchun Song <muchun.song@...ux.dev>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Konstantin Komarov <almaz.alexandrovich@...agon-software.com>,
Baoquan He <bhe@...hat.com>, Vivek Goyal <vgoyal@...hat.com>,
Dave Young <dyoung@...hat.com>, Tony Luck <tony.luck@...el.com>,
Reinette Chatre <reinette.chatre@...el.com>,
Dave Martin <Dave.Martin@....com>,
James Morse <james.morse@....com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Michal Hocko <mhocko@...e.com>, Hugh Dickins <hughd@...gle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Uladzislau Rezki <urezki@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Jann Horn <jannh@...gle.com>, Pedro Falcato <pfalcato@...e.de>,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-csky@...r.kernel.org,
linux-mips@...r.kernel.org, linux-s390@...r.kernel.org,
sparclinux@...r.kernel.org, nvdimm@...ts.linux.dev,
linux-cxl@...r.kernel.org, linux-mm@...ck.org,
ntfs3@...ts.linux.dev, kexec@...ts.infradead.org,
kasan-dev@...glegroups.com, iommu@...ts.linux.dev,
Kevin Tian <kevin.tian@...el.com>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>
Subject: Re: [PATCH v3 06/13] mm: add remap_pfn_range_prepare(),
remap_pfn_range_complete()
On Tue, Sep 16, 2025 at 03:11:52PM +0100, Lorenzo Stoakes wrote:
> We need the ability to split PFN remap between updating the VMA and
> performing the actual remap, in order to do away with the legacy
> f_op->mmap hook.
>
> To do so, update the PFN remap code to provide shared logic, and also make
> remap_pfn_range_notrack() static, as its only user, io_mapping_map_user(),
> was removed in commit 9a4f90e24661 ("mm: remove mm/io-mapping.c").
>
> Then, introduce remap_pfn_range_prepare(), which accepts VMA descriptor
> and PFN parameters, and remap_pfn_range_complete() which accepts the same
> parameters as remap_pfn_range().
>
> remap_pfn_range_prepare() will set the CoW vma->vm_pgoff if necessary, so
> it must be supplied with a correct PFN to do so. If the caller must hold
> locks to be able to do this, those locks should be held across the
> operation, and mmap_abort() should be provided to revoke the lock should
> an error arise.
It looks like the patches have changed to remove mmap_abort(), so this
paragraph can probably be dropped.
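For reference, the calling pattern the commit message describes would be
roughly as follows (the desc/vma/pfn values are just whatever the driver has
on hand; error handling elided):

	/* mmap_prepare phase: no page tables yet, just fix up the descriptor */
	remap_pfn_range_prepare(desc, pfn);

	/* completion phase: the VMA exists, now populate the page tables */
	err = remap_pfn_range_complete(vma, vma->vm_start, pfn,
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot);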
> static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long addr,
> - unsigned long pfn, unsigned long size, pgprot_t prot)
> + unsigned long pfn, unsigned long size, pgprot_t prot, bool set_vma)
> {
> pgd_t *pgd;
> unsigned long next;
> @@ -2912,32 +2931,17 @@ static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long ad
> if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
> return -EINVAL;
>
> - /*
> - * Physically remapped pages are special. Tell the
> - * rest of the world about it:
> - * VM_IO tells people not to look at these pages
> - * (accesses can have side effects).
> - * VM_PFNMAP tells the core MM that the base pages are just
> - * raw PFN mappings, and do not have a "struct page" associated
> - * with them.
> - * VM_DONTEXPAND
> - * Disable vma merging and expanding with mremap().
> - * VM_DONTDUMP
> - * Omit vma from core dump, even when VM_IO turned off.
> - *
> - * There's a horrible special case to handle copy-on-write
> - * behaviour that some programs depend on. We mark the "original"
> - * un-COW'ed pages by matching them up with "vma->vm_pgoff".
> - * See vm_normal_page() for details.
> - */
> - if (is_cow_mapping(vma->vm_flags)) {
> - if (addr != vma->vm_start || end != vma->vm_end)
> - return -EINVAL;
> - vma->vm_pgoff = pfn;
> + if (set_vma) {
> + err = get_remap_pgoff(vma->vm_flags, addr, end,
> + vma->vm_start, vma->vm_end,
> + pfn, &vma->vm_pgoff);
> + if (err)
> + return err;
> + vm_flags_set(vma, VM_REMAP_FLAGS);
> + } else {
> + VM_WARN_ON_ONCE((vma->vm_flags & VM_REMAP_FLAGS) != VM_REMAP_FLAGS);
> }
It looks like you can avoid adding set_vma by making
remap_pfn_range_internal() do only the complete portion and giving the
legacy callers a helper that does the prepare step inline, roughly:
int remap_pfn_range_prepare_vma(struct vm_area_struct *vma, unsigned long addr,
				unsigned long pfn, unsigned long size)
{
	unsigned long end = addr + PAGE_ALIGN(size);
	int err;

	/* Set the CoW vm_pgoff and the remap VMA flags up front */
	err = get_remap_pgoff(vma->vm_flags, addr, end,
			      vma->vm_start, vma->vm_end,
			      pfn, &vma->vm_pgoff);
	if (err)
		return err;

	vm_flags_set(vma, VM_REMAP_FLAGS);
	return 0;
}

int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
		    unsigned long pfn, unsigned long size, pgprot_t prot)
{
	int err;

	err = remap_pfn_range_prepare_vma(vma, addr, pfn, size);
	if (err)
		return err;

	/* Then the actual page table population, with or without PAT tracking */
	if (IS_ENABLED(__HAVE_PFNMAP_TRACKING))
		return remap_pfn_range_track(vma, addr, pfn, size, prot);
	return remap_pfn_range_notrack(vma, addr, pfn, size, prot);
}
(fix pgtable_types.h to #define __HAVE_PFNMAP_TRACKING to 1 so IS_ENABLED()
works)
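For the record, the reason the value has to be exactly 1 is the token-pasting
trick IS_ENABLED() is built on in include/linux/kconfig.h, roughly
(paraphrasing the existing macros, nothing new here):

#define __ARG_PLACEHOLDER_1			0,
#define __take_second_arg(__ignored, val, ...)	val
#define __is_defined(x)				___is_defined(x)
#define ___is_defined(val)			____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk)		__take_second_arg(arg1_or_junk 1, 0)

/*
 * "#define __HAVE_PFNMAP_TRACKING 1" expands to 1, so the paste hits
 * __ARG_PLACEHOLDER_1 ("0,") and __take_second_arg() picks out the 1.  A
 * bare "#define __HAVE_PFNMAP_TRACKING" pastes to a lone __ARG_PLACEHOLDER_
 * token instead, the fallback 0 gets picked, and the tracking branch above
 * would never be taken.
 */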
But the logic here is all fine.
Reviewed-by: Jason Gunthorpe <jgg@...dia.com>
Jason