Message-ID: <20250217145221.20f17b7a.alex.williamson@redhat.com>
Date: Mon, 17 Feb 2025 14:52:21 -0700
From: Alex Williamson <alex.williamson@...hat.com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, peterx@...hat.com,
mitchell.augustin@...onical.com, clg@...hat.com, akpm@...ux-foundation.org,
linux-mm@...ck.org
Subject: Re: [PATCH 5/5] vfio/type1: Use mapping page mask for pfnmaps
On Fri, 14 Feb 2025 15:27:04 -0400
Jason Gunthorpe <jgg@...pe.ca> wrote:
> On Wed, Feb 05, 2025 at 04:17:21PM -0700, Alex Williamson wrote:
> > @@ -590,15 +592,23 @@ static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
> > vma = vma_lookup(mm, vaddr);
> >
> > if (vma && vma->vm_flags & VM_PFNMAP) {
> > - ret = follow_fault_pfn(vma, mm, vaddr, pfn, prot & IOMMU_WRITE);
> > + unsigned long pgmask;
> > +
> > + ret = follow_fault_pfn(vma, mm, vaddr, pfn, &pgmask,
> > + prot & IOMMU_WRITE);
> > if (ret == -EAGAIN)
> > goto retry;
> >
> > if (!ret) {
> > - if (is_invalid_reserved_pfn(*pfn))
> > - ret = 1;
> > - else
> > + if (is_invalid_reserved_pfn(*pfn)) {
> > + unsigned long epfn;
> > +
> > + epfn = (((*pfn << PAGE_SHIFT) + ~pgmask + 1)
> > + & pgmask) >> PAGE_SHIFT;
>
> That seems a bit indirect
>
> epfn = ((*pfn) | (~pgmask >> PAGE_SHIFT)) + 1;
>
> ?
That is simpler, for sure. Thanks!
> > + ret = min_t(int, npages, epfn - *pfn);
>
> It is nitty but the int's here should be long, and npages should be
> unsigned long..
Added a new patch that uses unsigned long consistently for passed page
counts and long for returns. Now we just need a system with a 16TiB
huge page size. Thanks,
Alex