Message-ID: <20210309010406.GE4247@nvidia.com>
Date:   Mon, 8 Mar 2021 21:04:06 -0400
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     cohuck@...hat.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, peterx@...hat.com
Subject: Re: [PATCH v1 12/14] vfio/type1: Support batching of device mappings

On Mon, Mar 08, 2021 at 02:49:31PM -0700, Alex Williamson wrote:
> Populate the page array to the extent available to enable batching.
> 
> Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c |   10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index e89f11141dee..d499bccfbe3f 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -628,6 +628,8 @@ static int vaddr_get_pfns(struct vfio_iommu *iommu, struct vfio_dma *dma,
>  	vma = find_vma_intersection(mm, vaddr, vaddr + 1);
>  
>  	if (vma && vma->vm_flags & VM_PFNMAP) {
> +		unsigned long count, i;
> +
>  		if ((dma->prot & IOMMU_WRITE && !(vma->vm_flags & VM_WRITE)) ||
>  		    (dma->prot & IOMMU_READ && !(vma->vm_flags & VM_READ))) {
>  			ret = -EFAULT;
> @@ -678,7 +680,13 @@ static int vaddr_get_pfns(struct vfio_iommu *iommu, struct vfio_dma *dma,
>  
>  		*pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) +
>  							dma->pfnmap->base_pfn;
> -		ret = 1;
> +		count = min_t(long,
> +			      (vma->vm_end - vaddr) >> PAGE_SHIFT, npages);
> +
> +		for (i = 0; i < count; i++)
> +			pages[i] = pfn_to_page(*pfn + i);

This isn't safe; we can't pass a VM_PFNMAP pfn into pfn_to_page(). The
whole API here with the batch should be using pfns, not struct pages.
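
A minimal sketch, not the actual patch, of what a pfn-based batch fill
could look like; fill_batch_pfns() and its parameters are hypothetical
names for illustration:

static long fill_batch_pfns(struct page **pages, unsigned long base_pfn,
			    unsigned long *pfns, long npages, bool pinned)
{
	long i;

	/*
	 * Pinned pages come back from pin_user_pages_remote() as
	 * struct pages; convert them with page_to_pfn().  PFNMAP
	 * pfns are computed arithmetically and never go through
	 * pfn_to_page() at all.
	 */
	for (i = 0; i < npages; i++)
		pfns[i] = pinned ? page_to_pfn(pages[i]) : base_pfn + i;

	return npages;
}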

Also, this is not nice at all:

static int put_pfn(unsigned long pfn, int prot)
{
        if (!is_invalid_reserved_pfn(pfn)) {
                struct page *page = pfn_to_page(pfn);

                unpin_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE);
                return 1;
        }
        return 0;
}

The manner in which the PFN was obtained should be tracked internally
to VFIO, not deduced externally from the pfn type. *Only* pages returned
by pin_user_pages() should be used with unpin_user_pages(); the other
stuff must be kept distinct.

This is actually another bug in the way things are today: if the
user gets a PFNMAP VMA that happens to point to a struct page (e.g. a
MIXEDMAP; these things exist in the kernel), the unpin will explode
when it gets here.

Something like what hmm_range_fault() does, where the high bits of the
pfn encode information about it (there are always PAGE_SHIFT high bits
available for use), is much cleaner/safer.

Jason
