Message-ID: <20200501235033.GA19929@ziepe.ca>
Date:   Fri, 1 May 2020 20:50:33 -0300
From:   Jason Gunthorpe <jgg@...pe.ca>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        cohuck@...hat.com, peterx@...hat.com
Subject: Re: [PATCH 1/3] vfio/type1: Support faulting PFNMAP vmas

On Fri, May 01, 2020 at 03:39:08PM -0600, Alex Williamson wrote:
> With conversion to follow_pfn(), DMA mapping a PFNMAP range depends on
> the range being faulted into the vma.  Add support to manually provide
> that, in the same way as done on KVM with hva_to_pfn_remapped().
> 
> Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c |   36 +++++++++++++++++++++++++++++++++---
>  1 file changed, 33 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index cc1d64765ce7..4a4cb7cd86b2 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -317,6 +317,32 @@ static int put_pfn(unsigned long pfn, int prot)
>  	return 0;
>  }
>  
> +static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
> +			    unsigned long vaddr, unsigned long *pfn,
> +			    bool write_fault)
> +{
> +	int ret;
> +
> +	ret = follow_pfn(vma, vaddr, pfn);
> +	if (ret) {
> +		bool unlocked = false;
> +
> +		ret = fixup_user_fault(NULL, mm, vaddr,
> +				       FAULT_FLAG_REMOTE |
> +				       (write_fault ?  FAULT_FLAG_WRITE : 0),
> +				       &unlocked);
> +		if (unlocked)
> +			return -EAGAIN;
> +
> +		if (ret)
> +			return ret;
> +
> +		ret = follow_pfn(vma, vaddr, pfn);
> +	}
> +
> +	return ret;
> +}
> +
>  static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
>  			 int prot, unsigned long *pfn)
>  {
> @@ -339,12 +365,16 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
>  
>  	vaddr = untagged_addr(vaddr);
>  
> +retry:
>  	vma = find_vma_intersection(mm, vaddr, vaddr + 1);
>  
>  	if (vma && vma->vm_flags & VM_PFNMAP) {
> -		if (!follow_pfn(vma, vaddr, pfn) &&
> -		    is_invalid_reserved_pfn(*pfn))
> -			ret = 0;
> +		ret = follow_fault_pfn(vma, mm, vaddr, pfn, prot & IOMMU_WRITE);
> +		if (ret == -EAGAIN)
> +			goto retry;
> +
> +		if (!ret && !is_invalid_reserved_pfn(*pfn))
> +			ret = -EFAULT;

I suggest checking vma->vm_ops == &vfio_pci_mmap_ops and adding a
comment that this is racy and needs to be fixed up. The ops check
restricts this path to other vfio BARs and should prevent some abuses
of this hacky thing.

However, I wonder if this could just link itself into the
vma->vm_private_data so that when the vfio device that owns the BAR
goes away, so does the iommu mapping?

I feel like this patch set is not complete unless it also handles
the shootdown of this path?
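
To make that linkage concrete, a rough, untested sketch of the kind
of thing I mean - every name below is made up, the real hooks would
live in vfio-pci and type1:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

/*
 * Hypothetical per-mapping tracker, stashed in vm_private_data by
 * whoever set up the PFNMAP vma (vfio-pci) and chained off that
 * device, so teardown can find every iommu mapping created through
 * this path.
 */
struct vfio_pfnmap_link {
	struct list_head node;	/* on the owning device's list */
	void *iommu;		/* opaque handle to the type1 iommu */
	dma_addr_t iova;
	size_t size;
};

/* Hypothetical type1 hook standing in for whatever unmap/unpin
 * helper the real code would use. */
void vfio_type1_zap_range(void *iommu, dma_addr_t iova, size_t size);

/*
 * Called from the vfio-pci side when the BAR mapping goes away
 * (vm_ops->close() or device release), so the iommu mapping dies
 * with it instead of pointing at vanished MMIO space.
 */
static void vfio_pfnmap_shootdown(struct list_head *links)
{
	struct vfio_pfnmap_link *link, *tmp;

	list_for_each_entry_safe(link, tmp, links, node) {
		vfio_type1_zap_range(link->iommu, link->iova, link->size);
		list_del(&link->node);
		kfree(link);
	}
}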

Jason
