Message-ID: <926e2e52-2fbd-9342-ed1f-920322985d68@redhat.com>
Date:   Tue, 2 Apr 2019 21:14:25 +0200
From:   Auger Eric <eric.auger@...hat.com>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        cohuck@...hat.com, peterx@...hat.com
Subject: Re: [PATCH v2] vfio/type1: Limit DMA mappings per container

Hi Alex,

On 4/2/19 6:15 PM, Alex Williamson wrote:
> Memory backed DMA mappings are accounted against a user's locked
> memory limit, including multiple mappings of the same memory.  This
> accounting bounds the number of such mappings that a user can create.
> However, DMA mappings that are not backed by memory, such as DMA
> mappings of device MMIO via mmaps, do not make use of page pinning
> and therefore do not count against the user's locked memory limit.
> These mappings still consume memory, but the memory is not well
> associated with the process for the purpose of OOM-killing a task.
> 
> To add bounding on this use case, we introduce a limit to the total
> number of concurrent DMA mappings that a user is allowed to create.
> This limit is exposed as a tunable module option where the default
> value of 64K is expected to be well in excess of any reasonable use
> case (a large virtual machine configuration would typically only make
> use of tens of concurrent mappings).
> 
> This fixes CVE-2019-3882.
> 
> Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
Reviewed-by: Eric Auger <eric.auger@...hat.com>
Tested-by: Eric Auger <eric.auger@...hat.com>

Thanks

Eric

> ---
> 
> v2: Remove unnecessary atomic, all runtime access occurs while
>     holding vfio_iommu.lock.  Change to unsigned int since we're
>     no longer bound by the atomic_t.
> 
>  drivers/vfio/vfio_iommu_type1.c |   14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 73652e21efec..d0f731c9920a 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -58,12 +58,18 @@ module_param_named(disable_hugepages,
>  MODULE_PARM_DESC(disable_hugepages,
>  		 "Disable VFIO IOMMU support for IOMMU hugepages.");
>  
> +static unsigned int dma_entry_limit __read_mostly = U16_MAX;
> +module_param_named(dma_entry_limit, dma_entry_limit, uint, 0644);
> +MODULE_PARM_DESC(dma_entry_limit,
> +		 "Maximum number of user DMA mappings per container (65535).");
> +
>  struct vfio_iommu {
>  	struct list_head	domain_list;
>  	struct vfio_domain	*external_domain; /* domain for external user */
>  	struct mutex		lock;
>  	struct rb_root		dma_list;
>  	struct blocking_notifier_head notifier;
> +	unsigned int		dma_avail;
>  	bool			v2;
>  	bool			nesting;
>  };
> @@ -836,6 +842,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	vfio_unlink_dma(iommu, dma);
>  	put_task_struct(dma->task);
>  	kfree(dma);
> +	iommu->dma_avail++;
>  }
>  
>  static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> @@ -1081,12 +1088,18 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>  		goto out_unlock;
>  	}
>  
> +	if (!iommu->dma_avail) {
> +		ret = -ENOSPC;
> +		goto out_unlock;
> +	}
> +
>  	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
>  	if (!dma) {
>  		ret = -ENOMEM;
>  		goto out_unlock;
>  	}
>  
> +	iommu->dma_avail--;
>  	dma->iova = iova;
>  	dma->vaddr = vaddr;
>  	dma->prot = prot;
> @@ -1583,6 +1596,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
>  
>  	INIT_LIST_HEAD(&iommu->domain_list);
>  	iommu->dma_list = RB_ROOT;
> +	iommu->dma_avail = dma_entry_limit;
>  	mutex_init(&iommu->lock);
>  	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
>  
> 
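For context on the user-visible behaviour: once a container has exhausted
its mapping budget, VFIO_IOMMU_MAP_DMA fails with -ENOSPC. A minimal
userspace sketch of probing the limit follows; it assumes an
already-configured container fd (group attached, VFIO_TYPE1_IOMMU set),
and the helper name and the 4 KiB page/iova layout are invented for
illustration, not taken from the patch.

/* Hypothetical illustration: map 4 KiB pages until the container's
 * dma_entry_limit is exhausted and VFIO_IOMMU_MAP_DMA fails with ENOSPC.
 * Assumes `container` is an open /dev/vfio/vfio fd with a group attached
 * and the type1 IOMMU enabled. Run with a raised RLIMIT_MEMLOCK (or
 * CAP_IPC_LOCK); otherwise the locked-memory accounting fails the ioctl
 * with ENOMEM long before the entry limit is reached.
 */
#include <errno.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static int fill_container(int container)
{
	void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	uint64_t iova;

	if (buf == MAP_FAILED)
		return -1;

	for (iova = 0; ; iova += 4096) {
		struct vfio_iommu_type1_dma_map map = {
			.argsz = sizeof(map),
			.flags = VFIO_DMA_MAP_FLAG_READ |
				 VFIO_DMA_MAP_FLAG_WRITE,
			.vaddr = (uintptr_t)buf,
			.iova  = iova,
			.size  = 4096,
		};

		if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map)) {
			/* With the patch applied and the default
			 * dma_entry_limit of 65535, the loop ends here
			 * with errno == ENOSPC. */
			printf("map %llu failed: %s\n",
			       (unsigned long long)(iova / 4096),
			       strerror(errno));
			return errno == ENOSPC ? 0 : -1;
		}
	}
}

Because the parameter is registered with mode 0644, the bound is also
adjustable at runtime via
/sys/module/vfio_iommu_type1/parameters/dma_entry_limit; note that a
container samples the value into dma_avail at open time, so a change only
affects containers opened afterwards.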

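On the v2 note about dropping the atomic: every access to dma_avail
happens with vfio_iommu.lock held, so a plain unsigned int is sufficient;
the mutex already serializes readers and writers. Below is a stripped-down
illustration of the same pattern (the struct and function names are
invented for the example, not kernel API).

/* Illustrative only: a budget counter protected by a mutex needs no
 * atomics, mirroring how dma_avail is only touched under vfio_iommu.lock.
 */
#include <pthread.h>
#include <stdbool.h>

struct mapping_budget {
	pthread_mutex_t lock;
	unsigned int avail;	/* plain int: every access is under lock */
};

/* Consume one slot; fails when the budget is exhausted (cf. -ENOSPC). */
static bool budget_get(struct mapping_budget *b)
{
	bool ok;

	pthread_mutex_lock(&b->lock);
	ok = b->avail > 0;
	if (ok)
		b->avail--;
	pthread_mutex_unlock(&b->lock);
	return ok;
}

/* Return a slot, as vfio_remove_dma() does with dma_avail++. */
static void budget_put(struct mapping_budget *b)
{
	pthread_mutex_lock(&b->lock);
	b->avail++;
	pthread_mutex_unlock(&b->lock);
}

An atomic_t would only matter if the counter were ever touched outside the
lock, which nothing in the patch does.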