Message-ID: <20190403091812.197e2dcb.cohuck@redhat.com>
Date: Wed, 3 Apr 2019 09:18:12 +0200
From: Cornelia Huck <cohuck@...hat.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
eric.auger@...hat.com, peterx@...hat.com
Subject: Re: [PATCH v2] vfio/type1: Limit DMA mappings per container
On Tue, 02 Apr 2019 10:15:38 -0600
Alex Williamson <alex.williamson@...hat.com> wrote:
> Memory backed DMA mappings are accounted against a user's locked
> memory limit, including multiple mappings of the same memory. This
> accounting bounds the number of such mappings that a user can create.
> However, DMA mappings that are not backed by memory, such as DMA
> mappings of device MMIO via mmaps, do not make use of page pinning
> and therefore do not count against the user's locked memory limit.
> These mappings still consume memory, but the memory is not well
> associated with the process for the purpose of oom killing a task.
>
> To add bounding on this use case, we introduce a limit to the total
> number of concurrent DMA mappings that a user is allowed to create.
> This limit is exposed as a tunable module option where the default
> value of 64K is expected to be well in excess of any reasonable use
> case (a large virtual machine configuration would typically only make
> use of tens of concurrent mappings).
>
> This fixes CVE-2019-3882.
>
> Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
> ---
>
> v2: Remove unnecessary atomic, all runtime access occurs while
> holding vfio_iommu.lock. Change to unsigned int since we're
> no longer bound by the atomic_t.
>
> drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)
Non-atomic seems fine.
Reviewed-by: Cornelia Huck <cohuck@...hat.com>
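
For readers without the diff in front of them, here is a minimal sketch of
how such a per-container cap can be wired up. The names (dma_entry_limit,
dma_avail) and the trimmed-down struct are illustrative assumptions, not a
claim about the exact upstream change; the point is a module parameter plus
a counter that is only touched under the container lock, which is why no
atomic is needed.

/*
 * Sketch only -- not the upstream diff.  Cap the number of concurrent
 * user DMA mappings per container with a plain counter protected by the
 * container's lock.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/errno.h>

/* Tunable ceiling; 64K mappings by default (assumed parameter name). */
static unsigned int dma_entry_limit __read_mostly = U16_MAX;
module_param_named(dma_entry_limit, dma_entry_limit, uint, 0644);
MODULE_PARM_DESC(dma_entry_limit,
		 "Maximum number of user DMA mappings per container (65535)");

struct vfio_iommu_sketch {
	struct mutex	lock;
	unsigned int	dma_avail;	/* remaining mapping slots */
	/* rb-tree of vfio_dma entries, domain list, etc. elided */
};

static void sketch_open(struct vfio_iommu_sketch *iommu)
{
	mutex_init(&iommu->lock);
	iommu->dma_avail = dma_entry_limit;
}

/* Runs with iommu->lock held, so a plain unsigned int suffices. */
static int sketch_map_one(struct vfio_iommu_sketch *iommu)
{
	if (!iommu->dma_avail)
		return -ENOSPC;	/* limit reached, reject the new mapping */
	iommu->dma_avail--;
	/* ... insert the mapping, pin pages or map MMIO ... */
	return 0;
}

/* Runs with iommu->lock held when a mapping is removed. */
static void sketch_unmap_one(struct vfio_iommu_sketch *iommu)
{
	/* ... tear down the mapping ... */
	iommu->dma_avail++;
}

With a layout like this, raising the ceiling is just a module option, e.g.
modprobe vfio_iommu_type1 dma_entry_limit=131072 (again assuming that
parameter name).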