Message-ID: <20190403192426.GA16117@redhat.com>
Date: Wed, 3 Apr 2019 15:24:26 -0400
From: Jerome Glisse <jglisse@...hat.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
eric.auger@...hat.com, cohuck@...hat.com, peterx@...hat.com
Subject: Re: [PATCH v2] vfio/type1: Limit DMA mappings per container
On Tue, Apr 02, 2019 at 10:15:38AM -0600, Alex Williamson wrote:
> Memory backed DMA mappings are accounted against a user's locked
> memory limit, including multiple mappings of the same memory. This
> accounting bounds the number of such mappings that a user can create.
> However, DMA mappings that are not backed by memory, such as DMA
> mappings of device MMIO via mmaps, do not make use of page pinning
> and therefore do not count against the user's locked memory limit.
> These mappings still consume memory, but the memory is not well
> associated to the process for the purpose of oom killing a task.
>
> To add bounding on this use case, we introduce a limit to the total
> number of concurrent DMA mappings that a user is allowed to create.
> This limit is exposed as a tunable module option where the default
> value of 64K is expected to be well in excess of any reasonable use
> case (a large virtual machine configuration would typically only make
> use of tens of concurrent mappings).
>
> This fixes CVE-2019-3882.
>
> Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
Have you tested with GPU passthrough? GPUs have huge BARs, from
hundreds of megabytes to gigabytes (some drivers resize them to
cover the whole GPU memory). The driver needs to map those to work
properly. I am not sure what path an mmap of an MMIO BAR by a guest
takes on the host, but I just thought I would point that out.
Cheers,
Jérôme