Message-ID: <20200514222415.GA24575@ziepe.ca>
Date:   Thu, 14 May 2020 19:24:15 -0300
From:   Jason Gunthorpe <jgg@...pe.ca>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     Peter Xu <peterx@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, cohuck@...hat.com
Subject: Re: [PATCH 0/2] vfio/type1/pci: IOMMU PFNMAP invalidation

On Thu, May 14, 2020 at 04:17:12PM -0600, Alex Williamson wrote:

> that much.  I think this would also address Jason's primary concern.
> It's better to get an IOMMU fault from the user trying to access those
> mappings than it is to leave them in place.

Yes, there are few options here - if the pages are available for use
by the IOMMU and *asynchronously* someone else revokes them, then the
only way to protect the kernel is to block them from the IOMMU.

For this to be sane the revocation must be under complete control of
the VFIO user. ie if a user decides to disable MMIO traffic then of
course the IOMMU should block P2P transfers to the MMIO BAR. It is a
user error not to have disabled those transfers in the first place.
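
To make that concrete, "block them from the IOMMU" could look roughly
like the sketch below - the bookkeeping struct and helper are
hypothetical, not the actual vfio code:

#include <linux/iommu.h>

struct bar_mapping {		/* hypothetical bookkeeping */
	unsigned long iova;	/* IOVA the user mapped the BAR at */
	size_t size;		/* size of the mapping */
};

/*
 * When the user clears the memory enable bit, zap any IOMMU mappings
 * that target the device's MMIO BARs so P2P DMA takes an IOMMU fault
 * instead of hitting a disabled (or revoked) BAR.
 */
static void zap_bar_p2p_mappings(struct iommu_domain *domain,
				 struct bar_mapping *bars, int nbars)
{
	int i;

	for (i = 0; i < nbars; i++)
		/* after this, P2P DMA to the BAR faults */
		iommu_unmap(domain, bars[i].iova, bars[i].size);
}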

When this is all done inside a guest the same logic applies. On bare
metal you might get an AER, a crash, or an MCE. In virtualization
you'll get an IOMMU fault.

> due to the memory enable bit.  If we could remap the range to a kernel
> page we could maybe avoid the IOMMU fault and maybe even have a crude
> test for whether any data was written to the page while that mapping
> was in place (ie. simulating more restricted error handling, though
> more asynchronous than done at the platform level).  
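
If I understand the idea, it would look something like this
(hypothetical names, not actual vfio code):

#include <linux/iommu.h>
#include <linux/highmem.h>
#include <linux/string.h>

static struct page *scratch;	/* one zeroed page, allocated elsewhere */

/*
 * Remap the IOVA range to a kernel scratch page instead of leaving it
 * unmapped: stray P2P writes then land somewhere harmless, and we can
 * crudely check afterwards whether anything was written.
 */
static int remap_range_to_scratch(struct iommu_domain *domain,
				  unsigned long iova, size_t size)
{
	size_t off;
	int ret;

	iommu_unmap(domain, iova, size);
	for (off = 0; off < size; off += PAGE_SIZE) {
		ret = iommu_map(domain, iova + off, page_to_phys(scratch),
				PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
		if (ret)
			return ret;
	}
	return 0;
}

static bool scratch_was_written(void)
{
	/* the "crude test" - did any DMA write hit the scratch page? */
	return memchr_inv(page_address(scratch), 0, PAGE_SIZE) != NULL;
}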

I'm not sure this makes sense - can't we arrange to directly trap the
IOMMU failure and route it into qemu, if that is what is desired?
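
ie register a fault handler on the domain and forward the event to
userspace instead of letting it be fatal. Something like this sketch,
where vfio_notify_user_fault() and the ctx are made up for
illustration:

static int vfio_iommu_fault_handler(struct iommu_domain *domain,
				    struct device *dev, unsigned long iova,
				    int flags, void *data)
{
	struct vfio_fault_ctx *ctx = data;	/* hypothetical context */

	/* forward the faulting IOVA to the user (eg qemu) */
	vfio_notify_user_fault(ctx, iova, flags);

	return 0;	/* fault handled, skip the default report */
}

/* then, when setting up the domain: */
iommu_set_fault_handler(domain, vfio_iommu_fault_handler, ctx);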

I'll try to look at this next week, swamped right now.

Thanks,
Jason
