Message-ID: <ZkKeiTE7184F6isF@ziepe.ca>
Date: Mon, 13 May 2024 20:13:13 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: "Suthikulpanit, Suravee" <suravee.suthikulpanit@....com>
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux.dev, joro@...tes.org,
thomas.lendacky@....com, vasant.hegde@....com, michael.roth@....com,
jon.grimm@....com, rientjes@...gle.com
Subject: Re: [PATCH 1/9] iommu/amd: Introduce helper functions for managing
IOMMU memory

On Tue, May 14, 2024 at 01:59:33AM +0700, Suthikulpanit, Suravee wrote:
> Jason
>
> On 5/1/2024 11:17 PM, Jason Gunthorpe wrote:
> > On Tue, Apr 30, 2024 at 03:24:22PM +0000, Suravee Suthikulpanit wrote:
> > > Depending on the modes of operation, certain AMD IOMMU data structures are
> > > allocated with constraints. For example:
> > >
> > > * Some buffers must be 4K-aligned when running on an SNP-enabled host
> > >
> > > * To support AMD IOMMU emulation in an SEV guest, some data structures
> > > cannot be encrypted so that the VMM can access the memory successfully.
> >
> > Uh, this seems like a really bad idea. The VM's integrity strongly
> > depends on the correct functioning of the HW. If the IOMMU data structures
> > are not protected, then the whole thing is not secure.
> >
> > For instance, allowing hostile VMs to manipulate the DTE, or interfere
> > with the command queue, destroys any possibility of secure DMA.
>
> Currently, we already set the area used for the guest SWIOTLB region as
> shared memory to support DMA in an SEV guest. Here, we are setting the
> following additional guest IOMMU data structures as shared:
>
> * Device Table
> * Command Buffer
> * Completion-Wait Semaphore Buffer
> * Per-device Interrupt Remapping Table

And if a hostile VMM starts messing with this, is everything going to
hold up? Or will you get crashes and security bugs?

I don't think it is a good idea to put things in non-secure memory
without also doing a full security audit.
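
Just so we are talking about the same mechanism: I'm assuming the
helpers in this patch boil down to something like the sketch below,
i.e. allocate a zeroed, page-aligned buffer and, when running as an
SEV guest, clear the C-bit with set_memory_decrypted() so the VMM can
access it. The function name is made up and the error handling is
simplified; it is only meant to illustrate the idea:

/*
 * Rough sketch only, not the actual patch: allocate a zeroed,
 * page-aligned buffer and, in an SEV guest, mark it shared so the
 * VMM's IOMMU emulation can access it.  iommu_alloc_shared_pages()
 * is a made-up name.
 */
#include <linux/cc_platform.h>
#include <linux/gfp.h>
#include <linux/set_memory.h>

static void *iommu_alloc_shared_pages(gfp_t gfp, unsigned int order)
{
        unsigned long buf = __get_free_pages(gfp | __GFP_ZERO, order);

        if (!buf)
                return NULL;

        /* In an SEV guest, clear the C-bit so the buffer is unencrypted. */
        if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
            set_memory_decrypted(buf, 1 << order)) {
                free_pages(buf, order);
                return NULL;
        }

        return (void *)buf;
}
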
> > Is this some precursor to implementing a secure iommu where the data
> > structures will remain encrypted?
>
> Yes, this is a precursor to secure vIOMMU support in the guest.

How does the guest tell whether the vIOMMU is secure, and shouldn't this
patch refuse to load on a secure vIOMMU at all?

Maybe it would be a better idea to have a mini, IRQ-side-only driver
that is audited and safe to use with non-secure memory, rather than
trying to repurpose the entire complex driver?

Jason