Message-ID: <20240620140142.GH2494510@nvidia.com>
Date: Thu, 20 Jun 2024 11:01:42 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: David Hildenbrand <david@...hat.com>
Cc: Fuad Tabba <tabba@...gle.com>, John Hubbard <jhubbard@...dia.com>,
Elliot Berman <quic_eberman@...cinc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>, Matthew Wilcox <willy@...radead.org>,
maz@...nel.org, kvm@...r.kernel.org, linux-arm-msm@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, pbonzini@...hat.com
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning
On Thu, Jun 20, 2024 at 11:00:45AM +0200, David Hildenbrand wrote:
> > Not sure if IOMMU + private makes that much sense really, but I think
> > I might not really understand what you mean by this.
>
> A device might be able to access private memory. In the TDX world, this
> would mean that a device "speaks" encrypted memory.
>
> At the same time, a device might be able to access shared memory. Maybe
> devices can do both?
>
> What to do when converting between private and shared? I think it depends on
> various factors (e.g., device capabilities).
The whole thing is complicated once you put the pages into the VMA. We
have hmm_range_fault and IOMMU SVA paths that both obtain the pfns
without any of the checks here.
(and I suspect many of the target HW's for pKVM have/will have SVA
capable GPUs so SVA is an attack vector worth considering)
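To make that concrete, roughly the usual hmm_range_fault pattern a driver
uses to pull pfns out of a VMA looks like this (just a sketch; the
mmu_interval_notifier registration and the mni/mm/addr/pfn variables are
assumed to already exist):

	unsigned long pfn;
	int ret;
	struct hmm_range range = {
		.notifier = &mni,	/* assumed: mmu_interval_notifier already
					 * registered against this mm */
		.start = addr,
		.end = addr + PAGE_SIZE,
		.hmm_pfns = &pfn,
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};

	range.notifier_seq = mmu_interval_read_begin(&mni);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);	/* fills pfn, takes no reference/pin */
	mmap_read_unlock(mm);
	/* device is then programmed with pfn, revalidated only via
	 * mmu_interval_read_retry() */

Nothing on that path looks at how the page was pinned, which is the point:
whatever an exclusive pin records, this walk (and the SVA page table walk)
hands the pfn to the device anyway.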
What happens if someone does DMA to these PFNs? It seems like nothing
good happens in either scenario.
Really the only way to do it properly is to keep the memory unmapped;
that must be the starting point for any solution. Denying GUP is just
an ugly hack.
Jason