Message-ID: <e26ec0bb-3c20-4732-a09b-83b6b6a6419a@amazon.co.uk>
Date: Fri, 12 Jul 2024 16:59:05 +0100
From: Patrick Roy <roypat@...zon.co.uk>
To: David Hildenbrand <david@...hat.com>, <seanjc@...gle.com>, Fuad Tabba
	<tabba@...gle.com>
CC: <pbonzini@...hat.com>, <akpm@...ux-foundation.org>, <dwmw@...zon.co.uk>,
	<rppt@...nel.org>, <tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
	<dave.hansen@...ux.intel.com>, <x86@...nel.org>, <hpa@...or.com>,
	<willy@...radead.org>, <graf@...zon.com>, <derekmn@...zon.com>,
	<kalyazin@...zon.com>, <kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	<linux-mm@...ck.org>, <dmatlack@...gle.com>, <chao.p.peng@...ux.intel.com>,
	<xmarcalx@...zon.co.uk>, James Gowans <jgowans@...zon.com>
Subject: Re: [RFC PATCH 8/8] kvm: gmem: Allow restricted userspace mappings

Hey,

On Wed, 2024-07-10 at 22:12 +0100, David Hildenbrand wrote:
> On 10.07.24 11:51, Patrick Roy wrote:
>>
>>
>> On 7/9/24 22:13, David Hildenbrand wrote:
>>> On 09.07.24 16:48, Fuad Tabba wrote:
>>>> Hi Patrick,
>>>>
>>>> On Tue, Jul 9, 2024 at 2:21 PM Patrick Roy <roypat@...zon.co.uk> wrote:
>>>>>
>>>>> Allow mapping guest_memfd into userspace. Since AS_INACCESSIBLE is set
>>>>> on the underlying address_space struct, no GUP of guest_memfd will be
>>>>> possible.
>>>>
>>>> This patch allows mapping guest_memfd() unconditionally. Even if it's
>>>> not guppable, there are other reasons why you wouldn't want to allow
>>>> this. Maybe a config flag to gate it? e.g.,
>>>
>>>
>>> As discussed with Jason, maybe not the direction we want to take with
>>> guest_memfd.
>>> If it's private memory, it shall not be mapped. Also not via magic
>>> config options.
>>>
>>> We'll likely discuss some of that in the MM meeting tomorrow, I guess
>>> (having both shared and private memory in guest_memfd).
>>
>> Oh, nice. I'm assuming you mean this meeting:
>> https://lore.kernel.org/linux-mm/197a2f19-c71c-fbde-a62a-213dede1f4fd@google.com/T/?
>> Would it be okay if I also attend? I see it also mentions huge pages,
>> which is another thing we are interested in, actually :)
> 
> Hi,
> 
> sorry for the late reply. Yes, you could have joined .... too late.

No worries, I did end up joining to listen in to y'all's discussion
anyway :)

> There will be a summary posted soon. So far the agreement is that we're
> planning on allowing shared memory as part of guest_memfd, and will allow
> that to get mapped and pinned. Private memory is not going to get mapped
> and pinned.
> 
> If we have to disallow pinning of shared memory on top for some use
> cases (i.e., no directmap), I assume that could be added.
> 
>>
>>> Note that just from staring at this commit, I don't understand the
>>> motivation *why* we would want to do that.
>>
Fair - I admittedly didn't get into that as much as I probably should
have. In our use case, we do not have anything that pKVM would (I think)
call "guest-private" memory. I think our memory can be better described
as guest-owned, but always shared with the VMM (e.g. userspace), and
ideally never shared with the host kernel. This model lets us make a lot
of simplifying assumptions: things like I/O can be handled in userspace
without the guest explicitly sharing I/O buffers (which is not exactly
what we would want long-term anyway, as sharing in the guest_memfd
context means sharing with the host kernel), we can easily do VM
snapshotting without needing things like TDX's TDH.EXPORT.MEM APIs, etc.
> 
> Okay, so essentially you would want to use guest_memfd to only contain
> shared memory and disallow any pinning like for secretmem.

Yeah, this is pretty much what I thought we wanted before listening in
on Wednesday.

I've actually been thinking about this some more since then, though.
With hugepages, if the VM is backed by, say, 2M pages, our on-demand
direct map insertion approach runs into the same problem that CoCo VMs
have when they're backed by hugepages: how do we deal with the guest
only sharing a 4K range inside a hugepage? If we want to restore the
direct map for e.g. the page containing kvm-clock data, we can't simply
go ahead and restore the direct map for the entire 2M page, because
there very well might be stuff in the other 511 small guest pages that
we really do not want in the direct map. And we can't even take the
approach of letting the guest deal with the problem, because here
"sharing" is driven by the host, not the guest, so the guest cannot
possibly know that it should avoid putting stuff it doesn't want shared
into those remaining 511 pages! To me that sounds a lot like the whole
"breaking down huge folios to allow GUP to only some parts of it"
thing mentioned on Wednesday.
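
To make the granularity problem concrete, the per-4K restore I have in
mind would look roughly like the below (purely a sketch: it assumes the
secretmem-style helpers from <linux/set_memory.h> and that the linear
map for this range has already been split down to 4K pages;
gmem_restore_direct_map() is just an illustrative name):

/*
 * Re-insert the direct map entry for a single 4K page inside a 2M
 * guest_memfd folio, leaving the remaining 511 pages unmapped.
 */
static int gmem_restore_direct_map(struct folio *folio, unsigned long idx)
{
        struct page *page = folio_page(folio, idx);
        unsigned long addr = (unsigned long)page_address(page);
        int r;

        r = set_direct_map_default_noflush(page);
        if (r)
                return r;

        flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
        return 0;
}

i.e. it inherently operates at 4K granularity, which is exactly where
the huge-folio splitting question comes in.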

Now, if we instead treat "guest memory without direct map entries" as
"private" and "guest memory with direct map entries" as "shared", then
the above would be solved by whatever mechanism allows gupping/mapping
of only the "shared" parts of huge folios, IIUC. The fact that GUP is
then also allowed for the "shared" parts is not actually a problem for
us - we went down the route of disabling GUP altogether here because,
based on [1], it sounded like GUP for anything gmem-related would never
happen. But after something is re-inserted into the direct map, we don't
much care whether it can be GUP-ed or not. In fact, allowing GUP for the
shared parts probably makes some things easier for us, as we can then do
I/O without bounce buffers by just in-place converting I/O buffers to
shared, and then treating that shared slice of guest_memfd the same way
we treat traditional guest memory today. In a very far-off future, we'd
like to be able to do I/O without ever reinserting pages into the direct
map, but I don't think adopting this private/shared model for gmem would
block us from doing that?
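
For the I/O case, the VMM-side flow I'm imagining is roughly the below
(again just a sketch; it assumes the existing KVM_SET_MEMORY_ATTRIBUTES
ioctl is what ends up driving the shared/private - and thus direct map -
state, and make_shared() is purely an illustrative helper):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Host-initiated: flip a range of guest memory to "shared" by clearing
 * KVM_MEMORY_ATTRIBUTE_PRIVATE on it.
 */
static int make_shared(int vm_fd, __u64 gpa, __u64 size)
{
        struct kvm_memory_attributes attrs = {
                .address    = gpa,
                .size       = size,
                .attributes = 0,
        };

        return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}

after which the VMM would treat that now-shared slice of guest_memfd
like traditional guest memory (map it, hand it to the kernel for I/O,
etc.).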

Although all of this does hinge on us being able to do the in-place
shared/private conversion without any guest involvement. Do you envision
that being possible?

> If so, I wonder if it wouldn't be better to simply add KVM support to
> consume *real* secretmem memory? IIRC so far there was only demand to
> probably remove the directmap of private memory in guest_memfd, not of
> shared memory.
> -- 
> Cheers,
> 
> David / dhildenb

Best, 
Patrick

[1]: https://lore.kernel.org/all/ZdfoR3nCEP3HTtm1@casper.infradead.org/

