Message-ID: <8840b360-cdb2-244c-bfb6-9a0e7306c188@kernel.org>
Date: Fri, 20 May 2022 10:57:41 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Chao Peng <chao.p.peng@...ux.intel.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-api@...r.kernel.org,
linux-doc@...r.kernel.org, qemu-devel@...gnu.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
Steven Price <steven.price@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Vishal Annapurve <vannapurve@...gle.com>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
jun.nakajima@...el.com, dave.hansen@...el.com, ak@...ux.intel.com,
david@...hat.com, aarcange@...hat.com, ddutile@...hat.com,
dhildenb@...hat.com, Quentin Perret <qperret@...gle.com>,
Michael Roth <michael.roth@....com>, mhocko@...e.com
Subject: Re: [PATCH v6 4/8] KVM: Extend the memslot to support fd-based
private memory
On 5/19/22 08:37, Chao Peng wrote:
> Extend the memslot definition to provide guest private memory through a
> file descriptor (fd) instead of userspace_addr (hva). Such guest private
> memory (fd) may never be mapped into userspace, so no userspace_addr (hva)
> can be used. Instead, add two new fields
> (private_fd/private_offset), plus the existing memory_size, to represent
> the private memory range. Such a memslot can still have the existing
> userspace_addr (hva). When in use, a single memslot can maintain both
> private memory through the private fd (private_fd/private_offset) and shared
> memory through the hva (userspace_addr). A GPA is considered private by KVM
> if the memslot has a private fd and the corresponding page in the private
> fd is populated; otherwise, it is shared.
>
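(For reference, the extended memslot uAPI being described is roughly the
following.  Field names are taken from the quoted description; the exact
struct name, padding, and layout in the patch may differ.)

#include <linux/types.h>
#include <linux/kvm.h>

/*
 * Approximate shape of the fd-based extension to the memslot uAPI as
 * described in the quoted text.  Not the patch's literal definition.
 */
struct kvm_userspace_memory_region_ext {
	struct kvm_userspace_memory_region region; /* existing: slot, flags, GPA, size, hva */
	__u64 private_offset;	/* offset into the private fd */
	__u32 private_fd;	/* fd providing the private backing */
	__u32 padding;		/* padding/reserved space approximate */
};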
So this is a strange API and, IMO, a layering violation. I want to make
sure that we're all actually on board with making this a permanent part
of the Linux API. Specifically, we end up with a multiplexing situation
as you have described. For a given GPA, there are *two* possible host
backings: an fd-backed one (from the fd, which is private for now but
might end up shared depending on future extensions) and a
VMA-backed one. The selection of which one backs the address is made
internally by whatever backs the fd.
This is, IMO, a clear layering violation. Normally, an fd has an
associated address space, and pages in that address space can have
contents, can be holes that appear to contain all zeros, or could have
holes that are inaccessible. If you try to access a hole, you get
whatever is in the hole.
But now, with this patchset, the fd is more of an overlay and you get
*something else* if you try to access through the hole.
This results in operations on the fd bubbling up to the KVM mapping in
what is, IMO, a strange way. If the user punches a hole, KVM has to
modify its mappings such that the GPA goes to whatever VMA may be there.
(And update the RMP, the hypervisor's tables, or whatever else might
actually control privateness.) Conversely, if the user does fallocate
to fill a hole, the guest mapping *to an unrelated page* has to be
zapped so that the fd's page shows up. And the RMP needs updating, etc.
I am lukewarm on this for a few reasons.
1. This is weird. AFAIK nothing else works like this. Obviously this
is subjective, but "weird" and "layering violation" sometimes translate
to "problematic locking".
2. fd-backed private memory can't have normal holes. If I make a memfd,
punch a hole in it, and mmap(MAP_SHARED) it, I end up with a page that
reads as zero. If I write to it, the page gets allocated. But with
this new mechanism, if I punch a hole and put it in a memslot, reads and
writes go somewhere else. So what if I actually wanted lazily allocated
private zeros?
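(A short standalone demo of the plain-memfd behavior follows the list,
after 3b.)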
2b. For a hypothetical future extension in which an fd can also have
shared pages (for conversion, for example, or simply because the fd
backing might actually be more efficient than indirecting through VMAs
and therefore get used for shared memory or entirely-non-confidential
VMs), lazy fd-backed zeros sound genuinely useful.
3. TDX hardware capability is not fully exposed. TDX can have a private
page and a shared page at GPAs that differ only by the private bit.
Sure, no one plans to use this today, but baking this into the user ABI
throws away half the potential address space.
3b. Any software solution that works like TDX (which seems to me like an
eminently reasonable design) has the same issue.
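To make point 2 concrete, here is a small standalone userspace demo of the
plain memfd behavior (nothing KVM-specific); with the proposed mechanism,
reads and writes through a memslot to a punched range would instead go
somewhere else (the VMA backing):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pg = sysconf(_SC_PAGESIZE);
	int fd = memfd_create("demo", 0);

	if (fd < 0 || ftruncate(fd, pg))
		return 1;

	unsigned char *p = mmap(NULL, pg, PROT_READ | PROT_WRITE,
				MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	memset(p, 0xaa, pg);		/* allocate and dirty the page */

	/* Punch the page back out of the memfd. */
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, pg);

	printf("after punch: %#x\n", p[0]);	/* the hole reads as zero */
	p[0] = 1;				/* write faults in a fresh zero page */
	printf("after write: %#x\n", p[0]);

	munmap(p, pg);
	close(fd);
	return 0;
}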
The alternative would be to have some kind of separate table or bitmap
(part of the memslot?) that tells KVM whether a GPA should map to the fd.
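Very roughly, and with entirely made-up names (this is a sketch of the
idea, not a proposed implementation), that alternative might look like:

#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical shape of the alternative: the memslot keeps both backings
 * plus a per-page bitmap saying which one a given GPA should use, so
 * "is this GPA private?" is decided by KVM's own bookkeeping rather than
 * by whether the fd happens to have a page at that offset.
 */
struct memslot_sketch {
	uint64_t base_gfn;
	uint64_t npages;
	uint64_t userspace_addr;	/* shared backing (hva) */
	int      private_fd;		/* private backing */
	uint64_t private_offset;
	uint64_t *private_bitmap;	/* one bit per page in the slot */
};

static bool gfn_is_private(const struct memslot_sketch *s, uint64_t gfn)
{
	uint64_t idx = gfn - s->base_gfn;

	return s->private_bitmap &&
	       (s->private_bitmap[idx / 64] >> (idx % 64)) & 1;
}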
What do you all think?