Message-ID: <YkcTTY4YjQs5BRhE@google.com>
Date: Fri, 1 Apr 2022 14:59:25 +0000
From: Quentin Perret <qperret@...gle.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: Sean Christopherson <seanjc@...gle.com>,
Steven Price <steven.price@....com>,
Chao Peng <chao.p.peng@...ux.intel.com>,
kvm list <kvm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
Linux API <linux-api@...r.kernel.org>, qemu-devel@...gnu.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
the arch/x86 maintainers <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>, Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Vishal Annapurve <vannapurve@...gle.com>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
David Hildenbrand <david@...hat.com>,
Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>
Subject: Re: [PATCH v5 00/13] KVM: mm: fd-based approach for supporting KVM
guest private memory

On Thursday 31 Mar 2022 at 09:04:56 (-0700), Andy Lutomirski wrote:
> On Wed, Mar 30, 2022, at 10:58 AM, Sean Christopherson wrote:
> > On Wed, Mar 30, 2022, Quentin Perret wrote:
> >> On Wednesday 30 Mar 2022 at 09:58:27 (+0100), Steven Price wrote:
> >> > On 29/03/2022 18:01, Quentin Perret wrote:
> >> > > Is implicit sharing a thing? E.g., if a guest makes a memory access in
> >> > > the shared gpa range at an address that doesn't have a backing memslot,
> >> > > will KVM check whether there is a corresponding private memslot at the
> >> > > right offset with a hole punched and report a KVM_EXIT_MEMORY_ERROR? Or
> >> > > would that just generate an MMIO exit as usual?
> >> >
> >> > My understanding is that the guest needs some way of tagging whether a
> >> > page is expected to be shared or private. On the architectures I'm aware
> >> > of this is done by effectively stealing a bit from the IPA space and
> >> > pretending it's a flag bit.
> >>
> >> Right, and that is in fact the main point of divergence we have I think.
> >> While I understand this might be necessary for TDX and the likes, this
> >> makes little sense for pKVM. This would effectively embed into the IPA a
> >> purely software-defined non-architectural property/protocol although we
> >> don't actually need to: we (pKVM) can reasonably expect the guest to
> >> explicitly issue hypercalls to share pages in-place. So I'd be really
> >> keen to avoid baking in assumptions about that model too deep in the
> >> host mm bits if at all possible.
> >
> > There is no assumption about stealing PA bits baked into this API. Even within
> > x86 KVM, I consider it a hard requirement that the common flows not assume the
> > private vs. shared information is communicated through the PA.
>
> Quentin, I think we might need a clarification. The API in this patchset indeed has no requirement that a PA bit distinguish between private and shared, but I think it makes at least a weak assumption that *something*, a priori, distinguishes them. In particular, there are private memslots and shared memslots, so the logical flow of resolving a guest memory access looks like:
>
> 1. guest accesses a GVA
>
> 2. read guest paging structures
>
> 3. determine whether this is a shared or private access
>
> 4. read host (KVM memslots and anything else, EPT, NPT, RMP, etc) structures accordingly. In particular, the memslot to reference is different depending on the access type.
>
> For TDX, this maps on to the fd-based model perfectly: the host-side paging structures for the shared and private slots are completely separate. For SEV, the structures are shared and KVM will need to figure out what to do in case a private and shared memslot overlap. Presumably it's sufficient to declare that one of them wins, although actually determining which one is active for a given GPA may involve checking whether the backing store for a given page actually exists.
>
> But I don't understand pKVM well enough to understand how it fits in. Quentin, how is the shared vs private mode of a memory access determined? How do the paging structures work? Can a guest switch between shared and private by issuing a hypercall without changing any guest-side paging structures or anything else?

My apologies, I've indeed shared very few details about how pKVM
works. We'll be posting patches upstream really soon that will hopefully
help with this, but in the meantime, here is the idea.

pKVM is designed around MMU-based protection as opposed to encryption as
is the case for many confidential computing solutions. It's probably
worth mentioning that, although it targets arm64, pKVM is distinct from
the Arm CC-A stuff and requires no fancy hardware extensions -- it is
applicable all the way back to Arm v8.0, which makes it an interesting
solution for mobile.

Another particularity of the pKVM approach is that the code of the
hypervisor itself lives in the kernel source tree (see
arch/arm64/kvm/hyp/nvhe/). The hypervisor is built with the rest of the
kernel but as a self-sufficient object, and ends up in its own dedicated
ELF section (.hyp.*) in the kernel image. The main requirement for pKVM
(and KVM on arm64 in general) is to have the bootloader enter the kernel
at the hypervisor exception level (a.k.a. EL2). The boot procedure is a
bit involved, but eventually the hypervisor object is installed at EL2,
and the kernel is deprivileged to EL1 and proceeds to boot. From that
point on the hypervisor no longer trusts the kernel and will enable the
stage-2 MMU to impose access-control restrictions on all memory accesses
from the host.
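
For illustration only, a function can be steered into a dedicated ELF
section with a plain compiler attribute, roughly as sketched below; the
names are hypothetical and the real pKVM build relies on its own helpers
and linker script:

  /* Hypothetical sketch: tag a function so the linker script can collect
   * it into a hypervisor-only .hyp.text section of the kernel image. */
  #define __hyp_section __attribute__((section(".hyp.text")))

  static int __hyp_section pkvm_hyp_handle_trap(unsigned long esr)
  {
          /* Runs at EL2; must not trust any EL1 (host kernel) state. */
          return 0;
  }
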
All that to say: the pKVM approach offers a great deal of flexibility
when it comes to hypervisor behaviour. We have control over the
hypervisor code and can change it as we see fit. Since both the
hypervisor and the host kernel are part of the same image, the ABI
between them is very much *not* stable and can be adjusted to whatever
makes the most sense. So, I think we'd be quite keen to use that
flexibility to align some of the pKVM behaviours with other players
(TDX, SEV, CC-A), especially when it comes to host mm APIs. But that
flexibility also means we can do some things a bit better (e.g. pKVM can
handle illegal accesses from the host mostly fine -- the hypervisor can
re-inject the fault into the host), so I would definitely like to use this
to our advantage and not be held back by unrelated constraints.

To answer your original question about memory 'conversion', the key
thing is that the pKVM hypervisor controls the stage-2 page-tables for
everyone in the system, all guests as well as the host. As such, a page
'conversion' is nothing more than a permission change in the relevant
page-tables.
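
To make that a bit more concrete, a 'conversion' boils down to something
like the sketch below; the permission bits and helper name are invented
for illustration and do not match the actual pKVM code:

  /* Hypothetical stage-2 permission bits, for illustration only. */
  #define S2_READ  (1UL << 6)
  #define S2_WRITE (1UL << 7)

  /*
   * Share a guest page with the host in-place: the page itself does not
   * move, only the stage-2 permissions (plus the hypervisor's ownership
   * bookkeeping, TLB invalidation, etc.) change.
   */
  static void hyp_share_page_with_host(u64 *host_s2_pte, u64 *guest_s2_pte)
  {
          *host_s2_pte  |= S2_READ | S2_WRITE;   /* host regains access */
          *guest_s2_pte |= S2_READ | S2_WRITE;   /* guest keeps access  */
  }
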
The typical flow is as follows (a rough host-side sketch in pseudo-code
follows the list):
- the host asks the hypervisor to run a guest;
- the hypervisor does the context switch, which includes switching
  stage-2 page-tables;
- initially the guest has an empty stage-2 (we don't require
  pre-faulting everything), which means it'll immediately fault;
- the hypervisor switches back to host context to handle the guest
  fault;
- the host handler finds the corresponding memslot and does the
  ipa->hva conversion. In our current implementation it uses a longterm
  GUP pin on the corresponding page;
- once it has a page, the host handler issues a hypercall to donate the
  page to the guest;
- the hypervisor does a bunch of checks to make sure the host owns the
  page, and if all is fine it will unmap it from the host stage-2 and
  map it in the guest stage-2, and do some bookkeeping as it needs to
  track page ownership, etc;
- the guest can then proceed to run, and possibly faults in many more
  pages;
- when it wants to, the guest can then issue a hypercall to share a
  page back with the host;
- the hypervisor checks the request, maps the page back in the host
  stage-2, does more bookkeeping and returns to the host to notify
  it of the share;
- the host kernel at that point can exit back to userspace to relay
  that information to the VMM;
- rinse and repeat.
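
Put as pseudo-code, the host-side part of that flow looks roughly like
the sketch below (the hypercall and handler names are made up for the
example; the real arm64 code is more involved):

  /* Hypothetical host handler for a guest stage-2 fault. */
  static int host_handle_guest_abort(struct kvm_vcpu *vcpu, gpa_t ipa)
  {
          struct kvm_memory_slot *slot;
          unsigned long hva;
          struct page *page;
          gfn_t gfn = ipa >> PAGE_SHIFT;

          slot = gfn_to_memslot(vcpu->kvm, gfn);
          if (!slot)
                  return -EFAULT; /* no memslot: MMIO or exit to the VMM */

          hva = gfn_to_hva_memslot(slot, gfn);

          /* Longterm GUP pin so the page can't move under the guest. */
          if (pin_user_pages_fast(hva, 1, FOLL_WRITE | FOLL_LONGTERM, &page) != 1)
                  return -EFAULT;

          /*
           * Ask the hypervisor to unmap the page from the host stage-2 and
           * map it into the guest stage-2 (invented hypercall wrapper).
           */
          return pkvm_hyp_donate_page(vcpu, ipa, page_to_pfn(page));
  }
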
We currently don't allow the host to punch holes in the guest IPA space.
Once it has donated a page to a guest, it can't have it back until the
guest has been entirely torn down (at which point all of memory is
poisoned by the hypervisor obviously). But we could certainly reconsider
that part. OTOH, I'm still inclined to think that in-place sharing is
desirable. In our case it's dirt cheap, and could even work on huge
pages, which would allow very efficient sharing of large amounts of
data. So, I'm a bit hesitant to use the private-fd approach as-is since
it's not immediately obvious how we'll ever be able to reconcile these
things if mmap-ing the fd is a firm no. With that said, I don't think
our *current* use-cases have a strong need for this, so I mostly agree
with Sean's point earlier. But since we're talking about committing to a
userspace ABI, I would feel better if there was a clear path towards
having support for in-place sharing -- I can certainly see it being
useful. I'll think about it, but if folks have ideas in the meantime
I'll be happy to discuss.

I hope the above was useful and clears up the confusion.

Thanks,
Quentin