Message-ID: <YkSaUQX89ZEojsQb@google.com>
Date:   Wed, 30 Mar 2022 17:58:41 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Quentin Perret <qperret@...gle.com>
Cc:     Steven Price <steven.price@....com>,
        Chao Peng <chao.p.peng@...ux.intel.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, linux-api@...r.kernel.org,
        qemu-devel@...gnu.org, Paolo Bonzini <pbonzini@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
        Hugh Dickins <hughd@...gle.com>,
        Jeff Layton <jlayton@...nel.org>,
        "J . Bruce Fields" <bfields@...ldses.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mike Rapoport <rppt@...nel.org>,
        "Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
        Vlastimil Babka <vbabka@...e.cz>,
        Vishal Annapurve <vannapurve@...gle.com>,
        Yu Zhang <yu.c.zhang@...ux.intel.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        luto@...nel.org, jun.nakajima@...el.com, dave.hansen@...el.com,
        ak@...ux.intel.com, david@...hat.com, maz@...nel.org,
        will@...nel.org
Subject: Re: [PATCH v5 00/13] KVM: mm: fd-based approach for supporting KVM
 guest private memory

On Wed, Mar 30, 2022, Quentin Perret wrote:
> On Wednesday 30 Mar 2022 at 09:58:27 (+0100), Steven Price wrote:
> > On 29/03/2022 18:01, Quentin Perret wrote:
> > > Is implicit sharing a thing? E.g., if a guest makes a memory access in
> > > the shared gpa range at an address that doesn't have a backing memslot,
> > > will KVM check whether there is a corresponding private memslot at the
> > > right offset with a hole punched and report a KVM_EXIT_MEMORY_ERROR? Or
> > > would that just generate an MMIO exit as usual?
> > 
> > My understanding is that the guest needs some way of tagging whether a
> > page is expected to be shared or private. On the architectures I'm aware
> > of, this is done by effectively stealing a bit from the IPA space and
> > pretending it's a flag bit.
> 
> Right, and that is in fact the main point of divergence we have, I think.
> While I understand this might be necessary for TDX and the like, this
> makes little sense for pKVM. It would effectively embed into the IPA a
> purely software-defined, non-architectural property/protocol even though
> we don't actually need to: we (pKVM) can reasonably expect the guest to
> explicitly issue hypercalls to share pages in-place. So I'd be really
> keen to avoid baking in assumptions about that model too deep in the
> host mm bits if at all possible.

There is no assumption about stealing PA bits baked into this API.  Even within
x86 KVM, I consider it a hard requirement that the common flows not assume the
private vs. shared information is communicated through the PA.
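
For anyone following along, the "stolen bit" scheme Steven describes amounts
to something like the sketch below.  This is purely illustrative; the bit
position and helper names are invented, not taken from any real implementation.

/* Kernel-style sketch; BIT_ULL() comes from <linux/bits.h>. */
#define GPA_SHARED_BIT	BIT_ULL(51)	/* position is made up */

/* The guest sets the flag bit to mark an access as targeting shared memory. */
static inline bool gpa_is_shared(u64 gpa)
{
	return !!(gpa & GPA_SHARED_BIT);
}

/* Strip the flag to recover the "real" guest physical address. */
static inline u64 gpa_strip_shared_bit(u64 gpa)
{
	return gpa & ~GPA_SHARED_BIT;
}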

> > > I'm overall inclined to think that while this abstraction works nicely
> > > for TDX and the like, it might not suit pKVM all that well in its
> > > current form, but it's close.
> > > 
> > > What do you think of extending the model proposed here to also address
> > > the needs of implementations that support in-place sharing? One option
> > > would be to have KVM notify the private-fd backing store when a page is
> > > shared back by a guest, which would then allow host userspace to mmap
> > > that particular page in the private fd instead of punching a hole.
> > > 
> > > This should retain the main property you're after: private pages that
> > > are actually mapped in the guest SPTE aren't mmap-able, but all the
> > > others are fair game.
> > > 
> > > Thoughts?
> > How do you propose this works if the page shared by the guest then needs
> > to be made private again? If there's no hole punched then it's not
> > possible to just repopulate the private-fd. I'm struggling to see how
> > that could work.
> 
> Yes, some discussion might be required, but I was thinking about
> something along these lines:
> 
>  - a guest requests a shared->private page conversion;
> 
>  - the conversion request is routed all the way back to the VMM;
> 
>  - the VMM is expected to either decline the conversion (which may be
>    fatal for the guest if it can't handle this), or to tear down its
>    mappings (via munmap()) of the shared page, and accept the
>    conversion;
> 
>  - upon return from the VMM, KVM will be expected to check how many
>    references to the shared page are still held (probably by asking the
>    fd backing store) to verify that userspace has indeed torn down its
>    mappings. If all is fine, KVM will instruct the hypervisor to
>    repopulate the private range of the guest, otherwise it'll return an
>    error to the VMM;
> 
>  - if the conversion has been successful, the guest can resume its
>    execution normally.
> 
> Note: this should still allow the hole-punching method to be used just fine
> on systems that require it. The invariant here is just that KVM (with
> help from the backing store) is now responsible for refusing to
> instruct the hypervisor (or TDX module, or RMM, or whatever) to map a
> private page if there are existing mappings to it.
> 
> > Having said that, if we can work out a way to safely
> > mmap() pages from the private-fd there are definitely some benefits to be
> > had - e.g. it could be used to populate the initial memory before the
> > guest is started.
> 
> Right, so assuming the approach proposed above isn't entirely bogus,
> this might now become possible by having the VMM mmap the private-fd,
> load the payload, and then unmap it all, and only then instruct the
> hypervisor to use this as private memory.
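
(To make the fourth step of the flow above concrete: it boils down to
something like the sketch below, where every interface name is invented
for illustration.)

static int kvm_finish_shared_to_private(struct kvm *kvm, gfn_t gfn)
{
	/* Hypothetical lookup of the page backing this gfn. */
	struct page *page = kvm_gfn_to_backing_page(kvm, gfn);

	/*
	 * Any reference beyond the backing store's own means userspace
	 * has not fully torn down its mappings; refuse the conversion.
	 */
	if (page_count(page) > 1)
		return -EBUSY;

	/* Hypothetical call instructing the hypervisor to map it private. */
	return pkvm_host_map_private(kvm, gfn);
}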

Hard "no" on mapping the private-fd.  Having the invariant tha the private-fd
can never be mapped greatly simplifies the responsibilities of the backing store,
as well as the interface between the private-fd and the in-kernel consumers of the
memory (KVM in this case).
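
Enforcing that invariant in the backing store is trivial, e.g. as a sketch
(the function name is invented):

/*
 * A backing store can uphold "never mappable" by refusing mmap() outright,
 * or by simply not implementing ->mmap at all, in which case the VFS
 * returns -ENODEV on its own.
 */
static int private_fd_mmap(struct file *file, struct vm_area_struct *vma)
{
	return -EPERM;
}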

What is the use case for shared->private conversion?  x86, both TDX and SNP,
effectively do have a flavor of shared->private conversion; SNP can definitely
be in-place, and I think TDX too.  But the only use case in x86 is to populate
the initial guest image, and due to other performance bottlenecks, it's strongly
recommended to keep the initial image as small as possible.  Based on your previous
response about the guest firmware loading the full guest image, my understanding is
that pKVM will also utilize a minimal initial image.

As a result, true in-place conversion to reduce the number of memcpy()s is low
priority, i.e. not planned at this time.  Unless the use case expects to convert
large swaths of memory, the simplest approach would be to have pKVM memcpy() between
the private and shared backing pages during conversion.
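
I.e. something like the sketch below, where every name is a placeholder for
whatever pKVM's actual plumbing ends up looking like:

/*
 * Hypothetical sketch of a copying conversion (shown shared => private,
 * the other direction is symmetric).  Data is copied between distinct
 * backing pages; no page changes ownership in place.
 */
static int pkvm_copy_shared_to_private(struct kvm *kvm, gfn_t gfn)
{
	void *shared = pkvm_shared_page_va(kvm, gfn);	/* invented */
	void *priv = pkvm_private_page_va(kvm, gfn);	/* invented */

	if (!shared || !priv)
		return -EINVAL;

	memcpy(priv, shared, PAGE_SIZE);
	return 0;
}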

In-place conversion that preserves data needs to be a separate and/or additional
hypercall, because "I want to map this page as private/shared" is very, very different
than "I want to map this page as private/shared and consume/expose non-zero data".
I.e. the host is guaranteed to get an explicit request to do the memcpy(), so there
shouldn't be a need to implicitly allow this on any conversion.
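
In other words, the guest-facing ABI could look something like the below.
The names and the exact split are hypothetical; the point is only that a
data-preserving conversion is its own explicit request:

enum pkvm_mem_hypercall {
	PKVM_HC_SHARE_PAGE,		/* map shared, contents zeroed */
	PKVM_HC_UNSHARE_PAGE,		/* map private, contents zeroed */
	PKVM_HC_SHARE_PAGE_IN_PLACE,	/* map shared, preserve contents */
	PKVM_HC_UNSHARE_PAGE_IN_PLACE,	/* map private, preserve contents */
};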
