Message-ID: <Ykc+QapbAdpd41PK@google.com>
Date:   Fri, 1 Apr 2022 18:02:41 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Marc Orr <marcorr@...gle.com>
Cc:     Peter Gonda <pgonda@...gle.com>,
        "Nikunj A. Dadhania" <nikunj@....com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>,
        Brijesh Singh <brijesh.singh@....com>,
        Tom Lendacky <thomas.lendacky@....com>,
        Bharata B Rao <bharata@....com>,
        "Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
        Mingwei Zhang <mizhang@...gle.com>,
        David Hildenbrand <david@...hat.com>,
        kvm list <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC v1 0/9] KVM: SVM: Defer page pinning for SEV guests

On Fri, Apr 01, 2022, Marc Orr wrote:
> On Thu, Mar 31, 2022 at 12:01 PM Sean Christopherson <seanjc@...gle.com> wrote:
> > Yep, that's a big reason why I view purging the existing SEV memory management as
> > a long term goal.  The other being that userspace obviously needs to be updated to
> > support UPM[*].   I suspect the only feasible way to enable this for SEV/SEV-ES
> > would be to restrict it to new VM types that have a disclaimer regarding additional
> > requirements.
> >
> > [*] I believe Peter coined the UPM acronym for "Unmapping guest Private Memory".  We've
> >     been using it internally for discussion and it rolls off the tongue a lot easier than
> >     the full phrase, and is much more precise/descriptive than just "private fd".
> 
> Can we really "purge the existing SEV memory management"? This seems
> like a non-starter because it violates userspace API (i.e., the
> ability for the userspace VMM to run a guest without
> KVM_FEATURE_HC_MAP_GPA_RANGE). Or maybe I'm not quite following what
> you mean by purge.

I really do mean purge, but I also really do mean "long term", as in 5+ years
(probably 10+ if I'm being realistic).

Removing support is completely ok, as is changing the uABI; the rule is that we
can't break userspace.  If all users are migrated to private-fd, e.g. by carrots
and/or sticks such as putting the code into maintenance-only mode, then at some
point in the future there will be no users left to break and we can drop the
current code and make use of private-fd mandatory for SEV/SEV-ES guests.

> Assuming that UPM-based lazy pinning comes together via a new VM type
> that only supports new images based on a minimum kernel version with
> KVM_FEATURE_HC_MAP_GPA_RANGE, then I think this would look as follows:
> 
> 1. Userspace VMM: Check SEV VM type. If type is legacy SEV type then
> do upfront pinning. Else, skip upfront pinning.

Yep, if by "legacy SEV type" you mean "SEV/SEV-ES guest that isn't required to
use MAP_GPA_RANGE", which I'm pretty sure you do based on #3.
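
For illustration, a minimal sketch of how a userspace VMM might make that check
for step 1. The sev_vm_type enum and setup_sev_guest_ram() are made-up names;
KVM_MEMORY_ENCRYPT_REG_REGION is the existing ioctl that SEV VMMs use to
register encrypted guest memory, which is what pins it up front today:

#include <sys/ioctl.h>
#include <linux/kvm.h>

enum sev_vm_type {
	SEV_VM_LEGACY,	/* SEV/SEV-ES guest not required to use MAP_GPA_RANGE */
	SEV_VM_UPM,	/* new VM type: UPM, demand pinning */
};

static int setup_sev_guest_ram(int vm_fd, enum sev_vm_type type,
			       void *hva, __u64 size)
{
	struct kvm_enc_region region = {
		.addr = (__u64)hva,
		.size = size,
	};

	if (type == SEV_VM_LEGACY)
		/* Legacy path: registering the region pins it up front. */
		return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_REG_REGION, &region);

	/* UPM path: no upfront pinning, pages get pinned on demand. */
	return 0;
}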

> 2. KVM: I'm not sure anything special needs to happen here. For the
> legacy VM types, it can be configured to use legacy memslots,
> presumably the same as non-CVMs will be configured. For the new VM
> type, it should be configured to use UPM.

Correct, for now, KVM does nothing different for SEV/SEV-ES guests.
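
For illustration, a sketch of the legacy memslot setup in step 2, which is the
same KVM_SET_USER_MEMORY_REGION call any non-confidential VM makes; the
UPM-based VM type would instead bind guest memory to a private fd, whose uAPI
was still under discussion at the time, so it's only noted in a comment:

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int add_legacy_memslot(int vm_fd, __u32 slot, __u64 gpa,
			      __u64 size, void *hva)
{
	struct kvm_userspace_memory_region region = {
		.slot = slot,
		.flags = 0,
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = (__u64)hva,
	};

	/* Same call a non-confidential VM would make; nothing SEV-specific.
	 * A UPM memslot would instead reference a private fd (uAPI TBD). */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}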

> 3. Control plane (thing creating VMs): Responsible for not allowing
> legacy SEV images (i.e., images without KVM_FEATURE_HC_MAP_GPA_RANGE)
> with the new SEV VM types that use UPM and have support for demand
> pinning.
> 
> Sean: Did I get this right?

Yep.
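
For illustration, a sketch of the control-plane policy in #3. struct image_info
and its supports_map_gpa_range field are made-up metadata standing in for
however the control plane learns whether an image's guest kernel supports the
MAP_GPA_RANGE hypercall (KVM_FEATURE_HC_MAP_GPA_RANGE):

#include <stdbool.h>

struct image_info {
	bool supports_map_gpa_range;	/* guest understands MAP_GPA_RANGE */
};

enum sev_vm_type { SEV_VM_LEGACY, SEV_VM_UPM };

/* Reject legacy images (no MAP_GPA_RANGE support) on the UPM VM type. */
static bool image_allowed(const struct image_info *img, enum sev_vm_type type)
{
	if (type == SEV_VM_UPM && !img->supports_map_gpa_range)
		return false;
	return true;
}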
