Message-ID: <YkcSDeJDHOv+MZA7@google.com>
Date: Fri, 1 Apr 2022 14:54:05 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: "Nikunj A. Dadhania" <nikunj@....com>
Cc: Peter Gonda <pgonda@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Brijesh Singh <brijesh.singh@....com>,
Tom Lendacky <thomas.lendacky@....com>,
Bharata B Rao <bharata@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Mingwei Zhang <mizhang@...gle.com>,
David Hildenbrand <david@...hat.com>,
kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC v1 0/9] KVM: SVM: Defer page pinning for SEV guests
On Fri, Apr 01, 2022, Nikunj A. Dadhania wrote:
>
> On 4/1/2022 12:30 AM, Sean Christopherson wrote:
> > On Thu, Mar 31, 2022, Peter Gonda wrote:
> >> On Wed, Mar 30, 2022 at 10:48 PM Nikunj A. Dadhania <nikunj@....com> wrote:
> >>> So with the guest supporting KVM_FEATURE_HC_MAP_GPA_RANGE and the host (KVM)
> >>> supporting the KVM_HC_MAP_GPA_RANGE hypercall, a SEV/SEV-ES guest should
> >>> communicate private/shared pages to the hypervisor; this information can be
> >>> used to mark pages shared/private.
> >>
> >> One concern here may be that the VMM doesn't know which guests have
> >> KVM_FEATURE_HC_MAP_GPA_RANGE support and which don't. Only once the
> >> guest boots does the guest tell KVM that it supports
> >> KVM_FEATURE_HC_MAP_GPA_RANGE. If the guest doesn't, we need to pin all
> >> the memory before we run the guest to be safe.
> >
> > Yep, that's a big reason why I view purging the existing SEV memory management as
> > a long term goal. The other being that userspace obviously needs to be updated to
> > support UPM[*]. I suspect the only feasible way to enable this for SEV/SEV-ES
> > would be to restrict it to new VM types that have a disclaimer regarding additional
> > requirements.
>
> For SEV/SEV-ES, could we base demand pinning on my first RFC[*]?
No, because as David pointed out, elevating the refcount is not the same as actually
pinning the page. Things like NUMA balancing will still try to migrate the page,
and even go so far as to zap the PTE, before bailing due to the outstanding reference.
In other words, not actually pinning makes the mm subsystem less efficient. Would it
functionally work? Yes. Is it acceptable KVM behavior? No.
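
To be concrete, the difference looks roughly like this (sketch only, not code
from either RFC; gfn_to_hva(), kvm_is_error_hva(), pin_user_pages_fast() and
unpin_user_page() are the real helpers, the wrapper function is made up):

  #include <linux/kvm_host.h>
  #include <linux/mm.h>

  /*
   * A bare refcount bump, e.g. get_page(), only keeps the page allocated;
   * mm still treats the page as movable, so NUMA balancing et al. will zap
   * the PTE and kick off migration before bailing on the elevated refcount.
   * FOLL_PIN (implied by pin_user_pages_fast()) + FOLL_LONGTERM instead
   * tells mm up front that the page can't be migrated.
   */
  static int pin_gfn_for_enc_guest(struct kvm *kvm, gfn_t gfn,
				   struct page **page)
  {
	unsigned long hva = gfn_to_hva(kvm, gfn);

	if (kvm_is_error_hva(hva))
		return -EFAULT;

	if (pin_user_pages_fast(hva, 1, FOLL_WRITE | FOLL_LONGTERM,
				page) != 1)
		return -EFAULT;

	return 0;
  }

The pin must then be dropped with unpin_user_page(), not put_page().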
> Those patches do not touch the core KVM flow.
I don't mind touching core KVM code. If this goes forward, I actually strongly
prefer having the x86 MMU code handle the pinning as opposed to burying it in SEV
via kvm_x86_ops. The reason I don't think this approach is worth pursuing is
that (a) we know that the current SEV/SEV-ES memory management scheme is flawed
and is a deadend, and (b) this is not so trivial as we (or at least I) originally
thought/hoped it would be. In other words, it's not that I think demand pinning
is a bad idea, nor do I think the issues are unsolvable, it's that I think the
cost of getting a workable solution, e.g. code churn, ongoing maintenance, reviewer
time, etc., far outweighs the benefits.
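
To illustrate the placement I have in mind, purely hypothetically (none of
kvm_memslot_requires_pinning(), kvm_mmu_pin_pfn(), or a .pin_pfn hook exist
upstream; the point is where the logic lives, not the details):

  /* Buried in SEV via kvm_x86_ops, which I'd like to avoid: */
  static struct kvm_x86_ops svm_x86_ops __initdata = {
	/* ... */
	.pin_pfn = sev_pin_pfn,		/* hypothetical vendor hook */
  };

  /* vs. handled once in the common x86 MMU fault path: */
  static int direct_page_fault(struct kvm_vcpu *vcpu,
			       struct kvm_page_fault *fault)
  {
	/* ... resolve fault->pfn ... */
	if (kvm_memslot_requires_pinning(fault->slot))	/* hypothetical */
		kvm_mmu_pin_pfn(vcpu->kvm, fault->pfn);	/* hypothetical */
	/* ... install the SPTE ... */
	return RET_PF_FIXED;
  }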
> Moreover, it does not require any guest/firmware changes.
>
> [*] https://lore.kernel.org/kvm/20220118110621.62462-1-nikunj@amd.com/
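
FWIW, the guest-side change needed for the hypercall-based approach is tiny.
Roughly (sketch modeled on the existing page encryption status code in
arch/x86/kernel/kvm.c, using the documented KVM_HC_MAP_GPA_RANGE ABI):

  #include <linux/kvm_para.h>

  /*
   * Notify the host that [pfn, pfn + npages) has flipped between
   * encrypted (private) and decrypted (shared), e.g. from the guest's
   * set_memory_encrypted()/set_memory_decrypted() paths.  SEV guests
   * use the _sev_ variant, which is guaranteed to use VMMCALL.
   */
  static void notify_page_enc_status(unsigned long pfn, int npages, bool enc)
  {
	unsigned long attrs = KVM_MAP_GPA_RANGE_PAGE_SZ_4K |
			      (enc ? KVM_MAP_GPA_RANGE_ENCRYPTED
				   : KVM_MAP_GPA_RANGE_DECRYPTED);

	kvm_sev_hypercall3(KVM_HC_MAP_GPA_RANGE, pfn << PAGE_SHIFT,
			   npages, attrs);
  }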