Message-ID: <ZiwLKI_xtZk3BG_B@google.com>
Date: Fri, 26 Apr 2024 13:14:32 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Michael Roth <michael.roth@....com>
Cc: kvm@...r.kernel.org, linux-coco@...ts.linux.dev, linux-mm@...ck.org,
linux-crypto@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org,
tglx@...utronix.de, mingo@...hat.com, jroedel@...e.de,
thomas.lendacky@....com, hpa@...or.com, ardb@...nel.org, pbonzini@...hat.com,
vkuznets@...hat.com, jmattson@...gle.com, luto@...nel.org,
dave.hansen@...ux.intel.com, slp@...hat.com, pgonda@...gle.com,
peterz@...radead.org, srinivas.pandruvada@...ux.intel.com,
rientjes@...gle.com, dovmurik@...ux.ibm.com, tobin@....com, bp@...en8.de,
vbabka@...e.cz, kirill@...temov.name, ak@...ux.intel.com, tony.luck@...el.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, alpergun@...gle.com,
jarkko@...nel.org, ashish.kalra@....com, nikunj.dadhania@....com,
pankaj.gupta@....com, liam.merwick@...cle.com,
Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH v14 09/22] KVM: SEV: Add support to handle MSR based Page
State Change VMGEXIT
On Fri, Apr 26, 2024, Michael Roth wrote:
> On Thu, Apr 25, 2024 at 03:13:40PM -0700, Sean Christopherson wrote:
> > On Thu, Apr 25, 2024, Michael Roth wrote:
> > > On Wed, Apr 24, 2024 at 01:59:48PM -0700, Sean Christopherson wrote:
> > > > On Sun, Apr 21, 2024, Michael Roth wrote:
> > > > > +static int snp_begin_psc_msr(struct kvm_vcpu *vcpu, u64 ghcb_msr)
> > > > > +{
> > > > > + u64 gpa = gfn_to_gpa(GHCB_MSR_PSC_REQ_TO_GFN(ghcb_msr));
> > > > > + u8 op = GHCB_MSR_PSC_REQ_TO_OP(ghcb_msr);
> > > > > + struct vcpu_svm *svm = to_svm(vcpu);
> > > > > +
> > > > > + if (op != SNP_PAGE_STATE_PRIVATE && op != SNP_PAGE_STATE_SHARED) {
> > > > > + set_ghcb_msr(svm, GHCB_MSR_PSC_RESP_ERROR);
> > > > > + return 1; /* resume guest */
> > > > > + }
> > > > > +
> > > > > + vcpu->run->exit_reason = KVM_EXIT_VMGEXIT;
> > > > > + vcpu->run->vmgexit.type = KVM_USER_VMGEXIT_PSC_MSR;
> > > > > + vcpu->run->vmgexit.psc_msr.gpa = gpa;
> > > > > + vcpu->run->vmgexit.psc_msr.op = op;
> > > >
> > > > Argh, no.
> > > >
> > > > This is the same crud that TDX tried to push[*]. Use KVM's existing user exits,
> > > > and extend as *needed*. There is no good reason page state change requests need
> > > > *two* exit reasons. The *only* thing KVM supports right now is private<=>shared
> > > > conversions, and that can be handled with either KVM_HC_MAP_GPA_RANGE or
> > > > KVM_EXIT_MEMORY_FAULT.
> > > >
> > > > The non-MSR flavor can batch requests, but I'm willing to bet that the overwhelming
> > > > majority of requests are contiguous, i.e. can be combined into a range by KVM,
> > > > and that handling any outliers by performing multiple exits to userspace will
> > > > provide sufficient performance.
> > >
> > > That does tend to be the case. We won't have as much granularity with
> > > the per-entry error codes, but KVM_SET_MEMORY_ATTRIBUTES would be
> > > expected to cover the entire range anyway, and if that fails for
> > > whatever reason then we KVM_BUG_ON() anyway. We do, however, have to
> > > handle cases where the entries aren't contiguous, which would involve
> > > multiple KVM_EXIT_HYPERCALL exits until everything is satisfied. But
> > > that's not a huge deal since it doesn't seem to be a common case.
> >
> > If it was less complex overall, I wouldn't be opposed to KVM marshalling everything
> > into a buffer, but I suspect it will be simpler to just have KVM loop until the
> > PSC request is complete.
>
> Agreed. But *if* we decided to introduce a buffer, where would you
> suggest adding it? The kvm_run union fields are set to 256 bytes, and
> we'd need close to 4K to handle a full GHCB PSC buffer in one go. Would
> additional storage at the end of struct kvm_run be acceptable?
Don't even need more memory, just use vcpu->arch.pio_data, which is always
allocated and is mmap()able by userspace via KVM_PIO_PAGE_OFFSET.
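
Completely untested, and "struct psc_buffer" is just a stand-in for the
GHCB page state change buffer (header + entries, ~4K, i.e. it fits in
the single page backing pio_data), but something like:

	/*
	 * Hand the guest's entire PSC buffer to userspace in one shot.
	 * pio_data is a dedicated, always-allocated page that userspace
	 * can already mmap() at page offset KVM_PIO_PAGE_OFFSET, so no
	 * new kvm_run storage is needed.
	 */
	static void snp_copy_psc_to_userspace(struct kvm_vcpu *vcpu,
					      struct psc_buffer *psc)
	{
		BUILD_BUG_ON(sizeof(*psc) > PAGE_SIZE);

		memcpy(vcpu->arch.pio_data, psc, sizeof(*psc));
	}

and on the userspace side (again, purely illustrative):

	/* mmap_size from KVM_GET_VCPU_MMAP_SIZE, base from mmap(vcpu_fd). */
	struct psc_buffer *psc =
		(struct psc_buffer *)(base + KVM_PIO_PAGE_OFFSET * page_size);
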
> > > KVM_HC_MAP_GPA_RANGE seems like a nice option because we'd also have the
> > > flexibility to just issue that directly within a guest rather than
> > > relying on SNP/TDX specific hcalls. I don't know if that approach is
> > > practical for a real guest, but it could be useful for having re-usable
> > > guest code in KVM selftests that "just works" for all variants of
> > > SNP/TDX/sw-protected. (though we'd still want stuff that exercises
> > > SNP/TDX->KVM_HC_MAP_GPA_RANGE translation).
> > >
> > > I think there is some potential baggage there with the previous SEV
> > > live migration use cases. There's some potential that existing guest
> > > kernels will use KVM_HC_MAP_GPA_RANGE once it gets advertised and issue
> > > it alongside GHCB-based page-state changes. It might make sense to use
> > > one of the reserved bits to denote this flavor of KVM_HC_MAP_GPA_RANGE
> > > as being for hardware/software-protected VMs and not interchangeable
> > > with calls that were used for SEV live migration stuff.
> >
> > I don't think I follow, what exactly wouldn't be interchangeable, and why?
>
> For instance, if KVM_FEATURE_MIGRATION_CONTROL is advertised, then when
> amd_enc_status_change_finish() is triggered as a result of
> set_memory_encrypted(), we'd see
>
> 1) a GHCB PSC for SNP, which will get forwarded to userspace via
> KVM_HC_MAP_GPA_RANGE
> 2) KVM_HC_MAP_GPA_RANGE issued directly by the guest.
>
> In that case, we'd be duplicating PSCs but it wouldn't necessarily hurt
> anything. But ideally we'd be able to distinguish the 2 cases so we
> could rightly treat 1) as only being expected for SNP, and 2) as only
> being expected for SEV/SEV-ES.
Why would the guest issue both? That's a guest bug. Or if suppressing the second
hypercall is an issue, simply don't enumerate MIGRATION_CONTROL for SNP guests.
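
E.g. (purely illustrative; the helper and the VMM plumbing around it are
made up), the VMM could simply mask the bit out of the KVM_CPUID_FEATURES
leaf when creating an SNP guest:

	/* KVM_CPUID_FEATURES == 0x40000001; feature bits live in EAX. */
	static void vmm_filter_pv_features(struct kvm_cpuid_entry2 *entry,
					   bool is_snp)
	{
		if (entry->function != KVM_CPUID_FEATURES)
			return;

		/*
		 * Don't advertise MIGRATION_CONTROL to SNP guests so that
		 * the guest never issues KVM_HC_MAP_GPA_RANGE directly
		 * alongside its GHCB-based page state changes.
		 */
		if (is_snp)
			entry->eax &= ~(1u << KVM_FEATURE_MIGRATION_CONTROL);
	}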