Message-ID: <20240426222457.7yn66athor2jxsrj@amd.com>
Date: Fri, 26 Apr 2024 17:24:57 -0500
From: Michael Roth <michael.roth@....com>
To: Sean Christopherson <seanjc@...gle.com>
CC: <kvm@...r.kernel.org>, <linux-coco@...ts.linux.dev>, <linux-mm@...ck.org>,
	<linux-crypto@...r.kernel.org>, <x86@...nel.org>,
	<linux-kernel@...r.kernel.org>, <tglx@...utronix.de>, <mingo@...hat.com>,
	<jroedel@...e.de>, <thomas.lendacky@....com>, <hpa@...or.com>,
	<ardb@...nel.org>, <pbonzini@...hat.com>, <vkuznets@...hat.com>,
	<jmattson@...gle.com>, <luto@...nel.org>, <dave.hansen@...ux.intel.com>,
	<slp@...hat.com>, <pgonda@...gle.com>, <peterz@...radead.org>,
	<srinivas.pandruvada@...ux.intel.com>, <rientjes@...gle.com>,
	<dovmurik@...ux.ibm.com>, <tobin@....com>, <bp@...en8.de>, <vbabka@...e.cz>,
	<kirill@...temov.name>, <ak@...ux.intel.com>, <tony.luck@...el.com>,
	<sathyanarayanan.kuppuswamy@...ux.intel.com>, <alpergun@...gle.com>,
	<jarkko@...nel.org>, <ashish.kalra@....com>, <nikunj.dadhania@....com>,
	<pankaj.gupta@....com>, <liam.merwick@...cle.com>, Brijesh Singh
	<brijesh.singh@....com>
Subject: Re: [PATCH v14 09/22] KVM: SEV: Add support to handle MSR based Page
 State Change VMGEXIT

On Fri, Apr 26, 2024 at 01:14:32PM -0700, Sean Christopherson wrote:
> On Fri, Apr 26, 2024, Michael Roth wrote:
> > On Thu, Apr 25, 2024 at 03:13:40PM -0700, Sean Christopherson wrote:
> > > On Thu, Apr 25, 2024, Michael Roth wrote:
> > > > On Wed, Apr 24, 2024 at 01:59:48PM -0700, Sean Christopherson wrote:
> > > > > On Sun, Apr 21, 2024, Michael Roth wrote:
> > > > > > +static int snp_begin_psc_msr(struct kvm_vcpu *vcpu, u64 ghcb_msr)
> > > > > > +{
> > > > > > +	u64 gpa = gfn_to_gpa(GHCB_MSR_PSC_REQ_TO_GFN(ghcb_msr));
> > > > > > +	u8 op = GHCB_MSR_PSC_REQ_TO_OP(ghcb_msr);
> > > > > > +	struct vcpu_svm *svm = to_svm(vcpu);
> > > > > > +
> > > > > > +	if (op != SNP_PAGE_STATE_PRIVATE && op != SNP_PAGE_STATE_SHARED) {
> > > > > > +		set_ghcb_msr(svm, GHCB_MSR_PSC_RESP_ERROR);
> > > > > > +		return 1; /* resume guest */
> > > > > > +	}
> > > > > > +
> > > > > > +	vcpu->run->exit_reason = KVM_EXIT_VMGEXIT;
> > > > > > +	vcpu->run->vmgexit.type = KVM_USER_VMGEXIT_PSC_MSR;
> > > > > > +	vcpu->run->vmgexit.psc_msr.gpa = gpa;
> > > > > > +	vcpu->run->vmgexit.psc_msr.op = op;
> > > > > 
> > > > > Argh, no.
> > > > > 
> > > > > This is the same crud that TDX tried to push[*].  Use KVM's existing user exits,
> > > > > and extend as *needed*.  There is no good reason page state change requests need
> > > > > *two* exit reasons.  The *only* thing KVM supports right now is private<=>shared
> > > > > conversions, and that can be handled with either KVM_HC_MAP_GPA_RANGE or
> > > > > KVM_EXIT_MEMORY_FAULT.
> > > > > 
> > > > > The non-MSR flavor can batch requests, but I'm willing to bet that the overwhelming
> > > > > majority of requests are contiguous, i.e. can be combined into a range by KVM,
> > > > > and that handling any outliers by performing multiple exits to userspace will
> > > > > provide sufficient performance.
> > > > 
> > > > That does tend to be the case. We won't have as much granularity with
> > > > the per-entry error codes, but KVM_SET_MEMORY_ATTRIBUTES would be
> > > > expected to cover the entire range anyway, and if that fails for
> > > > whatever reason then we KVM_BUG_ON(). We do have to handle cases
> > > > where the entries aren't contiguous, however, which would involve
> > > > multiple KVM_EXIT_HYPERCALLs until everything is satisfied. But that's
> > > > not a huge deal since it doesn't seem to be a common case.
> > > 
> > > If it was less complex overall, I wouldn't be opposed to KVM marshalling everything
> > > into a buffer, but I suspect it will be simpler to just have KVM loop until the
> > > PSC request is complete.
> > 
> > Agreed. But *if* we decided to introduce a buffer, where would you
> > suggest adding it? The kvm_run union fields are capped at 256 bytes, and
> > we'd need close to 4K to handle a full GHCB PSC buffer in one go. Would
> > additional storage at the end of struct kvm_run be acceptable?
> 
> Don't even need more memory, just use vcpu->arch.pio_data, which is always
> allocated and is mmap()able by userspace via KVM_PIO_PAGE_OFFSET.

Nice, that seems like a good option if needed.
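
Just to sketch it out (purely illustrative: the entry layout and helper
below are invented; only vcpu->arch.pio_data and KVM_PIO_PAGE_OFFSET are
the real pieces):

/*
 * Hypothetical: stage a batched PSC request in the pio_data page,
 * which userspace can already mmap() at KVM_PIO_PAGE_OFFSET. The
 * entry layout here is made up for illustration, not the GHCB-defined
 * format.
 */
struct psc_user_entry {
	__u64 gfn;
	__u8  op;	/* SNP_PAGE_STATE_PRIVATE or SNP_PAGE_STATE_SHARED */
	__u8  pad[7];
};

static int snp_stage_psc_entries(struct kvm_vcpu *vcpu,
				 const struct psc_user_entry *entries,
				 unsigned int nr)
{
	size_t len = nr * sizeof(*entries);

	/* A full 4K GHCB PSC buffer fits in the single pio_data page. */
	if (WARN_ON_ONCE(len > PAGE_SIZE))
		return -EINVAL;

	memcpy(vcpu->arch.pio_data, entries, len);
	return 0;
}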

> 
> > > > KVM_HC_MAP_GPA_RANGE seems like a nice option because we'd also have the
> > > > flexibility to just issue it directly within a guest rather than
> > > > relying on SNP/TDX-specific hcalls. I don't know if that approach is
> > > > practical for a real guest, but it could be useful for having reusable
> > > > guest code in KVM selftests that "just works" for all variants of
> > > > SNP/TDX/sw-protected. (though we'd still want stuff that exercises the
> > > > SNP/TDX->KVM_HC_MAP_GPA_RANGE translation).
> > > > 
> > > > I think there is some potential baggage there with the previous SEV
> > > > live migration use cases. There's some potential that existing guest kernels
> > > > will use it once it gets advertised and issue these calls alongside GHCB-based
> > > > page-state changes. It might make sense to use one of the reserved bits
> > > > to denote this flavor of KVM_HC_MAP_GPA_RANGE as being for
> > > > hardware/software-protected VMs and not interchangeable with the calls that
> > > > were used for SEV live migration.
> > > 
> > > I don't think I follow, what exactly wouldn't be interchangeable, and why?
> > 
> > For instance, if KVM_FEATURE_MIGRATION_CONTROL is advertised, then when
> > amd_enc_status_change_finish() is triggered as a result of
> > set_memory_encrypted(), we'd see
> > 
> >   1) a GHCB PSC for SNP, which will get forwarded to userspace via
> >      KVM_HC_MAP_GPA_RANGE
> >   2) KVM_HC_MAP_GPA_RANGE issued directly by the guest.
> > 
> > In that case, we'd be duplicating PSCs but it wouldn't necessarily hurt
> > anything. But ideally we'd be able to distinguish the 2 cases so we
> > could rightly treat 1) as only being expected for SNP, and 2) as only
> > being expected for SEV/SEV-ES.
> 
> Why would the guest issue both?  That's a guest bug.  Or if suppressing the second
> hypercall is an issue, simply don't enumerate MIGRATION_CONTROL for SNP guests.

At the time of its inception, KVM_HC_MAP_GPA_RANGE was simply
KVM_HC_PAGE_ENC_STATUS and got a more generic name over the course of
development. But its purpose never changed: to inform the hypervisor of
the current encryption status of a GPA range so VMMs could build up a
list of shared guest regions that don't need to go through firmware for
migration. And it was, and still is, asynchronous to a degree, since the
migration control MSR signals when that list of shared pages is usable.
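
For reference, the guest side of that original use case boils down to
roughly the following (modulo details, this mirrors what
kvm_sev_hc_page_enc_status() in arch/x86/kernel/kvm.c does today):

static void page_enc_status_hc(unsigned long pfn, int npages, bool enc)
{
	/* Tell the host the C-bit (encryption) status of this GPA range. */
	kvm_sev_hypercall3(KVM_HC_MAP_GPA_RANGE, pfn << PAGE_SHIFT, npages,
			   KVM_MAP_GPA_RANGE_ENC_STAT(enc) |
			   KVM_MAP_GPA_RANGE_PAGE_SZ_4K);
}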

These are very different semantics from the proposal to use
KVM_HC_MAP_GPA_RANGE as a means to set memory attributes via
KVM_SET_MEMORY_ATTRIBUTES, and the two purposes aren't necessarily
mutually exclusive. It only really becomes a bug if we begin to
interpret the original use case as something other than its initial
intent in the case of SNP.
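
Whereas under the new proposal the VMM side would look something along
these lines (rough sketch, error handling elided; the helper name is
made up, the ioctl/struct are the existing guest_memfd attribute API):

/* On a KVM_EXIT_HYPERCALL for KVM_HC_MAP_GPA_RANGE, convert the range
 * by flipping its private attribute. Assumes 4K entries for simplicity. */
static void handle_map_gpa_range(int vm_fd, struct kvm_run *run)
{
	struct kvm_memory_attributes attr = {
		.address    = run->hypercall.args[0],
		.size       = run->hypercall.args[1] * 4096,
		.attributes = (run->hypercall.args[2] & KVM_MAP_GPA_RANGE_ENCRYPTED) ?
			      KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
	};

	ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);
	run->hypercall.ret = 0;
}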

But at the same time, it's hard to imagine this older SEV live
migration use case being useful for SNP, since userspace will
necessarily have all the information it needs to determine what
is/isn't shared without relying on an additional hypercall.

So treating the older use case as specific to non-SNP and disallowing
the use of MIGRATION_CONTROL does seem reasonable. But it's really the
CPUID bit that advertises the hypercall itself; SEV just happens to
only use it when MIGRATION_CONTROL is also advertised. So we could
disable that bit as well, but I did like the idea of being able to
handle guest-issued KVM_HC_MAP_GPA_RANGE calls even with SNP/TDX
enabled, which is less of an option if we can't advertise
KVM_HC_MAP_GPA_RANGE via CPUID. But I suppose we could do that with
KVM selftests, which is probably where that's more likely to be useful.
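
e.g. a selftest guest helper along these lines (hypothetical helper;
kvm_hypercall() is the existing selftest wrapper):

/* Guest-side helper for selftests: request a private<->shared
 * conversion via the generic hypercall instead of the SNP PSC /
 * TDX TDVMCALL paths. */
static inline uint64_t guest_map_gpa_range(uint64_t gpa, uint64_t npages,
					   bool private)
{
	uint64_t attrs = private ? KVM_MAP_GPA_RANGE_ENCRYPTED :
				   KVM_MAP_GPA_RANGE_DECRYPTED;

	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, npages,
			     attrs | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
}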

-Mike
