Date:   Thu, 21 Jan 2021 17:10:21 +0200
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     "Dr. David Alan Gilbert" <dgilbert@...hat.com>,
        Wei Huang <wei.huang2@....com>
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        pbonzini@...hat.com, vkuznets@...hat.com, seanjc@...gle.com,
        joro@...tes.org, bp@...en8.de, tglx@...utronix.de,
        mingo@...hat.com, x86@...nel.org, jmattson@...gle.com,
        wanpengli@...cent.com, bsd@...hat.com, luto@...capital.net
Subject: Re: [PATCH v2 4/4] KVM: SVM: Support #GP handling for the case of
 nested on nested

On Thu, 2021-01-21 at 14:56 +0000, Dr. David Alan Gilbert wrote:
> * Wei Huang (wei.huang2@....com) wrote:
> > In the case of nested on nested (e.g. L0->L1->L2->L3), a #GP triggered
> > by SVM instructions can be hidden from L1. Instead, the hypervisor can
> > inject the proper #VMEXIT to inform L1 of what is happening. Thus L1
> > can avoid invoking the #GP workaround. For this reason, we turn on the
> > guest VM's X86_FEATURE_SVME_ADDR_CHK bit for a KVM running inside the
> > VM to receive the notification and change its behavior.
> 
> Doesn't this mean a VM migrated between levels (hmm, L2 to L1???) would
> see different behaviour?
> (I've never tried such a migration, but I thought in principle it should
> work.)

This is not an issue. The VM will always see X86_FEATURE_SVME_ADDR_CHK set,
regardless of whether the host has it or KVM emulates it.
This is no different from what KVM does for the guest's x2apic:
KVM always emulates it as well, regardless of host support.

The hypervisor, on the other hand, may or may not see that bit set, but it
is prepared to handle both cases, so it will support migrating VMs between
hosts that do and don't have that bit.

I hope that I understand this correctly.
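
For reference, here is a minimal userspace sketch (not from the patch;
purely illustrative) that probes that bit, i.e. CPUID leaf 0x8000000A,
EDX bit 28, which is the bit the quoted svm_vcpu_after_set_cpuid() hunk
sets whenever SVM is exposed to the guest:

/* Illustrative only: probe SVME_ADDR_CHK from userspace using GCC's
 * <cpuid.h> helpers. In a guest whose KVM carries this patch, the bit
 * should always read as set. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* __get_cpuid_count() returns 0 if the leaf is out of range. */
	if (!__get_cpuid_count(0x8000000A, 0, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x8000000A not available (no SVM)");
		return 1;
	}

	printf("SVME_ADDR_CHK: %s\n",
	       (edx & (1u << 28)) ? "present" : "absent");
	return 0;
}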

Best regards,
	Maxim Levitsky


> 
> Dave
> 
> 
> > Co-developed-by: Bandan Das <bsd@...hat.com>
> > Signed-off-by: Bandan Das <bsd@...hat.com>
> > Signed-off-by: Wei Huang <wei.huang2@....com>
> > ---
> >  arch/x86/kvm/svm/svm.c | 19 ++++++++++++++++++-
> >  1 file changed, 18 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index 2a12870ac71a..89512c0e7663 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -2196,6 +2196,11 @@ static int svm_instr_opcode(struct kvm_vcpu *vcpu)
> >  
> >  static int emulate_svm_instr(struct kvm_vcpu *vcpu, int opcode)
> >  {
> > +	const int guest_mode_exit_codes[] = {
> > +		[SVM_INSTR_VMRUN] = SVM_EXIT_VMRUN,
> > +		[SVM_INSTR_VMLOAD] = SVM_EXIT_VMLOAD,
> > +		[SVM_INSTR_VMSAVE] = SVM_EXIT_VMSAVE,
> > +	};
> >  	int (*const svm_instr_handlers[])(struct vcpu_svm *svm) = {
> >  		[SVM_INSTR_VMRUN] = vmrun_interception,
> >  		[SVM_INSTR_VMLOAD] = vmload_interception,
> > @@ -2203,7 +2208,14 @@ static int emulate_svm_instr(struct kvm_vcpu *vcpu, int opcode)
> >  	};
> >  	struct vcpu_svm *svm = to_svm(vcpu);
> >  
> > -	return svm_instr_handlers[opcode](svm);
> > +	if (is_guest_mode(vcpu)) {
> > +		svm->vmcb->control.exit_code = guest_mode_exit_codes[opcode];
> > +		svm->vmcb->control.exit_info_1 = 0;
> > +		svm->vmcb->control.exit_info_2 = 0;
> > +
> > +		return nested_svm_vmexit(svm);
> > +	} else
> > +		return svm_instr_handlers[opcode](svm);
> >  }
> >  
> >  /*
> > @@ -4034,6 +4046,11 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> >  	/* Check again if INVPCID interception if required */
> >  	svm_check_invpcid(svm);
> >  
> > +	if (nested && guest_cpuid_has(vcpu, X86_FEATURE_SVM)) {
> > +		best = kvm_find_cpuid_entry(vcpu, 0x8000000A, 0);
> > +		best->edx |= (1 << 28);
> > +	}
> > +
> >  	/* For sev guests, the memory encryption bit is not reserved in CR3.  */
> >  	if (sev_guest(vcpu->kvm)) {
> >  		best = kvm_find_cpuid_entry(vcpu, 0x8000001F, 0);
> > -- 
> > 2.27.0
> > 
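
P.S. To make the control flow of the emulate_svm_instr() change easier to
follow outside the KVM tree, here is a small standalone mock (the types,
handler names and printouts are made up for illustration; only the decision
logic mirrors the patch): when the vCPU is in guest mode, the intercepted
SVM instruction is reflected to L1 as the corresponding #VMEXIT instead of
being emulated in place.

/* Standalone mock, not KVM code: "reflect to L1 as a #VMEXIT when in
 * guest mode, otherwise emulate the SVM instruction locally". */
#include <stdbool.h>
#include <stdio.h>

enum svm_instr { SVM_INSTR_VMRUN, SVM_INSTR_VMLOAD, SVM_INSTR_VMSAVE };

/* Exit codes the synthetic #VMEXIT would carry (SVM_EXIT_VMRUN etc.). */
static const unsigned int guest_mode_exit_codes[] = {
	[SVM_INSTR_VMRUN]  = 0x080,
	[SVM_INSTR_VMLOAD] = 0x07a,
	[SVM_INSTR_VMSAVE] = 0x07b,
};

static int emulate_locally(enum svm_instr op)
{
	printf("emulating SVM instruction %d in L0\n", (int)op);
	return 1;
}

static int reflect_to_l1(enum svm_instr op)
{
	printf("injecting #VMEXIT 0x%03x into L1\n", guest_mode_exit_codes[op]);
	return 1;
}

static int handle_svm_instr(enum svm_instr op, bool guest_mode)
{
	/* Same shape as the patched emulate_svm_instr(): a nested guest's
	 * intercept becomes a synthetic #VMEXIT for L1. */
	return guest_mode ? reflect_to_l1(op) : emulate_locally(op);
}

int main(void)
{
	handle_svm_instr(SVM_INSTR_VMRUN, false);	/* L1 ran VMRUN */
	handle_svm_instr(SVM_INSTR_VMRUN, true);	/* L2 ran VMRUN */
	return 0;
}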

