Date:   Thu,  9 Jul 2020 05:55:25 -0400
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: [PATCH] KVM: nSVM: vmentry ignores EFER.LMA and possibly RFLAGS.VM

Unlike Intel, AMD does not require that EFER.LME, CR0.PG and
EFER.LMA be consistent; for SMM state restore, the AMD documentation
says that "The EFER.LMA register bit is set to the value obtained by
logically ANDing the SMRAM values of EFER.LME, CR0.PG, and CR4.PAE".
It turns out that the same is true for vmentry: the EFER.LMA value in
the VMCB is completely ignored, and so is RFLAGS.VM if the processor is
in long mode or real mode.
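
As a standalone sketch (not part of the patch; the bit definitions and
helper names below are spelled out by hand for this example rather than
taken from kernel headers), the effective state the processor ends up
using can be written as:

#include <stdint.h>

#define EFER_LME   (1ULL << 8)
#define EFER_LMA   (1ULL << 10)
#define CR0_PE     (1ULL << 0)
#define CR0_PG     (1ULL << 31)
#define CR4_PAE    (1ULL << 5)
#define RFLAGS_VM  (1ULL << 17)

/* EFER.LMA from the VMCB save area is ignored; it is recomputed as the
 * logical AND of EFER.LME, CR0.PG and CR4.PAE from the same save area.
 */
static uint64_t effective_efer(uint64_t efer, uint64_t cr0, uint64_t cr4)
{
	efer &= ~EFER_LMA;
	if ((cr0 & CR0_PG) && (cr4 & CR4_PAE) && (efer & EFER_LME))
		efer |= EFER_LMA;
	return efer;
}

/* RFLAGS.VM is dropped when the guest is in real mode (!CR0.PE) or in
 * long mode (effective EFER.LMA set).
 */
static uint64_t effective_rflags(uint64_t rflags, uint64_t efer, uint64_t cr0)
{
	if (!(cr0 & CR0_PE) || (efer & EFER_LMA))
		rflags &= ~RFLAGS_VM;
	return rflags;
}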

Implement these quirks; the EFER.LMA part is needed because svm_set_efer
looks at the LMA bit in order to support EFER.NX=0, while the RFLAGS.VM
part is just because we can.

Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
---
 arch/x86/kvm/svm/nested.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 402ea5b412f0..1c82a1789e0e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -337,6 +337,24 @@ static void nested_vmcb_save_pending_event(struct vcpu_svm *svm,
 
 static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_vmcb)
 {
+	u64 efer = nested_vmcb->save.efer;
+
+	/* The processor ignores EFER.LMA, but svm_set_efer needs it.  */
+	efer &= ~EFER_LMA;
+	if ((nested_vmcb->save.cr0 & X86_CR0_PG)
+	    && (nested_vmcb->save.cr4 & X86_CR4_PAE)
+	    && (efer & EFER_LME))
+		efer |= EFER_LMA;
+
+	/*
+	 * Likewise RFLAGS.VM is cleared if inconsistent with other processor
+	 * state.  This is sort-of documented in "10.4 Leaving SMM" but applies
+	 * to SVM as well.
+	 */
+	if (!(nested_vmcb->save.cr0 & X86_CR0_PE)
+	    || (efer & EFER_LMA))
+		nested_vmcb->save.rflags &= ~X86_EFLAGS_VM;
+
 	/* Load the nested guest state */
 	svm->vmcb->save.es = nested_vmcb->save.es;
 	svm->vmcb->save.cs = nested_vmcb->save.cs;
@@ -345,7 +363,7 @@ static void nested_prepare_vmcb_save(struct vcpu_svm *svm, struct vmcb *nested_v
 	svm->vmcb->save.gdtr = nested_vmcb->save.gdtr;
 	svm->vmcb->save.idtr = nested_vmcb->save.idtr;
 	kvm_set_rflags(&svm->vcpu, nested_vmcb->save.rflags);
-	svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
+	svm_set_efer(&svm->vcpu, efer);
 	svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
 	svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
 	(void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
-- 
2.26.2
