Message-Id: <20220207155447.840194-5-mlevitsk@redhat.com>
Date: Mon, 7 Feb 2022 17:54:21 +0200
From: Maxim Levitsky <mlevitsk@...hat.com>
To: kvm@...r.kernel.org
Cc: Tony Luck <tony.luck@...el.com>,
"Chang S. Bae" <chang.seok.bae@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Wanpeng Li <wanpengli@...cent.com>,
Ingo Molnar <mingo@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>,
linux-kernel@...r.kernel.org,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
"H. Peter Anvin" <hpa@...or.com>,
intel-gvt-dev@...ts.freedesktop.org,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Joerg Roedel <joro@...tes.org>,
Sean Christopherson <seanjc@...gle.com>,
David Airlie <airlied@...ux.ie>,
Zhi Wang <zhi.a.wang@...el.com>,
Brijesh Singh <brijesh.singh@....com>,
Jim Mattson <jmattson@...gle.com>, x86@...nel.org,
Daniel Vetter <daniel@...ll.ch>,
Borislav Petkov <bp@...en8.de>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Maxim Levitsky <mlevitsk@...hat.com>, stable@...r.kernel.org
Subject: [PATCH RESEND 04/30] KVM: x86: nSVM/nVMX: set nested_run_pending on VM entry which is a result of RSM

While RSM-induced VM entries are not full VM entries, they still
need to be followed by an actual hardware VM entry to be completed,
unlike VM entries that are restored by setting the nested state.
Therefore, mark them with nested_run_pending.

This patch fixes the boot of Windows VMs with Hyper-V and SMM
enabled, running nested on KVM, which fails due to this issue
combined with the lack of dirty-bit setting.

Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
Cc: stable@...r.kernel.org
---
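Note (not part of the commit message): the invariant the patch restores can
be sketched with a minimal, self-contained toy model in plain userspace C.
All names below are hypothetical and only mirror the concepts involved:
until the first real VM entry after RSM completes, nested_run_pending must
block actions, such as event injection, that assume a completed entry.

/*
 * Toy userspace model (hypothetical names, not kernel code): sketches
 * the nested_run_pending invariant.  After an RSM that returns to L2,
 * event injection must stay blocked until the first real VM entry
 * completes the transition.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_nested {
	bool guest_mode;		/* vCPU is running L2 */
	bool nested_run_pending;	/* first VM entry not completed yet */
};

/* Injection must wait for the pending entry to complete. */
static bool can_inject_event(const struct toy_nested *n)
{
	return !n->nested_run_pending;
}

/* Models leaving SMM back into L2: the fix sets the flag here. */
static void leave_smm_to_l2(struct toy_nested *n)
{
	n->guest_mode = true;
	n->nested_run_pending = true;
}

/* Models a successful hardware VM entry, which clears the flag. */
static void complete_vm_entry(struct toy_nested *n)
{
	n->nested_run_pending = false;
}

int main(void)
{
	struct toy_nested n = { false, false };

	leave_smm_to_l2(&n);
	printf("before first entry: injection %s\n",
	       can_inject_event(&n) ? "allowed (the bug)" : "blocked (this patch)");
	complete_vm_entry(&n);
	printf("after first entry:  injection %s\n",
	       can_inject_event(&n) ? "allowed" : "blocked");
	return 0;
}
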
arch/x86/kvm/svm/svm.c | 5 +++++
arch/x86/kvm/vmx/vmx.c | 1 +
2 files changed, 6 insertions(+)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3f1d11e652123..71bfa52121622 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4274,6 +4274,11 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
 	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
 
+	if (ret)
+		goto unmap_save;
+
+	svm->nested.nested_run_pending = 1;
+
 unmap_save:
 	kvm_vcpu_unmap(vcpu, &map_save, true);
 unmap_map:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8ac5a6fa77203..fc9c4eca90a78 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7659,6 +7659,7 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 		if (ret)
 			return ret;
 
+		vmx->nested.nested_run_pending = 1;
 		vmx->nested.smm.guest_mode = false;
 	}
 	return 0;
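
Side note (not from the patch): once nested_run_pending is set, the flag is
also reported to userspace by KVM_GET_NESTED_STATE as
KVM_STATE_NESTED_RUN_PENDING, which is how "setting the nested state"
(contrasted in the commit message) round-trips it across migration.  A
hedged sketch, assuming a vCPU fd obtained via the usual
KVM_CREATE_VM/KVM_CREATE_VCPU sequence and a kernel with
KVM_CAP_NESTED_STATE; error handling is minimal:

/*
 * Query whether KVM reports a pending nested run for a vCPU.
 * Returns 1 if pending, 0 if not, -1 on error.
 */
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int nested_run_is_pending(int vcpu_fd)
{
	struct {
		struct kvm_nested_state hdr;
		char payload[8192];	/* room for the vmcs12/vmcb12 data */
	} state;

	memset(&state, 0, sizeof(state));
	state.hdr.size = sizeof(state);

	if (ioctl(vcpu_fd, KVM_GET_NESTED_STATE, &state) < 0)
		return -1;

	return !!(state.hdr.flags & KVM_STATE_NESTED_RUN_PENDING);
}
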
--
2.26.3