Message-ID: <20210118084720.1585-1-jianjay.zhou@huawei.com>
Date: Mon, 18 Jan 2021 16:47:20 +0800
From: Jay Zhou <jianjay.zhou@...wei.com>
To: <kvm@...r.kernel.org>, <pbonzini@...hat.com>
CC: <linux-kernel@...r.kernel.org>, <seanjc@...gle.com>,
<vkuznets@...hat.com>, <weidong.huang@...wei.com>,
<wangxinxin.wang@...wei.com>, <jianjay.zhou@...wei.com>,
<zhuangshengen@...wei.com>
Subject: [PATCH] KVM: x86: get smi pending status correctly
The injection process of smi has two steps:

Step1:
    Qemu: cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
          kvm_vcpu_ioctl(cpu, KVM_SMI)
    KVM:  call kvm_vcpu_ioctl_smi() and
          kvm_make_request(KVM_REQ_SMI, vcpu);

Step2:
    Qemu: kvm_vcpu_ioctl(cpu, KVM_RUN, 0)
    KVM:  call process_smi() if
          kvm_check_request(KVM_REQ_SMI, vcpu) is true,
          mark vcpu->arch.smi_pending = true;
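
For reference, a minimal userspace sketch of the two steps (illustrative
only, not part of this patch; it assumes a vcpu fd created via the usual
KVM_CREATE_VM/KVM_CREATE_VCPU sequence and omits error handling):

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static void inject_smi_and_run(int vcpu_fd)
  {
          /* Step1: queue the SMI; KVM raises KVM_REQ_SMI for this vcpu */
          ioctl(vcpu_fd, KVM_SMI, 0);

          /* Step2: enter the guest; process_smi() runs on the request
           * and only now sets vcpu->arch.smi_pending = true
           */
          ioctl(vcpu_fd, KVM_RUN, 0);
  }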
vcpu->arch.smi_pending is only set to true in Step2. Unfortunately, if the
vcpu is paused between Step1 and Step2, kvm_run->immediate_exit will be set
and the vcpu has to exit to Qemu immediately during Step2, before
vcpu->arch.smi_pending is marked true.

During VM migration, Qemu gets the smi pending status from KVM with the
KVM_GET_VCPU_EVENTS ioctl during the downtime, so the pending smi status
is lost.
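
The lost state is visible to userspace through the events ioctl; a sketch
of how Qemu-like code would read it during the downtime (assuming a vcpu
fd, error handling omitted):

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int smi_is_pending(int vcpu_fd)
  {
          struct kvm_vcpu_events events = {};

          /* Without this patch, an SMI that is still latent in
           * KVM_REQ_SMI is not reported here, because
           * vcpu->arch.smi_pending has not been set yet.
           */
          ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events);
          return events.smi.pending;
  }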
Signed-off-by: Jay Zhou <jianjay.zhou@...wei.com>
Signed-off-by: Shengen Zhuang <zhuangshengen@...wei.com>
---
arch/x86/kvm/x86.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a8969a..9025c76 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -105,6 +105,7 @@
 static void update_cr8_intercept(struct kvm_vcpu *vcpu);
 static void process_nmi(struct kvm_vcpu *vcpu);
+static void process_smi(struct kvm_vcpu *vcpu);
 static void enter_smm(struct kvm_vcpu *vcpu);
 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 static void store_regs(struct kvm_vcpu *vcpu);
@@ -4230,6 +4231,9 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 {
 	process_nmi(vcpu);
+	if (kvm_check_request(KVM_REQ_SMI, vcpu))
+		process_smi(vcpu);
+
 	/*
 	 * In guest mode, payload delivery should be deferred,
 	 * so that the L1 hypervisor can intercept #PF before
--
1.8.3.1