Message-ID: <4b2565ca-83da-c337-ccf3-ee31a28fd605@redhat.com>
Date:   Mon, 18 Jan 2021 18:57:51 +0100
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Jay Zhou <jianjay.zhou@...wei.com>, kvm@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, seanjc@...gle.com,
        vkuznets@...hat.com, weidong.huang@...wei.com,
        wangxinxin.wang@...wei.com, zhuangshengen@...wei.com
Subject: Re: [PATCH] KVM: x86: get smi pending status correctly

On 18/01/21 09:47, Jay Zhou wrote:
> The SMI injection process has two steps:
> 
>      Qemu                        KVM
> Step1:
>      cpu->interrupt_request &= \
>          ~CPU_INTERRUPT_SMI;
>      kvm_vcpu_ioctl(cpu, KVM_SMI)
> 
>                                  call kvm_vcpu_ioctl_smi() and
>                                  kvm_make_request(KVM_REQ_SMI, vcpu);
> 
> Step2:
>      kvm_vcpu_ioctl(cpu, KVM_RUN, 0)
> 
>                                  call process_smi() if
>                                  kvm_check_request(KVM_REQ_SMI, vcpu) is
>                                  true, mark vcpu->arch.smi_pending = true;
> 
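> For context (not part of this patch), the two KVM-side calls named above
> are roughly the following, simplified from arch/x86/kvm/x86.c around this
> kernel version; note that only step 2 actually sets smi_pending:
> 
>     /* Step 1: KVM_SMI only records a request bit on the vcpu. */
>     static int kvm_vcpu_ioctl_smi(struct kvm_vcpu *vcpu)
>     {
>         kvm_make_request(KVM_REQ_SMI, vcpu);
>         return 0;
>     }
> 
>     /*
>      * Step 2: on the next KVM_RUN, the request is turned into a
>      * pending SMI that KVM_GET_VCPU_EVENTS can report.
>      */
>     static void process_smi(struct kvm_vcpu *vcpu)
>     {
>         vcpu->arch.smi_pending = true;
>         kvm_make_request(KVM_REQ_EVENT, vcpu);
>         kvm_vcpu_kick(vcpu);
>     }
> 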
> vcpu->arch.smi_pending is only set to true in step 2. Unfortunately, if the
> vcpu is paused between step 1 and step 2, kvm_run->immediate_exit is set and
> the vcpu has to exit to Qemu immediately during step 2, before
> vcpu->arch.smi_pending has been marked true.
> During VM migration, Qemu reads the SMI pending status from KVM with the
> KVM_GET_VCPU_EVENTS ioctl during the downtime, so the pending SMI is lost.
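> 
> For illustration, a minimal userspace sketch of the losing sequence, using
> raw KVM ioctls (vcpu_fd is assumed to come from KVM_CREATE_VCPU; error
> handling omitted):
> 
>     #include <linux/kvm.h>
>     #include <sys/ioctl.h>
> 
>     /* Returns the SMI pending flag as seen by userspace. */
>     static int smi_pending_seen(int vcpu_fd)
>     {
>         struct kvm_vcpu_events events = { 0 };
> 
>         /* Step 1: queue the SMI; KVM only sets KVM_REQ_SMI here. */
>         ioctl(vcpu_fd, KVM_SMI);
> 
>         /*
>          * The vcpu is paused, so the next KVM_RUN exits back to
>          * userspace immediately (immediate_exit) before process_smi()
>          * gets to run.
>          */
> 
>         /* Migration downtime: fetch the pending-event state. */
>         ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events);
> 
>         /* Without this patch, events.smi.pending reads back 0. */
>         return events.smi.pending;
>     }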
> 
> Signed-off-by: Jay Zhou <jianjay.zhou@...wei.com>
> Signed-off-by: Shengen Zhuang <zhuangshengen@...wei.com>
> ---
>   arch/x86/kvm/x86.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9a8969a..9025c76 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -105,6 +105,7 @@
>   
>   static void update_cr8_intercept(struct kvm_vcpu *vcpu);
>   static void process_nmi(struct kvm_vcpu *vcpu);
> +static void process_smi(struct kvm_vcpu *vcpu);
>   static void enter_smm(struct kvm_vcpu *vcpu);
>   static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
>   static void store_regs(struct kvm_vcpu *vcpu);
> @@ -4230,6 +4231,9 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
>   {
>   	process_nmi(vcpu);
>   
> +	if (kvm_check_request(KVM_REQ_SMI, vcpu))
> +		process_smi(vcpu);
> +
>   	/*
>   	 * In guest mode, payload delivery should be deferred,
>   	 * so that the L1 hypervisor can intercept #PF before
> 

Queued, thanks.

Paolo
