Message-ID: <0467288d-1e3d-0ea0-dd8a-c034dcac6ee1@redhat.com>
Date: Tue, 22 Aug 2017 18:09:58 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Radim Krčmář <rkrcmar@...hat.com>,
Wanpeng Li <wanpeng.li@...mail.com>
Subject: Re: [PATCH v3] KVM: nVMX: Fix trying to cancel vmlaunch/vmresume
On 22/08/2017 01:08, Wanpeng Li wrote:
> From: Wanpeng Li <wanpeng.li@...mail.com>
>
> ------------[ cut here ]------------
> WARNING: CPU: 7 PID: 3861 at /home/kernel/ssd/kvm/arch/x86/kvm//vmx.c:11299 nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
> CPU: 7 PID: 3861 Comm: qemu-system-x86 Tainted: G W OE 4.13.0-rc4+ #11
> RIP: 0010:nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
> Call Trace:
> ? kvm_multiple_exception+0x149/0x170 [kvm]
> ? handle_emulation_failure+0x79/0x230 [kvm]
> ? load_vmcs12_host_state+0xa80/0xa80 [kvm_intel]
> ? check_chain_key+0x137/0x1e0
> ? reexecute_instruction.part.168+0x130/0x130 [kvm]
> nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
> ? nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
> vmx_queue_exception+0x197/0x300 [kvm_intel]
> kvm_arch_vcpu_ioctl_run+0x1b0c/0x2c90 [kvm]
> ? kvm_arch_vcpu_runnable+0x220/0x220 [kvm]
> ? preempt_count_sub+0x18/0xc0
> ? restart_apic_timer+0x17d/0x300 [kvm]
> ? kvm_lapic_restart_hv_timer+0x37/0x50 [kvm]
> ? kvm_arch_vcpu_load+0x1d8/0x350 [kvm]
> kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
> ? kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
> ? kvm_dev_ioctl+0xbe0/0xbe0 [kvm]
>
> The flag "nested_run_pending", which can override the decision of which should run
> next, L1 or L2. nested_run_pending=1 means that we *must* run L2 next, not L1. This
> is necessary in particular when L1 did a VMLAUNCH of L2 and therefore expects L2 to
> be run (and perhaps be injected with an event it specified, etc.). Nested_run_pending
> is especially intended to avoid switching to L1 in the injection decision-point.
>
> I catch this in the queue exception path, this patch fixes it by running L2 next
> instead of L1 in the queue exception path and injecting the pending exception to
> L2 directly.
>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Radim Krčmář <rkrcmar@...hat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
> ---
> v2 -> v3:
> * move the nested_run_pending check to the else branch
> v1 -> v2:
> * request an immediate VM exit from L2 and keep the exception for
> L1 pending for a subsequent nested VM exit
>
> arch/x86/kvm/vmx.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e398946..685f51e 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2488,6 +2488,10 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu)
>  		}
>  	} else {
>  		unsigned long exit_qual = 0;
> +
> +		if (to_vmx(vcpu)->nested.nested_run_pending)
> +			return 0;
> +
>  		if (nr == DB_VECTOR)
>  			exit_qual = vcpu->arch.dr6;
>  
>
Hmm, why would this not apply to page faults? It doesn't make much sense...
Paolo
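
[Editor's sketch, not part of the thread: the following is a condensed
paraphrase of nested_vmx_check_exception() from arch/x86/kvm/vmx.c around
v4.13 with the hunk above applied. The #PF branch is abbreviated and the
exact bodies should be checked against the real source; it is only meant to
show that the new nested_run_pending bail-out sits in the else branch and
does not cover the page-fault branch, which is what the question above
refers to.]

/*
 * Condensed sketch of nested_vmx_check_exception() (~v4.13) with the
 * patch applied; paraphrased for orientation, not verbatim kernel code.
 */
static int nested_vmx_check_exception(struct kvm_vcpu *vcpu)
{
	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
	unsigned int nr = vcpu->arch.exception.nr;

	if (nr == PF_VECTOR) {
		/*
		 * #PF path (abbreviated): reflect the fault to L1 as a
		 * nested vmexit when L1 wants to intercept it.  There is
		 * no nested_run_pending check on this path.
		 */
		if (nested_vmx_is_page_fault_vmexit(vmcs12,
						    vcpu->arch.exception.error_code)) {
			nested_vmx_inject_exception_vmexit(vcpu, vcpu->arch.cr2);
			return 1;
		}
	} else {
		unsigned long exit_qual = 0;

		/*
		 * Added by this patch: while a VMLAUNCH/VMRESUME of L2 is
		 * pending we must not switch to L1, so keep the exception
		 * queued for L2 instead of triggering a nested vmexit.
		 */
		if (to_vmx(vcpu)->nested.nested_run_pending)
			return 0;

		if (nr == DB_VECTOR)
			exit_qual = vcpu->arch.dr6;

		if (vmcs12->exception_bitmap & (1u << nr)) {
			nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
			return 1;
		}
	}

	return 0;
}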