Message-ID: <CANRm+CyeKaJVSpin3ijn2nT_hRsLwaABhOho4G3eynfde2ojwQ@mail.gmail.com>
Date: Tue, 22 Aug 2017 10:33:41 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Radim Krčmář <rkrcmar@...hat.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kvm <kvm@...r.kernel.org>, Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <wanpeng.li@...mail.com>
Subject: Re: [PATCH v2] KVM: nVMX: Fix trying to cancel vmlauch/vmresume
2017-08-22 7:09 GMT+08:00 Wanpeng Li <kernellwp@...il.com>:
> 2017-08-22 6:55 GMT+08:00 Wanpeng Li <kernellwp@...il.com>:
>> 2017-08-22 0:20 GMT+08:00 Radim Krčmář <rkrcmar@...hat.com>:
>>> 2017-08-18 07:11-0700, Wanpeng Li:
>>>> From: Wanpeng Li <wanpeng.li@...mail.com>
>>>>
>>>> ------------[ cut here ]------------
>>>> WARNING: CPU: 7 PID: 3861 at /home/kernel/ssd/kvm/arch/x86/kvm//vmx.c:11299 nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
>>>> CPU: 7 PID: 3861 Comm: qemu-system-x86 Tainted: G W OE 4.13.0-rc4+ #11
>>>> RIP: 0010:nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
>>>> Call Trace:
>>>> ? kvm_multiple_exception+0x149/0x170 [kvm]
>>>> ? handle_emulation_failure+0x79/0x230 [kvm]
>>>> ? load_vmcs12_host_state+0xa80/0xa80 [kvm_intel]
>>>> ? check_chain_key+0x137/0x1e0
>>>> ? reexecute_instruction.part.168+0x130/0x130 [kvm]
>>>> nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
>>>> ? nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
>>>> vmx_queue_exception+0x197/0x300 [kvm_intel]
>>>> kvm_arch_vcpu_ioctl_run+0x1b0c/0x2c90 [kvm]
>>>> ? kvm_arch_vcpu_runnable+0x220/0x220 [kvm]
>>>> ? preempt_count_sub+0x18/0xc0
>>>> ? restart_apic_timer+0x17d/0x300 [kvm]
>>>> ? kvm_lapic_restart_hv_timer+0x37/0x50 [kvm]
>>>> ? kvm_arch_vcpu_load+0x1d8/0x350 [kvm]
>>>> kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
>>>> ? kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
>>>> ? kvm_dev_ioctl+0xbe0/0xbe0 [kvm]
>>>>
>>>> The flag "nested_run_pending" can override the decision of which should run
>>>> next, L1 or L2. nested_run_pending=1 means that we *must* run L2 next, not L1. This
>>>> is necessary in particular when L1 did a VMLAUNCH of L2 and therefore expects L2 to
>>>> be run (and perhaps be injected with an event it specified, etc.). nested_run_pending
>>>> is especially intended to avoid switching to L1 at the injection decision point.
>>>>
>>>> I caught this in the queue-exception path; this patch fixes it by requesting
>>>> an immediate VM exit from L2 and keeping the exception for L1 pending for a
>>>> subsequent nested VM exit.
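
A rough, self-contained C model of that idea (not the actual KVM code; the
struct and every identifier below are invented for the sketch): when the
pending exception belongs to L1 but nested_run_pending says L2 must run next,
keep the exception pending and request an immediate exit instead of switching
to L1 at this point.

/*
 * Toy model of the injection decision point; not kernel code.
 * All identifiers are invented for this sketch.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool nested_run_pending;	/* L1 just did VMLAUNCH/VMRESUME of L2 */
	bool exception_pending;		/* an exception is waiting to be delivered */
	bool exception_owned_by_l1;	/* L1 intercepts this exception */
	bool immediate_exit;		/* request a VM exit right after entry */
};

/* Returns true if the exception was delivered now, false if deferred. */
static bool inject_decision_point(struct vcpu_model *v)
{
	if (!v->exception_pending)
		return true;

	if (v->exception_owned_by_l1 && v->nested_run_pending) {
		/*
		 * We *must* run L2 next, so do not emulate a nested VM exit
		 * to L1 here.  Keep the exception pending and request an
		 * immediate exit; it is delivered on the subsequent nested
		 * VM exit instead.
		 */
		v->immediate_exit = true;
		return false;
	}

	if (v->exception_owned_by_l1)
		printf("emulate a nested VM exit to L1\n");
	else
		printf("L0 injects the exception into L2 directly\n");
	v->exception_pending = false;
	return true;
}

int main(void)
{
	struct vcpu_model v = {
		.nested_run_pending = true,
		.exception_pending = true,
		.exception_owned_by_l1 = true,
	};

	if (!inject_decision_point(&v))
		printf("deferred: exception stays pending, immediate exit requested\n");
	return 0;
}
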
>>>>
>>>> Cc: Paolo Bonzini <pbonzini@...hat.com>
>>>> Cc: Radim Krčmář <rkrcmar@...hat.com>
>>>> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
>>>> ---
>>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>>> @@ -6356,8 +6356,8 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
>>>> kvm_update_dr7(vcpu);
>>>> }
>>>
>>> Hm, we shouldn't execute the code above if the exception won't be injected.
>>>
>>>>
>>>> - kvm_x86_ops->queue_exception(vcpu);
>>>> - return 0;
>>>
>>> vmx_complete_interrupts() assumes that the exception is always injected,
>>> so it would be dropped by kvm_clear_exception_queue().
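
To make that failure mode concrete, a minimal self-contained C sketch (purely
illustrative, invented names; it only models the software queue, not the real
VMCS or interrupt state):

/*
 * Toy model of the lifecycle described above.  The point: if the
 * injection step is skipped but the completion step still clears the
 * queue, a pending exception is silently dropped.
 */
#include <stdbool.h>
#include <stdio.h>

struct exc_queue {
	bool pending;		/* an exception is waiting to be delivered */
};

static bool queue_exception(struct exc_queue *q)
{
	/* deliver the pending exception to the guest */
	return q->pending;
}

static void complete_interrupts(struct exc_queue *q)
{
	/* assumes anything pending was injected, so it clears the queue */
	q->pending = false;
}

int main(void)
{
	struct exc_queue q = { .pending = true };
	bool skip_injection = true;	/* models returning 0 without queueing */
	bool delivered = false;

	if (!skip_injection)
		delivered = queue_exception(&q);

	complete_interrupts(&q);	/* runs on every exit, either way */

	printf("exception %s\n", delivered ? "delivered" : "silently dropped");
	return 0;
}
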
>>>
>>> I'm starting to wonder whether getting rid of nested_run_pending
>>> wouldn't be nicer.
>>
>> Yeah, rethinking your concern about nested_run_pending w/ a return value
>> of 0: the path in the calltrace is actually the else branch in
>> nested_vmx_check_exception(), where an exception will be injected into L2
>> by L1 if L1 owns this exception, otherwise it is injected by L0 directly.
>> For the case of nested_run_pending w/ a return value of 0, we can treat it
>> as L0 injecting the exception into L2 directly, so no exception is
>> injected into the wrong guest.
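
For reference, a tiny self-contained C sketch of that ownership decision
(just an illustration with invented names; the real check is against the
vmcs12 exception bitmap and related intercept state):

/*
 * Toy model of the "does L1 own this exception" check: test the vector
 * against an exception-bitmap-style mask.  #PF is vector 14.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PF_VECTOR 14

static bool l1_wants_exception(uint32_t l1_exception_bitmap, int vector)
{
	return (l1_exception_bitmap >> vector) & 1;
}

int main(void)
{
	uint32_t bitmap = 1u << PF_VECTOR;	/* L1 intercepts #PF */

	if (l1_wants_exception(bitmap, PF_VECTOR))
		printf("reflect the exception to L1 as a nested VM exit\n");
	else
		printf("L0 injects the exception into L2 directly\n");
	return 0;
}
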
>
> I just sent out v3 to move the nested_run_pending stuff to the else branch.
I can still encounter the splat w/ v3 due to #PF, so v1 is still better.
Regards,
Wanpeng Li