Message-ID: <0ad7daf1-08f9-8d0a-4642-09014b4ae6d1@redhat.com>
Date: Mon, 27 Feb 2017 12:52:43 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kvm <kvm@...r.kernel.org>,
Radim Krčmář <rkrcmar@...hat.com>,
Wanpeng Li <wanpeng.li@...mail.com>,
Jan Kiszka <jan.kiszka@...mens.com>
Subject: Re: [PATCH] KVM: nVMX: Fix pending events injection
On 27/02/2017 12:35, Wanpeng Li wrote:
> 2017-02-27 18:19 GMT+08:00 Paolo Bonzini <pbonzini@...hat.com>:
>>
>>
>> On 26/02/2017 09:46, Wanpeng Li wrote:
>>> From: Wanpeng Li <wanpeng.li@...mail.com>
>>>
>>> L2 fails to boot on a non-APICv box due to 'commit 0ad3bed6c5ec
>>> ("kvm: nVMX: move nested events check to kvm_vcpu_running")'
>>>
>>> KVM internal error. Suberror: 3
>>> extra data[0]: 800000ef
>>> extra data[1]: 1
>>> RAX=0000000000000000 RBX=ffffffff81f36140 RCX=0000000000000000 RDX=0000000000000000
>>> RSI=0000000000000000 RDI=0000000000000000 RBP=ffff88007c92fe90 RSP=ffff88007c92fe90
>>> R8 =ffff88007fccdca0 R9 =0000000000000000 R10=00000000fffedb3d R11=0000000000000000
>>> R12=0000000000000003 R13=0000000000000000 R14=0000000000000000 R15=ffff88007c92c000
>>> RIP=ffffffff810645e6 RFL=00000246 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
>>> ES =0000 0000000000000000 ffffffff 00c00000
>>> CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
>>> SS =0000 0000000000000000 ffffffff 00c00000
>>> DS =0000 0000000000000000 ffffffff 00c00000
>>> FS =0000 0000000000000000 ffffffff 00c00000
>>> GS =0000 ffff88007fcc0000 ffffffff 00c00000
>>> LDT=0000 0000000000000000 ffffffff 00c00000
>>> TR =0040 ffff88007fcd4200 00002087 00008b00 DPL=0 TSS64-busy
>>> GDT= ffff88007fcc9000 0000007f
>>> IDT= ffffffffff578000 00000fff
>>> CR0=80050033 CR2=00000000ffffffff CR3=0000000001e0a000 CR4=003406e0
>>> DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
>>> DR6=00000000fffe0ff0 DR7=0000000000000400
>>> EFER=0000000000000d01
>>>
>>> We should try to reinject previously injected events, if any, before
>>> trying to inject a new pending event. If a vmexit is triggered by the L2
>>> guest and L0 is interested in it, we should first reinject any
>>> IDT-vectoring info to L2 through vmcs02; only then can we consider new
>>> IRQs/NMIs to inject, call the nested events callback to switch from L2 to
>>> L1 if needed, and inject the proper vmexit events. However, commit
>>> 0ad3bed6c5ec ("kvm: nVMX: move nested events check to kvm_vcpu_running")
>>> reverses this event-handling order on non-APICv boxes. This patch fixes it
>>> by checking nested events only when there is no KVM_REQ_EVENT pending,
>>> since APICv interrupt injection doesn't use KVM_REQ_EVENT any more.
>>>
>>> Cc: Paolo Bonzini <pbonzini@...hat.com>
>>> Cc: Radim Krčmář <rkrcmar@...hat.com>
>>> Cc: Jan Kiszka <jan.kiszka@...mens.com>
>>> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
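(For reference, the ordering described above corresponds roughly to the
following shape of inject_pending_event() in arch/x86/kvm/x86.c; this is a
simplified, untested sketch rather than the exact code:)

static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
{
        int r;

        /* 1. Re-deliver events left over from an earlier, interrupted entry. */
        if (vcpu->arch.exception.pending) {
                /* requeue the pending exception and return */
                return 0;
        }
        if (vcpu->arch.nmi_injected) {
                /* re-deliver the NMI; for L2 it goes into vmcs02 as IDT-vectoring info */
                return 0;
        }
        if (vcpu->arch.interrupt.pending) {
                /* re-deliver the external interrupt the same way */
                return 0;
        }

        /* 2. Only then give L1 a chance to take an L2->L1 vmexit. */
        if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
                r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
                if (r != 0)
                        return r;
        }

        /* 3. Finally consider injecting a brand-new NMI or external interrupt. */
        return 0;
}

The report above is about this ordering being effectively reversed once the
callback also runs from kvm_vcpu_running().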
>>
>> I need to understand this better. I would hope that something like this would be enough:
>>
>> @@ -10668,7 +10598,8 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
>>
>>          if ((kvm_cpu_has_interrupt(vcpu) || external_intr) &&
>>              nested_exit_on_intr(vcpu)) {
>> -                if (vmx->nested.nested_run_pending)
>> +                if (vmx->nested.nested_run_pending ||
>> +                    vcpu->arch.interrupt.pending)
>>                          return -EBUSY;
>>                  nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0);
>>                  return 0;
>>
>
> This is insufficient: with this mitigation the L2 guest boots, but very
> slowly, and it finally gets stuck with the same crash as above. How about
> something like below:
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index ef4ba71..d46af65 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -10642,6 +10642,11 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
>  {
>          struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> +        if (vcpu->arch.exception.pending ||
> +            vcpu->arch.nmi_injected ||
> +            vcpu->arch.interrupt.pending)
> +                return -EBUSY;
> +
>          if (nested_cpu_has_preemption_timer(get_vmcs12(vcpu)) &&
>              vmx->nested.preemption_timer_expired) {
>                  if (vmx->nested.nested_run_pending)
I think this would be okay for kvm_vcpu_running, and also for the first
call in inject_pending_event (you would never exit here, because all
three conditions are handled earlier in inject_pending_event).
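For context, after commit 0ad3bed6c5ec the callback is also reached from
kvm_vcpu_running(), roughly like this (simplified sketch, not the exact code):

static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
{
        /*
         * The callback now runs here too, i.e. it can trigger an L2->L1
         * vmexit before inject_pending_event() has re-delivered events
         * left over from the previous entry; the early -EBUSY check
         * above covers this call site as well.
         */
        if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events)
                kvm_x86_ops->check_nested_events(vcpu, false);

        return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
                !vcpu->arch.apf.halted);
}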
It would also let us avoid the vcpu->arch.interrupt.pending condition here:
        if (vmx->nested.nested_run_pending ||
            vcpu->arch.interrupt.pending)
                return -EBUSY;
which isn't bad either.
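Putting the two suggestions together, the relevant parts of
vmx_check_nested_events() would then look roughly like this (untested sketch,
unrelated checks elided):

static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
{
        struct vcpu_vmx *vmx = to_vmx(vcpu);

        /*
         * Bail out while a previously injected event still has to be
         * re-delivered; an L2->L1 vmexit now would lose it.
         */
        if (vcpu->arch.exception.pending ||
            vcpu->arch.nmi_injected ||
            vcpu->arch.interrupt.pending)
                return -EBUSY;

        /* ... preemption timer and NMI checks unchanged ... */

        if ((kvm_cpu_has_interrupt(vcpu) || external_intr) &&
            nested_exit_on_intr(vcpu)) {
                /* interrupt.pending is already covered by the check above */
                if (vmx->nested.nested_run_pending)
                        return -EBUSY;
                nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0);
                return 0;
        }

        /* ... remaining checks unchanged ... */
        return 0;
}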
I think I did test vmx.flat with apicv=0. If it passes, we really
should add more testcases related to this bug! And also one for the
second call in inject_pending_event.
Paolo