Date:   Fri, 18 Aug 2017 15:00:55 +0200
From:   Radim Krčmář <rkrcmar@...hat.com>
To:     Wanpeng Li <kernellwp@...il.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <wanpeng.li@...mail.com>
Subject: Re: [PATCH] KVM: nVMX: Fix trying to cancel vmlaunch/vmresume

2017-08-17 18:30-0700, Wanpeng Li:
> From: Wanpeng Li <wanpeng.li@...mail.com>
> 
> ------------[ cut here ]------------
> WARNING: CPU: 7 PID: 3861 at /home/kernel/ssd/kvm/arch/x86/kvm//vmx.c:11299 nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
> CPU: 7 PID: 3861 Comm: qemu-system-x86 Tainted: G        W  OE   4.13.0-rc4+ #11
> RIP: 0010:nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
> Call Trace:
>  ? kvm_multiple_exception+0x149/0x170 [kvm]
>  ? handle_emulation_failure+0x79/0x230 [kvm]
>  ? load_vmcs12_host_state+0xa80/0xa80 [kvm_intel]
>  ? check_chain_key+0x137/0x1e0
>  ? reexecute_instruction.part.168+0x130/0x130 [kvm]
>  nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
>  ? nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
>  vmx_queue_exception+0x197/0x300 [kvm_intel]
>  kvm_arch_vcpu_ioctl_run+0x1b0c/0x2c90 [kvm]
>  ? kvm_arch_vcpu_runnable+0x220/0x220 [kvm]
>  ? preempt_count_sub+0x18/0xc0
>  ? restart_apic_timer+0x17d/0x300 [kvm]
>  ? kvm_lapic_restart_hv_timer+0x37/0x50 [kvm]
>  ? kvm_arch_vcpu_load+0x1d8/0x350 [kvm]
>  kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
>  ? kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
>  ? kvm_dev_ioctl+0xbe0/0xbe0 [kvm]
> 
> The flag "nested_run_pending" can override the decision of which should run
> next, L1 or L2. nested_run_pending=1 means that we *must* run L2 next, not
> L1. This is necessary in particular when L1 did a VMLAUNCH of L2 and
> therefore expects L2 to be run (and perhaps be injected with an event it
> specified, etc.). nested_run_pending is especially intended to avoid
> switching to L1 at the injection decision point.
> 
> I hit this warning in the exception queueing path; this patch fixes it by
> running L2 next instead of L1 in that path.
> 
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: Radim Krčmář <rkrcmar@...hat.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
> ---
>  arch/x86/kvm/vmx.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index e398946..3e64a9b 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2466,6 +2466,9 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu)
>  	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
>  	unsigned int nr = vcpu->arch.exception.nr;
>  
> +	if (to_vmx(vcpu)->nested.nested_run_pending)
> +		return 0;

This will inject the exception into L2, even though L1 should get it.
We can't return 1 either, as that would just drop the exception ...
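
(For context, vmx_queue_exception() treats a nonzero return from
nested_vmx_check_exception() as "the nested VM exit was already
performed" and bails out, while 0 means "deliver to the current
guest", which here is L2.  Roughly -- paraphrased from memory, not
the exact code in the current tree:

	if (is_guest_mode(vcpu) && nested_vmx_check_exception(vcpu))
		return;	/* exception was reflected to L1 via vmexit */
	/* fall through: inject into the running guest, here L2 */

so returning 1 without doing the vmexit loses the exception.)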

Seems like we should request an immediate VM exit from L2 and keep the
exception for L1 pending for a subsequent nested VM exit.
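
Roughly like the preemption timer and pending interrupt cases in
vmx_check_nested_events(), which return -EBUSY to make
vcpu_enter_guest() force an immediate exit.  Completely untested
sketch, with guessed signatures; exception_intercepted_by_l1()
stands in for whatever interception check
nested_vmx_check_exception() does today:

	/* in vmx_check_nested_events() */
	if (vcpu->arch.exception.pending &&
	    exception_intercepted_by_l1(vcpu)) {
		if (to_vmx(vcpu)->nested.nested_run_pending)
			return -EBUSY;	/* force an immediate exit from L2 */
		nested_vmx_inject_exception_vmexit(vcpu);
		return 0;
	}

That way the exception stays pending in vcpu->arch.exception until
the VMLAUNCH/VMRESUME has actually entered L2, and only then do we
perform the nested VM exit to L1.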

Thanks.
