Open Source and information security mailing list archives
 
Message-ID: <87d0oq2ef1.fsf@vitty.brq.redhat.com>
Date:   Mon, 21 Jan 2019 16:55:46 +0100
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     kvm@...r.kernel.org
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        linux-kernel@...r.kernel.org, Joerg Roedel <joro@...tes.org>,
        x86@...nel.org
Subject: Re: [PATCH] KVM: nSVM: clear events pending from svm_complete_interrupts() when exiting to L1

Vitaly Kuznetsov <vkuznets@...hat.com> writes:

> kvm-unit-tests' eventinj "NMI failing on IDT" test results in the NMI being
> delivered to the host (L1) when it's running nested. The problem seems to
> be: svm_complete_interrupts() raises the 'nmi_injected' flag, but later we
> decide to reflect EXIT_NPF to L1. The flag remains pending and we do NMI
> injection upon entry, so it gets delivered to L1 instead of L2.
>
> It seems that the VMX code solves the same issue in prepare_vmcs12(); this
> was introduced with the refactoring in commit 5f3d5799974b ("KVM: nVMX:
> Rework event injection and recovery").
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
>  arch/x86/kvm/svm.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 33d4ed6e78a5..db842dafccf0 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3419,6 +3419,14 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
>  	kvm_mmu_reset_context(&svm->vcpu);
>  	kvm_mmu_load(&svm->vcpu);
>  
> +	/*
> +	 * Drop what we picked up for L2 via svm_complete_interrupts() so it
> +	 * doesn't end up in L1.
> +	 */
> +	svm->vcpu.arch.nmi_injected = false;
> +	kvm_clear_exception_queue(&svm->vcpu);
> +	kvm_clear_interrupt_queue(&svm->vcpu);
> +
>  	return 0;
>  }

Ping?

-- 
Vitaly
