Message-ID: <CANRm+CxjjJr8360q44nVSz33KyPQoaucfrFiD8to=Y86yKMOvA@mail.gmail.com>
Date: Tue, 25 Jul 2017 16:27:58 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kvm <kvm@...r.kernel.org>,
Radim Krčmář <rkrcmar@...hat.com>,
Wanpeng Li <wanpeng.li@...mail.com>
Subject: Re: [PATCH] KVM: VMX: Fix losing blocking by NMI in the guest
interruptibility-state field
2017-07-14 19:36 GMT+08:00 Paolo Bonzini <pbonzini@...hat.com>:
> On 14/07/2017 11:39, Wanpeng Li wrote:
>> However, commit 0be9c7a89f750 (KVM: VMX: set "blocked by NMI" flag if EPT
>> violation happens during IRET from NMI) only fixes the fault caused by an
>> EPT violation. This patch tries to fix the fault caused by a page fault on
>> the shadow page table.
>>
>> Cc: Paolo Bonzini <pbonzini@...hat.com>
>> Cc: Radim Krčmář <rkrcmar@...hat.com>
>> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
>> ---
>> arch/x86/kvm/vmx.c | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 84e62ac..32ca063 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -5709,6 +5709,11 @@ static int handle_exception(struct kvm_vcpu *vcpu)
>> }
>>
>> if (is_page_fault(intr_info)) {
>> +
>> + if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
>> + (intr_info & INTR_INFO_UNBLOCK_NMI))
>> + vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
>> +
>> cr2 = vmcs_readl(EXIT_QUALIFICATION);
>> /* EPT won't cause page fault directly */
>> WARN_ON_ONCE(!vcpu->arch.apf.host_apf_reason && enable_ept);
>
> vmx_recover_nmi_blocking is supposed to do the same. EPT and PML-full exits
> need separate code because they store bit 12 in the exit qualification rather
> than the VM-exit interruption info. I think the bug is in the handling of
> vmx->nmi_known_unmasked.
>
> The following patch fixes it for me, can you test it too?
Sorry, I only just got back to my testing machine; I was traveling
before. The patch looks correct in itself, but it still doesn't fix
the issue I'm hitting. What actually happens is that L1 injects an
NMI into L2 (kvm-unit-tests/event.flat) and marks its cached guest
interruptibility info as masked; that cached value lives in L1, so
L0 can't know what the right cached value should be. The correct
cached value is lost in L0, the cache still says "unmasked", and so
vmx_recover_nmi_blocking can't recover the blocking. So I'm afraid
the original patch should also be applied.
Regards,
Wanpeng Li
>
> Thanks,
>
> Paolo
>
> --------- 8< -------------------
> From: Paolo Bonzini <pbonzini@...hat.com>
> Subject: [PATCH] KVM: nVMX: track NMI blocking state separately for each VMCS
>
> vmx_recover_nmi_blocking is using a cached value of the guest
> interruptibility info, which is stored in vmx->nmi_known_unmasked.
> vmx_recover_nmi_blocking is run for both normal and nested guests,
> so the cached value must be per-VMCS.
>
> This fixes eventinj.flat in a nested non-EPT environment. With EPT it
> works, because the EPT violation handler doesn't have the
> vmx->nmi_known_unmasked optimization (it is unnecessary because, unlike
> vmx_recover_nmi_blocking, it can just look at the exit qualification).
>
> Thanks to Wanpeng Li for debugging the testcase and providing an initial
> patch.
>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 32db3f5dce7f..504df356a10c 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -198,7 +198,8 @@ struct loaded_vmcs {
> struct vmcs *vmcs;
> struct vmcs *shadow_vmcs;
> int cpu;
> - int launched;
> + bool launched;
> + bool nmi_known_unmasked;
> struct list_head loaded_vmcss_on_cpu_link;
> };
>
> @@ -5497,10 +5498,8 @@ static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> - if (!is_guest_mode(vcpu)) {
> - ++vcpu->stat.nmi_injections;
> - vmx->nmi_known_unmasked = false;
> - }
> + ++vcpu->stat.nmi_injections;
> + vmx->loaded_vmcs->nmi_known_unmasked = false;
>
> if (vmx->rmode.vm86_active) {
> if (kvm_inject_realmode_interrupt(vcpu, NMI_VECTOR, 0) != EMULATE_DONE)
> @@ -5514,16 +5513,21 @@ static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
>
> static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu)
> {
> - if (to_vmx(vcpu)->nmi_known_unmasked)
> + struct vcpu_vmx *vmx = to_vmx(vcpu);
> + bool masked;
> +
> + if (vmx->loaded_vmcs->nmi_known_unmasked)
> return false;
> - return vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_NMI;
> + masked = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_NMI;
> + vmx->loaded_vmcs->nmi_known_unmasked = !masked;
> + return masked;
> }
>
> static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> - vmx->nmi_known_unmasked = !masked;
> + vmx->loaded_vmcs->nmi_known_unmasked = !masked;
> if (masked)
> vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
> GUEST_INTR_STATE_NMI);
> @@ -8719,7 +8723,7 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
>
> idtv_info_valid = vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK;
>
> - if (vmx->nmi_known_unmasked)
> + if (vmx->loaded_vmcs->nmi_known_unmasked)
> return;
> /*
> * Can't use vmx->exit_intr_info since we're not sure what
> @@ -8743,7 +8747,7 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
> vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
> GUEST_INTR_STATE_NMI);
> else
> - vmx->nmi_known_unmasked =
> + vmx->loaded_vmcs->nmi_known_unmasked =
> !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO)
> & GUEST_INTR_STATE_NMI);
> }