Message-ID: <X/X+1q6H/q1Ez6zE@google.com>
Date: Wed, 6 Jan 2021 10:17:58 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org, Joerg Roedel <joro@...tes.org>,
Wanpeng Li <wanpengli@...cent.com>,
"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
<linux-kernel@...r.kernel.org>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Jim Mattson <jmattson@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 2/2] KVM: nVMX: fix for disappearing L1->L2 event
injection on L1 migration
On Wed, Jan 06, 2021, Maxim Levitsky wrote:
> If a migration happens while a VM entry to L2 with an injected event is
> pending, the event was not included in the migration state and would be
> lost, leading to a hang in L2.
But the injected event should still be in vmcs12 and KVM_STATE_NESTED_RUN_PENDING
should be set in the migration state, i.e. it should naturally be copied to
vmcs02 and thus (re)injected by vmx_set_nested_state(). Is nested_run_pending
not set? Is the info in vmcs12 somehow lost? Or am I off in left field...
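
For reference, the path I have in mind (paraphrased from the existing code,
not the exact lines):

	/* vmx_set_nested_state(), on the destination */
	if (kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING)
		vmx->nested.nested_run_pending = 1;

	/* prepare_vmcs02_early(), on the next KVM_RUN */
	if (vmx->nested.nested_run_pending)
		vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
			     vmcs12->vm_entry_intr_info_field);

i.e. the pending injection should survive migration purely via vmcs12.
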
> Fix this by queueing the injected event in a similar manner to how we queue
> interrupted injections.
>
> This can be reproduced by running an I/O-intensive task in L2 and
> repeatedly migrating L1.
>
> Suggested-by: Paolo Bonzini <pbonzini@...hat.com>
> Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> ---
> arch/x86/kvm/vmx/nested.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index e2f26564a12de..2ea0bb14f385f 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -2355,12 +2355,12 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
> * Interrupt/Exception Fields
> */
> if (vmx->nested.nested_run_pending) {
> - vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
> - vmcs12->vm_entry_intr_info_field);
> - vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
> - vmcs12->vm_entry_exception_error_code);
> - vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
> - vmcs12->vm_entry_instruction_len);
> + if ((vmcs12->vm_entry_intr_info_field & VECTORING_INFO_VALID_MASK))
> + vmx_process_injected_event(&vmx->vcpu,
> + vmcs12->vm_entry_intr_info_field,
> + vmcs12->vm_entry_instruction_len,
> + vmcs12->vm_entry_exception_error_code);
> +
> vmcs_write32(GUEST_INTERRUPTIBILITY_INFO,
> vmcs12->guest_interruptibility_info);
> vmx->loaded_vmcs->nmi_known_unmasked =
> --
> 2.26.2
>
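
For readers without patch 1/2 handy: I'm assuming vmx_process_injected_event()
is essentially a value-based variant of __vmx_complete_interrupts(), i.e.
something along these lines (my guess at the shape from the call site above,
not the actual patch):

	/* Purely illustrative, modeled on __vmx_complete_interrupts(). */
	static void vmx_process_injected_event(struct kvm_vcpu *vcpu, u32 intr_info,
					       u32 instr_len, u32 error_code)
	{
		u8 vector = intr_info & INTR_INFO_VECTOR_MASK;
		u32 type = intr_info & INTR_INFO_INTR_TYPE_MASK;

		switch (type) {
		case INTR_TYPE_NMI_INTR:
			vcpu->arch.nmi_injected = true;
			break;
		case INTR_TYPE_SOFT_EXCEPTION:
			vcpu->arch.event_exit_inst_len = instr_len;
			fallthrough;
		case INTR_TYPE_HARD_EXCEPTION:
			if (intr_info & INTR_INFO_DELIVER_CODE_MASK)
				kvm_requeue_exception_e(vcpu, vector, error_code);
			else
				kvm_requeue_exception(vcpu, vector);
			break;
		case INTR_TYPE_SOFT_INTR:
			vcpu->arch.event_exit_inst_len = instr_len;
			fallthrough;
		case INTR_TYPE_EXT_INTR:
			kvm_queue_interrupt(vcpu, vector, type == INTR_TYPE_SOFT_INTR);
			break;
		}
	}

If that's roughly what the helper does, my question above still stands: the
vmcs12 fields should already carry this information across migration when
nested_run_pending is set.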