Message-ID: <0c2504c0-8dac-4bbf-bd50-a503be755d3f@redhat.com>
Date: Tue, 7 Apr 2020 14:35:18 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Jim Mattson <jmattson@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: VMX: fix crash cleanup when KVM wasn't used
On 01/04/20 10:13, Vitaly Kuznetsov wrote:
> If KVM wasn't used at all before the crash, the cleanup procedure fails with:
> BUG: unable to handle page fault for address: ffffffffffffffc8
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 23215067 P4D 23215067 PUD 23217067 PMD 0
> Oops: 0000 [#8] SMP PTI
> CPU: 0 PID: 3542 Comm: bash Kdump: loaded Tainted: G D 5.6.0-rc2+ #823
> RIP: 0010:crash_vmclear_local_loaded_vmcss.cold+0x19/0x51 [kvm_intel]
>
> The root cause is that the loaded_vmcss_on_cpu list is not yet
> initialized: it is initialized in hardware_enable(), and that only
> happens when the first VM is started.
>
> Previously, we kept a bitmap of enabled CPUs, and that was masking
> the issue.
>
> Initialize the loaded_vmcss_on_cpu list earlier, right before the
> crash_vmclear_loaded_vmcss pointer is assigned. The blocked_vcpu_on_cpu
> list and blocked_vcpu_on_cpu_lock are moved along with it for consistency.
>
> Fixes: 31603d4fc2bb ("KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support")
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3aba51d782e2..39a5dde12b79 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2257,10 +2257,6 @@ static int hardware_enable(void)
> !hv_get_vp_assist_page(cpu))
> return -EFAULT;
>
> - INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> - INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> - spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> -
> r = kvm_cpu_vmxon(phys_addr);
> if (r)
> return r;
> @@ -8006,7 +8002,7 @@ module_exit(vmx_exit);
>
> static int __init vmx_init(void)
> {
> - int r;
> + int r, cpu;
>
> #if IS_ENABLED(CONFIG_HYPERV)
> /*
> @@ -8060,6 +8056,12 @@ static int __init vmx_init(void)
> return r;
> }
>
> + for_each_possible_cpu(cpu) {
> + INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> + INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> + spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> + }
> +
> #ifdef CONFIG_KEXEC_CORE
> rcu_assign_pointer(crash_vmclear_loaded_vmcss,
> crash_vmclear_local_loaded_vmcss);
>
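To illustrate the failure mode: the per-cpu loaded_vmcss_on_cpu heads
live in zeroed storage until INIT_LIST_HEAD() runs, so head->next is
NULL instead of pointing back at the head. list_for_each_entry() in
crash_vmclear_local_loaded_vmcss() then computes container_of(NULL, ...),
i.e. NULL minus the member offset, and the oops address
ffffffffffffffc8 is exactly -0x38. Below is a minimal userspace sketch
of that pointer arithmetic, not kernel code; the struct layout is
simplified and only the names mirror vmx.c:

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures. */
struct list_head {
	struct list_head *next, *prev;
};

struct loaded_vmcs {
	void *vmcs;				/* illustrative payload */
	struct list_head loaded_vmcss_on_cpu_link;
};

int main(void)
{
	/* Like a per-cpu head in BSS: zeroed, INIT_LIST_HEAD() never ran. */
	static struct list_head loaded_vmcss_on_cpu;

	/* First step of list_for_each_entry(): follow head->next (NULL). */
	struct list_head *pos = loaded_vmcss_on_cpu.next;

	/* container_of(pos, ...) underflows to a small negative address. */
	unsigned long entry = (unsigned long)pos -
		offsetof(struct loaded_vmcs, loaded_vmcss_on_cpu_link);

	printf("bogus entry address: 0x%lx\n", entry);
	/*
	 * In vmx.c the member offset works out to 0x38, so the first read
	 * of entry->vmcs faults at ffffffffffffffc8, the address in the
	 * oops. An initialized head points to itself, and the loop body
	 * is never entered for an empty list.
	 */
	return 0;
}

Note that doing the initialization with for_each_possible_cpu() in
vmx_init() also covers CPUs that are onlined later, which is why
hardware_enable() no longer needs to redo it.
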
Queued, thanks.
Paolo