Message-ID: <CALMp9eROXAOg_g=R8JRVfywY7uQXzBtVxKJYXq0dUcob-BfR-g@mail.gmail.com>
Date: Thu, 20 Aug 2020 13:08:22 -0700
From: Jim Mattson <jmattson@...gle.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Wanpeng Li <wanpengli@...cent.com>,
kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] KVM: VMX: fix crash cleanup when KVM wasn't used
On Wed, Apr 1, 2020 at 1:13 AM Vitaly Kuznetsov <vkuznets@...hat.com> wrote:
>
> If KVM wasn't used at all before the crash, the cleanup procedure fails with
> BUG: unable to handle page fault for address: ffffffffffffffc8
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 23215067 P4D 23215067 PUD 23217067 PMD 0
> Oops: 0000 [#8] SMP PTI
> CPU: 0 PID: 3542 Comm: bash Kdump: loaded Tainted: G D 5.6.0-rc2+ #823
> RIP: 0010:crash_vmclear_local_loaded_vmcss.cold+0x19/0x51 [kvm_intel]
>
> The root cause is that the loaded_vmcss_on_cpu list is not yet initialized:
> we initialize it in hardware_enable(), but that only happens when we start
> a VM.
>
> Previously, we used to have a bitmap of enabled CPUs, and that masked the
> issue.
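
As an aside, the fault address is consistent with this: per-cpu data is
zero-filled, so a list head that was never passed to INIT_LIST_HEAD() has
next == prev == NULL, and list_for_each_entry() ends up computing
container_of(NULL, ...), i.e. a small negative address like the
ffffffffffffffc8 in the oops above. A minimal user-space sketch of that
arithmetic (the struct layout here is made up for illustration and is not
the real kvm_intel definition):

	#include <stddef.h>
	#include <stdio.h>

	struct list_head { struct list_head *next, *prev; };

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	/* Stand-in for struct loaded_vmcs; the field layout is a guess. */
	struct fake_loaded_vmcs {
		void *vmcs;
		void *shadow_vmcs;
		int cpu, launched;
		struct list_head loaded_vmcss_on_cpu_link;
	};

	int main(void)
	{
		/* A zero-filled list head, like a per-cpu variable that was
		 * never initialized with INIT_LIST_HEAD(). */
		struct list_head head = { NULL, NULL };

		/* First step of list_for_each_entry(): container_of() on
		 * head.next. With head.next == NULL this yields a small
		 * negative address; dereferencing it is the page fault
		 * reported in the oops. */
		struct fake_loaded_vmcs *v = container_of(
			head.next, struct fake_loaded_vmcs,
			loaded_vmcss_on_cpu_link);

		printf("bogus entry pointer: %p\n", (void *)v);
		return 0;
	}
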
>
> Initialize the loaded_vmcss_on_cpu list earlier, right before we assign the
> crash_vmclear_loaded_vmcss pointer. The blocked_vcpu_on_cpu list and
> blocked_vcpu_on_cpu_lock are moved along with it for consistency.
>
> Fixes: 31603d4fc2bb ("KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support")
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3aba51d782e2..39a5dde12b79 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2257,10 +2257,6 @@ static int hardware_enable(void)
> !hv_get_vp_assist_page(cpu))
> return -EFAULT;
>
> - INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> - INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> - spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> -
> r = kvm_cpu_vmxon(phys_addr);
> if (r)
> return r;
> @@ -8006,7 +8002,7 @@ module_exit(vmx_exit);
>
> static int __init vmx_init(void)
> {
> - int r;
> + int r, cpu;
>
> #if IS_ENABLED(CONFIG_HYPERV)
> /*
> @@ -8060,6 +8056,12 @@ static int __init vmx_init(void)
> return r;
> }
>
> + for_each_possible_cpu(cpu) {
> + INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
> + INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
> + spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
> + }
Just above this chunk, we have:
	r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
	if (r) {
		vmx_exit();
	...
If we take that early exit because vmx_setup_l1d_flush() fails, we
won't initialize loaded_vmcss_on_cpu. However, vmx_exit() calls
kvm_exit(), which calls on_each_cpu(hardware_disable_nolock, NULL, 1).
hardware_disable_nolock() then calls kvm_arch_hardware_disable(),
which calls kvm_x86_ops.hardware_disable() [the vmx.c
hardware_disable()], which calls vmclear_local_loaded_vmcss().
I believe that vmclear_local_loaded_vmcss() will then try to
dereference a NULL pointer, since per_cpu(loaded_vmcss_on_cpu, cpu) is
uninitialized.
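
One way to close that hole would be to do the per-cpu initialization
before anything that can fail and end up in vmx_exit(), i.e. at the very
top of vmx_init(). A sketch (untested; assuming the loop has no ordering
dependency on kvm_init() or vmx_setup_l1d_flush()):

	static int __init vmx_init(void)
	{
		int r, cpu;

		/* Nothing here depends on kvm_init(), so run it before
		 * any code path that can fail and call vmx_exit():
		 * hardware_disable() then always walks a valid (possibly
		 * empty) list, however early we bail out. */
		for_each_possible_cpu(cpu) {
			INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
			INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
			spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
		}

		r = kvm_init(/* ... */);
		if (r)
			return r;

		r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
		if (r) {
			vmx_exit();
			return r;
		}
		/* ... */
	}
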
> #ifdef CONFIG_KEXEC_CORE
> rcu_assign_pointer(crash_vmclear_loaded_vmcss,
> crash_vmclear_local_loaded_vmcss);
> --
> 2.25.1
>