Message-ID: <353a53a7-5d1e-1797-c870-1eb8b382bedd@redhat.com>
Date: Fri, 14 Feb 2020 09:58:39 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>,
LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Wanpeng Li <wanpengli@...cent.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH RESEND] KVM: X86: Grab KVM's srcu lock when accessing hv assist page
On 14/02/20 09:51, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@...cent.com>
>
> Acquire kvm->srcu for the duration of mapping the eVMCS to fix a bug where
> accessing the hv assist page dereferences ->memslots without holding ->srcu or ->slots_lock.
Perhaps nested_sync_vmcs12_to_shadow() should be moved to
prepare_guest_switch, where the SRCU read lock is already held.
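Something along these lines (rough, untested sketch, not a real patch;
assuming vmx_prepare_switch_to_guest() is what prepare_guest_switch points
to, and that need_vmcs12_to_shadow_sync keeps gating the call as it does in
vmx_vcpu_run()):

	/* arch/x86/kvm/vmx/vmx.c -- sketch only */
	void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/*
		 * vcpu_enter_guest() invokes prepare_guest_switch while it
		 * still holds kvm->srcu, so mapping the eVMCS (and the
		 * ->memslots dereference behind it) would be safe here.
		 */
		if (vmx->nested.need_vmcs12_to_shadow_sync)
			nested_sync_vmcs12_to_shadow(vcpu);

		/* ... existing body of vmx_prepare_switch_to_guest() ... */
	}

with the corresponding call dropped from vmx_vcpu_run(), which only runs
after vcpu_enter_guest() has released the SRCU read lock.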
Paolo
> It can be reproduced by running KVM's evmcs_test selftest.
>
> =============================
> WARNING: suspicious RCU usage
> 5.6.0-rc1+ #53 Tainted: G W IOE
> -----------------------------
> ./include/linux/kvm_host.h:623 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
> rcu_scheduler_active = 2, debug_locks = 1
> 1 lock held by evmcs_test/8507:
> #0: ffff9ddd156d00d0 (&vcpu->mutex){+.+.}, at: kvm_vcpu_ioctl+0x85/0x680 [kvm]
>
> stack backtrace:
> CPU: 6 PID: 8507 Comm: evmcs_test Tainted: G W IOE 5.6.0-rc1+ #53
> Hardware name: Dell Inc. OptiPlex 7040/0JCTF8, BIOS 1.4.9 09/12/2016
> Call Trace:
> dump_stack+0x68/0x9b
> kvm_read_guest_cached+0x11d/0x150 [kvm]
> kvm_hv_get_assist_page+0x33/0x40 [kvm]
> nested_enlightened_vmentry+0x2c/0x60 [kvm_intel]
> nested_vmx_handle_enlightened_vmptrld.part.52+0x32/0x1c0 [kvm_intel]
> nested_sync_vmcs12_to_shadow+0x439/0x680 [kvm_intel]
> vmx_vcpu_run+0x67a/0xe60 [kvm_intel]
> vcpu_enter_guest+0x35e/0x1bc0 [kvm]
> kvm_arch_vcpu_ioctl_run+0x40b/0x670 [kvm]
> kvm_vcpu_ioctl+0x370/0x680 [kvm]
> ksys_ioctl+0x235/0x850
> __x64_sys_ioctl+0x16/0x20
> do_syscall_64+0x77/0x780
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> Signed-off-by: Wanpeng Li <wanpengli@...cent.com>
> ---
> arch/x86/kvm/vmx/nested.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 657c2ed..a68a69d 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -1994,14 +1994,18 @@ static int nested_vmx_handle_enlightened_vmptrld(struct kvm_vcpu *vcpu,
>  void nested_sync_vmcs12_to_shadow(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +	int idx;
>  
>  	/*
>  	 * hv_evmcs may end up being not mapped after migration (when
>  	 * L2 was running), map it here to make sure vmcs12 changes are
>  	 * properly reflected.
>  	 */
> -	if (vmx->nested.enlightened_vmcs_enabled && !vmx->nested.hv_evmcs)
> +	if (vmx->nested.enlightened_vmcs_enabled && !vmx->nested.hv_evmcs) {
> +		idx = srcu_read_lock(&vcpu->kvm->srcu);
>  		nested_vmx_handle_enlightened_vmptrld(vcpu, false);
> +		srcu_read_unlock(&vcpu->kvm->srcu, idx);
> +	}
>  
>  	if (vmx->nested.hv_evmcs) {
>  		copy_vmcs12_to_enlightened(vmx);
> --
> 2.7.4
>
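For reference, the splat above comes from the srcu_dereference_check() inside
kvm_memslots(); roughly (paraphrased from include/linux/kvm_host.h of this
era, not a verbatim quote):

	static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
	{
		as_id = array_index_nospec(as_id, KVM_ADDRESS_SPACE_NUM);
		return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
					      lockdep_is_held(&kvm->slots_lock));
	}

i.e. any path that reads guest memory, such as kvm_read_guest_cached() for the
assist page, has to run inside an srcu_read_lock()/srcu_read_unlock() section
on kvm->srcu, or hold slots_lock.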