Message-ID: <312a6a94-8325-a04f-2409-ec9ae0c0503a@redhat.com>
Date: Fri, 5 Jun 2020 15:52:45 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org
Cc: syzbot <syzbot+705f4401d5a93a59b87d@...kaller.appspotmail.com>,
eesposit@...hat.com, gregkh@...uxfoundation.org,
linux-kernel@...r.kernel.org, rafael@...nel.org,
syzkaller-bugs@...glegroups.com
Subject: Re: general protection fault in start_creating
On 05/06/20 15:38, Vitaly Kuznetsov wrote:
>
> I think what happens here is one thread does kvm_vm_ioctl_create_vcpu()
> and another tries to delete the VM. The problem was probably present
> even before the commit as both kvm_create_vcpu_debugfs() and
> kvm_destroy_vm_debugfs() happen outside of kvm_lock but maybe it was
> harder to trigger.
I think it wasn't, because:
- you cannot reach kvm_vcpu_release before you have created the file
descriptor, and the patch moved debugfs creation after creation of
the file descriptor
- on the other hand, you cannot reach kvm_vm_release during KVM_CREATE_VCPU,
so you cannot reach kvm_destroy_vm either (because the VM file descriptor
holds a reference). The bug can therefore only occur for things that are
touched by kvm_vcpu_release, and that is only the debugfs dentry.
I already have a patch for this, which simply removes the
debugfs_remove_recursive call in kvm_vcpu_release, but I forgot to
send it. What you have below could work, but I don't see a good reason to
put things under kvm->lock.
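For reference, the change I have in mind is along these lines (an untested
sketch written from memory, not the actual patch):

```diff
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ static int kvm_vcpu_release(struct inode *inode, struct file *filp)
 {
 	struct kvm_vcpu *vcpu = filp->private_data;
 
-	debugfs_remove_recursive(vcpu->debugfs_dentry);
 	kvm_put_kvm(vcpu->kvm);
 	return 0;
 }
```

The vCPU debugfs directory then goes away together with the VM's directory
in kvm_destroy_vm_debugfs, instead of racing with it.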
Paolo
> I think what happens here is one thread does kvm_vm_ioctl_create_vcpu()
> and another tries to delete the VM. The problem was probably present
> even before the commit as both kvm_create_vcpu_debugfs() and
> kvm_destroy_vm_debugfs() happen outside of kvm_lock but maybe it was
> harder to trigger. Is there a reason to not put all this under kvm_lock?
> I.e. the following:
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 7fa1e38e1659..d53784cb920f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -793,9 +793,9 @@ static void kvm_destroy_vm(struct kvm *kvm)
> struct mm_struct *mm = kvm->mm;
>
> kvm_uevent_notify_change(KVM_EVENT_DESTROY_VM, kvm);
> - kvm_destroy_vm_debugfs(kvm);
> kvm_arch_sync_events(kvm);
> mutex_lock(&kvm_lock);
> + kvm_destroy_vm_debugfs(kvm);
> list_del(&kvm->vm_list);
> mutex_unlock(&kvm_lock);
> kvm_arch_pre_destroy_vm(kvm);
> @@ -3084,9 +3084,9 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
> smp_wmb();
> atomic_inc(&kvm->online_vcpus);
>
> + kvm_create_vcpu_debugfs(vcpu);
> mutex_unlock(&kvm->lock);
> kvm_arch_vcpu_postcreate(vcpu);
> - kvm_create_vcpu_debugfs(vcpu);
> return r;
>
> unlock_vcpu_destroy:
>
> should probably do. The reproducer doesn't work for me (or just takes
> too long, so I gave up), unfortunately. Or I may have misunderstood
> everything.