Message-ID: <CABgObfZ4XD=yQ3kRiNnMfd=w0ZbGY3yzTz49s-Kq4CKE+QJXxg@mail.gmail.com>
Date: Mon, 2 Sep 2024 15:03:19 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Chao Gao <chao.gao@...el.com>, Kai Huang <kai.huang@...el.com>
Subject: Re: [PATCH v3 1/8] KVM: Use dedicated mutex to protect
kvm_usage_count to avoid deadlock
On Sat, Aug 31, 2024 at 1:45 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > Can you add a note to the commit message suggesting switching the vm_list
> > to RCU? All the occurrences of list_for_each_entry(..., &vm_list, ...) seem
> > amenable to that, and it should be easy enough to stick all or part of
> > kvm_destroy_vm() behind call_rcu().
>
> +1 to the idea of making vm_list RCU-protected, though I think we'd want to use
> SRCU, e.g. set_nx_huge_pages() currently takes each VM's slots_lock while purging
> possible NX hugepages.
Ah, for that I was thinking of wrapping everything with
kvm_get_kvm_safe()/rcu_read_unlock() and kvm_put_kvm()/rcu_read_lock().
Avoiding zero refcounts is safer, and these traversals are generally
not hot paths anyway.
> And I think kvm_destroy_vm() can simply do a synchronize_srcu() after removing
> the VM from the list. Trying to put kvm_destroy_vm() into an RCU callback would
> probably be a bit of a disaster, e.g. kvm-intel.ko in particular currently does
> some rather nasty things while destroying a VM.
If all iteration is guarded by kvm_get_kvm_safe(), you could probably
defer only the reclaiming part (i.e. everything after
kvm_destroy_devices()), which is a lot easier to audit.
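Something like this, again just a sketch (the rcu_head field and
kvm_reclaim_vm_rcu() are made-up names, and note that call_rcu()
callbacks run in softirq context, so the deferred half must not
sleep):

    /* Assumes a new rcu_head field is added to struct kvm. */
    static void kvm_reclaim_vm_rcu(struct rcu_head *rcu)
    {
            struct kvm *kvm = container_of(rcu, struct kvm, rcu_head);

            /*
             * Only the reclaiming half goes here: freeing memory,
             * dropping module references, and so on.
             */
    }

    static void kvm_destroy_vm(struct kvm *kvm)
    {
            ...
            list_del_rcu(&kvm->vm_list);    /* instead of list_del() */
            ...
            kvm_destroy_devices(kvm);
            /* Defer everything past this point to a grace period. */
            call_rcu(&kvm->rcu_head, kvm_reclaim_vm_rcu);
    }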
Anyhow, I took a look at v2 and it looks good.
Paolo