Message-ID: <aAL3jRz3DTL8Ivhv@google.com>
Date: Fri, 18 Apr 2025 18:08:29 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Vishal Annapurve <vannapurve@...gle.com>
Cc: Adrian Hunter <adrian.hunter@...el.com>, pbonzini@...hat.com, mlevitsk@...hat.com,
kvm@...r.kernel.org, rick.p.edgecombe@...el.com,
kirill.shutemov@...ux.intel.com, kai.huang@...el.com,
reinette.chatre@...el.com, xiaoyao.li@...el.com,
tony.lindgren@...ux.intel.com, binbin.wu@...ux.intel.com,
isaku.yamahata@...el.com, linux-kernel@...r.kernel.org, yan.y.zhao@...el.com,
chao.gao@...el.com
Subject: Re: [PATCH V2 1/1] KVM: TDX: Add sub-ioctl KVM_TDX_TERMINATE_VM
On Fri, Apr 18, 2025, Vishal Annapurve wrote:
> On Thu, Apr 17, 2025 at 6:20 AM Adrian Hunter <adrian.hunter@...el.com> wrote:
> >
> > ...
> > +static int tdx_terminate_vm(struct kvm *kvm)
> > +{
> > +	int r = 0;
> > +
> > +	guard(mutex)(&kvm->lock);
> > +	cpus_read_lock();
> > +
> > +	if (!kvm_trylock_all_vcpus(kvm)) {
>
> Does this need to be a trylock variant? Is userspace expected to keep
> retrying this operation indefinitely?
Userspace is expected to not be stupid, i.e. not be doing things with vCPUs when
terminating the VM. This is already rather unpleasant; I'd rather not have to
think hard about what could go wrong if KVM has to wait on all vCPU mutexes.
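For reference, the trylock-all-vcpus pattern boils down to something like the
sketch below (a simplified illustration, not the exact kernel code; the name
example_trylock_all_vcpus and the unwind loop are made up): grab each vCPU
mutex with mutex_trylock(), and if any one of them is contended, unwind and
fail fast instead of sleeping, which is what lets the ioctl return -EBUSY
rather than block behind a running vCPU.

/*
 * Simplified sketch of a trylock-based "lock all vCPUs" helper
 * (illustrative only; not the actual kernel implementation).
 * Returns true only if every vCPU mutex was acquired; on contention
 * it releases what it already took and returns false, so the caller
 * can bail with -EBUSY instead of sleeping on a held vCPU mutex.
 */
static bool example_trylock_all_vcpus(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	unsigned long i, j;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (!mutex_trylock(&vcpu->mutex))
			goto out_unlock;
	}
	return true;

out_unlock:
	/* Drop only the mutexes acquired before the contended one. */
	kvm_for_each_vcpu(j, vcpu, kvm) {
		if (j == i)
			break;
		mutex_unlock(&vcpu->mutex);
	}
	return false;
}

A blocking variant would instead mutex_lock() each vCPU in order, meaning the
ioctl could sleep indefinitely behind a vCPU stuck in KVM_RUN; failing fast
pushes the "don't terminate while vCPUs are active" policy back to userspace.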