Message-ID: <CAGtprH_PwNkZUUx5+SoZcCmXAqcgfFkzprfNRH8HY3wcOm+1eg@mail.gmail.com>
Date: Tue, 17 Jun 2025 23:00:41 -0700
From: Vishal Annapurve <vannapurve@...gle.com>
To: Adrian Hunter <adrian.hunter@...el.com>
Cc: pbonzini@...hat.com, seanjc@...gle.com, kvm@...r.kernel.org,
rick.p.edgecombe@...el.com, kirill.shutemov@...ux.intel.com,
kai.huang@...el.com, reinette.chatre@...el.com, xiaoyao.li@...el.com,
tony.lindgren@...ux.intel.com, binbin.wu@...ux.intel.com,
isaku.yamahata@...el.com, linux-kernel@...r.kernel.org, yan.y.zhao@...el.com,
chao.gao@...el.com
Subject: Re: [PATCH V4 1/1] KVM: TDX: Add sub-ioctl KVM_TDX_TERMINATE_VM
On Tue, Jun 17, 2025 at 10:50 PM Adrian Hunter <adrian.hunter@...el.com> wrote:
> ...
> >>
> >> Changes in V4:
> >>
> >> Drop TDX_FLUSHVP_NOT_DONE change. It will be done separately.
> >> Use KVM_BUG_ON() instead of WARN_ON().
> >> Correct kvm_trylock_all_vcpus() return value.
> >>
> >> Changes in V3:
> >>
> >> Remove KVM_BUG_ON() from tdx_mmu_release_hkid() because it would
> >> trigger on the error path from __tdx_td_init()
> >>
> >> Put cpus_read_lock() handling back into tdx_mmu_release_hkid()
> >>
> >> Handle KVM_TDX_TERMINATE_VM in the switch statement, i.e. let
> >> tdx_vm_ioctl() deal with kvm->lock
> >> ....
> >>
> >> +static int tdx_terminate_vm(struct kvm *kvm)
> >> +{
> >> +	if (kvm_trylock_all_vcpus(kvm))
> >> +		return -EBUSY;
> >> +
> >> +	kvm_vm_dead(kvm);
> >
> > With this, no more VM ioctls can be issued on this instance. How would
> > the userspace VMM clean up the memslots? Is the expectation that the
> > guest_memfd and VM fds are closed to actually reclaim the memory?
>
> Yes
>
> >
> > The ability to clean up memslots from userspace without closing the
> > VM/guest_memfd handles is useful for reusing the same guest_memfds
> > for the next boot iteration of the VM in case of a reboot.
>
> The TD lifecycle does not include reboot. In other words, a reboot is
> done by shutting down the TD and then starting again with a new TD.
>
> AFAIK it is not currently possible to shut down without closing
> guest_memfds since the guest_memfd holds a reference (users_count)
> to struct kvm, and destruction begins when users_count hits zero.
>
gmem link support[1] allows associating existing guest_memfds with new
VM instances.
Breakdown of the userspace VMM flow (a code sketch follows below):
1) Create a new VM instance before closing guest_memfd files.
2) Link the existing guest_memfd files with the new VM instance. -> This
creates a new set of files backed by the same inode but associated with
the new VM instance.
3) Close the older guest_memfd handles. -> This results in cleanup of the
older VM instance once its last reference is dropped.
[1] https://lore.kernel.org/lkml/cover.1747368092.git.afranji@google.com/#t
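For concreteness, here is a minimal sketch of the terminate-then-close
teardown discussed above. It assumes KVM_TDX_TERMINATE_VM is issued
through KVM_MEMORY_ENCRYPT_OP with struct kvm_tdx_cmd like the other
KVM_TDX_* sub-ioctls, and that <linux/kvm.h> comes from patched headers
(error handling trimmed):

#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

/* Mark the TD dead and release its HKID up front. Can fail with
 * -EBUSY if a vCPU ioctl is in flight (kvm_trylock_all_vcpus()).
 */
static int tdx_terminate(int vm_fd)
{
	struct kvm_tdx_cmd cmd = { .id = KVM_TDX_TERMINATE_VM };

	return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
}

After this the VM is dead to further ioctls; the memory itself is
reclaimed only once the guest_memfd and VM fds are closed, as noted
above.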
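And a sketch of the reboot flow itself. KVM_LINK_GUEST_MEMFD and
struct kvm_link_guest_memfd are illustrative placeholders for whatever
uAPI the series in [1] ends up with, not final names:

/* Reboot a TD while keeping its guest memory alive: link the old
 * guest_memfd into a new VM before tearing the old instance down.
 */
static int reboot_td(int kvm_fd, int old_vm_fd, int old_gmem_fd,
		     int *new_gmem_fd)
{
	int new_vm_fd;

	/* 1) Create the new VM instance first. */
	new_vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_TDX_VM);

	/* 2) Link the existing guest_memfd to it: a new file backed by
	 *    the same inode, so the guest memory is not freed.
	 */
	struct kvm_link_guest_memfd link = { .fd = old_gmem_fd }; /* placeholder */
	*new_gmem_fd = ioctl(new_vm_fd, KVM_LINK_GUEST_MEMFD, &link);

	/* 3) Close the older handles; the old VM instance is cleaned up
	 *    once its users_count hits zero, while the memory lives on
	 *    through the shared inode.
	 */
	close(old_gmem_fd);
	close(old_vm_fd);

	/* *new_gmem_fd now backs the new instance's memslots. */
	return new_vm_fd;
}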