Message-ID: <CAGtprH9RXM8RGj_GtxjHMQcWcvUPa_FJWXOu7LTQ00C7N5pxiQ@mail.gmail.com>
Date: Fri, 20 Jun 2025 20:00:03 -0700
From: Vishal Annapurve <vannapurve@...gle.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
Cc: "Gao, Chao" <chao.gao@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "seanjc@...gle.com" <seanjc@...gle.com>,
"Huang, Kai" <kai.huang@...el.com>,
"binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>, "Chatre, Reinette" <reinette.chatre@...el.com>,
"Li, Xiaoyao" <xiaoyao.li@...el.com>, "Hunter, Adrian" <adrian.hunter@...el.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"tony.lindgren@...ux.intel.com" <tony.lindgren@...ux.intel.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>, "pbonzini@...hat.com" <pbonzini@...hat.com>,
"Zhao, Yan Y" <yan.y.zhao@...el.com>
Subject: Re: [PATCH V4 1/1] KVM: TDX: Add sub-ioctl KVM_TDX_TERMINATE_VM

On Fri, Jun 20, 2025 at 4:34 PM Edgecombe, Rick P
<rick.p.edgecombe@...el.com> wrote:
>
> On Fri, 2025-06-20 at 14:21 -0700, Vishal Annapurve wrote:
> > > Sorry if I'm being dumb, but why does it do this? It saves
> > > freeing/allocating
> > > the guestmemfd pages? Or the in-place data gets reused somehow?
> >
> > The goal is just to be able to reuse the same physical memory for the
> > next boot of the guest. Freeing and faulting-in the same amount of
> > memory is redundant and time-consuming for large VM sizes.
>
> Can you provide enough information to evaluate how the whole problem is being
> solved? (it sounds like you have the full solution implemented?)
>
> The problem seems to be that rebuilding a whole TD for reboot is too slow. Does
> the S-EPT survive if the VM is destroyed? If not, how does keeping the pages in
> guestmemfd help with re-faulting? If the S-EPT is preserved, then what happens
> when the new guest re-accepts it?

S-EPT entries don't survive reboots. The "faulting-in" I was referring
to is just the allocation of memory pages for guest_memfd offsets.
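
To make that concrete, the teardown side from userspace would look
roughly like the untested sketch below; KVM_TDX_TERMINATE_VM is the
sub-ioctl this patch adds, issued through the existing
KVM_MEMORY_ENCRYPT_OP / struct kvm_tdx_cmd path:

/*
 * Untested sketch: tear down the TD with the new KVM_TDX_TERMINATE_VM
 * sub-ioctl while the guest_memfd fd (and its already-allocated pages)
 * is kept open for the next boot of the guest.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int tdx_terminate_vm(int vm_fd)
{
	struct kvm_tdx_cmd cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.id = KVM_TDX_TERMINATE_VM;	/* added by this patch */

	/*
	 * Issued on the VM fd before the VM is destroyed; the
	 * guest_memfd fd is untouched, so its pages stay allocated.
	 */
	return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
}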
>
> >
> > >
> > > The series Vishal linked has some kind of SEV state transfer thing. How is
> > > it
> > > intended to work for TDX?
> >
> > The series [1] unblocks the intra-host migration [2] and reboot use cases.
> >
> > [1] https://lore.kernel.org/lkml/cover.1747368092.git.afranji@google.com/#t
> > [2] https://lore.kernel.org/lkml/cover.1749672978.git.afranji@google.com/#t
>
> The question was: how was this reboot optimization intended to work for TDX? Are
> you saying that it works via intra-host migration? Like some state is migrated
> to the new TD to start it up?

The reboot optimization is not specific to TDX; it's basically just
about reusing the same physical memory for the next boot. No state is
preserved here except the mapping of guest_memfd offsets to physical
memory pages.
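
For illustration, a rough (untested) sketch of that flow: the ioctls
below are the existing guest_memfd ones from <linux/kvm.h>, while the
step that attaches an already-populated guest_memfd to the new VM
instance is what series [1] provides, so it is only marked as a
comment rather than spelled out:

/*
 * Untested sketch: the guest_memfd fd (and therefore its allocated
 * pages) outlives the VM instance, so the next boot skips freeing and
 * re-allocating guest memory.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_gmem(int vm_fd, uint64_t size)
{
	struct kvm_create_guest_memfd gmem = {
		.size = size,	/* backs all of guest memory */
	};

	return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
}

static int bind_gmem_slot(int vm_fd, int gmem_fd, uint64_t gpa, uint64_t size)
{
	struct kvm_userspace_memory_region2 region = {
		.slot			= 0,
		.flags			= KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr	= gpa,
		.memory_size		= size,
		.guest_memfd		= gmem_fd,
		.guest_memfd_offset	= 0,
		/* .userspace_addr for the shared half omitted for brevity */
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}

/*
 * Reboot: terminate the old TD (see the KVM_TDX_TERMINATE_VM sketch
 * above), close the old vm_fd, create a fresh VM, then:
 *
 *   1. attach the existing, still-populated gmem_fd to the new VM
 *      (the part series [1] provides);
 *   2. bind_gmem_slot(new_vm_fd, gmem_fd, gpa, size);
 *   3. rebuild the TD and let the guest re-accept its memory at boot.
 *
 * No S-EPT state survives; only the guest_memfd offset -> page mapping
 * is kept, which is exactly the allocation work being saved.
 */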