Message-ID: <1989278031344a14f14b2096bb018652ad6df8c2.camel@intel.com>
Date: Fri, 20 Jun 2025 23:34:05 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "Annapurve, Vishal" <vannapurve@...gle.com>
CC: "Gao, Chao" <chao.gao@...el.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "seanjc@...gle.com" <seanjc@...gle.com>,
"Huang, Kai" <kai.huang@...el.com>, "binbin.wu@...ux.intel.com"
<binbin.wu@...ux.intel.com>, "Chatre, Reinette" <reinette.chatre@...el.com>,
"Li, Xiaoyao" <xiaoyao.li@...el.com>, "Hunter, Adrian"
<adrian.hunter@...el.com>, "kirill.shutemov@...ux.intel.com"
<kirill.shutemov@...ux.intel.com>, "tony.lindgren@...ux.intel.com"
<tony.lindgren@...ux.intel.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "Zhao, Yan Y" <yan.y.zhao@...el.com>
Subject: Re: [PATCH V4 1/1] KVM: TDX: Add sub-ioctl KVM_TDX_TERMINATE_VM
On Fri, 2025-06-20 at 14:21 -0700, Vishal Annapurve wrote:
> > Sorry if I'm being dumb, but why does it do this? It saves
> > freeing/allocating the guestmemfd pages? Or the in-place data gets
> > reused somehow?
>
> The goal is just to be able to reuse the same physical memory for the
> next boot of the guest. Freeing and faulting-in the same amount of
> memory is redundant and time-consuming for large VM sizes.
Can you provide enough information to evaluate how the whole problem is being
solved? (it sounds like you have the full solution implemented?)
The problem seems to be that rebuilding a whole TD for reboot is too slow. Does
the S-EPT survive if the VM is destroyed? If not, how does keeping the pages in
guestmemfd help with re-faulting? If the S-EPT is preserved, then what happens
when the new guest re-accepts it?
>
> >
> > The series Vishal linked has some kind of SEV state transfer thing. How
> > is it intended to work for TDX?
>
> The series[1] unblocks the intra-host migration [2] and reboot use cases.
>
> [1] https://lore.kernel.org/lkml/cover.1747368092.git.afranji@google.com/#t
> [2] https://lore.kernel.org/lkml/cover.1749672978.git.afranji@google.com/#t
The question was: how was this reboot optimization intended to work for TDX? Are
you saying that it works via intra-host migration? Like some state is migrated
to the new TD to start it up?
>
> >
> > > E.g. otherwise multiple reboots would manifest as memory leaks and
> > > eventually OOM the host.
> >
> > This is in the case of future guestmemfd functionality? Or today?
This question was originally intended for Sean, but I gather from context that
the answer is in the future.
>
> Intra-host migration and guest reboot are important use cases for Google
> in supporting guest VM lifecycles.
I am not challenging the priority of the use case *at all*.