Message-ID: <CAGtprH86N7XgEXq0UyOexjVRXYV1KdOguURVOYXTnQzsTHPrJQ@mail.gmail.com>
Date: Wed, 9 Jul 2025 07:28:48 -0700
From: Vishal Annapurve <vannapurve@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Rick P Edgecombe <rick.p.edgecombe@...el.com>, "pvorel@...e.cz" <pvorel@...e.cz>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>, "catalin.marinas@....com" <catalin.marinas@....com>,
Jun Miao <jun.miao@...el.com>, "palmer@...belt.com" <palmer@...belt.com>,
"pdurrant@...zon.co.uk" <pdurrant@...zon.co.uk>, "vbabka@...e.cz" <vbabka@...e.cz>,
"peterx@...hat.com" <peterx@...hat.com>, "x86@...nel.org" <x86@...nel.org>,
"amoorthy@...gle.com" <amoorthy@...gle.com>, "tabba@...gle.com" <tabba@...gle.com>,
"quic_svaddagi@...cinc.com" <quic_svaddagi@...cinc.com>, "maz@...nel.org" <maz@...nel.org>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"anthony.yznaga@...cle.com" <anthony.yznaga@...cle.com>,
"mail@...iej.szmigiero.name" <mail@...iej.szmigiero.name>,
"quic_eberman@...cinc.com" <quic_eberman@...cinc.com>, Wei W Wang <wei.w.wang@...el.com>,
Fan Du <fan.du@...el.com>,
"Wieczor-Retman, Maciej" <maciej.wieczor-retman@...el.com>, Yan Y Zhao <yan.y.zhao@...el.com>,
"ajones@...tanamicro.com" <ajones@...tanamicro.com>, Dave Hansen <dave.hansen@...el.com>,
"paul.walmsley@...ive.com" <paul.walmsley@...ive.com>,
"quic_mnalajal@...cinc.com" <quic_mnalajal@...cinc.com>, "aik@....com" <aik@....com>,
"usama.arif@...edance.com" <usama.arif@...edance.com>, "fvdl@...gle.com" <fvdl@...gle.com>,
"jack@...e.cz" <jack@...e.cz>, "quic_cvanscha@...cinc.com" <quic_cvanscha@...cinc.com>,
Kirill Shutemov <kirill.shutemov@...el.com>, "willy@...radead.org" <willy@...radead.org>,
"steven.price@....com" <steven.price@....com>, "anup@...infault.org" <anup@...infault.org>,
"thomas.lendacky@....com" <thomas.lendacky@....com>, "keirf@...gle.com" <keirf@...gle.com>,
"mic@...ikod.net" <mic@...ikod.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "nsaenz@...zon.es" <nsaenz@...zon.es>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"oliver.upton@...ux.dev" <oliver.upton@...ux.dev>,
"binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>, "muchun.song@...ux.dev" <muchun.song@...ux.dev>,
Zhiquan1 Li <zhiquan1.li@...el.com>, "rientjes@...gle.com" <rientjes@...gle.com>,
Erdem Aktas <erdemaktas@...gle.com>, "mpe@...erman.id.au" <mpe@...erman.id.au>,
"david@...hat.com" <david@...hat.com>, "jgg@...pe.ca" <jgg@...pe.ca>, "hughd@...gle.com" <hughd@...gle.com>,
"jhubbard@...dia.com" <jhubbard@...dia.com>, Haibo1 Xu <haibo1.xu@...el.com>,
Isaku Yamahata <isaku.yamahata@...el.com>, "jthoughton@...gle.com" <jthoughton@...gle.com>,
"rppt@...nel.org" <rppt@...nel.org>, "steven.sistare@...cle.com" <steven.sistare@...cle.com>,
"jarkko@...nel.org" <jarkko@...nel.org>, "quic_pheragu@...cinc.com" <quic_pheragu@...cinc.com>,
"chenhuacai@...nel.org" <chenhuacai@...nel.org>, Kai Huang <kai.huang@...el.com>,
"shuah@...nel.org" <shuah@...nel.org>, "bfoster@...hat.com" <bfoster@...hat.com>,
"dwmw@...zon.co.uk" <dwmw@...zon.co.uk>, Chao P Peng <chao.p.peng@...el.com>,
"pankaj.gupta@....com" <pankaj.gupta@....com>, Alexander Graf <graf@...zon.com>,
"nikunj@....com" <nikunj@....com>, "viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"pbonzini@...hat.com" <pbonzini@...hat.com>, "yuzenghui@...wei.com" <yuzenghui@...wei.com>,
"jroedel@...e.de" <jroedel@...e.de>, "suzuki.poulose@....com" <suzuki.poulose@....com>,
"jgowans@...zon.com" <jgowans@...zon.com>, Yilun Xu <yilun.xu@...el.com>,
"liam.merwick@...cle.com" <liam.merwick@...cle.com>, "michael.roth@....com" <michael.roth@....com>,
"quic_tsoni@...cinc.com" <quic_tsoni@...cinc.com>, Xiaoyao Li <xiaoyao.li@...el.com>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>, Ira Weiny <ira.weiny@...el.com>,
"richard.weiyang@...il.com" <richard.weiyang@...il.com>,
"kent.overstreet@...ux.dev" <kent.overstreet@...ux.dev>, "qperret@...gle.com" <qperret@...gle.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>, "james.morse@....com" <james.morse@....com>,
"brauner@...nel.org" <brauner@...nel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "pgonda@...gle.com" <pgonda@...gle.com>,
"quic_pderrin@...cinc.com" <quic_pderrin@...cinc.com>, "roypat@...zon.co.uk" <roypat@...zon.co.uk>,
"hch@...radead.org" <hch@...radead.org>, "will@...nel.org" <will@...nel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd

On Tue, Jul 8, 2025 at 11:55 AM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Tue, Jul 08, 2025, Rick P Edgecombe wrote:
> > On Tue, 2025-07-08 at 11:03 -0700, Sean Christopherson wrote:
> > > > I think there is interest in de-coupling it?
> > >
> > > No?
> >
> > I'm talking about the intra-host migration/reboot optimization stuff. And not
> > doing a good job, sorry.
> >
> > > Even if we get to a point where multiple distinct VMs can bind to a single
> > > guest_memfd, e.g. for inter-VM shared memory, there will still need to be a
> > > sole
> > > owner of the memory. AFAICT, fully decoupling guest_memfd from a VM would add
> > > non-trivial complexity for zero practical benefit.
> >
> > I'm talking about moving a gmem fd between different VMs or something using
> > KVM_LINK_GUEST_MEMFD [0]. Not advocating to try to support it. But trying to
> > feel out where the concepts are headed. It kind of allows gmem fds (or just
> > their source memory?) to live beyond a VM lifecycle.
>
> I think the answer is that we want to let guest_memfd live beyond the "struct kvm"
> instance, but not beyond the Virtual Machine. From a past discussion on this topic[*].
>
> : No go. Because again, the inode (physical memory) is coupled to the virtual machine
> : as a thing, not to a "struct kvm". Or more concretely, the inode is coupled to an
> : ASID or an HKID, and there can be multiple "struct kvm" objects associated with a
> : single ASID. And at some point in the future, I suspect we'll have multiple KVM
> : objects per HKID too.
> :
> : The current SEV use case is for the migration helper, where two KVM objects share
> : a single ASID (the "real" VM and the helper). I suspect TDX will end up with
> : similar behavior where helper "VMs" can use the HKID of the "real" VM. For KVM,
> : that means multiple struct kvm objects being associated with a single HKID.
> :
> : To prevent use-after-free, KVM "just" needs to ensure the helper instances can't
> : outlive the real instance, i.e. can't use the HKID/ASID after the owning virtual
> : machine has been destroyed.
> :
> : To put it differently, "struct kvm" is a KVM software construct that _usually_,
> : but not always, is associated 1:1 with a virtual machine.
> :
> : And FWIW, stashing the pointer without holding a reference would not be a complete
> : solution, because it couldn't guard against KVM reusing a pointer. E.g. if a
> : struct kvm was unbound and then freed, KVM could reuse the same memory for a new
> : struct kvm, with a different ASID/HKID, and get a false negative on the rebinding
> : check.
>
> Exactly what that will look like in code is TBD, but the concept/logic holds up.

I think we can simplify the role of guest_memfd in line with discussion [1]:
1) guest_memfd is a memory provider for userspace, KVM, IOMMU.
       - It allows fallocate() to populate/deallocate memory (a small
         userspace example follows this list).
2) guest_memfd supports the notion of private/shared faults.
3) guest_memfd supports memory access control:
- It allows shared faults from userspace, KVM, IOMMU
- It allows private faults from KVM, IOMMU
4) guest_memfd supports changing access control on its ranges between
shared/private.
       - It notifies its users to invalidate their mappings for the
         ranges being converted/truncated.
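
To make role (1) concrete, here is a minimal userspace sketch (not from
this series; it assumes a kernel with the upstream KVM_CREATE_GUEST_MEMFD
ioctl and guest_memfd fallocate support, and it omits error handling for
the fallocate() calls):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/falloc.h>
#include <linux/kvm.h>

/* Create a guest_memfd, populate a 2M range, then deallocate it again. */
static int demo_gmem_populate(int vm_fd, __u64 size)
{
        struct kvm_create_guest_memfd args = { .size = size };
        int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);

        if (gmem_fd < 0)
                return gmem_fd;

        /* Populate the first 2M of the file... */
        fallocate(gmem_fd, 0, 0, 2UL << 20);

        /* ...and deallocate (truncate) the same range again. */
        fallocate(gmem_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  0, 2UL << 20);

        return gmem_fd;
}
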
Responsibilities that ideally should not be taken up by guest_memfd:
1) guest_memfd cannot initiate pre-faulting on behalf of its users.
2) guest_memfd should not communicate directly with the underlying
architecture layers.
       - All communication should go via KVM/IOMMU.
3) KVM should ideally associate the lifetime of backing
pagetables/protection tables/RMP tables with the lifetime of the
binding of memslots with guest_memfd.
       - Today, KVM's SNP logic ties RMP table entry lifetimes to how
         long the folios are mapped in guest_memfd, which I think should
         be revisited (see the hypothetical sketch after this list).
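
As a purely hypothetical, non-upstream sketch of point (3), the arch-level
state could be keyed off the memslot<->guest_memfd binding rather than off
individual folio mappings; all names below are made up for illustration:

struct kvm_gmem_binding {
        struct kvm *kvm;
        struct kvm_memory_slot *slot;
        pgoff_t start;                  /* bound range within the guest_memfd */
        pgoff_t nr_pages;
};

/*
 * Set up arch state when a memslot is bound to a guest_memfd range;
 * RMP/protection table entries could then be created lazily on fault.
 */
int kvm_gmem_arch_bind(struct kvm_gmem_binding *b);

/*
 * Tear down arch state for the whole bound range when the memslot is
 * unbound or the VM is destroyed, instead of when individual folios are
 * unmapped/truncated from the file.
 */
void kvm_gmem_arch_unbind(struct kvm_gmem_binding *b);
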
Some very early thoughts on how guest_memfd could be laid out for the long term:
1) guest_memfd code should ideally be built into the kernel.
2) guest_memfd instances should still be created using KVM ioctls that
carry specific capabilities/restrictions for their users based on the
backing VM/arch.
3) Any outgoing communication from guest_memfd to its users
(userspace/KVM/IOMMU) should happen via invalidation notifiers, similar
to how MMU notifiers work.
4) KVM and IOMMU can implement intermediate layers to handle
interaction with guest_memfd (a rough sketch follows this list).
       - e.g. there could be a layer within KVM that handles:
             - creating guest_memfd files and associating a
               kvm_gmem_context with those files.
             - memslot binding
                  - kvm_gmem_context will be used to bind KVM
                    memslots with the context ranges.
             - invalidate notifier handling
                  - kvm_gmem_context will be used to intercept
                    guest_memfd callbacks and translate them to the
                    right GPA ranges.
             - linking
                  - kvm_gmem_context can be linked to different
                    KVM instances.
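
As a very rough illustration of point (4), the KVM-side layer could look
something like the below. All structure/function names are hypothetical
and nothing here exists upstream; kvm_zap_gfn_range() is used loosely to
stand in for whatever arch-specific zap helper applies:

struct kvm_gmem_context {
        struct file *gmem_file;         /* guest_memfd backing this context */
        struct kvm *kvm;                /* currently linked KVM instance */
        struct kvm_memory_slot *slot;   /* memslot bound to this context */
        pgoff_t file_offset;            /* start of the bound file range */
        struct list_head ctx_list;      /* contexts sharing one guest_memfd */
};

/* Invoked via guest_memfd's invalidation notifier for a file range. */
static void kvm_gmem_ctx_invalidate(struct kvm_gmem_context *ctx,
                                    pgoff_t start, pgoff_t end)
{
        gfn_t gfn_start = ctx->slot->base_gfn + (start - ctx->file_offset);
        gfn_t gfn_end = ctx->slot->base_gfn + (end - ctx->file_offset);

        /* Zap the stage-2/EPT mappings for the affected GPA range. */
        kvm_zap_gfn_range(ctx->kvm, gfn_start, gfn_end);
}

/*
 * Re-link the context (and thus the memory) to a different KVM instance,
 * e.g. for intra-host migration.
 */
int kvm_gmem_ctx_link(struct kvm_gmem_context *ctx, struct kvm *new_kvm);
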
This line of thinking can allow a cleaner separation between
guest_memfd, KVM and the IOMMU [2].

[1] https://lore.kernel.org/lkml/CAGtprH-+gPN8J_RaEit=M_ErHWTmFHeCipC6viT6PHhG3ELg6A@mail.gmail.com/#t
[2] https://lore.kernel.org/lkml/31beeed3-b1be-439b-8a5b-db8c06dadc30@amd.com/
>
> [*] https://lore.kernel.org/all/ZOO782YGRY0YMuPu@google.com
>
> > [0] https://lore.kernel.org/all/cover.1747368092.git.afranji@google.com/
> > https://lore.kernel.org/kvm/cover.1749672978.git.afranji@google.com/