Message-ID: <CAGtprH9v=bw2q7ogo0Z46icsVWMUhm1ryyxdRFuiMkcGgxrw2w@mail.gmail.com>
Date: Wed, 9 Jul 2025 20:39:36 -0700
From: Vishal Annapurve <vannapurve@...gle.com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
Cc: "seanjc@...gle.com" <seanjc@...gle.com>, "pvorel@...e.cz" <pvorel@...e.cz>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>, "catalin.marinas@....com" <catalin.marinas@....com>,
"Miao, Jun" <jun.miao@...el.com>, "palmer@...belt.com" <palmer@...belt.com>,
"pdurrant@...zon.co.uk" <pdurrant@...zon.co.uk>, "vbabka@...e.cz" <vbabka@...e.cz>,
"peterx@...hat.com" <peterx@...hat.com>, "x86@...nel.org" <x86@...nel.org>,
"amoorthy@...gle.com" <amoorthy@...gle.com>, "tabba@...gle.com" <tabba@...gle.com>,
"maz@...nel.org" <maz@...nel.org>, "quic_svaddagi@...cinc.com" <quic_svaddagi@...cinc.com>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"anthony.yznaga@...cle.com" <anthony.yznaga@...cle.com>, "jack@...e.cz" <jack@...e.cz>,
"mail@...iej.szmigiero.name" <mail@...iej.szmigiero.name>,
"quic_eberman@...cinc.com" <quic_eberman@...cinc.com>, "Wang, Wei W" <wei.w.wang@...el.com>,
"keirf@...gle.com" <keirf@...gle.com>,
"Wieczor-Retman, Maciej" <maciej.wieczor-retman@...el.com>, "Zhao, Yan Y" <yan.y.zhao@...el.com>,
"ajones@...tanamicro.com" <ajones@...tanamicro.com>, "willy@...radead.org" <willy@...radead.org>,
"paul.walmsley@...ive.com" <paul.walmsley@...ive.com>, "Hansen, Dave" <dave.hansen@...el.com>,
"aik@....com" <aik@....com>, "usama.arif@...edance.com" <usama.arif@...edance.com>,
"quic_mnalajal@...cinc.com" <quic_mnalajal@...cinc.com>, "fvdl@...gle.com" <fvdl@...gle.com>,
"rppt@...nel.org" <rppt@...nel.org>, "quic_cvanscha@...cinc.com" <quic_cvanscha@...cinc.com>,
"nsaenz@...zon.es" <nsaenz@...zon.es>, "anup@...infault.org" <anup@...infault.org>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "mic@...ikod.net" <mic@...ikod.net>,
"oliver.upton@...ux.dev" <oliver.upton@...ux.dev>, "Du, Fan" <fan.du@...el.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, "steven.price@....com" <steven.price@....com>,
"binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>, "muchun.song@...ux.dev" <muchun.song@...ux.dev>,
"Li, Zhiquan1" <zhiquan1.li@...el.com>, "rientjes@...gle.com" <rientjes@...gle.com>,
"Aktas, Erdem" <erdemaktas@...gle.com>, "mpe@...erman.id.au" <mpe@...erman.id.au>,
"david@...hat.com" <david@...hat.com>, "jgg@...pe.ca" <jgg@...pe.ca>, "hughd@...gle.com" <hughd@...gle.com>,
"jhubbard@...dia.com" <jhubbard@...dia.com>, "Xu, Haibo1" <haibo1.xu@...el.com>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>, "jthoughton@...gle.com" <jthoughton@...gle.com>,
"steven.sistare@...cle.com" <steven.sistare@...cle.com>,
"quic_pheragu@...cinc.com" <quic_pheragu@...cinc.com>, "jarkko@...nel.org" <jarkko@...nel.org>,
"Shutemov, Kirill" <kirill.shutemov@...el.com>, "chenhuacai@...nel.org" <chenhuacai@...nel.org>,
"Huang, Kai" <kai.huang@...el.com>, "shuah@...nel.org" <shuah@...nel.org>,
"bfoster@...hat.com" <bfoster@...hat.com>, "dwmw@...zon.co.uk" <dwmw@...zon.co.uk>,
"Peng, Chao P" <chao.p.peng@...el.com>, "pankaj.gupta@....com" <pankaj.gupta@....com>,
"Graf, Alexander" <graf@...zon.com>, "nikunj@....com" <nikunj@....com>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>, "pbonzini@...hat.com" <pbonzini@...hat.com>,
"yuzenghui@...wei.com" <yuzenghui@...wei.com>, "jroedel@...e.de" <jroedel@...e.de>,
"suzuki.poulose@....com" <suzuki.poulose@....com>, "jgowans@...zon.com" <jgowans@...zon.com>,
"Xu, Yilun" <yilun.xu@...el.com>, "liam.merwick@...cle.com" <liam.merwick@...cle.com>,
"michael.roth@....com" <michael.roth@....com>, "quic_tsoni@...cinc.com" <quic_tsoni@...cinc.com>,
"Li, Xiaoyao" <xiaoyao.li@...el.com>, "aou@...s.berkeley.edu" <aou@...s.berkeley.edu>,
"Weiny, Ira" <ira.weiny@...el.com>,
"richard.weiyang@...il.com" <richard.weiyang@...il.com>,
"kent.overstreet@...ux.dev" <kent.overstreet@...ux.dev>, "qperret@...gle.com" <qperret@...gle.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>, "james.morse@....com" <james.morse@....com>,
"brauner@...nel.org" <brauner@...nel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "pgonda@...gle.com" <pgonda@...gle.com>,
"quic_pderrin@...cinc.com" <quic_pderrin@...cinc.com>, "roypat@...zon.co.uk" <roypat@...zon.co.uk>,
"hch@...radead.org" <hch@...radead.org>, "will@...nel.org" <will@...nel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd
On Wed, Jul 9, 2025 at 8:17 AM Edgecombe, Rick P
<rick.p.edgecombe@...el.com> wrote:
>
> On Wed, 2025-07-09 at 07:28 -0700, Vishal Annapurve wrote:
> > I think we can simplify the role of guest_memfd in line with discussion [1]:
> > 1) guest_memfd is a memory provider for userspace, KVM, IOMMU.
> > - It allows fallocate to populate/deallocate memory
> > 2) guest_memfd supports the notion of private/shared faults.
> > 3) guest_memfd supports memory access control:
> > - It allows shared faults from userspace, KVM, IOMMU
> > - It allows private faults from KVM, IOMMU
> > 4) guest_memfd supports changing access control on its ranges between
> > shared/private.
> > - It notifies the users to invalidate their mappings for the
> > ranges getting converted/truncated.
>
> KVM needs to know if a GFN is private/shared. I think it is also intended to now
> be a repository for this information, right? Besides invalidations, it needs to
> be queryable.
Yeah, that interface can be added as well. Though, if possible, KVM
could just pass the fault type directly to guest_memfd, and
guest_memfd would return an error if the fault type doesn't match the
current permission. Additionally, KVM does query the mapping order
for a given pfn/gfn, which will need to be supported as well.
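Very roughly, the kind of interface I'm imagining for that -- every
name below is hypothetical, just a sketch to discuss against, not a
proposal for an actual API:

enum gmem_fault_kind {
	GMEM_FAULT_SHARED,
	GMEM_FAULT_PRIVATE,
};

/*
 * Hypothetical sketch: the faulting user (KVM or IOMMU) says what
 * kind of fault it is resolving. guest_memfd checks that against the
 * current shared/private state of the range and either fails with
 * e.g. -EACCES, or hands back the pfn plus the largest mapping order
 * it can back that index with.
 */
int guest_memfd_get_pfn(struct file *file, pgoff_t index,
			enum gmem_fault_kind kind,
			kvm_pfn_t *pfn, int *max_order);

That would keep the shared/private state inside guest_memfd, and the
mapping order query could either piggyback on the same call or stay a
separate helper.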
>
> >
> > Responsibilities that ideally should not be taken up by guest_memfd:
> > 1) guest_memfd cannot initiate pre-faulting on behalf of its users.
> > 2) guest_memfd should not be directly communicating with the
> > underlying architecture layers.
> > - All communication should go via KVM/IOMMU.
>
> Maybe stronger, there should be generic gmem behaviors. Not any special
> if (vm_type == tdx) type logic.
>
> > 3) KVM should ideally associate the lifetime of backing
> > pagetables/protection tables/RMP tables with the lifetime of the
> > binding of memslots with guest_memfd.
> > - Today KVM SNP logic ties RMP table entry lifetimes with how
> > long the folios are mapped in guest_memfd, which I think should be
> > revisited.
>
> I don't understand the problem. KVM needs to respond to user accessible
> invalidations, but how long it keeps other resources around could be useful for
> various optimizations. Like deferring work to a work queue or something.
I don't think it could be deferred to a work queue, as the RMP table
entries will need to be removed synchronously once the last reference
on the guest_memfd drops, unless the memory itself is kept around
after filemap eviction. I can see benefits of keeping the memory
around like that for handling scenarios like intra-host migration.
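If I remember the current code correctly, the SNP side is roughly the
following (simplified from memory, so treat the details as
approximate):

/* Roughly what the current SNP handling does, simplified. */
static void kvm_gmem_free_folio(struct folio *folio)
{
	kvm_pfn_t pfn = page_to_pfn(folio_page(folio, 0));
	int order = folio_order(folio);

	/*
	 * The RMP entries covering this folio must be cleared before
	 * the memory goes back to the page allocator, so this has to
	 * run synchronously in the freeing path -- it can't be punted
	 * to a work queue unless the folio itself is held onto longer.
	 */
	kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
}

i.e. the RMP entry lifetime is tied to the folio lifetime, which is
the coupling I'd like to see revisited.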
>
> I think it would help to just target the Ackerley series goals. We should get
> that code into shape and this kind of stuff will fall out of it.
>
> >
> > Some very early thoughts on how guest_memfd could be laid out for the long term:
> > 1) guest_memfd code ideally should be built-in to the kernel.
> > 2) guest_memfd instances should still be created using KVM IOCTLs that
> > carry specific capabilities/restrictions for its users based on the
> > backing VM/arch.
> > 3) Any outgoing communication from guest_memfd to its users like
> > userspace/KVM/IOMMU should be via invalidation notifiers, similar to
> > how MMU notifiers work.
> > 4) KVM and IOMMU can implement intermediate layers to handle
> > interaction with guest_memfd.
> > - e.g. there could be a layer within kvm that handles:
> > - creating guest_memfd files and associating a
> > kvm_gmem_context with those files.
> > - memslot binding
> > - kvm_gmem_context will be used to bind kvm
> > memslots with the context ranges.
> > - invalidate notifier handling
> > - kvm_gmem_context will be used to intercept
> > guest_memfd callbacks and
> > translate them to the right GPA ranges.
> > - linking
> > - kvm_gmem_context can be linked to different
> > KVM instances.
>
> We can probably look at the code to decide these.
>
Agree.
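To make the layering a bit more concrete, the kind of shape I have in
mind for the KVM-side layer -- again, every name below is
hypothetical, just a sketch to look at:

struct guest_memfd_notifier;

struct guest_memfd_notifier_ops {
	/* Outgoing call from guest_memfd on conversion/truncation. */
	void (*invalidate)(struct guest_memfd_notifier *notifier,
			   pgoff_t start, pgoff_t end);
};

struct guest_memfd_notifier {
	const struct guest_memfd_notifier_ops *ops;
	struct list_head list;		/* on the guest_memfd instance */
};

/* KVM-internal layer between guest_memfd and the rest of KVM. */
struct kvm_gmem_context {
	struct guest_memfd_notifier notifier;
	struct kvm *kvm;		/* could be relinked for intra-host migration */
	struct list_head bindings;	/* memslot <-> gmem offset range bindings */
};

/*
 * Would translate the invalidated gmem offsets to GPA ranges via the
 * memslot bindings and zap the affected mappings in this KVM instance.
 */
static void kvm_gmem_context_invalidate(struct guest_memfd_notifier *notifier,
					pgoff_t start, pgoff_t end);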