Message-ID: <aNVMIRels8iCldOj@google.com>
Date: Thu, 25 Sep 2025 07:05:21 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Shivank Garg <shivankg@....com>
Cc: willy@...radead.org, akpm@...ux-foundation.org, david@...hat.com,
pbonzini@...hat.com, shuah@...nel.org, vbabka@...e.cz, brauner@...nel.org,
viro@...iv.linux.org.uk, dsterba@...e.com, xiang@...nel.org, chao@...nel.org,
jaegeuk@...nel.org, clm@...com, josef@...icpanda.com,
kent.overstreet@...ux.dev, zbestahu@...il.com, jefflexu@...ux.alibaba.com,
dhavale@...gle.com, lihongbo22@...wei.com, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, rppt@...nel.org, surenb@...gle.com, mhocko@...e.com,
ziy@...dia.com, matthew.brost@...el.com, joshua.hahnjy@...il.com,
rakie.kim@...com, byungchul@...com, gourry@...rry.net,
ying.huang@...ux.alibaba.com, apopple@...dia.com, tabba@...gle.com,
ackerleytng@...gle.com, paul@...l-moore.com, jmorris@...ei.org,
serge@...lyn.com, pvorel@...e.cz, bfoster@...hat.com, vannapurve@...gle.com,
chao.gao@...el.com, bharata@....com, nikunj@....com, michael.day@....com,
shdhiman@....com, yan.y.zhao@...el.com, Neeraj.Upadhyay@....com,
thomas.lendacky@....com, michael.roth@....com, aik@....com, jgg@...dia.com,
kalyazin@...zon.com, peterx@...hat.com, jack@...e.cz, hch@...radead.org,
cgzones@...glemail.com, ira.weiny@...el.com, rientjes@...gle.com,
roypat@...zon.co.uk, chao.p.peng@...el.com, amit@...radead.org,
ddutile@...hat.com, dan.j.williams@...el.com, ashish.kalra@....com,
gshan@...hat.com, jgowans@...zon.com, pankaj.gupta@....com, papaluri@....com,
yuzhao@...gle.com, suzuki.poulose@....com, quic_eberman@...cinc.com,
linux-bcachefs@...r.kernel.org, linux-btrfs@...r.kernel.org,
linux-erofs@...ts.ozlabs.org, linux-f2fs-devel@...ts.sourceforge.net,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-security-module@...r.kernel.org,
kvm@...r.kernel.org, linux-kselftest@...r.kernel.org,
linux-coco@...ts.linux.dev
Subject: Re: [PATCH kvm-next V11 5/7] KVM: guest_memfd: Add slab-allocated
inode cache
On Wed, Aug 27, 2025, Shivank Garg wrote:
> Add dedicated inode structure (kvm_gmem_inode_info) and slab-allocated
> inode cache for guest memory backing, similar to how shmem handles inodes.
>
> This adds the necessary allocation/destruction functions and prepares
> for upcoming guest_memfd NUMA policy support changes.
>
> Signed-off-by: Shivank Garg <shivankg@....com>
> ---
> virt/kvm/guest_memfd.c | 70 ++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 68 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 6c66a0974055..356947d36a47 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -17,6 +17,15 @@ struct kvm_gmem {
>  	struct list_head entry;
>  };
>
> +struct kvm_gmem_inode_info {
What about naming this simply gmem_inode?
> +	struct inode vfs_inode;
> +};
> +
> +static inline struct kvm_gmem_inode_info *KVM_GMEM_I(struct inode *inode)
And then GMEM_I()?
And then (in a later follow-up if we target this for 6.18, or as a prep patch if
we push this out to 6.19), rename kvm_gmem to gmem_file?
That would make guest_memfd look a bit more like other filesystems, and I don't
see a need to preface the local structures and helpers with "kvm_", e.g. GMEM_I()
is analogous to x86's to_vmx() and to_svm().
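For reference, the accessor in question boils down to the usual container_of()
pattern; a minimal sketch using the proposed (not yet final) names:

static inline struct gmem_inode *GMEM_I(struct inode *inode)
{
	return container_of(inode, struct gmem_inode, vfs_inode);
}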
As for renaming kvm_gmem => gmem_file, I wandered back into this code via Ackerley's
in-place conversion series, and it took me a good long while to remember the roles
of files vs. inodes in gmem. That's probably a sign that the code needs clarification
given that I wrote the original code. :-)
Leveraging an old discussion[*], my thought is to get to this:
/*
 * A guest_memfd instance can be associated with multiple VMs, each with its
 * own "view" of the underlying physical memory.
 *
 * The gmem's inode is effectively the raw underlying physical storage, and is
 * used to track properties of the physical memory, while each gmem file is
 * effectively a single VM's view of that storage, and is used to track assets
 * specific to its associated VM, e.g. memslots=>gmem bindings.
 */
struct gmem_file {
	struct kvm *kvm;
	struct xarray bindings;
	struct list_head entry;
};

struct gmem_inode {
	struct shared_policy policy;
	struct inode vfs_inode;
};
[*] https://lore.kernel.org/all/ZLGiEfJZTyl7M8mS@google.com
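For completeness, a rough sketch of how the slab-backed inode cache from this
patch could hook up under the proposed names. The helper names below are
illustrative, and details like per-inode policy setup/teardown are simplified
relative to whatever the real patch does (shmem, for comparison, initializes
the shared policy when the inode is instantiated, not in alloc_inode):

static struct kmem_cache *gmem_inode_cachep;

static void gmem_init_inode_once(void *foo)
{
	/* One-time init of each slab object, a la shmem's init_once. */
	inode_init_once(&((struct gmem_inode *)foo)->vfs_inode);
}

static int __init gmem_init_inodecache(void)
{
	gmem_inode_cachep = kmem_cache_create("gmem_inode_cache",
					      sizeof(struct gmem_inode), 0,
					      SLAB_ACCOUNT,
					      gmem_init_inode_once);
	return gmem_inode_cachep ? 0 : -ENOMEM;
}

static struct inode *gmem_alloc_inode(struct super_block *sb)
{
	struct gmem_inode *gi;

	gi = alloc_inode_sb(sb, gmem_inode_cachep, GFP_KERNEL);
	if (!gi)
		return NULL;

	/* NUMA policy support lands later in this series. */
	mpol_shared_policy_init(&gi->policy, NULL);
	return &gi->vfs_inode;
}

static void gmem_free_inode(struct inode *inode)
{
	/* Policy teardown (mpol_free_shared_policy()) belongs in eviction. */
	kmem_cache_free(gmem_inode_cachep, GMEM_I(inode));
}

static const struct super_operations gmem_super_ops = {
	.statfs		= simple_statfs,
	.alloc_inode	= gmem_alloc_inode,
	.free_inode	= gmem_free_inode,
};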