Message-ID: <2ac8bb8e-c05b-4dc3-a2c1-43e8b936e8f3@intel.com>
Date: Tue, 18 Mar 2025 17:41:26 +0200
From: Adrian Hunter <adrian.hunter@...el.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
<pbonzini@...hat.com>
CC: <seanjc@...gle.com>, <kvm@...r.kernel.org>, <rick.p.edgecombe@...el.com>,
<kai.huang@...el.com>, <reinette.chatre@...el.com>, <xiaoyao.li@...el.com>,
<tony.lindgren@...ux.intel.com>, <binbin.wu@...ux.intel.com>,
<isaku.yamahata@...el.com>, <linux-kernel@...r.kernel.org>,
<yan.y.zhao@...el.com>, <chao.gao@...el.com>
Subject: Re: [PATCH RFC] KVM: TDX: Defer guest memory removal to decrease
shutdown time

On 17/03/25 10:13, Kirill A. Shutemov wrote:
> On Thu, Mar 13, 2025 at 08:16:29PM +0200, Adrian Hunter wrote:
>> @@ -3221,6 +3241,19 @@ int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
>> return PG_LEVEL_4K;
>> }
>>
>> +int tdx_gmem_defer_removal(struct kvm *kvm, struct inode *inode)
>> +{
>> + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
>> +
>> + if (kvm_tdx->nr_gmem_inodes >= TDX_MAX_GMEM_INODES)
>> + return 0;
>
> We have a graceful way to handle this, but should we pr_warn_once() or
> something if we ever hit this limit?
>
> Hm. It is also a bit odd that we need to wait until removal to add a link
> to guest_memfd inode from struct kvm/kvm_tdx. Can we do it right away in
> __kvm_gmem_create()?

Sure.

The thing is, the inode is currently private within virt/kvm/guest_memfd.c,
so there needs to be a way to make it accessible to arch code. Either a
callback passes it, or it is put on struct kvm in some way.
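
Just to illustrate what I mean (untested, and kvm_arch_gmem_inode_created()
is a made-up hook, not an existing one): __kvm_gmem_create() could hand the
new inode to arch code, which records it immediately instead of at removal
time:

/*
 * Hypothetical arch hook, called from __kvm_gmem_create() once the inode
 * exists.  Generic code would provide a __weak no-op; on x86 this would
 * really dispatch through the vendor module, flattened here for brevity.
 */
void __weak kvm_arch_gmem_inode_created(struct kvm *kvm, struct inode *inode)
{
}

/* TDX side of the sketch (locking omitted) */
void kvm_arch_gmem_inode_created(struct kvm *kvm, struct inode *inode)
{
	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);

	if (kvm_tdx->nr_gmem_inodes >= TDX_MAX_GMEM_INODES) {
		pr_warn_once("Too many gmem inodes, removal not deferred\n");
		return;
	}
	ihold(inode);	/* extra reference, dropped once removal is done */
	kvm_tdx->gmem_inodes[kvm_tdx->nr_gmem_inodes++] = inode;
}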
>
> Do I read correctly that inode->i_mapping->i_private_list only ever has a
> single entry, the gmem? Seems wasteful.

Yes, it is presently used for only one gmem.
>
> Maybe move it to i_private (I don't see flags being used anywhere) and
> re-use the list_head to link all inodes of the struct kvm?
>
> No need for the gmem_inodes array.

There is also inode->i_mapping->i_private_data, which is unused.
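
Something like this, perhaps (untested sketch; gmem_inode_list and the
helpers are made up, and it assumes the single gmem moves to i_private_data
so that i_private_list is free to act as the link):

/*
 * Assumed layout, not what guest_memfd does today:
 *   inode->i_mapping->i_private_data : the single gmem
 *   inode->i_mapping->i_private_list : link into a per-VM inode list
 */
static void tdx_gmem_track_inode(struct kvm *kvm, struct inode *inode)
{
	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);

	ihold(inode);
	/* gmem_inode_list would be a new list_head on struct kvm_tdx */
	list_add_tail(&inode->i_mapping->i_private_list,
		      &kvm_tdx->gmem_inode_list);
}

/* At shutdown, walk the list instead of a fixed-size gmem_inodes array */
static void tdx_gmem_put_inodes(struct kvm_tdx *kvm_tdx)
{
	struct address_space *mapping, *tmp;

	list_for_each_entry_safe(mapping, tmp, &kvm_tdx->gmem_inode_list,
				 i_private_list) {
		list_del_init(&mapping->i_private_list);
		iput(mapping->host);	/* drop the reference from ihold() */
	}
}

That would also drop the TDX_MAX_GMEM_INODES limit entirely.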