Message-ID: <20250609191340.2051741-10-kirill.shutemov@linux.intel.com>
Date: Mon, 9 Jun 2025 22:13:37 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To: pbonzini@...hat.com,
seanjc@...gle.com,
dave.hansen@...ux.intel.com
Cc: rick.p.edgecombe@...el.com,
isaku.yamahata@...el.com,
kai.huang@...el.com,
yan.y.zhao@...el.com,
chao.gao@...el.com,
tglx@...utronix.de,
mingo@...hat.com,
bp@...en8.de,
kvm@...r.kernel.org,
x86@...nel.org,
linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [PATCHv2 09/12] KVM: TDX: Reclaim PAMT memory
The PAMT memory holds metadata for TDX-protected memory. With Dynamic
PAMT, PAMT_4K is allocated on demand: the kernel supplies the TDX module
with a few pages that cover 2M of host physical memory.
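For context, a minimal standalone sketch of the refcounting idea behind
on-demand PAMT_4K backing. The names (pamt_get(), pamt_put(), the per-2M
counter array) and the SEAMCALL placeholders are illustrative assumptions,
not the series' actual helpers, and the locking the real code needs around
the 0 <-> 1 transitions is elided:

/*
 * Illustrative sketch only: one refcount per 2M of host memory decides
 * when PAMT_4K pages are handed to the TDX module and when they can be
 * reclaimed. Helper names are hypothetical; real code must also
 * serialize concurrent 0 <-> 1 transitions of the counter.
 */
#include <stdatomic.h>
#include <stdint.h>

#define PMD_SHIFT	21		/* 2M granularity */
#define NR_2M_RANGES	1024		/* covers 2G in this toy example */

static atomic_int pamt_refcounts[NR_2M_RANGES];

static atomic_int *pamt_refcount(uint64_t hpa)
{
	return &pamt_refcounts[hpa >> PMD_SHIFT];
}

/* First user of the 2M range: hand PAMT_4K pages to the TDX module. */
static void pamt_get(uint64_t hpa)
{
	if (atomic_fetch_add(pamt_refcount(hpa), 1) == 0) {
		/* allocate pages + TDH.PHYMEM.PAMT.ADD-style call here */
	}
}

/* Last user gone: take the PAMT_4K pages back and free them. */
static void pamt_put(uint64_t hpa)
{
	if (atomic_fetch_sub(pamt_refcount(hpa), 1) == 1) {
		/* TDH.PHYMEM.PAMT.REMOVE-style call + page freeing here */
	}
}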
PAMT memory can be reclaimed when its last user is gone. This can happen
in a few code paths:
- On TDH.PHYMEM.PAGE.RECLAIM in tdx_reclaim_td_control_pages() and
tdx_reclaim_page().
- On TDH.MEM.PAGE.REMOVE in tdx_sept_drop_private_spte().
- In tdx_sept_zap_private_spte() for pages that were queued to be added
  with TDH.MEM.PAGE.ADD, but never were due to an error.
- In tdx_sept_free_private_spt() for SEPT pages.
Use tdx_pamt_put() for memory that comes from guest_memfd and
tdx_free_page() for the rest.
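The guest_memfd hunks below share one ordering: take the page away from
the TDX module first, scrub it, and only then drop its PAMT backing. A
condensed, illustrative rendering of that ordering (not the literal KVM
code) looks like:

/* Illustrative only: the reclaim-then-put ordering the hunks follow. */
static int reclaim_guest_page_sketch(struct kvm *kvm, struct page *page,
				     enum pg_level level)
{
	/*
	 * 1. Take the page away from the TDX module first; this stands in
	 *    for whichever SEAMCALL the path uses (TDH.MEM.PAGE.REMOVE or
	 *    TDH.PHYMEM.PAGE.RECLAIM).
	 */
	if (tdx_reclaim_page(page))
		return -EIO;

	/* 2. ...scrub its contents... */
	tdx_clear_page(page);

	/* 3. ...then release the on-demand PAMT backing... */
	tdx_pamt_put(page, level);

	/* 4. ...and finally drop the pin on the guest_memfd page. */
	tdx_unpin(kvm, page);
	return 0;
}

Control pages do not come from guest_memfd; they go through
tdx_free_page() instead, where, per the split described above, their PAMT
release is expected to happen.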
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
---
arch/x86/kvm/vmx/tdx.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index bc9bc393f866..0aed7e73cd6b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -353,7 +353,7 @@ static void tdx_reclaim_control_page(struct page *ctrl_page)
if (tdx_reclaim_page(ctrl_page))
return;
- __free_page(ctrl_page);
+ tdx_free_page(ctrl_page);
}
struct tdx_flush_vp_arg {
@@ -584,7 +584,7 @@ static void tdx_reclaim_td_control_pages(struct kvm *kvm)
}
tdx_clear_page(kvm_tdx->td.tdr_page);
- __free_page(kvm_tdx->td.tdr_page);
+ tdx_free_page(kvm_tdx->td.tdr_page);
kvm_tdx->td.tdr_page = NULL;
}
@@ -1635,6 +1635,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
return -EIO;
}
tdx_clear_page(page);
+ tdx_pamt_put(page, level);
tdx_unpin(kvm, page);
return 0;
}
@@ -1724,6 +1725,7 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
if (tdx_is_sept_zap_err_due_to_premap(kvm_tdx, err, entry, level) &&
!KVM_BUG_ON(!atomic64_read(&kvm_tdx->nr_premapped), kvm)) {
atomic64_dec(&kvm_tdx->nr_premapped);
+ tdx_pamt_put(page, level);
tdx_unpin(kvm, page);
return 0;
}
@@ -1788,6 +1790,8 @@ int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
enum pg_level level, void *private_spt)
{
struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+ struct page *page = virt_to_page(private_spt);
+ int ret;
/*
* free_external_spt() is only called after hkid is freed when TD is
@@ -1804,7 +1808,12 @@ int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
* The HKID assigned to this TD was already freed and cache was
* already flushed. We don't have to flush again.
*/
- return tdx_reclaim_page(virt_to_page(private_spt));
+ ret = tdx_reclaim_page(virt_to_page(private_spt));
+ if (ret)
+ return ret;
+
+ tdx_pamt_put(page, PG_LEVEL_4K);
+ return 0;
}
int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
--
2.47.2