Message-ID: <20250918232224.2202592-10-rick.p.edgecombe@intel.com>
Date: Thu, 18 Sep 2025 16:22:17 -0700
From: Rick Edgecombe <rick.p.edgecombe@...el.com>
To: kas@...nel.org,
bp@...en8.de,
chao.gao@...el.com,
dave.hansen@...ux.intel.com,
isaku.yamahata@...el.com,
kai.huang@...el.com,
kvm@...r.kernel.org,
linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org,
mingo@...hat.com,
pbonzini@...hat.com,
seanjc@...gle.com,
tglx@...utronix.de,
x86@...nel.org,
yan.y.zhao@...el.com,
vannapurve@...gle.com
Cc: rick.p.edgecombe@...el.com,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [PATCH v3 09/16] KVM: TDX: Allocate PAMT memory for TD control structures
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
TDX TD control structures are provided to the TDX module as 4KB pages and
require PAMT backing. This means that with Dynamic PAMT they also need
4KB PAMT backings installed.

Previous changes introduced tdx_alloc_page()/tdx_free_page(), which
allocate/free a page and automatically handle the DPAMT maintenance. Use
them for the TD control structures instead of alloc_page()/__free_page().
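For context, a minimal sketch of what such wrappers could look like (the
tdx_pamt_get()/tdx_pamt_put() helper names and exact error handling are
assumptions for illustration, not taken verbatim from this series):

	/*
	 * Sketch only: allocate a 4KB page and install the Dynamic PAMT
	 * backing for it before it is handed to the TDX module.
	 */
	struct page *tdx_alloc_page(void)
	{
		struct page *page;

		page = alloc_page(GFP_KERNEL);
		if (!page)
			return NULL;

		/* Assumed helper that installs 4KB PAMT backing for the page */
		if (tdx_pamt_get(page)) {
			__free_page(page);
			return NULL;
		}

		return page;
	}

	/*
	 * Sketch only: remove the Dynamic PAMT backing and free the page.
	 * Accepting NULL is what allows the explicit NULL checks to be
	 * dropped from the error paths in the diff below.
	 */
	void tdx_free_page(struct page *page)
	{
		if (!page)
			return;

		/* Assumed helper that removes the 4KB PAMT backing */
		tdx_pamt_put(page);
		__free_page(page);
	}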
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
[update log]
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
v3:
- Write log. Rename from "KVM: TDX: Allocate PAMT memory in __tdx_td_init()"
---
arch/x86/kvm/vmx/tdx.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a952c7b6a22d..40c2730ea2ac 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2482,7 +2482,7 @@ static int __tdx_td_init(struct kvm *kvm, struct td_params *td_params,
atomic_inc(&nr_configured_hkid);
- tdr_page = alloc_page(GFP_KERNEL);
+ tdr_page = tdx_alloc_page();
if (!tdr_page)
goto free_hkid;
@@ -2495,7 +2495,7 @@ static int __tdx_td_init(struct kvm *kvm, struct td_params *td_params,
goto free_tdr;
for (i = 0; i < kvm_tdx->td.tdcs_nr_pages; i++) {
- tdcs_pages[i] = alloc_page(GFP_KERNEL);
+ tdcs_pages[i] = tdx_alloc_page();
if (!tdcs_pages[i])
goto free_tdcs;
}
@@ -2616,10 +2616,8 @@ static int __tdx_td_init(struct kvm *kvm, struct td_params *td_params,
teardown:
/* Only free pages not yet added, so start at 'i' */
for (; i < kvm_tdx->td.tdcs_nr_pages; i++) {
- if (tdcs_pages[i]) {
- __free_page(tdcs_pages[i]);
- tdcs_pages[i] = NULL;
- }
+ tdx_free_page(tdcs_pages[i]);
+ tdcs_pages[i] = NULL;
}
if (!kvm_tdx->td.tdcs_pages)
kfree(tdcs_pages);
@@ -2635,15 +2633,13 @@ static int __tdx_td_init(struct kvm *kvm, struct td_params *td_params,
free_tdcs:
for (i = 0; i < kvm_tdx->td.tdcs_nr_pages; i++) {
- if (tdcs_pages[i])
- __free_page(tdcs_pages[i]);
+ tdx_free_page(tdcs_pages[i]);
}
kfree(tdcs_pages);
kvm_tdx->td.tdcs_pages = NULL;
free_tdr:
- if (tdr_page)
- __free_page(tdr_page);
+ tdx_free_page(tdr_page);
kvm_tdx->td.tdr_page = 0;
free_hkid:
--
2.51.0