Message-ID: <20251121005125.417831-11-rick.p.edgecombe@intel.com>
Date: Thu, 20 Nov 2025 16:51:19 -0800
From: Rick Edgecombe <rick.p.edgecombe@...el.com>
To: bp@...en8.de,
chao.gao@...el.com,
dave.hansen@...el.com,
isaku.yamahata@...el.com,
kai.huang@...el.com,
kas@...nel.org,
kvm@...r.kernel.org,
linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org,
mingo@...hat.com,
pbonzini@...hat.com,
seanjc@...gle.com,
tglx@...utronix.de,
vannapurve@...gle.com,
x86@...nel.org,
yan.y.zhao@...el.com,
xiaoyao.li@...el.com,
binbin.wu@...el.com
Cc: rick.p.edgecombe@...el.com,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [PATCH v4 10/16] KVM: TDX: Allocate PAMT memory for vCPU control structures
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
TDX vCPU control structures are provided to the TDX module at 4KB page
size and require PAMT backing. This means that with Dynamic PAMT they
also need 4KB PAMT backings installed.
Previous changes introduced tdx_alloc_page()/tdx_free_page() that can
allocate a page and automatically handle the DPAMT maintenance. Use them
for vCPU control structures instead of alloc_page()/__free_page().
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
[update log]
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
v3:
- Write log. Rename from "Allocate PAMT memory for TDH.VP.CREATE and
TDH.VP.ADDCX".
- Remove newline damage
---
arch/x86/kvm/vmx/tdx.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 8c4c1221e311..b6d7f4b5f40f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2882,7 +2882,7 @@ static int tdx_td_vcpu_init(struct kvm_vcpu *vcpu, u64 vcpu_rcx)
int ret, i;
u64 err;
- page = alloc_page(GFP_KERNEL);
+ page = tdx_alloc_page();
if (!page)
return -ENOMEM;
tdx->vp.tdvpr_page = page;
@@ -2902,7 +2902,7 @@ static int tdx_td_vcpu_init(struct kvm_vcpu *vcpu, u64 vcpu_rcx)
}
for (i = 0; i < kvm_tdx->td.tdcx_nr_pages; i++) {
- page = alloc_page(GFP_KERNEL);
+ page = tdx_alloc_page();
if (!page) {
ret = -ENOMEM;
goto free_tdcx;
@@ -2924,7 +2924,7 @@ static int tdx_td_vcpu_init(struct kvm_vcpu *vcpu, u64 vcpu_rcx)
* method, but the rest are freed here.
*/
for (; i < kvm_tdx->td.tdcx_nr_pages; i++) {
- __free_page(tdx->vp.tdcx_pages[i]);
+ tdx_free_page(tdx->vp.tdcx_pages[i]);
tdx->vp.tdcx_pages[i] = NULL;
}
return -EIO;
@@ -2952,16 +2952,14 @@ static int tdx_td_vcpu_init(struct kvm_vcpu *vcpu, u64 vcpu_rcx)
free_tdcx:
for (i = 0; i < kvm_tdx->td.tdcx_nr_pages; i++) {
- if (tdx->vp.tdcx_pages[i])
- __free_page(tdx->vp.tdcx_pages[i]);
+ tdx_free_page(tdx->vp.tdcx_pages[i]);
tdx->vp.tdcx_pages[i] = NULL;
}
kfree(tdx->vp.tdcx_pages);
tdx->vp.tdcx_pages = NULL;
free_tdvpr:
- if (tdx->vp.tdvpr_page)
- __free_page(tdx->vp.tdvpr_page);
+ tdx_free_page(tdx->vp.tdvpr_page);
tdx->vp.tdvpr_page = NULL;
tdx->vp.tdvpr_pa = 0;
--
2.51.2