Message-ID:
<OS9PR01MB15202A41CF04F0BBF8CE244B48D34A@OS9PR01MB15202.jpnprd01.prod.outlook.com>
Date: Thu, 14 Aug 2025 22:34:02 -0400
From: Shixuan Zhao <shixuan.zhao@...mail.com>
To: "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>,
linux-coco@...ts.linux.dev,
linux-kernel@...r.kernel.org
Cc: Shixuan Zhao <shixuan.zhao@...mail.com>
Subject: [PATCH] x86/tdx: support VM area addresses for tdx_enc_status_changed
Currently, tdx_enc_status_changed() uses __pa(), which only accepts
addresses within the linear mapping. This patch allows memory allocated
in the VM area to be used as well.

VM area addresses are handled page by page, since there is no guarantee
that the underlying physical pages are contiguous. If, however, the
entire range falls within the linear mapping, a fast path converts the
whole range at once, just as the current code does, so performance
remains roughly the same.
Signed-off-by: Shixuan Zhao <shixuan.zhao@...mail.com>
---
Hi,
I recently ran into a problem where tdx_enc_status_changed() cannot
handle memory mapped in the kernel VM area (e.g., via ioremap() or
vmalloc()). This patch tries to fix that. The overall idea is to keep a
fast path for the current __pa()-based routine when the range falls
within the linear mapping, and otherwise fall back to a page-by-page
page table walk for addresses in the VM area.
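For illustration, a caller along these lines would hit the problem in a
TDX guest (a hypothetical sketch, not part of the patch; the buffer and
page count are made up):

	void *buf = vmalloc(4 * PAGE_SIZE);

	/*
	 * set_memory_decrypted() eventually reaches
	 * tdx_enc_status_changed(), which applies __pa() to the
	 * vmalloc address and computes a bogus physical address.
	 */
	set_memory_decrypted((unsigned long)buf, 4);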
This is the first time I'm submitting a patch to the kernel, so although
I've done the RTFM, feel free to discuss or point out anything improper.
Thanks,
Shixuan
arch/x86/coco/tdx/tdx.c | 42 ++++++++++++++++++++++++++++++++++-------
1 file changed, 35 insertions(+), 7 deletions(-)
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 7b2833705..c56cd429f 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -957,15 +957,11 @@ static bool tdx_map_gpa(phys_addr_t start, phys_addr_t end, bool enc)
 }
 
 /*
- * Inform the VMM of the guest's intent for this physical page: shared with
- * the VMM or private to the guest. The VMM is expected to change its mapping
- * of the page in response.
+ * Helper that works on a paddr range for tdx_enc_status_changed
  */
-static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
+static bool tdx_enc_status_changed_phys(phys_addr_t start, phys_addr_t end,
+					bool enc)
 {
-	phys_addr_t start = __pa(vaddr);
-	phys_addr_t end = __pa(vaddr + numpages * PAGE_SIZE);
-
 	if (!tdx_map_gpa(start, end, enc))
 		return false;
 
@@ -976,6 +972,38 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
 	return true;
 }
 
+/*
+ * Inform the VMM of the guest's intent for this vaddr range: shared with
+ * the VMM or private to the guest. The VMM is expected to change its mapping
+ * of the page in response.
+ */
+static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
+{
+	unsigned long va_iter;
+	unsigned long end_va = vaddr + numpages * PAGE_SIZE;
+	phys_addr_t start_pa, end_pa;
+
+	/* fast path when the entire range is within linear mapping */
+	if (virt_addr_valid((void *)vaddr) &&
+	    virt_addr_valid((void *)end_va)) {
+		start_pa = __pa(vaddr);
+		end_pa = __pa(end_va);
+
+		return tdx_enc_status_changed_phys(start_pa, end_pa, enc);
+	}
+
+	/* use page table walk for memory in VM area */
+	for (va_iter = vaddr; va_iter < end_va; va_iter += PAGE_SIZE) {
+		start_pa = slow_virt_to_phys((void *)va_iter);
+		end_pa = start_pa + PAGE_SIZE;
+
+		if (!tdx_enc_status_changed_phys(start_pa, end_pa, enc))
+			return false;
+	}
+
+	return true;
+}
+
 static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
 					 bool enc)
 {
--
2.43.0