Message-ID: <aCVOmp/tlpgRuAF4@intel.com>
Date: Thu, 15 May 2025 10:16:58 +0800
From: Chao Gao <chao.gao@...el.com>
To: Yan Zhao <yan.y.zhao@...el.com>
CC: <pbonzini@...hat.com>, <seanjc@...gle.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>, <x86@...nel.org>,
<rick.p.edgecombe@...el.com>, <dave.hansen@...el.com>,
<kirill.shutemov@...el.com>, <tabba@...gle.com>, <ackerleytng@...gle.com>,
<quic_eberman@...cinc.com>, <michael.roth@....com>, <david@...hat.com>,
<vannapurve@...gle.com>, <vbabka@...e.cz>, <jroedel@...e.de>,
<thomas.lendacky@....com>, <pgonda@...gle.com>, <zhiquan1.li@...el.com>,
<fan.du@...el.com>, <jun.miao@...el.com>, <ira.weiny@...el.com>,
<isaku.yamahata@...el.com>, <xiaoyao.li@...el.com>,
<binbin.wu@...ux.intel.com>, <chao.p.peng@...el.com>
Subject: Re: [RFC PATCH 02/21] x86/virt/tdx: Enhance tdh_mem_page_aug() to
support huge pages
On Thu, Apr 24, 2025 at 11:04:28AM +0800, Yan Zhao wrote:
>Enhance the SEAMCALL wrapper tdh_mem_page_aug() to support huge pages.
>
>Verify the validity of the level and ensure that the mapping range is fully
>contained within the page folio.
>
>As a conservative solution, perform CLFLUSH on all pages to be mapped into
>the TD before invoking the SEAMCALL TDH_MEM_PAGE_AUG. This ensures that no
>dirty cache lines are written back later to clobber TD memory.
>
>Signed-off-by: Xiaoyao Li <xiaoyao.li@...el.com>
>Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
>Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
>---
> arch/x86/virt/vmx/tdx/tdx.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
>diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
>index f5e2a937c1e7..a66d501b5677 100644
>--- a/arch/x86/virt/vmx/tdx/tdx.c
>+++ b/arch/x86/virt/vmx/tdx/tdx.c
>@@ -1595,9 +1595,18 @@ u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u
> .rdx = tdx_tdr_pa(td),
> .r8 = page_to_phys(page),
> };
>+ unsigned long nr_pages = 1 << (level * 9);
>+ struct folio *folio = page_folio(page);
>+ unsigned long idx = 0;
> u64 ret;
>
>- tdx_clflush_page(page);
>+ if (!(level >= TDX_PS_4K && level < TDX_PS_NR) ||
>+ (folio_page_idx(folio, page) + nr_pages > folio_nr_pages(folio)))
>+ return -EINVAL;
Returning -EINVAL looks incorrect, as the return type is u64. Callers treat
the return value as a SEAMCALL status, so a negative errno cast to u64 would
not be recognized as an error code they know how to handle.
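
Just as a sketch of what I mean (assuming a SEAMCALL-style status such as
TDX_OPERAND_INVALID is visible in tdx.c, or any other u64 error code you
prefer), the check could keep the u64 contract like this:

	/*
	 * Sketch only: return a u64 SEAMCALL-style status instead of a
	 * negative errno. TDX_OPERAND_INVALID is assumed to be available
	 * here; any suitable u64 error code would work.
	 */
	if (!(level >= TDX_PS_4K && level < TDX_PS_NR) ||
	    (folio_page_idx(folio, page) + nr_pages > folio_nr_pages(folio)))
		return TDX_OPERAND_INVALID;

That way callers can keep checking the result with the existing SEAMCALL
error handling paths.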
>+
>+ while (nr_pages--)
>+ tdx_clflush_page(nth_page(page, idx++));
>+
> ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);
>
> *ext_err1 = args.rcx;
>--
>2.43.2
>