Message-ID: <aJrhNFmLFBOP2TVK@yzhao56-desk.sh.intel.com>
Date: Tue, 12 Aug 2025 14:37:40 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sagi Shahar <sagis@...gle.com>
CC: <pbonzini@...hat.com>, <seanjc@...gle.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>, <x86@...nel.org>,
<rick.p.edgecombe@...el.com>, <dave.hansen@...el.com>, <kas@...nel.org>,
<tabba@...gle.com>, <ackerleytng@...gle.com>, <quic_eberman@...cinc.com>,
<michael.roth@....com>, <david@...hat.com>, <vannapurve@...gle.com>,
<vbabka@...e.cz>, <thomas.lendacky@....com>, <pgonda@...gle.com>,
<zhiquan1.li@...el.com>, <fan.du@...el.com>, <jun.miao@...el.com>,
<ira.weiny@...el.com>, <isaku.yamahata@...el.com>, <xiaoyao.li@...el.com>,
<binbin.wu@...ux.intel.com>, <chao.p.peng@...el.com>
Subject: Re: [RFC PATCH v2 18/23] x86/virt/tdx: Do not perform cache flushes unless CLFLUSH_BEFORE_ALLOC is set
On Mon, Aug 11, 2025 at 04:10:41PM -0500, Sagi Shahar wrote:
> On Thu, Aug 7, 2025 at 4:47 AM Yan Zhao <yan.y.zhao@...el.com> wrote:
> >
> > From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> >
> > The TDX module enumerates with a TDX_FEATURES0 bit if an explicit cache
> > flush is necessary when switching KeyID for a page, like before
> > handing the page over to a TD.
> >
> > Currently, none of the TDX-capable platforms have this bit enabled.
> >
> > Moreover, cache flushing with TDH.PHYMEM.PAGE.WBINVD fails if
> > Dynamic PAMT is active and the target page is not 4k. The SEAMCALL only
> > supports 4k pages and will fail if there is no PAMT_4K for the HPA.
I actually couldn't observe this failure on my side with DPAMT + hugepage
(without the shutdown optimization).
> > Avoid performing these cache flushes unless the CLFLUSH_BEFORE_ALLOC bit
> > of TDX_FEATURES0 is set.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
> > ---
> > RFC v2:
> > - Pulled from
> > git://git.kernel.org/pub/scm/linux/kernel/git/kas/linux.git tdx/dpamt-huge.
> > - Rebased on top of TDX huge page RFC v2 (Yan)
> > ---
> > arch/x86/include/asm/tdx.h | 1 +
> > arch/x86/virt/vmx/tdx/tdx.c | 19 +++++++++++++------
> > 2 files changed, 14 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> > index f1bd74348b34..c058a82d4a97 100644
> > --- a/arch/x86/include/asm/tdx.h
> > +++ b/arch/x86/include/asm/tdx.h
> > @@ -15,6 +15,7 @@
> >
> > /* Bit definitions of TDX_FEATURES0 metadata field */
> > #define TDX_FEATURES0_NO_RBP_MOD BIT_ULL(18)
> > +#define TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC BIT_ULL(23)
> > #define TDX_FEATURES0_DYNAMIC_PAMT BIT_ULL(36)
> >
> > #ifndef __ASSEMBLER__
> > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> > index 9ed585bde062..b7a0ee0f4a50 100644
> > --- a/arch/x86/virt/vmx/tdx/tdx.c
> > +++ b/arch/x86/virt/vmx/tdx/tdx.c
> > @@ -1648,14 +1648,13 @@ static inline u64 tdx_tdvpr_pa(struct tdx_vp *td)
> > return page_to_phys(td->tdvpr_page);
> > }
> >
> > -/*
> > - * The TDX module exposes a CLFLUSH_BEFORE_ALLOC bit to specify whether
> > - * a CLFLUSH of pages is required before handing them to the TDX module.
> > - * Be conservative and make the code simpler by doing the CLFLUSH
> > - * unconditionally.
> > - */
> > static void tdx_clflush_page(struct page *page)
> > {
> > + u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
> > +
> > + if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> > + return;
>
> Isn't the logic here and below reversed? If
> TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC bit is set, we want to perform the
> clflush()
Yes, I think so.
As my test machine has boot_cpu_has_bug(X86_BUG_TDX_PW_MCE) returning true, I
assumed performing clflush() was correct there and overlooked this logic error.
> > clflush_cache_range(page_to_virt(page), PAGE_SIZE);
> > }
> >
> > @@ -2030,8 +2029,12 @@ EXPORT_SYMBOL_GPL(tdh_phymem_cache_wb);
> >
> > u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td)
> > {
> > + u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
> > struct tdx_module_args args = {};
> >
> > + if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> > + return 0;
> > +
> > args.rcx = mk_keyed_paddr(tdx_global_keyid, td->tdr_page);
> >
> > return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
> > @@ -2041,10 +2044,14 @@ EXPORT_SYMBOL_GPL(tdh_phymem_page_wbinvd_tdr);
> > u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct folio *folio,
> > unsigned long start_idx, unsigned long npages)
> > {
> > + u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
> > struct page *start = folio_page(folio, start_idx);
> > struct tdx_module_args args = {};
> > u64 err;
> >
> > + if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> > + return 0;
> > +
> > if (start_idx + npages > folio_nr_pages(folio))
> > return TDX_OPERAND_INVALID;
> >
> > --
> > 2.43.2
> >
> >
>