Message-ID: <aLlg+VavGQlnQqFY@yzhao56-desk.sh.intel.com>
Date: Thu, 4 Sep 2025 17:50:49 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
CC: <pbonzini@...hat.com>, <seanjc@...gle.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>, <x86@...nel.org>,
<rick.p.edgecombe@...el.com>, <dave.hansen@...el.com>, <kas@...nel.org>,
<tabba@...gle.com>, <ackerleytng@...gle.com>, <quic_eberman@...cinc.com>,
<michael.roth@....com>, <david@...hat.com>, <vannapurve@...gle.com>,
<vbabka@...e.cz>, <thomas.lendacky@....com>, <pgonda@...gle.com>,
<zhiquan1.li@...el.com>, <fan.du@...el.com>, <jun.miao@...el.com>,
<ira.weiny@...el.com>, <isaku.yamahata@...el.com>, <xiaoyao.li@...el.com>,
<chao.p.peng@...el.com>
Subject: Re: [RFC PATCH v2 18/23] x86/virt/tdx: Do not perform cache flushes unless CLFLUSH_BEFORE_ALLOC is set
On Thu, Sep 04, 2025 at 04:16:27PM +0800, Binbin Wu wrote:
>
>
> On 8/7/2025 5:45 PM, Yan Zhao wrote:
> > From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> >
> > The TDX module enumerates, via a TDX_FEATURES0 bit, whether an explicit
> > cache flush is necessary when switching the KeyID for a page, e.g. before
> > handing the page over to a TD.
> >
> > Currently, none of the TDX-capable platforms have this bit enabled.
> >
> > Moreover, cache flushing with TDH.PHYMEM.PAGE.WBINVD fails if
> > Dynamic PAMT is active and the target page is not 4k. The SEAMCALL only
> > supports 4k pages and will fail if there is no PAMT_4K for the HPA.
> >
> > Avoid performing these cache flushes unless the CLFLUSH_BEFORE_ALLOC bit
> > of TDX_FEATURES0 is set.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
> > ---
> > RFC v2:
> > - Pulled from
> > git://git.kernel.org/pub/scm/linux/kernel/git/kas/linux.git tdx/dpamt-huge.
> > - Rebased on top of TDX huge page RFC v2 (Yan)
> > ---
> > arch/x86/include/asm/tdx.h | 1 +
> > arch/x86/virt/vmx/tdx/tdx.c | 19 +++++++++++++------
> > 2 files changed, 14 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> > index f1bd74348b34..c058a82d4a97 100644
> > --- a/arch/x86/include/asm/tdx.h
> > +++ b/arch/x86/include/asm/tdx.h
> > @@ -15,6 +15,7 @@
> > /* Bit definitions of TDX_FEATURES0 metadata field */
> > #define TDX_FEATURES0_NO_RBP_MOD BIT_ULL(18)
> > +#define TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC BIT_ULL(23)
> > #define TDX_FEATURES0_DYNAMIC_PAMT BIT_ULL(36)
> > #ifndef __ASSEMBLER__
> > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> > index 9ed585bde062..b7a0ee0f4a50 100644
> > --- a/arch/x86/virt/vmx/tdx/tdx.c
> > +++ b/arch/x86/virt/vmx/tdx/tdx.c
> > @@ -1648,14 +1648,13 @@ static inline u64 tdx_tdvpr_pa(struct tdx_vp *td)
> > return page_to_phys(td->tdvpr_page);
> > }
> > -/*
> > - * The TDX module exposes a CLFLUSH_BEFORE_ALLOC bit to specify whether
> > - * a CLFLUSH of pages is required before handing them to the TDX module.
> > - * Be conservative and make the code simpler by doing the CLFLUSH
> > - * unconditionally.
> > - */
> > static void tdx_clflush_page(struct page *page)
> > {
> > + u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
> > +
> > + if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
>
> According to the cover letter, if TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC is enabled,
> an explicit cache flush is necessary.
> Shouldn't this and below be:
> if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
Right, Sagi also reported it.
https://lore.kernel.org/kvm/CAAhR5DEZZfX0=9QwBrXhC+1fp1Z0w4Xbb3mXcn0OuW+45tsLwA@mail.gmail.com/
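
I.e. the check should be negated so the CLFLUSH still happens when the bit
is set; roughly (untested sketch, on top of this patch):

static void tdx_clflush_page(struct page *page)
{
	u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;

	/* Skip the flush only when the TDX module says it's not required */
	if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
		return;

	clflush_cache_range(page_to_virt(page), PAGE_SIZE);
}
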
> > + return;
> > +
> > clflush_cache_range(page_to_virt(page), PAGE_SIZE);
> > }
> > @@ -2030,8 +2029,12 @@ EXPORT_SYMBOL_GPL(tdh_phymem_cache_wb);
> > u64 tdh_phymem_page_wbinvd_tdr(struct tdx_td *td)
> > {
> > + u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
> > struct tdx_module_args args = {};
> > + if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> > + return 0;
> > +
> > args.rcx = mk_keyed_paddr(tdx_global_keyid, td->tdr_page);
> > return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
> > @@ -2041,10 +2044,14 @@ EXPORT_SYMBOL_GPL(tdh_phymem_page_wbinvd_tdr);
> > u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct folio *folio,
> > unsigned long start_idx, unsigned long npages)
> > {
> > + u64 tdx_features0 = tdx_sysinfo.features.tdx_features0;
> > struct page *start = folio_page(folio, start_idx);
> > struct tdx_module_args args = {};
> > u64 err;
> > + if (tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC)
> > + return 0;
> > +
> > if (start_idx + npages > folio_nr_pages(folio))
> > return TDX_OPERAND_INVALID;
>
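The same inversion is needed for the checks in tdh_phymem_page_wbinvd_tdr()
and tdh_phymem_page_wbinvd_hkid(), so the WBINVD SEAMCALL is only skipped
when the TDX module does not require the flush, e.g. (untested):

	if (!(tdx_features0 & TDX_FEATURES0_CLFLUSH_BEFORE_ALLOC))
		return 0;
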