Message-ID: <aW9IetVmF3pIVFRl@yzhao56-desk.sh.intel.com>
Date: Tue, 20 Jan 2026 17:18:50 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: "Huang, Kai" <kai.huang@...el.com>
CC: "seanjc@...gle.com" <seanjc@...gle.com>, "kvm@...r.kernel.org"
	<kvm@...r.kernel.org>, "linux-coco@...ts.linux.dev"
	<linux-coco@...ts.linux.dev>, "Li, Xiaoyao" <xiaoyao.li@...el.com>, "Hansen,
 Dave" <dave.hansen@...el.com>, "Wu, Binbin" <binbin.wu@...el.com>,
	"kas@...nel.org" <kas@...nel.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "mingo@...hat.com" <mingo@...hat.com>,
	"pbonzini@...hat.com" <pbonzini@...hat.com>, "tglx@...utronix.de"
	<tglx@...utronix.de>, "Yamahata, Isaku" <isaku.yamahata@...el.com>,
	"Annapurve, Vishal" <vannapurve@...gle.com>, "Gao, Chao"
	<chao.gao@...el.com>, "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
	"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>
Subject: Re: [PATCH v4 11/16] KVM: TDX: Add x86 ops for external spt cache

On Tue, Jan 20, 2026 at 04:42:37PM +0800, Huang, Kai wrote:
> On Mon, 2026-01-19 at 10:31 +0800, Yan Zhao wrote:
> > On Fri, Jan 16, 2026 at 04:53:57PM -0800, Sean Christopherson wrote:
> > > On Thu, Nov 20, 2025, Rick Edgecombe wrote:
> > > > Move mmu_external_spt_cache behind x86 ops.
> > > > 
> > > > In the mirror/external MMU concept, the KVM MMU manages a non-active
> > > > EPT tree for private memory (the mirror). The actual active EPT tree
> > > > that protects the private memory lives inside the TDX module. Whenever
> > > > the mirror EPT is changed, KVM needs to call out into one of a set of
> > > > x86 ops that implement the various update operations with TDX-specific
> > > > SEAMCALLs and other tricks. These implementations operate on the TDX
> > > > S-EPT (the external tree).
> > > > 
> > > > In reality these external operations are designed narrowly with
> > > > respect to TDX particulars. On the surface, the TDX-specific work done
> > > > to fulfill these update operations is mostly hidden from the MMU, but
> > > > there is one particular area where some details leak through.
> > > > 
> > > > The S-EPT needs pages for its page tables. Like all the rest, these
> > > > need to be allocated before taking the mmu lock. So the KVM MMU
> > > > pre-allocates pages for TDX's S-EPT in the same place where it
> > > > pre-allocates the other page tables. It's not too bad and fits
> > > > nicely with the others.
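
(For context, the pre-allocation scheme described above boils down to
something like the minimal user-space sketch below. The names, the
capacity, and the calloc() stand-in are invented for illustration; this is
not KVM's actual kvm_mmu_memory_cache code.)

	#include <stdlib.h>

	#define CACHE_CAPACITY 40	/* arbitrary for the sketch */

	struct page_cache {
		int nobjs;
		void *objs[CACHE_CAPACITY];
	};

	/* Before mmu_lock is taken: may sleep and allocate. */
	static int cache_topup(struct page_cache *mc, int min)
	{
		if (min > CACHE_CAPACITY)
			return -1;
		while (mc->nobjs < min) {
			/* stand-in for get_zeroed_page(GFP_KERNEL) */
			void *page = calloc(1, 4096);

			if (!page)
				return -1;
			mc->objs[mc->nobjs++] = page;
		}
		return 0;
	}

	/* Under mmu_lock: must not sleep, so only pop pre-filled pages. */
	static void *cache_alloc(struct page_cache *mc)
	{
		return mc->nobjs ? mc->objs[--mc->nobjs] : NULL;
	}
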
> > > > 
> > > > However, Dynamic PAMT will need even more pages for the same
> > > > operations. Further, these pages will need to be handed to the
> > > > arch/x86 side, which uses them for DPAMT updates; that is awkward
> > > > for the existing KVM-based cache. The details living in core MMU
> > > > code start to add up.
> > > > 
> > > > So in preparation for that added complexity, move the external page
> > > > table cache into TDX code by putting it behind a pair of x86 ops: one
> > > > for topping up and one for allocation. Don't go so far as to hide the
> > > > existence of external page tables completely from the generic MMU, as
> > > > they are currently stored in the mirror struct kvm_mmu_page and that
> > > > is quite handy.
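
(For context, the pair of ops described above could look roughly like the
sketch below, assuming they hang off kvm_x86_ops. The member names are
modeled on the tdx_alloc_external_fault_cache() snippet quoted further
down and may not match the final patch.)

	struct kvm_vcpu;	/* opaque for the sketch */

	struct external_spt_cache_ops {
		/* Pre-fault, outside mmu_lock: fill TDX's private cache. */
		int (*topup_external_fault_cache)(struct kvm_vcpu *vcpu);
		/* Under mmu_lock: hand out one pre-allocated S-EPT page. */
		void *(*alloc_external_fault_cache)(struct kvm_vcpu *vcpu);
	};
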
> > > > 
> > > > To plumb the memory cache operations through tdx.c, export some of
> > > > the functions temporarily. The exports will be removed in future
> > > > changes.
> > > > 
> > > > Acked-by: Kiryl Shutsemau <kas@...nel.org>
> > > > Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
> > > > ---
> > > 
> > > NAK.  I kinda sorta get why you did this?  But the pages KVM uses for page tables
> > > are KVM's, not to be mixed with PAMT pages.
> > > 
> > > Eww.  Definitely a hard "no".  In tdp_mmu_alloc_sp_for_split(), the allocation
> > > comes from KVM:
> > > 
> > > 	if (mirror) {
> > > 		sp->external_spt = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
> > > 		if (!sp->external_spt) {
> > > 			free_page((unsigned long)sp->spt);
> > > 			kmem_cache_free(mmu_page_header_cache, sp);
> > > 			return NULL;
> > > 		}
> > > 	}
> > > 
> > > But then in kvm_tdp_mmu_map(), via kvm_mmu_alloc_external_spt(), the allocation
> > > comes from get_tdx_prealloc_page()
> > > 
> > >   static void *tdx_alloc_external_fault_cache(struct kvm_vcpu *vcpu)
> > >   {
> > > 	struct page *page = get_tdx_prealloc_page(&to_tdx(vcpu)->prealloc);
> > > 
> > > 	if (WARN_ON_ONCE(!page))
> > > 		return (void *)__get_free_page(GFP_ATOMIC | __GFP_ACCOUNT);
> > > 
> > > 	return page_address(page);
> > >   }
> > > 
> > > But then regardless of where the page came from, KVM frees it.  Seriously.
> > > 
> > >   static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
> > >   {
> > > 	free_page((unsigned long)sp->external_spt);  <=====
> > > 	free_page((unsigned long)sp->spt);
> > > 	kmem_cache_free(mmu_page_header_cache, sp);
> > >   }
> > IMHO, it's by design. I don't see a problem with KVM freeing the sp->external_spt,
> > regardless of whether it's from:
> > (1) KVM's mmu cache,
> > (2) tdp_mmu_alloc_sp_for_split(), or
> > (3) tdx_alloc_external_fault_cache().
> > Please correct me if I missed anything.
> > 
> > None of (1)-(3) keeps the pages in a list after KVM obtains them and
> > maps them into SPTEs.
> > 
> > So, with SPTEs as the pages' sole consumer, it's perfectly fine for KVM to free
> > the pages when freeing SPTEs. No?
> > 
> > Also, in the current upstream code, after tdp_mmu_split_huge_pages_root() is
> > invoked for dirty tracking, some sp->spt are allocated from
> > tdp_mmu_alloc_sp_for_split(), while others are from kvm_mmu_memory_cache_alloc().
> > However, tdp_mmu_free_sp() can still free them without any problem.
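
FWIW, the reason the mixed sources are safe to free uniformly is that both
paths return whole pages from the page allocator, so free_page() works
either way. Paraphrasing mmu_memory_cache_alloc_obj() from
virt/kvm/kvm_main.c (simplified; see the tree for the exact form):

	static void *alloc_obj(struct kvm_mmu_memory_cache *mc, gfp_t gfp)
	{
		if (mc->kmem_cache)
			return kmem_cache_alloc(mc->kmem_cache, gfp);

		/*
		 * mmu_shadow_page_cache sets no kmem_cache, so sp->spt is
		 * always a full page and free_page() is correct for it.
		 */
		return (void *)__get_free_page(gfp);
	}
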
> > 
> > 
> 
> Well I think it's for consistency, and IMHO you can even argue this is a
> bug, because IIUC there's indeed one issue in the current code.
> 
> When sp->spt is allocated via per-vCPU mmu_shadow_page_cache, it is
> actually initialized to SHADOW_NONPRESENT_VALUE:
> 
>         vcpu->arch.mmu_shadow_page_cache.init_value =                    
>                 SHADOW_NONPRESENT_VALUE;                                 
> 
> So the way sp->spt is allocated in tdp_mmu_alloc_sp_for_split() is
> actually broken IMHO because the entries in sp->spt are never initialized.
The sp->spt allocated in tdp_mmu_alloc_sp_for_split() is initialized in
tdp_mmu_split_huge_page()...
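
The initializing loop looks roughly like this (paraphrased from
tdp_mmu_split_huge_page(); the helper name has changed across kernel
versions, so treat it as a sketch):

	/*
	 * Every entry is written before the new page table is linked in,
	 * so the missing SHADOW_NONPRESENT_VALUE init is harmless here.
	 */
	for (i = 0; i < SPTE_ENT_PER_PAGE; i++)
		sp->spt[i] = make_small_spte(kvm, huge_spte, sp->role, i);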

> Fortunately tdp_mmu_alloc_sp_for_split() isn't reachable for TDX guests,
> so we are lucky so far.
> 
> A per-VM cache requires more code to handle, but I still think we should
> just use the same way to allocate stuff when possible, and that includes
> sp->external_spt.
