Message-ID: <aWrdpZCCDDAffZRM@google.com>
Date: Fri, 16 Jan 2026 16:53:57 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Rick Edgecombe <rick.p.edgecombe@...el.com>
Cc: bp@...en8.de, chao.gao@...el.com, dave.hansen@...el.com,
isaku.yamahata@...el.com, kai.huang@...el.com, kas@...nel.org,
kvm@...r.kernel.org, linux-coco@...ts.linux.dev, linux-kernel@...r.kernel.org,
mingo@...hat.com, pbonzini@...hat.com, tglx@...utronix.de,
vannapurve@...gle.com, x86@...nel.org, yan.y.zhao@...el.com,
xiaoyao.li@...el.com, binbin.wu@...el.com
Subject: Re: [PATCH v4 11/16] KVM: TDX: Add x86 ops for external spt cache
On Thu, Nov 20, 2025, Rick Edgecombe wrote:
> Move mmu_external_spt_cache behind x86 ops.
>
> In the mirror/external MMU concept, the KVM MMU manages a non-active EPT
> tree for private memory (the mirror). The actual active EPT tree that
> protects the private memory lives inside the TDX module. Whenever the
> mirror EPT is changed, KVM needs to call out into one of a set of x86 ops
> that implement the various update operations with TDX-specific SEAMCALLs
> and other tricks. These implementations operate on the TDX S-EPT (the
> external EPT).
>
> In reality these external operations are designed narrowly around TDX
> particulars. The TDX-specific work done to fulfill these update operations
> is mostly hidden from the MMU, but there is one particular area where some
> details leak through.
>
> The S-EPT needs pages to use for its page tables. These need to be
> allocated before taking the mmu lock, like all the rest. So the KVM MMU
> pre-allocates pages for TDX to use for the S-EPT in the same place where
> it pre-allocates the other page tables. It's not too bad and fits nicely
> with the others.
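
FWIW, today that pre-allocation is just one more kvm_mmu_topup_memory_cache()
call next to the other MMU caches, something like the below (going from
memory, so the exact guard may differ):

        if (kvm_has_mirrored_tdp(vcpu->kvm)) {
                /* Top up KVM's own cache that backs sp->external_spt. */
                r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_external_spt_cache,
                                               PT64_ROOT_MAX_LEVEL);
                if (r)
                        return r;
        }
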
>
> However, Dynamic PAMT will need even more pages for the same operations.
> Further, these pages will need to be handed to the arch/x86 side, which
> uses them for DPAMT updates, and that is hard to do with the existing
> KVM-based cache. The details living in core MMU code start to add up.
>
> So in preparation for making it more complicated, move the external page
> table cache into TDX code by putting it behind some x86 ops. Have one for
> topping up and one for allocation. Don't go so far as to hide the
> existence of external page tables completely from the generic MMU, as they
> are currently stored in their mirror struct kvm_mmu_page and that's quite
> handy.
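
For the record, as I read the patch the two ops boil down to roughly the
below; the alloc side matches the tdx_alloc_external_fault_cache() snippet
further down, the topup name and signature are my guess.

        /* Sketch only, names/signatures approximate. */
        int (*topup_external_fault_cache)(struct kvm_vcpu *vcpu);
        void *(*alloc_external_fault_cache)(struct kvm_vcpu *vcpu);
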
>
> To plumb the memory cache operations through tdx.c, export some of the
> functions temporarily. The exports will be removed in future changes.
>
> Acked-by: Kiryl Shutsemau <kas@...nel.org>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
> ---

NAK. I kinda sorta get why you did this? But the pages KVM uses for page tables
are KVM's, not to be mixed with PAMT pages.

Eww. Definitely a hard "no". In tdp_mmu_alloc_sp_for_split(), the allocation
comes from KVM:

        if (mirror) {
                sp->external_spt = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
                if (!sp->external_spt) {
                        free_page((unsigned long)sp->spt);
                        kmem_cache_free(mmu_page_header_cache, sp);
                        return NULL;
                }
        }

But then in kvm_tdp_mmu_map(), via kvm_mmu_alloc_external_spt(), the allocation
comes from get_tdx_prealloc_page():

static void *tdx_alloc_external_fault_cache(struct kvm_vcpu *vcpu)
{
        struct page *page = get_tdx_prealloc_page(&to_tdx(vcpu)->prealloc);

        if (WARN_ON_ONCE(!page))
                return (void *)__get_free_page(GFP_ATOMIC | __GFP_ACCOUNT);

        return page_address(page);
}

But then regardless of where the page came from, KVM frees it. Seriously.

static void tdp_mmu_free_sp(struct kvm_mmu_page *sp)
{
        free_page((unsigned long)sp->external_spt);    <=====
        free_page((unsigned long)sp->spt);
        kmem_cache_free(mmu_page_header_cache, sp);
}

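For contrast, the pre-series kvm_mmu_alloc_external_spt() keeps both ends in
KVM's hands, roughly (going from memory, details may be off):

static inline void kvm_mmu_alloc_external_spt(struct kvm_vcpu *vcpu,
                                              struct kvm_mmu_page *sp)
{
        /*
         * The page comes from KVM's own per-vCPU cache (GFP_KERNEL_ACCOUNT
         * by default), so free_page() in tdp_mmu_free_sp() is symmetric with
         * the allocation and the accounting is consistent.
         */
        sp->external_spt =
                kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_external_spt_cache);
}

I.e. KVM allocates, KVM frees, one owner.
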
Oh, and the hugepage series also fumbles its topup (why there's yet another
topup API, I have no idea).

static int tdx_topup_vm_split_cache(struct kvm *kvm, enum pg_level level)
{
        struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
        struct tdx_prealloc *prealloc = &kvm_tdx->prealloc_split_cache;
        int cnt = tdx_min_split_cache_sz(kvm, level);

        while (READ_ONCE(prealloc->cnt) < cnt) {
                struct page *page = alloc_page(GFP_KERNEL);    <==== GFP_KERNEL_ACCOUNT

                if (!page)
                        return -ENOMEM;

                spin_lock(&kvm_tdx->prealloc_split_cache_lock);
                list_add(&page->lru, &prealloc->page_list);
                prealloc->cnt++;
                spin_unlock(&kvm_tdx->prealloc_split_cache_lock);
        }

        return 0;
}

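At a minimum that allocation needs to be charged to the VM's memcg like every
other KVM allocation, i.e. something like (untested):

        while (READ_ONCE(prealloc->cnt) < cnt) {
                /* Account the page to the VM, same as KVM's other caches. */
                struct page *page = alloc_page(GFP_KERNEL_ACCOUNT);

                if (!page)
                        return -ENOMEM;
                ...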