Message-ID: <aWrfi8Oy6WXhiNv1@google.com>
Date: Fri, 16 Jan 2026 17:02:03 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Rick Edgecombe <rick.p.edgecombe@...el.com>
Cc: bp@...en8.de, chao.gao@...el.com, dave.hansen@...el.com,
isaku.yamahata@...el.com, kai.huang@...el.com, kas@...nel.org,
kvm@...r.kernel.org, linux-coco@...ts.linux.dev, linux-kernel@...r.kernel.org,
mingo@...hat.com, pbonzini@...hat.com, tglx@...utronix.de,
vannapurve@...gle.com, x86@...nel.org, yan.y.zhao@...el.com,
xiaoyao.li@...el.com, binbin.wu@...el.com
Subject: Re: [PATCH v4 12/16] x86/virt/tdx: Add helpers to allow for
pre-allocating pages
On Thu, Nov 20, 2025, Rick Edgecombe wrote:
> ---
> v4:
> - Change to GFP_KERNEL_ACCOUNT to match replaced kvm_mmu_memory_cache
> - Add GFP_ATOMIC backup, like kvm_mmu_memory_cache has (Kiryl)
LOL, having fun reinventing kvm_mmu_memory_cache? :-D
> - Explain why not to use mempool (Dave)
> - Tweak local vars to be more reverse christmas tree by deleting some
> that were only added for reasons that go away in this patch anyway
> ---
> arch/x86/include/asm/tdx.h | 43 ++++++++++++++++++++++++++++++++++++-
> arch/x86/kvm/vmx/tdx.c | 21 +++++++++++++-----
> arch/x86/kvm/vmx/tdx.h | 2 +-
> arch/x86/virt/vmx/tdx/tdx.c | 22 +++++++++++++------
> virt/kvm/kvm_main.c | 3 ---
> 5 files changed, 75 insertions(+), 16 deletions(-)
> +/*
> + * Simple structure for pre-allocating Dynamic
> + * PAMT pages outside of locks.
As called out in an earlier patch, it's not just PAMT pages.
> + */
> +struct tdx_prealloc {
> + struct list_head page_list;
> + int cnt;
> +};
> +
> +static inline struct page *get_tdx_prealloc_page(struct tdx_prealloc *prealloc)
> +{
> + struct page *page;
> +
> + page = list_first_entry_or_null(&prealloc->page_list, struct page, lru);
> + if (page) {
> + list_del(&page->lru);
> + prealloc->cnt--;
> + }
> +
> + return page;
> +}
> +
> +static inline int topup_tdx_prealloc_page(struct tdx_prealloc *prealloc, unsigned int min_size)
> +{
> + while (prealloc->cnt < min_size) {
> + struct page *page = alloc_page(GFP_KERNEL_ACCOUNT);
> +
> + if (!page)
> + return -ENOMEM;
> +
> + list_add(&page->lru, &prealloc->page_list);
Huh, TIL that page->lru is fair game for private usage when the page is kernel-
allocated.
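(Minimal sketch of the pattern, nothing TDX-specific, just to convince myself:
a page fresh out of alloc_page() isn't on any LRU list, so .lru is free for the
caller's own bookkeeping until the page is freed.)

	LIST_HEAD(pages);
	struct page *page, *tmp;

	/* Stash kernel-allocated pages on their otherwise-unused .lru. */
	page = alloc_page(GFP_KERNEL_ACCOUNT);
	if (page)
		list_add(&page->lru, &pages);

	/* Teardown: unthread and free whatever is still cached. */
	list_for_each_entry_safe(page, tmp, &pages, lru) {
		list_del(&page->lru);
		__free_page(page);
	}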
> + prealloc->cnt++;
>
> static int tdx_topup_external_fault_cache(struct kvm_vcpu *vcpu, unsigned int cnt)
> {
> - struct vcpu_tdx *tdx = to_tdx(vcpu);
> + struct tdx_prealloc *prealloc = &to_tdx(vcpu)->prealloc;
> + int min_fault_cache_size;
>
> - return kvm_mmu_topup_memory_cache(&tdx->mmu_external_spt_cache, cnt);
> + /* External page tables */
> + min_fault_cache_size = cnt;
> + /* Dynamic PAMT pages (if enabled) */
> + min_fault_cache_size += tdx_dpamt_entry_pages() * PT64_ROOT_MAX_LEVEL;
> +
> + return topup_tdx_prealloc_page(prealloc, min_fault_cache_size);
> }
>
> static void tdx_free_external_fault_cache(struct kvm_vcpu *vcpu)
> {
> struct vcpu_tdx *tdx = to_tdx(vcpu);
> + struct page *page;
>
> - kvm_mmu_free_memory_cache(&tdx->mmu_external_spt_cache);
> + while ((page = get_tdx_prealloc_page(&tdx->prealloc)))
> + __free_page(page);
No. Either put the ownership of the PAMT cache in arch/x86/virt/vmx/tdx/tdx.c
or use kvm_mmu_memory_cache. Don't add a custom caching scheme in KVM.
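E.g. a rough, completely untested sketch of the kvm_mmu_memory_cache route
(pamt_page_cache would be a new, hypothetical kvm_mmu_memory_cache field in
struct vcpu_tdx, named here purely for illustration):

	static int tdx_topup_external_fault_cache(struct kvm_vcpu *vcpu, unsigned int cnt)
	{
		struct vcpu_tdx *tdx = to_tdx(vcpu);
		int r;

		/* External (S-EPT) page tables, same as before this patch. */
		r = kvm_mmu_topup_memory_cache(&tdx->mmu_external_spt_cache, cnt);
		if (r)
			return r;

		/* Dynamic PAMT pages; hypothetical cache, illustration only. */
		return kvm_mmu_topup_memory_cache(&tdx->pamt_page_cache,
						  tdx_dpamt_entry_pages() * PT64_ROOT_MAX_LEVEL);
	}

with the corresponding kvm_mmu_free_memory_cache(&tdx->pamt_page_cache) in the
free path.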
> /* Number PAMT pages to be provided to TDX module per 2M region of PA */
> -static int tdx_dpamt_entry_pages(void)
> +int tdx_dpamt_entry_pages(void)
> {
> if (!tdx_supports_dynamic_pamt(&tdx_sysinfo))
> return 0;
>
A comment here stating the "common" number of entries would be helpful. I have
no clue as to the magnitude, e.g. this could be 2 or it could be 200, I
genuinely have no idea.
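For what it's worth, some back-of-the-envelope guesswork (and it really is
guesswork, which is the point): if a 4K PAMT entry were on the order of 16
bytes, a 2M region covering 512 4K pages would need

	512 entries * 16 bytes = 8KB = 2 pages per 2M region

but whether 16 bytes is even the right ballpark is exactly what the comment
should spell out.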