Message-ID: <9096e7a47742f4a46a7f400aac467ac78e1dfe50.camel@intel.com>
Date: Thu, 29 Jan 2026 17:18:58 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "seanjc@...gle.com" <seanjc@...gle.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "linux-coco@...ts.linux.dev"
<linux-coco@...ts.linux.dev>, "Huang, Kai" <kai.huang@...el.com>, "Li,
Xiaoyao" <xiaoyao.li@...el.com>, "Hansen, Dave" <dave.hansen@...el.com>,
"Zhao, Yan Y" <yan.y.zhao@...el.com>, "Wu, Binbin" <binbin.wu@...el.com>,
"kas@...nel.org" <kas@...nel.org>, "binbin.wu@...ux.intel.com"
<binbin.wu@...ux.intel.com>, "mingo@...hat.com" <mingo@...hat.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>, "tglx@...utronix.de"
<tglx@...utronix.de>, "Yamahata, Isaku" <isaku.yamahata@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "Annapurve,
Vishal" <vannapurve@...gle.com>, "Gao, Chao" <chao.gao@...el.com>,
"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>
Subject: Re: [PATCH v4 07/16] x86/virt/tdx: Add tdx_alloc/free_page() helpers
On Wed, 2026-01-28 at 17:19 -0800, Sean Christopherson wrote:
> Honestly, the entire scheme is a mess. Four days of staring at this
> and I finally understand what the code is doing. The whole "struct
> tdx_module_array_args" union is completely unnecessary, the resulting
> args.args crud is ugly, having a pile of duplicate accessors is
> brittle, the code obfuscates a simple concept, and the end result
> doesn't provide any actual protection since the kernel will happily
> overflow the buffer after the WARN.
The original sin here, as Nikolay spotted in v3, is that the whole
variable-length scheme was only intended to give the TDX module
flexibility *if* it wanted to grow the count in the future. It's not
required today. Worse, whether it would actually grow in the specific
way the code assumes is not covered in the spec. Apparently it was
based on some past internal discussions. So the agreement in v3 was to
just support the fixed two-page size in the spec.
Here was the end of that thread:
https://lore.kernel.org/kvm/da3701ea-08ea-45c9-94a8-355205a45f8e@intel.com/
This simplifies the whole thing: no union, no worst-case allocations,
etc. I'm just getting back and going through mail, so I will check out
your full solution. (Thanks!) But from the below I think the fixed
array size code will be better still.
>
> It's also relying on the developer to correctly copy+paste the same
> register in multiple locations: ~5 depending on how you want to
> count.
>
> static u64 *dpamt_args_array_ptr_r12(struct tdx_module_array_args *args)
> #1
> {
> WARN_ON_ONCE(tdx_dpamt_entry_pages() > MAX_TDX_ARGS(r12));
> #2
>
> return &args->args_array[TDX_ARG_INDEX(r12)];
> #3
>
>
> u64 guest_memory_pamt_page[MAX_TDX_ARGS(r12)];
> #4
>
>
> u64 *args_array = dpamt_args_array_ptr_r12(&args);
> #5
Yeah, it could probably use another DEFINE or two to make it less
error-prone. Vanilla DPAMT has 4 instances of rdx.
>
> After all of that boilerplate, the caller _still_ has to do the
> actual memcpy(), and for me at least, all of the above makes it
> _harder_ to understand what the code is doing.
>
> Drop the struct+union overlay and just provide a helper with wrappers
> to copy to/from a tdx_module_args structure. It's far from
> bulletproof, but it at least avoids an immediate buffer overflow, and
> defers to the kernel owner with respect to handling uninitialized
> stack data.
>
> /*
>  * For SEAMCALLs that pass a bundle of pages, the TDX spec treats the registers
>  * like an array, as they are ordered in the struct. The effective array size
>  * is (obviously) limited by the number of registers, relative to the starting
>  * register. Fill the register array at a given starting register, with sanity
>  * checks to avoid overflowing the args structure.
>  */
> static void dpamt_copy_regs_array(struct tdx_module_args *args, void *reg,
> 				  u64 *pamt_pa_array, bool copy_to_regs)
> {
> int size = tdx_dpamt_entry_pages() * sizeof(*pamt_pa_array);
>
> 	if (WARN_ON_ONCE(reg + size > (void *)args + sizeof(*args)))
> return;
>
> 	/* Copy PAMT page PA's to/from the struct per the TDX ABI. */
> if (copy_to_regs)
> memcpy(reg, pamt_pa_array, size);
> else
> memcpy(pamt_pa_array, reg, size);
> }
>
> #define dpamt_copy_from_regs(dst, args, reg) \
> dpamt_copy_regs_array(args, &(args)->reg, dst, false)
>
> #define dpamt_copy_to_regs(args, reg, src) \
> dpamt_copy_regs_array(args, &(args)->reg, src, true)
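FWIW, here is roughly how that helper behaves, as a userspace sketch.
The struct layout, the field count, the stub tdx_dpamt_entry_pages(),
and the int return in place of WARN_ON_ONCE() are all stand-ins for
illustration, not the real kernel code:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the kernel's struct tdx_module_args; illustrative layout. */
struct tdx_module_args {
	uint64_t rcx, rdx, r8, r9, r10, r11, r12, r13, r14, r15;
};

/* Stand-in for tdx_dpamt_entry_pages(); the spec's fixed size is 2. */
static int tdx_dpamt_entry_pages(void)
{
	return 2;
}

/*
 * Copy tdx_dpamt_entry_pages() PAs between pamt_pa_array and the register
 * "array" starting at reg, refusing a copy that would run past the end of
 * the args struct (the kernel version WARNs instead of returning an error).
 */
static int dpamt_copy_regs_array(struct tdx_module_args *args, void *reg,
				 uint64_t *pamt_pa_array, int copy_to_regs)
{
	size_t size = tdx_dpamt_entry_pages() * sizeof(*pamt_pa_array);

	if ((char *)reg + size > (char *)args + sizeof(*args))
		return -1;	/* would overflow the args struct */

	if (copy_to_regs)
		memcpy(reg, pamt_pa_array, size);
	else
		memcpy(pamt_pa_array, reg, size);
	return 0;
}

#define dpamt_copy_to_regs(args, reg, src) \
	dpamt_copy_regs_array(args, &(args)->reg, src, 1)

#define dpamt_copy_from_regs(dst, args, reg) \
	dpamt_copy_regs_array(args, &(args)->reg, dst, 0)
```

So a copy starting at r12 lands in r12/r13, and a copy starting at the
last register gets rejected by the bounds check instead of scribbling
past the struct.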
>
> As far as the on-stack allocations go, why bother being precise?
> Except for paranoid setups which explicitly initialize the stack,
> "allocating" ~48 unused bytes is literally free. Not to mention the
> cost relative to the latency of a SEAMCALL is in the noise.
>
> /*
>  * When declaring PAMT arrays on the stack, use the maximum theoretical number
>  * of entries that can be squeezed into a SEAMCALL, as stack allocations are
>  * practically free, i.e. any wasted space is a non-issue.
>  */
> #define MAX_NR_DPAMT_ARGS	(sizeof(struct tdx_module_args) / sizeof(u64))
>
>
> With that, callers don't have to regurgitate the same register
> multiple times, and we don't need a new wrapper for every variation
> of SEAMCALL.
> E.g.
>
>
> u64 pamt_pa_array[MAX_NR_DPAMT_ARGS];
>
> ...
>
> bool dpamt = tdx_supports_dynamic_pamt(&tdx_sysinfo) &&
> level == PG_LEVEL_2M;
> u64 pamt_pa_array[MAX_NR_DPAMT_ARGS];
> struct tdx_module_args args = {
> .rcx = gpa | pg_level_to_tdx_sept_level(level),
> .rdx = tdx_tdr_pa(td),
> .r8 = page_to_phys(new_sp),
> };
> u64 ret;
>
> if (!tdx_supports_demote_nointerrupt(&tdx_sysinfo))
> return TDX_SW_ERROR;
>
> if (dpamt) {
> if (alloc_pamt_array(pamt_pa_array, pamt_cache))
> return TDX_SW_ERROR;
>
> dpamt_copy_to_regs(&args, r12, pamt_pa_array);
> }
>
> Which to me is easier to read and much more intuitive than:
>
>
> u64 guest_memory_pamt_page[MAX_TDX_ARGS(r12)];
> struct tdx_module_array_args args = {
> .args.rcx = gpa | pg_level_to_tdx_sept_level(level),
> .args.rdx = tdx_tdr_pa(td),
> .args.r8 = PFN_PHYS(page_to_pfn(new_sp)),
> };
> struct tdx_module_array_args retry_args;
> int i = 0;
> u64 ret;
>
> if (dpamt) {
> u64 *args_array = dpamt_args_array_ptr_r12(&args);
>
> 		if (alloc_pamt_array(guest_memory_pamt_page, pamt_cache))
> 			return TDX_SW_ERROR;
>
> 		/*
> 		 * Copy PAMT page PAs of the guest memory into the struct per
> 		 * the TDX ABI.
> 		 */
> 		memcpy(args_array, guest_memory_pamt_page,
> 		       tdx_dpamt_entry_pages() * sizeof(*args_array));
> }
What you have here is close to what I had done when I first took on
this series. But it ran afoul of FORTIFY_SOURCE and required some
horrible casting to trick it. I wonder if this code will hit that
issue too. Dave didn't like that solution and suggested the union
instead:
https://lore.kernel.org/kvm/355ad607-52ed-42cc-9a48-63aaa49f4c68@intel.com/#t
I'm aware of your tendency to dislike union-based solutions. But since
this was purely contained to tip, I went with Dave's preference.
But I think it's all moot, because the fixed size-2 solution doesn't
need a union or array copying. They can just be normal tdx_module_args
args.
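To illustrate what I mean, a userspace sketch of the fixed-size
version: the struct layout and which registers carry the two PAMT page
PAs are stand-ins for illustration, not the actual SEAMCALL ABI:

```c
#include <stdint.h>

/* Stand-in for the kernel's struct tdx_module_args; illustrative layout. */
struct tdx_module_args {
	uint64_t rcx, rdx, r8, r9, r12, r13;
};

/*
 * With the spec's fixed two PAMT pages, the page PAs become plain named
 * struct members. No union overlay, no memcpy into a register "array",
 * no variable-length bounds checks. Register assignments here are
 * hypothetical, for illustration only.
 */
static void tdx_fill_dpamt_args(struct tdx_module_args *args,
				uint64_t hpa, const uint64_t pamt_pa[2])
{
	args->rcx = hpa;
	args->r12 = pamt_pa[0];
	args->r13 = pamt_pa[1];
}
```

The caller just initializes the struct like any other SEAMCALL args,
which is the whole point of dropping the variable-length scheme.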