Message-ID: <2f30d181-0747-cd7d-be6a-f19dcd1674f6@intel.com>
Date: Fri, 8 Sep 2023 09:31:44 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Kai Huang <kai.huang@...el.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: x86@...nel.org, kirill.shutemov@...ux.intel.com,
tony.luck@...el.com, peterz@...radead.org, tglx@...utronix.de,
bp@...en8.de, mingo@...hat.com, hpa@...or.com, seanjc@...gle.com,
pbonzini@...hat.com, david@...hat.com, dan.j.williams@...el.com,
rafael.j.wysocki@...el.com, ashok.raj@...el.com,
reinette.chatre@...el.com, len.brown@...el.com, ak@...ux.intel.com,
isaku.yamahata@...el.com, ying.huang@...el.com, chao.gao@...el.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, nik.borisov@...e.com,
bagasdotme@...il.com, sagis@...gle.com, imammedo@...hat.com
Subject: Re: [PATCH v13 06/22] x86/virt/tdx: Add SEAMCALL error printing for
module initialization
On 8/25/23 05:14, Kai Huang wrote:
> +#define SEAMCALL_PRERR(__seamcall_func, __fn, __args, __seamcall_err_func) \
> +({ \
> +        u64 ___sret = __SEAMCALL_PRERR(__seamcall_func, __fn, __args, \
> +                        __seamcall_err_func, pr_err); \
> +        int ___ret; \
> + \
> +        switch (___sret) { \
> +        case TDX_SUCCESS: \
> +                ___ret = 0; \
> +                break; \
> +        case TDX_SEAMCALL_VMFAILINVALID: \
> +                pr_err("SEAMCALL failed: TDX module not loaded.\n"); \
> +                ___ret = -ENODEV; \
> +                break; \
> +        case TDX_SEAMCALL_GP: \
> +                pr_err("SEAMCALL failed: TDX disabled by BIOS.\n"); \
> +                ___ret = -EOPNOTSUPP; \
> +                break; \
> +        case TDX_SEAMCALL_UD: \
> +                pr_err("SEAMCALL failed: CPU not in VMX operation.\n"); \
> +                ___ret = -EACCES; \
> +                break; \
> +        default: \
> +                ___ret = -EIO; \
> +        } \
> +        ___ret; \
> +})
I have no clue where all of this came from or why it is necessary or why
it has to be macros. I'm just utterly confused.
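
(For the record, the return-code-to-errno mapping itself doesn't seem to need
macro machinery at all; an ordinary helper along the lines of the untested
sketch below would do the same translation.  seamcall_err_to_errno() is just a
placeholder name for discussion, not anything from the series.)

        /*
         * Hypothetical plain-function alternative, sketched only to
         * illustrate the point above.  It mirrors the switch in the
         * quoted macro: map a SEAMCALL return value to an errno and
         * print the fatal cases.
         */
        static int seamcall_err_to_errno(u64 sret)
        {
                switch (sret) {
                case TDX_SUCCESS:
                        return 0;
                case TDX_SEAMCALL_VMFAILINVALID:
                        pr_err("SEAMCALL failed: TDX module not loaded.\n");
                        return -ENODEV;
                case TDX_SEAMCALL_GP:
                        pr_err("SEAMCALL failed: TDX disabled by BIOS.\n");
                        return -EOPNOTSUPP;
                case TDX_SEAMCALL_UD:
                        pr_err("SEAMCALL failed: CPU not in VMX operation.\n");
                        return -EACCES;
                default:
                        return -EIO;
                }
        }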
I was really hoping to be able to run through this set and get it ready
to be merged. But it seems to still be seeing a *LOT* of change.
Should I wait another few weeks for this to settle down again?