Message-ID: <e14da6ed4520bc2362322434b1b4b336f079f3b7.camel@intel.com>
Date:   Tue, 26 Apr 2022 11:41:59 +1200
From:   Kai Huang <kai.huang@...el.com>
To:     Sathyanarayanan Kuppuswamy 
        <sathyanarayanan.kuppuswamy@...ux.intel.com>,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     seanjc@...gle.com, pbonzini@...hat.com, dave.hansen@...el.com,
        len.brown@...el.com, tony.luck@...el.com,
        rafael.j.wysocki@...el.com, reinette.chatre@...el.com,
        dan.j.williams@...el.com, peterz@...radead.org, ak@...ux.intel.com,
        kirill.shutemov@...ux.intel.com, isaku.yamahata@...el.com
Subject: Re: [PATCH v3 06/21] x86/virt/tdx: Shut down TDX module in case of
 error

On Sat, 2022-04-23 at 08:39 -0700, Sathyanarayanan Kuppuswamy wrote:
> 
> On 4/5/22 9:49 PM, Kai Huang wrote:
> > TDX supports shutting down the TDX module at any time during its
> > lifetime.  After the TDX module is shut down, no further SEAMCALLs can be
> > made on any logical cpu.
> > 
> > Shut down the TDX module in case any error happens during the
> > initialization process.  It's pointless to leave the TDX module in some
> > intermediate state.
> > 
> > Shutting down the TDX module requires calling TDH.SYS.LP.SHUTDOWN on all
> 
> Maybe adding a specification reference would help.

How about adding the reference to the code comment?  Here we just need some
factual description.  Adding the reference to the code comment also allows
people to find the relevant part of the spec easily when they are looking at
the actual code (i.e. after the code is merged upstream).  Otherwise people
need to do a git blame and find the exact commit message for that.
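
For example, something along these lines (just a sketch -- the exact wording,
and the section number, which I am taking from your quote below, would need to
be checked against the spec):

	/*
	 * Shut down the TDX module by running TDH.SYS.LP.SHUTDOWN on all
	 * BIOS-enabled CPUs.  See TDX module spec, sec 13.4.1 "Shutdown
	 * Initiated by the Host VMM (as Part of Module Update)".
	 */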
 
> 
> > BIOS-enabled cpus, and the SEAMCALL can run concurrently on different
> > cpus.  Implement a mechanism to run SEAMCALL concurrently on all online
> 
> From the TDX Module spec, sec 13.4.1, titled "Shutdown Initiated by the Host
> VMM (as Part of Module Update)":
> 
> TDH.SYS.LP.SHUTDOWN is designed to set state variables to block all
> SEAMCALLs on the current LP and all SEAMCALL leaf functions except
> TDH.SYS.LP.SHUTDOWN on the other LPs.
> 
> As per the above spec reference, executing TDH.SYS.LP.SHUTDOWN on
> one LP prevents all SEAMCALL leaf functions on all other LPs. If so,
> why execute it on all CPUs?

It prevents all SEAMCALLs on other LPs *except* TDH.SYS.LP.SHUTDOWN.  The spec
defines shutting down the TDX module as running this SEAMCALL on all LPs, so
why run it on just a single cpu?  What's the benefit?

Also, for runtime update the spec mentions that "SEAMLDR can check that
TDH.SYS.SHUTDOWN has been executed on all LPs".  Runtime update isn't supported
in this series, but it can leverage the existing code if we run the SEAMCALL on
all LPs to shut down the module as the spec suggests.  Again, why run it on
just a single cpu?

> 
> > cpus.  Logical-cpu scope initialization will use it too.
> 
> Concurrent SEAMCALL support seems to be useful for other SEAMCALL
> types as well. If you agree, I think it would be better if you moved
> it out into a separate common patch.

There are a couple of problems with doing that:

- All these functions are static in tdx.c.  Introducing them separately in a
dedicated patch would result in compile warnings about those static functions
being unused (see the example below).
- I have received comments from others that I can add those functions when they
are first used.  Given those functions are not large, I prefer this way too.
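
To be concrete, a dedicated patch that only introduced, say:

	static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
	{
		/* Run the SEAMCALL concurrently on all online cpus */
		on_each_cpu(seamcall_smp_call_function, sc, true);
		return atomic_read(&sc->err);
	}

with no caller yet would make the compiler complain along the lines of:

	warning: 'seamcall_on_each_cpu' defined but not used [-Wunused-function]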

> 
> > 
> > Signed-off-by: Kai Huang <kai.huang@...el.com>
> > ---
> >   arch/x86/virt/vmx/tdx/tdx.c | 40 ++++++++++++++++++++++++++++++++++++-
> >   arch/x86/virt/vmx/tdx/tdx.h |  5 +++++
> >   2 files changed, 44 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> > index 674867bccc14..faf8355965a5 100644
> > --- a/arch/x86/virt/vmx/tdx/tdx.c
> > +++ b/arch/x86/virt/vmx/tdx/tdx.c
> > @@ -11,6 +11,8 @@
> >   #include <linux/cpumask.h>
> >   #include <linux/mutex.h>
> >   #include <linux/cpu.h>
> > +#include <linux/smp.h>
> > +#include <linux/atomic.h>
> >   #include <asm/msr-index.h>
> >   #include <asm/msr.h>
> >   #include <asm/cpufeature.h>
> > @@ -328,6 +330,39 @@ static int seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
> >   	return 0;
> >   }
> >   
> > +/* Data structure to make SEAMCALL on multiple CPUs concurrently */
> > +struct seamcall_ctx {
> > +	u64 fn;
> > +	u64 rcx;
> > +	u64 rdx;
> > +	u64 r8;
> > +	u64 r9;
> > +	atomic_t err;
> > +	u64 seamcall_ret;
> > +	struct tdx_module_output out;
> > +};
> > +
> > +static void seamcall_smp_call_function(void *data)
> > +{
> > +	struct seamcall_ctx *sc = data;
> > +	int ret;
> > +
> > +	ret = seamcall(sc->fn, sc->rcx, sc->rdx, sc->r8, sc->r9,
> > +			&sc->seamcall_ret, &sc->out);
> > +	if (ret)
> > +		atomic_set(&sc->err, ret);
> > +}
> > +
> > +/*
> > + * Call the SEAMCALL on all online cpus concurrently.
> > + * Return error if SEAMCALL fails on any cpu.
> > + */
> > +static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
> > +{
> > +	on_each_cpu(seamcall_smp_call_function, sc, true);
> > +	return atomic_read(&sc->err);
> > +}
> > +
> >   static inline bool p_seamldr_ready(void)
> >   {
> >   	return !!p_seamldr_info.p_seamldr_ready;
> > @@ -437,7 +472,10 @@ static int init_tdx_module(void)
> >   
> >   static void shutdown_tdx_module(void)
> >   {
> > -	/* TODO: Shut down the TDX module */
> > +	struct seamcall_ctx sc = { .fn = TDH_SYS_LP_SHUTDOWN };
> > +
> > +	seamcall_on_each_cpu(&sc);
> 
> Maybe check the error and WARN_ON() on failure?

When a SEAMCALL fails, the error code is actually printed out (please see the
previous patch), so I thought there's no need for a WARN_ON() here (and in some
other similar places).  I am not sure an additional WARN_ON() would help much?
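
If people do want it, it would just be a one-liner on top of the existing call,
e.g. something like:

	/* Only adds a backtrace; seamcall() has already printed the error */
	WARN_ON(seamcall_on_each_cpu(&sc));

but that only duplicates what the printed error message already tells us.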

-- 
Thanks,
-Kai

