Message-ID: <001d826d-a644-37ef-6b6d-729be14fe5ca@intel.com>
Date:   Wed, 23 Nov 2022 16:28:53 -0800
From:   Dave Hansen <dave.hansen@...el.com>
To:     Kai Huang <kai.huang@...el.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org
Cc:     linux-mm@...ck.org, seanjc@...gle.com, pbonzini@...hat.com,
        dan.j.williams@...el.com, rafael.j.wysocki@...el.com,
        kirill.shutemov@...ux.intel.com, ying.huang@...el.com,
        reinette.chatre@...el.com, len.brown@...el.com,
        tony.luck@...el.com, peterz@...radead.org, ak@...ux.intel.com,
        isaku.yamahata@...el.com, chao.gao@...el.com,
        sathyanarayanan.kuppuswamy@...ux.intel.com, bagasdotme@...il.com,
        sagis@...gle.com, imammedo@...hat.com
Subject: Re: [PATCH v7 17/20] x86/virt/tdx: Configure global KeyID on all
 packages

On 11/20/22 16:26, Kai Huang wrote:
> After the array of TDMRs and the global KeyID are configured to the TDX
> module, use TDH.SYS.KEY.CONFIG to configure the key of the global KeyID
> on all packages.
> 
> TDH.SYS.KEY.CONFIG must be done on one (any) cpu for each package.  And
> it cannot run concurrently on different CPUs.  Implement a helper to
> run SEAMCALL on one cpu for each package one by one, and use it to
> configure the global KeyID on all packages.

This has the same problems as SYS.LP.INIT.  It's basically snake oil in
some TDX configurations.

This really only needs to be done when the TDX module has memory
mappings on a socket for which it needs to use the "global KeyID".  If
there's no PAMT on a socket, there are probably no allocations there to
speak of and no *technical* reason to call TDH.SYS.KEY.CONFIG on that
socket.  At least none I can see.

So, let's check up on this requirement as well.  This could also turn
out to be a real pain if all the CPUs on a socket are offline.

> Intel hardware doesn't guarantee cache coherency across different
> KeyIDs.  The kernel needs to flush PAMT's dirty cachelines (associated
> with KeyID 0) before the TDX module uses the global KeyID to access the
> PAMT.  Following the TDX module specification, flush cache before
> configuring the global KeyID on all packages.

I think it's probably worth an aside here about why TDX security isn't
dependent on this step.  I *think* it boils down to the memory integrity
protections.  If the caches aren't flushed, a dirty KeyID-0 cacheline
could be written back to RAM.  The TDX module would come along later and
read the cacheline using KeyID-whatever, get an integrity mismatch,
machine check, and then everyone would be sad.

Everyone is sad, but TDX security remains intact because memory
integrity saved us.

Is it memory integrity or the TD bit, actually?

> Given the PAMT size can be large (~1/256th of system RAM), just use
> WBINVD on all CPUs to flush.

<sigh>

> Note if any TDH.SYS.KEY.CONFIG fails, the TDX module may already have
> used the global KeyID to write any PAMT.  Therefore, need to use WBINVD
> to flush cache before freeing the PAMTs back to the kernel.  Note using
> MOVDIR64B (which changes the page's associated KeyID from the old TDX
> private KeyID back to KeyID 0, which is used by the kernel) to clear
> PAMTs isn't needed, as KeyID 0 doesn't support integrity checking.

I hope this is covered in the code well.

> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index 3a032930e58a..99d1be5941a7 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -224,6 +224,46 @@ static void seamcall_on_each_cpu(struct seamcall_ctx *sc)
>  	on_each_cpu(seamcall_smp_call_function, sc, true);
>  }
>  
> +/*
> + * Call one SEAMCALL on one (any) cpu for each physical package in
> + * serialized way.  Return immediately in case of any error if
> + * SEAMCALL fails on any cpu.
> + *
> + * Note for serialized calls 'struct seamcall_ctx::err' doesn't have
> + * to be atomic, but for simplicity just reuse it instead of adding
> + * a new one.
> + */
> +static int seamcall_on_each_package_serialized(struct seamcall_ctx *sc)
> +{
> +	cpumask_var_t packages;
> +	int cpu, ret = 0;
> +
> +	if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
> +		return -ENOMEM;
> +
> +	for_each_online_cpu(cpu) {
> +		if (cpumask_test_and_set_cpu(topology_physical_package_id(cpu),
> +					packages))
> +			continue;
> +
> +		ret = smp_call_function_single(cpu, seamcall_smp_call_function,
> +				sc, true);
> +		if (ret)
> +			break;
> +
> +		/*
> +		 * Doesn't have to use atomic_read(), but it doesn't
> +		 * hurt either.
> +		 */

I don't think you need to cover this twice.  Just do it in one comment.

> +		ret = atomic_read(&sc->err);
> +		if (ret)
> +			break;
> +	}
> +
> +	free_cpumask_var(packages);
> +	return ret;
> +}
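
The dedupe-by-package pattern itself is fine, for the record.  Stripped
of the SMP plumbing it's just this (userspace model, made-up names, a
plain bitmask standing in for the cpumask):

```c
#include <assert.h>

/* Toy model: 8 CPUs across 2 packages, call once per package */
#define NR_CPUS 8

static int cpu_to_package[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };
static int calls_per_package[2];

static int seamcall_stub(int cpu)
{
	calls_per_package[cpu_to_package[cpu]]++;
	return 0;
}

static int on_each_package_serialized(int (*fn)(int cpu))
{
	unsigned long seen = 0;	/* bitmask standing in for cpumask */
	int cpu, ret;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		int pkg = cpu_to_package[cpu];

		/* Already visited this package, skip */
		if (seen & (1UL << pkg))
			continue;
		seen |= 1UL << pkg;

		/* One (any) CPU stands in for the whole package */
		ret = fn(cpu);
		if (ret)
			return ret;	/* bail on first error */
	}
	return 0;
}

int main(void)
{
	assert(on_each_package_serialized(seamcall_stub) == 0);
	assert(calls_per_package[0] == 1 && calls_per_package[1] == 1);
	return 0;
}
```
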
> +
>  static int tdx_module_init_cpus(void)
>  {
>  	struct seamcall_ctx sc = { .fn = TDH_SYS_LP_INIT };
> @@ -1010,6 +1050,22 @@ static int config_tdx_module(struct tdmr_info *tdmr_array, int tdmr_num,
>  	return ret;
>  }
>  
> +static int config_global_keyid(void)
> +{
> +	struct seamcall_ctx sc = { .fn = TDH_SYS_KEY_CONFIG };
> +
> +	/*
> +	 * Configure the key of the global KeyID on all packages by
> +	 * calling TDH.SYS.KEY.CONFIG on all packages in a serialized
> +	 * way as it cannot run concurrently on different CPUs.
> +	 *
> +	 * TDH.SYS.KEY.CONFIG may fail with entropy error (which is
> +	 * a recoverable error).  Assume this is exceedingly rare and
> +	 * just return error if encountered instead of retrying.
> +	 */
> +	return seamcall_on_each_package_serialized(&sc);
> +}
> +
>  /*
>   * Detect and initialize the TDX module.
>   *
> @@ -1098,15 +1154,44 @@ static int init_tdx_module(void)
>  	if (ret)
>  		goto out_free_pamts;
>  
> +	/*
> +	 * Hardware doesn't guarantee cache coherency across different
> +	 * KeyIDs.  The kernel needs to flush PAMT's dirty cachelines
> +	 * (associated with KeyID 0) before the TDX module can use the
> +	 * global KeyID to access the PAMT.  Given PAMTs are potentially
> +	 * large (~1/256th of system RAM), just use WBINVD on all cpus
> +	 * to flush the cache.
> +	 *
> +	 * Follow the TDX spec to flush cache before configuring the
> +	 * global KeyID on all packages.
> +	 */
> +	wbinvd_on_all_cpus();
> +
> +	/* Config the key of global KeyID on all packages */
> +	ret = config_global_keyid();
> +	if (ret)
> +		goto out_free_pamts;
> +
>  	/*
>  	 * Return -EINVAL until all steps of TDX module initialization
>  	 * process are done.
>  	 */
>  	ret = -EINVAL;
>  out_free_pamts:
> -	if (ret)
> +	if (ret) {
> +		/*
> +		 * Part of PAMT may already have been initialized by

						s/initialized/written/

> +		 * TDX module.  Flush cache before returning PAMT back
> +		 * to the kernel.
> +		 *
> +		 * Note there's no need to do MOVDIR64B (which changes
> +		 * the page's associated KeyID from the old TDX private
> +		 * KeyID back to KeyID 0, which is used by the kernel),
> +		 * as KeyID 0 doesn't support integrity check.
> +		 */

MOVDIR64B is the tiniest of implementation details and also not the only
way to initialize memory integrity metadata.

Just keep this high level:

		* No need to worry about memory integrity checks here.
		* KeyID 0 has integrity checking disabled.

> +		wbinvd_on_all_cpus();
>  		tdmrs_free_pamt_all(tdmr_array, tdmr_num);
> -	else
> +	} else
>  		pr_info("%lu pages allocated for PAMT.\n",
>  				tdmrs_count_pamt_pages(tdmr_array, tdmr_num));
>  out_free_tdmrs:
> diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
> index c26bab2555ca..768d097412ab 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.h
> +++ b/arch/x86/virt/vmx/tdx/tdx.h
> @@ -15,6 +15,7 @@
>  /*
>   * TDX module SEAMCALL leaf functions
>   */
> +#define TDH_SYS_KEY_CONFIG	31
>  #define TDH_SYS_INFO		32
>  #define TDH_SYS_INIT		33
>  #define TDH_SYS_LP_INIT		35
