Date:   Tue, 23 Jan 2018 10:40:33 -0800
From:   Dave Hansen <dave.hansen@...el.com>
To:     David Woodhouse <dwmw@...zon.co.uk>, arjan@...ux.intel.com,
        tglx@...utronix.de, karahmed@...zon.de, x86@...nel.org,
        linux-kernel@...r.kernel.org, tim.c.chen@...ux.intel.com,
        bp@...en8.de, peterz@...radead.org, pbonzini@...hat.com,
        ak@...ux.intel.com, torvalds@...ux-foundation.org,
        gregkh@...ux-foundation.org, thomas.lendacky@....com
Subject: Re: [PATCH v2 5/5] x86/pti: Do not enable PTI on fixed Intel
 processors

On 01/23/2018 08:52 AM, David Woodhouse wrote:
> When fixed Intel processors advertise the IA32_ARCH_CAPABILITIES MSR and it
> has the RDCL_NO bit set, they don't need KPTI either.
> 
> Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
> ---
>  arch/x86/kernel/cpu/common.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index e5d66e9..c05d0fe 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -900,8 +900,14 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
>  
>  	setup_force_cpu_cap(X86_FEATURE_ALWAYS);
>  
> -	if (c->x86_vendor != X86_VENDOR_AMD)
> -		setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
> +	if (c->x86_vendor != X86_VENDOR_AMD) {
> +		u64 ia32_cap = 0;
> +
> +		if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> +			rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
> +		if (!(ia32_cap & ARCH_CAP_RDCL_NO))
> +			setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
> +	}

I'd really rather we break this out into a nice, linear set of
true/false conditions.

bool early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
{
	u64 ia32_cap = 0;

	/* AMD processors are not subject to the Meltdown exploit: */
	if (c->x86_vendor == X86_VENDOR_AMD)
		return false;

	/* Assume CPUs not enumerating ARCH_CAPABILITIES are vulnerable: */
	if (!cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
		return true;

	/*
	 * Does the CPU explicitly enumerate that it is not vulnerable
	 * to Rogue Data Cache Load (aka Meltdown)?
	 */
	rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
	if (ia32_cap & ARCH_CAP_RDCL_NO)
		return false;

	/* Assume everything else is vulnerable */
	return true;
}

Then we get a nice:

	if (early_cpu_vulnerable_meltdown(c))
		setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);

Which clearly shows that Meltdown is special.
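
As an aside, the RDCL_NO semantics are easy to sanity-check from user
space. A minimal sketch, assuming the msr driver is loaded so
/dev/cpu/0/msr exists and the CPU enumerates IA32_ARCH_CAPABILITIES
(MSR 0x10a; bit 0 is RDCL_NO):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_ARCH_CAPABILITIES	0x10a
#define ARCH_CAP_RDCL_NO		(1ULL << 0)

int main(void)
{
	uint64_t cap = 0;
	/*
	 * The msr driver exposes one file per CPU; a read returns
	 * 8 bytes at a file offset equal to the MSR number.
	 */
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr");
		return 1;
	}
	if (pread(fd, &cap, sizeof(cap), MSR_IA32_ARCH_CAPABILITIES) !=
	    sizeof(cap)) {
		/* An unimplemented MSR surfaces here as EIO. */
		perror("rdmsr IA32_ARCH_CAPABILITIES");
		close(fd);
		return 1;
	}
	close(fd);
	printf("RDCL_NO is %s\n",
	       (cap & ARCH_CAP_RDCL_NO) ? "set: not vulnerable to Meltdown"
					: "clear: assume vulnerable");
	return 0;
}

That mirrors the proposed kernel logic: if the MSR can't be read, assume
vulnerable; if RDCL_NO is set, the CPU promises it is not.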
