Message-ID: <1516757232.13558.68.camel@infradead.org>
Date: Wed, 24 Jan 2018 01:27:12 +0000
From: David Woodhouse <dwmw2@...radead.org>
To: Dave Hansen <dave.hansen@...el.com>, arjan@...ux.intel.com,
tglx@...utronix.de, karahmed@...zon.de, x86@...nel.org,
linux-kernel@...r.kernel.org, tim.c.chen@...ux.intel.com,
bp@...en8.de, peterz@...radead.org, pbonzini@...hat.com,
ak@...ux.intel.com, torvalds@...ux-foundation.org,
gregkh@...ux-foundation.org, thomas.lendacky@....com
Subject: Re: [PATCH v2 5/5] x86/pti: Do not enable PTI on fixed Intel
processors
On Tue, 2018-01-23 at 10:40 -0800, Dave Hansen wrote:
>
> I'd really rather we break this out into a nice, linear set of
> true/false conditions.
>
> bool early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
> {
> 	u64 ia32_cap = 0;
>
> 	/* AMD processors are not subject to Meltdown exploit: */
> 	if (c->x86_vendor == X86_VENDOR_AMD)
> 		return false;
>
> 	/* Assume all remaining CPUs not enumerating are vulnerable: */
> 	if (!cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> 		return true;
>
> 	/*
> 	 * Does the CPU explicitly enumerate that it is not vulnerable
> 	 * to Rogue Data Cache Load (aka Meltdown)?
> 	 */
> 	rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
> 	if (ia32_cap & ARCH_CAP_RDCL_NO)
> 		return false;
>
> 	/* Assume everything else is vulnerable */
> 	return true;
> }
Makes sense. It also starts to address Alan's "starting to get messy"
comment, and gives a simple way to add other conditions.