Message-Id: <23d8ffaac99be49aa163eb16dd131399141fc432.1539798901.git.tim.c.chen@linux.intel.com>
Date: Wed, 17 Oct 2018 10:59:37 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Jiri Kosina <jikos@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>,
Tom Lendacky <thomas.lendacky@....com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Andi Kleen <ak@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Casey Schaufler <casey.schaufler@...el.com>,
Asit Mallick <asit.k.mallick@...el.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Jon Masters <jcm@...hat.com>, linux-kernel@...r.kernel.org,
x86@...nel.org
Subject: [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update

Reorganize the speculation control MSR update code. Currently it is
limited to dynamically updating only the Speculative Store Bypass
Disable bit.

This patch consolidates the logic that checks whether an AMD CPU uses
this MSR to control SSBD at all, and prepares for cleanly updating
other bits in the SPEC_CTRL MSR.

Originally-by: Thomas Lendacky <Thomas.Lendacky@....com>
Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---
arch/x86/kernel/process.c | 39 +++++++++++++++++++++++++++++----------
1 file changed, 29 insertions(+), 10 deletions(-)
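
For review convenience, the reorganized __speculation_ctrl_update()
assembled from the hunks below reads as follows (a restatement of the
diff, not an additional change):

static __always_inline void __speculation_ctrl_update(unsigned long tifp,
						      unsigned long tifn)
{
	bool updmsr = false;

	/* If TIF_SSBD changed, pick the SSBD mechanism this CPU uses */
	if ((tifp ^ tifn) & _TIF_SSBD) {
		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
			amd_set_ssb_virt_state(tifn);
		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
			amd_set_core_ssb_state(tifn);
		else if (static_cpu_has(X86_FEATURE_SSBD))
			updmsr = true;
	}

	if (updmsr)
		spec_ctrl_update_msr(tifn);
}
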
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 8aa4960..789f1bada 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -397,25 +397,45 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
{
- u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+ u64 msr = x86_spec_ctrl_base;
+
+ /*
+ * If X86_FEATURE_SSBD is not set, the SSBD
+ * bit is not to be touched.
+ */
+ if (static_cpu_has(X86_FEATURE_SSBD))
+ msr |= ssbd_tif_to_spec_ctrl(tifn);
wrmsrl(MSR_IA32_SPEC_CTRL, msr);
}
-static __always_inline void __speculation_ctrl_update(unsigned long tifn)
+static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+ unsigned long tifn)
{
- if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
- amd_set_ssb_virt_state(tifn);
- else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
- amd_set_core_ssb_state(tifn);
- else
+ bool updmsr = false;
+
+	/* If TIF_SSBD changed, pick the SSBD mechanism this CPU uses */
+ if ((tifp ^ tifn) & _TIF_SSBD) {
+ if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+ amd_set_ssb_virt_state(tifn);
+ else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+ amd_set_core_ssb_state(tifn);
+ else if (static_cpu_has(X86_FEATURE_SSBD))
+ updmsr = true;
+ }
+
+ if (updmsr)
spec_ctrl_update_msr(tifn);
}
void speculation_ctrl_update(unsigned long tif)
{
+	/*
+	 * Force the update: ~tif differs from tif in every bit, so
+	 * the TIF_SSBD "changed" check always fires.
+	 */
preempt_disable();
- __speculation_ctrl_update(tif);
+ __speculation_ctrl_update(~tif, tif);
preempt_enable();
}
@@ -451,8 +471,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
if ((tifp ^ tifn) & _TIF_NOCPUID)
set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
- if ((tifp ^ tifn) & _TIF_SSBD)
- __speculation_ctrl_update(tifn);
+ __speculation_ctrl_update(tifp, tifn);
}
/*
--
2.9.4
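
A side note on the ~tif trick above: a bit set in (tifp ^ tifn) means
that flag differs between the previous and next task, and passing ~tif
as the previous flags makes every bit differ, forcing the update. A
minimal standalone illustration of the idiom (the _TIF_SSBD bit
position here is hypothetical, chosen only for the demo):

#include <stdio.h>

#define _TIF_SSBD	(1UL << 5)	/* hypothetical bit position */

int main(void)
{
	unsigned long tifp = 0UL;		/* prev task: SSBD clear */
	unsigned long tifn = _TIF_SSBD;		/* next task: SSBD set */
	unsigned long tif  = _TIF_SSBD;		/* forced-update path */

	/* A set bit in the XOR means that flag changed across the switch. */
	if ((tifp ^ tifn) & _TIF_SSBD)
		printf("TIF_SSBD changed: SSBD state must be updated\n");

	/* ~tif differs from tif in every bit, so the check always fires. */
	if ((~tif ^ tif) & _TIF_SSBD)
		printf("forced path: update always taken\n");

	return 0;
}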