Message-ID: <63856b8d-0b53-7103-db7e-315330e2ee48@gmail.com>
Date: Fri, 26 Oct 2018 13:21:36 -0400
From: Waiman Long <longman9394@...il.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>,
Jiri Kosina <jikos@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Tom Lendacky <thomas.lendacky@....com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Andi Kleen <ak@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Casey Schaufler <casey.schaufler@...el.com>,
Asit Mallick <asit.k.mallick@...el.com>,
Arjan van de Ven <arjan@...ux.intel.com>,
Jon Masters <jcm@...hat.com>, linux-kernel@...r.kernel.org,
x86@...nel.org
Subject: Re: [Patch v3 09/13] x86/speculation: Reorganize SPEC_CTRL MSR update
On 10/17/2018 01:59 PM, Tim Chen wrote:
> Reorganize the speculation control MSR update code. Currently it is limited
> to only dynamic updates of the Speculative Store Bypass Disable bit.
> This patch consolidates the logic to check for AMD CPUs that may or may
> not use this MSR to control SSBD. This prepares us to add logic to update
> other bits in the SPEC_CTRL MSR cleanly.
>
> Originally-by: Thomas Lendacky <Thomas.Lendacky@....com>
> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> ---
> arch/x86/kernel/process.c | 39 +++++++++++++++++++++++++++++----------
> 1 file changed, 29 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 8aa4960..789f1bada 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -397,25 +397,45 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
>
> static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
> {
> - u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
> + u64 msr = x86_spec_ctrl_base;
> +
> + /*
> + * If X86_FEATURE_SSBD is not set, the SSBD
> + * bit is not to be touched.
> + */
> + if (static_cpu_has(X86_FEATURE_SSBD))
> + msr |= ssbd_tif_to_spec_ctrl(tifn);
>
> wrmsrl(MSR_IA32_SPEC_CTRL, msr);
> }
>
> -static __always_inline void __speculation_ctrl_update(unsigned long tifn)
> +static __always_inline void __speculation_ctrl_update(unsigned long tifp,
> + unsigned long tifn)
I think it would be more intuitive to pass in (tifp ^ tifn) as a bitmask
of the changed TIF bits rather than tifp alone, since you are only
interested in the changed bits anyway. Please also document the input
parameters; it is hard to tell what they are just by reading the function.
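Something along these lines is what I have in mind (just an untested
sketch to illustrate the suggested interface, not a replacement patch;
the name "tif_diff" is only a placeholder):

static __always_inline void __speculation_ctrl_update(unsigned long tif_diff,
						       unsigned long tifn)
{
	bool updmsr = false;

	/* tif_diff is the set of TIF bits that changed, i.e. tifp ^ tifn */
	if (tif_diff & _TIF_SSBD) {
		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
			amd_set_ssb_virt_state(tifn);
		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
			amd_set_core_ssb_state(tifn);
		else if (static_cpu_has(X86_FEATURE_SSBD))
			updmsr = true;
	}

	if (updmsr)
		spec_ctrl_update_msr(tifn);
}

The callers would then compute the XOR themselves, or pass all ones to
force an update:

	/* in __switch_to_xtra() */
	__speculation_ctrl_update(tifp ^ tifn, tifn);

	/* in speculation_ctrl_update(): treat every bit as changed */
	__speculation_ctrl_update(~0UL, tif);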
Cheers,
Longman
> {
> - if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> - amd_set_ssb_virt_state(tifn);
> - else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> - amd_set_core_ssb_state(tifn);
> - else
> + bool updmsr = false;
> +
> + /* Check for AMD cpu to see if it uses SPEC_CTRL MSR for SSBD */
> + if ((tifp ^ tifn) & _TIF_SSBD) {
> + if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> + amd_set_ssb_virt_state(tifn);
> + else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> + amd_set_core_ssb_state(tifn);
> + else if (static_cpu_has(X86_FEATURE_SSBD))
> + updmsr = true;
> + }
> +
> + if (updmsr)
> spec_ctrl_update_msr(tifn);
> }
>
> void speculation_ctrl_update(unsigned long tif)
> {
> + /*
> + * On this path we're forcing the update, so use ~tif as the
> + * previous flags.
> + */
> preempt_disable();
> - __speculation_ctrl_update(tif);
> + __speculation_ctrl_update(~tif, tif);
> preempt_enable();
> }
>
> @@ -451,8 +471,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
> if ((tifp ^ tifn) & _TIF_NOCPUID)
> set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>
> - if ((tifp ^ tifn) & _TIF_SSBD)
> - __speculation_ctrl_update(tifn);
> + __speculation_ctrl_update(tifp, tifn);
> }
>
> /*