Message-ID: <4eb6b777-0fba-cd6a-0481-28b800e278f5@amd.com>
Date: Fri, 30 Apr 2021 09:50:22 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: Ramakrishna Saripalli <rsaripal@....com>,
linux-kernel@...r.kernel.org, x86@...nel.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, hpa@...or.com,
Jonathan Corbet <corbet@....net>
Cc: bsd@...hat.com
Subject: Re: [PATCH v4 1/1] x86/cpufeatures: Implement Predictive Store
Forwarding control.
On 4/30/21 8:17 AM, Ramakrishna Saripalli wrote:
> From: Ramakrishna Saripalli <rk.saripalli@....com>
>
> Certain AMD processors feature a new technology called Predictive Store
> Forwarding (PSF).
>
> PSF is a micro-architectural optimization designed to improve the
> performance of code execution by predicting dependencies between
> loads and stores.
>
> Incorrect PSF predictions can occur for two reasons:
>
> - The load/store pair may have had a dependency for a while, but the
> dependency has since stopped because the address in the load/store
> pair has changed.
>
> - An alias may exist in the PSF predictor structure, which is kept in
> microarchitectural state. The PSF predictor tracks load/store pairs
> based on portions of the instruction pointer, so a load/store pair
> that does have a dependency may be aliased by another load/store pair
> that does not have the same dependency. This can result in incorrect
> speculation.
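
For illustration (this snippet is not from the patch), the kind of
load/store pair the predictor learns looks roughly like:

	/* A load that depends on the immediately preceding store. */
	static int read_back(int *p, int v)
	{
		*p = v;		/* store to *p */
		return *p;	/* dependent load; PSF may forward 'v' speculatively */
	}

If an aliased load/store pair without such a dependency trains the same
predictor entry, the forwarded value is wrong and the bad speculation is
only corrected later in the pipeline.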
>
> Software may be able to detect this aliasing and perform side-channel
> attacks.
>
> All CPUs that implement PSF provide one bit to disable the feature.
> If that disable bit is present, the CPU implements PSF and is
> therefore vulnerable to the PSF issues described above.
>
> The following CPUID feature bit is introduced:
>
> X86_FEATURE_PSFD: CPUID_Fn80000008_EBX[28] ("PSF disable")
> If this bit is 1, the CPU implements PSF and supports PSF control
> via the SPEC_CTRL MSR.
>
> All AMD processors that support PSF implement a bit in
> SPEC_CTRL MSR (0x48) to disable or enable Predictive Store
> Forwarding.
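
As an aside, the enumeration itself is a single CPUID read; a minimal
sketch using the kernel's cpuid_ebx() helper (the patch instead relies on
the normal cpufeatures handling of word 13, CPUID 0x80000008 EBX) might
look like:

	/* Sketch only: CPUID Fn8000_0008 EBX[28] enumerates PSF disable. */
	static bool cpu_supports_psfd(void)
	{
		return !!(cpuid_ebx(0x80000008) & BIT(28));
	}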
>
> PSF control introduces a new kernel parameter called
> predict_store_fwd.
>
> Kernel parameter predict_store_fwd has the following values
>
> - off. Disable PSF on all CPUs.
>
> - on. Enable PSF on all CPUs. This is the default setting.
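
So, for example, disabling the feature at boot would just mean adding the
new parameter to the kernel command line:

	predict_store_fwd=off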
>
> Signed-off-by: Ramakrishna Saripalli <rk.saripalli@....com>
> ---
> .../admin-guide/kernel-parameters.txt | 5 ++++
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/msr-index.h | 2 ++
> arch/x86/kernel/cpu/amd.c | 23 +++++++++++++++++++
> arch/x86/kernel/cpu/bugs.c | 6 ++++-
> 5 files changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 04545725f187..a4dd08bb0d3a 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -3940,6 +3940,11 @@
> Format: {"off"}
> Disable Hardware Transactional Memory
>
> + predict_store_fwd= [X86] This option controls PSF.
> + off - Turns off PSF.
> + on - Turns on PSF.
> + default : on.
> +
> preempt= [KNL]
> Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
> none - Limited to cond_resched() calls
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index cc96e26d69f7..078f46022293 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -309,6 +309,7 @@
> #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
> #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
> #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
> +#define X86_FEATURE_PSFD (13*32+28) /* Predictive Store Forward Disable */
>
> /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
> #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 546d6ecf0a35..f569918c8754 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -51,6 +51,8 @@
> #define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */
> #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */
> #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */
> +#define SPEC_CTRL_PSFD_SHIFT 7
> +#define SPEC_CTRL_PSFD BIT(SPEC_CTRL_PSFD_SHIFT) /* Predictive Store Forwarding Disable */
>
> #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
> #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 347a956f71ca..3fdaec8090b6 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -1170,3 +1170,26 @@ void set_dr_addr_mask(unsigned long mask, int dr)
> break;
> }
> }
> +
> +static int __init psf_cmdline(char *str)
> +{
> + u64 tmp = 0;
> +
> + if (!boot_cpu_has(X86_FEATURE_PSFD))
> + return 0;
> +
> + if (!str)
> + return -EINVAL;
> +
> + if (!strcmp(str, "off")) {
> + set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
> + rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
> + tmp |= SPEC_CTRL_PSFD;
> + x86_spec_ctrl_base |= tmp;
With the change to bugs.c, this should just be:
	x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
> + wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
Then the whole rdmsrl/or/wrmsrl could just be replaced with msr_set_bit().
I think that would do what you need.
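
Something like this, as an untested sketch (msr_set_bit() is declared in
<asm/msr.h>):

	if (!strcmp(str, "off")) {
		set_cpu_cap(&boot_cpu_data, X86_FEATURE_MSR_SPEC_CTRL);
		x86_spec_ctrl_base |= SPEC_CTRL_PSFD;
		msr_set_bit(MSR_IA32_SPEC_CTRL, SPEC_CTRL_PSFD_SHIFT);
	}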
Thanks,
Tom
> + }
> +
> + return 0;
> +}
> +
> +early_param("predict_store_fwd", psf_cmdline);
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index d41b70fe4918..536136e0daa3 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -78,6 +78,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear);
>
> void __init check_bugs(void)
> {
> + u64 tmp = 0;
> +
> identify_boot_cpu();
>
> /*
> @@ -97,7 +99,9 @@ void __init check_bugs(void)
> * init code as it is not enumerated and depends on the family.
> */
> if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
> - rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> + rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
> +
> + x86_spec_ctrl_base |= tmp;
>
> /* Allow STIBP in MSR_SPEC_CTRL if supported */
> if (boot_cpu_has(X86_FEATURE_STIBP))
>