Message-ID: <Z1oR3qxjr8hHbTpN@google.com>
Date: Wed, 11 Dec 2024 14:27:42 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Borislav Petkov <bp@...nel.org>
Cc: X86 ML <x86@...nel.org>, Paolo Bonzini <pbonzini@...hat.com>,
Josh Poimboeuf <jpoimboe@...hat.com>, Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
KVM <kvm@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
"Borislav Petkov (AMD)" <bp@...en8.de>
Subject: Re: [PATCH v2 3/4] x86/bugs: KVM: Add support for SRSO_MSR_FIX
On Mon, Dec 02, 2024, Borislav Petkov wrote:
> diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
> index 2ad1c05b8c88..79a8f7dea06d 100644
> --- a/Documentation/admin-guide/hw-vuln/srso.rst
> +++ b/Documentation/admin-guide/hw-vuln/srso.rst
> @@ -104,7 +104,17 @@ The possible values in this file are:
>
> (spec_rstack_overflow=ibpb-vmexit)
>
> + * 'Mitigation: Reduced Speculation':
>
> + This mitigation gets automatically enabled when the above one "IBPB on
> + VMEXIT" has been selected and the CPU supports the BpSpecReduce bit.
> +
> + Currently, the mitigation is automatically enabled when KVM enables
> + virtualization and can incur some cost.
How much cost are we talking?
> static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
> @@ -2665,6 +2667,12 @@ static void __init srso_select_mitigation(void)
>
> ibpb_on_vmexit:
> case SRSO_CMD_IBPB_ON_VMEXIT:
> + if (boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX)) {
> + pr_notice("Reducing speculation to address VM/HV SRSO attack vector.\n");
> + srso_mitigation = SRSO_MITIGATION_BP_SPEC_REDUCE;
> + break;
> + }
> +
> if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
> if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
> setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index dd15cc635655..e4fad330cd25 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -608,6 +608,9 @@ static void svm_disable_virtualization_cpu(void)
> kvm_cpu_svm_disable();
>
> amd_pmu_disable_virt();
> +
> + if (cpu_feature_enabled(X86_FEATURE_SRSO_MSR_FIX))
> + msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
> }
>
> static int svm_enable_virtualization_cpu(void)
> @@ -685,6 +688,9 @@ static int svm_enable_virtualization_cpu(void)
> rdmsr(MSR_TSC_AUX, sev_es_host_save_area(sd)->tsc_aux, msr_hi);
> }
>
> + if (cpu_feature_enabled(X86_FEATURE_SRSO_MSR_FIX))
> + msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
IIUC, this magic bit reduces how much the CPU is allowed to speculate in order
to mitigate potential VM=>host attacks, and that reducing speculation also reduces
overall performance.
If that's correct, then enabling the magic bit needs to be gated by an appropriate
mitigation being enabled, not forced on automatically just because the CPU supports
X86_FEATURE_SRSO_MSR_FIX.
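E.g. gate the MSR writes on the mitigation that was actually selected, not on the raw
feature flag.  A very rough, untested sketch of what I have in mind (the helper name is
invented here, and bugs.c would need to expose it to KVM somehow):

	/* arch/x86/kernel/cpu/bugs.c -- hypothetical helper, not in this patch */
	bool cpu_wants_bp_spec_reduce(void)
	{
		return srso_mitigation == SRSO_MITIGATION_BP_SPEC_REDUCE;
	}

	/* arch/x86/kvm/svm/svm.c */
	if (cpu_wants_bp_spec_reduce())
		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);

That keeps the decision of whether or not to pay the cost in the mitigation code, where
it belongs.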
And depending on the cost, it might also make sense to set the bit on-demand, and
then clean up when KVM disables virtualization. E.g. wait to set the bit until
entry to a guest is imminent.
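Again untested and purely illustrative, i.e. the per-CPU flag and hooking the VMRUN
path are my assumptions, not something the patch currently does:

	/* Assumed per-CPU tracking so the bit is only toggled when needed. */
	static DEFINE_PER_CPU(bool, bp_spec_reduce_set);

	/* Invoked on the path to VMRUN, e.g. from svm_vcpu_run(). */
	static void svm_set_bp_spec_reduce(void)
	{
		if (!cpu_wants_bp_spec_reduce() || this_cpu_read(bp_spec_reduce_set))
			return;

		msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
		this_cpu_write(bp_spec_reduce_set, true);
	}

	static void svm_disable_virtualization_cpu(void)
	{
		...
		if (this_cpu_read(bp_spec_reduce_set)) {
			msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
			this_cpu_write(bp_spec_reduce_set, false);
		}
	}

That way, CPUs that never actually run a guest never pay the cost.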