Message-ID: <20250425080946.GBaAtDShGzNQqi30vr@renoirsky.local>
Date: Fri, 25 Apr 2025 10:09:46 +0200
From: Borislav Petkov <bp@...en8.de>
To: "Kaplan, David" <David.Kaplan@....com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"x86@...nel.org" <x86@...nel.org>,
"H . Peter Anvin" <hpa@...or.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation

On Thu, Apr 24, 2025 at 08:31:25PM +0000, Kaplan, David wrote:
> verw_mitigation_selected implies that X86_FEATURE_CLEAR_CPU_BUF will
> be enabled, which does a VERW on kernel/vmm exits.

Does it imply that though? As explained, it simply says that *a* VERW
mitigation has been selected.

And only in the MMIO case, which mandates that both spots - kernel entry and
VMENTER - be mitigated, does it basically say that the CLEAR_CPU_BUFFERS
macro should be active. And that macro does VERW on kernel entry and right
before VMLAUNCH.

And when the machine is not affected by either MDS or TAA, it enables this
cpu_buf_vm_clear thing, which does VERW in C code a bit earlier, before
VMLAUNCH.
> So I'm not sure the comment is really wrong, but it can be rephrased.

Yes please.

> But it kind of does. !verw_mitigation_selected means that the
> X86_FEATURE bit there isn't set. So the VMM-based mitigation (the
> static branch) is only used if the broader X86_FEATURE_CLEAR_CPU_BUF
> is not being used.

Right, except that implication is not fully clear, I think.

> I'm ok with this patch, as long as 'full VERW mitigation' is
> considered a clear enough term. I think the updated comment in the
> apply function does explain what that means, so if that's good enough
> I'm ok.

Right.

So, I did beef up the comments some and renamed the key. Diff on top of
yours below. How does that look?
---
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 51a677fe9a8d..8bb5740eba7a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -561,7 +561,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
-DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
+DECLARE_STATIC_KEY_FALSE(clear_cpu_buf_vm);
extern u16 mds_verw_sel;
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c97ded4d55e5..75eddf4f77d8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -154,11 +154,11 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
/*
* Controls CPU Fill buffer clear before VMenter. This is a subset of
- * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
+ * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when VM-only
* mitigation is required.
*/
-DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
-EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
+DEFINE_STATIC_KEY_FALSE(clear_cpu_buf_vm);
+EXPORT_SYMBOL_GPL(clear_cpu_buf_vm);
void __init cpu_select_mitigations(void)
{
@@ -529,8 +529,11 @@ static void __init mmio_select_mitigation(void)
return;
/*
- * Enable CPU buffer clear mitigation for host and VMM, if also affected
- * by MDS or TAA.
+ * Enable full VERW mitigation if also affected by MDS or TAA.
+ * Full VERW mitigation in the context of the MMIO vuln means
+ * that the X86_FEATURE_CLEAR_CPU_BUF flag enables the VERW
+ * clearing in CLEAR_CPU_BUFFERS both on kernel and also on
+ * guest entry.
*/
if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
verw_mitigation_selected = true;
@@ -568,14 +571,15 @@ static void __init mmio_apply_mitigation(void)
return;
/*
- * Only enable the VMM mitigation if the CPU buffer clear mitigation is
- * not being used.
+ * Full VERW mitigation selection enables host and guest entry
+ * buffer clearing, otherwise buffer clearing only on guest
+ * entry is needed.
*/
if (verw_mitigation_selected) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
- static_branch_disable(&cpu_buf_vm_clear);
+ static_branch_disable(&clear_cpu_buf_vm);
} else {
- static_branch_enable(&cpu_buf_vm_clear);
+ static_branch_enable(&clear_cpu_buf_vm);
}
/*
@@ -681,7 +685,7 @@ static void __init md_clear_update_mitigation(void)
taa_select_mitigation();
}
/*
- * MMIO_MITIGATION_OFF is not checked here so that cpu_buf_vm_clear
+ * MMIO_MITIGATION_OFF is not checked here so that clear_cpu_buf_vm
* gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
*/
if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1547bfacd40f..16bb5ed1e6cf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7359,13 +7359,13 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
* executed in spite of L1D Flush. This is because an extra VERW
* should not matter much after the big hammer L1D Flush.
*
- * cpu_buf_vm_clear is used when system is not vulnerable to MDS/TAA,
- * and is affected by MMIO Stale Data. In such cases mitigation in only
+ * clear_cpu_buf_vm is used when system is not vulnerable to MDS/TAA,
+ * and is affected by MMIO Stale Data. In such cases mitigation is only
* needed against an MMIO capable guest.
*/
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
- else if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+ else if (static_branch_unlikely(&clear_cpu_buf_vm) &&
kvm_arch_has_assigned_device(vcpu->kvm))
mds_clear_cpu_buffers();
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette