Message-ID: <20241115054836.oubgh4jbyvjum4tk@jpoimboe>
Date: Thu, 14 Nov 2024 21:48:36 -0800
From: Josh Poimboeuf <jpoimboe@...nel.org>
To: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
Cc: Andrew Cooper <andrew.cooper3@...rix.com>, Amit Shah <amit@...nel.org>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org, x86@...nel.org,
linux-doc@...r.kernel.org, amit.shah@....com,
thomas.lendacky@....com, bp@...en8.de, tglx@...utronix.de,
peterz@...radead.org, corbet@....net, mingo@...hat.com,
dave.hansen@...ux.intel.com, hpa@...or.com, seanjc@...gle.com,
pbonzini@...hat.com, daniel.sneddon@...ux.intel.com,
kai.huang@...el.com, sandipan.das@....com,
boris.ostrovsky@...cle.com, Babu.Moger@....com,
david.kaplan@....com, dwmw@...zon.co.uk
Subject: Re: [RFC PATCH v2 1/3] x86: cpu/bugs: update SpectreRSB comments for
AMD
On Thu, Nov 14, 2024 at 12:01:16AM -0800, Pawan Gupta wrote:
> > For PBRSB, I guess we don't need to worry about that since there would
> > be at least one kernel CALL before context switch.
>
> Right. So the case where we need RSB filling at context switch is
> retpoline+CDT mitigation.
According to the docs, classic IBRS also needs RSB filling at context
switch to protect against corrupt RSB entries (as opposed to RSB
underflow).
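
(For context, "RSB filling" here means a FILL_RETURN_BUFFER-style sequence: a
run of CALLs whose return sites are speculation traps, which overwrites
whatever the previous context left in the ~32-entry RSB.  Below is a rough
standalone sketch of the idea, not the kernel's actual macro; the function
name, loop count and structure are illustrative only, and it assumes
-mno-red-zone as in kernel builds, since the CALLs scribble below the
caller's RSP.)

/*
 * Rough sketch of RSB stuffing.  Each loop iteration retires two CALLs
 * whose return addresses point at benign speculation traps, so 16
 * iterations overwrite all 32 RSB entries.  None of the CALLs ever
 * return architecturally, so the stack pointer is rewound at the end.
 * x86-64, GCC inline asm.
 */
static void rsb_stuff_sketch(void)
{
	unsigned long loops = 16;

	asm volatile(
		"1:	call 2f\n\t"			/* benign RSB entry #1 */
		"3:	pause; lfence\n\t"		/* speculation trap */
		"	jmp 3b\n\t"
		"2:	call 4f\n\t"			/* benign RSB entry #2 */
		"5:	pause; lfence\n\t"		/* speculation trap */
		"	jmp 5b\n\t"
		"4:	dec %[loops]\n\t"
		"	jnz 1b\n\t"
		"	add %[rewind], %%rsp\n\t"	/* drop the 32 return addresses */
		: [loops] "+r" (loops)
		: [rewind] "i" (32 * 8)
		: "memory");
}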
Something like so...
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 47a01d4028f6..7b9c0a21e478 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1579,27 +1579,44 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
rrsba_disabled = true;
}
-static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
+static void __init spectre_v2_mitigate_rsb(enum spectre_v2_mitigation mode)
{
/*
- * Similar to context switches, there are two types of RSB attacks
- * after VM exit:
+ * In general there are two types of RSB attacks:
*
- * 1) RSB underflow
+ * 1) RSB underflow ("Intel Retbleed")
+ *
+ * Some Intel parts have "bottomless RSB". When the RSB is empty,
+ * speculated return targets may come from the branch predictor,
+ * which could have a user-poisoned BTB or BHB entry.
+ *
+ * user->user attacks are mitigated by IBPB on context switch.
+ *
+ * user->kernel attacks via context switch are mitigated by IBRS,
+ * eIBRS, or RSB filling.
+ *
+ * user->kernel attacks via kernel entry are mitigated by IBRS,
+ * eIBRS, or call depth tracking.
+ *
+ * On VMEXIT, guest->host attacks are mitigated by IBRS, eIBRS, or
+ * RSB filling.
*
* 2) Poisoned RSB entry
*
- * When retpoline is enabled, both are mitigated by filling/clearing
- * the RSB.
+ * On a context switch, the previous task can poison RSB entries
+ * used by the next task, controlling its speculative return
+ * targets. Poisoned RSB entries can also be created by "AMD
+ * Retbleed" or SRSO.
*
- * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
- * prediction isolation protections, RSB still needs to be cleared
- * because of #2. Note that SMEP provides no protection here, unlike
- * user-space-poisoned RSB entries.
+ * user->user attacks are mitigated by IBPB on context switch.
*
- * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
- * bug is present then a LITE version of RSB protection is required,
- * just a single call needs to retire before a RET is executed.
+ * user->kernel attacks via context switch are prevented by
+ * SMEP+eIBRS+SRSO mitigations, or RSB clearing.
+ *
+ * guest->host attacks are mitigated by eIBRS or RSB clearing on
+ * VMEXIT. eIBRS implementations with X86_BUG_EIBRS_PBRSB still
+ * need "lite" RSB filling which retires a CALL before the first
+ * RET.
*/
switch (mode) {
case SPECTRE_V2_NONE:
@@ -1608,8 +1625,8 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_
case SPECTRE_V2_EIBRS_LFENCE:
case SPECTRE_V2_EIBRS:
if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
- setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
+ setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
}
return;
@@ -1617,12 +1634,13 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_
case SPECTRE_V2_RETPOLINE:
case SPECTRE_V2_LFENCE:
case SPECTRE_V2_IBRS:
+ pr_info("Spectre v2 / SpectreRSB : Filling RSB on context switch and VMEXIT\n");
+ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
- pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
return;
}
- pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
+ pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation\n");
dump_stack();
}
@@ -1817,48 +1835,7 @@ static void __init spectre_v2_select_mitigation(void)
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
- /*
- * If Spectre v2 protection has been enabled, fill the RSB during a
- * context switch. In general there are two types of RSB attacks
- * across context switches, for which the CALLs/RETs may be unbalanced.
- *
- * 1) RSB underflow
- *
- * Some Intel parts have "bottomless RSB". When the RSB is empty,
- * speculated return targets may come from the branch predictor,
- * which could have a user-poisoned BTB or BHB entry.
- *
- * AMD has it even worse: *all* returns are speculated from the BTB,
- * regardless of the state of the RSB.
- *
- * When IBRS or eIBRS is enabled, the "user -> kernel" attack
- * scenario is mitigated by the IBRS branch prediction isolation
- * properties, so the RSB buffer filling wouldn't be necessary to
- * protect against this type of attack.
- *
- * The "user -> user" attack scenario is mitigated by RSB filling.
- *
- * 2) Poisoned RSB entry
- *
- * If the 'next' in-kernel return stack is shorter than 'prev',
- * 'next' could be tricked into speculating with a user-poisoned RSB
- * entry.
- *
- * The "user -> kernel" attack scenario is mitigated by SMEP and
- * eIBRS.
- *
- * The "user -> user" scenario, also known as SpectreBHB, requires
- * RSB clearing.
- *
- * So to mitigate all cases, unconditionally fill RSB on context
- * switches.
- *
- * FIXME: Is this pointless for retbleed-affected AMD?
- */
- setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
- pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
-
- spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+ spectre_v2_mitigate_rsb(mode);
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
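
(Aside on the "lite" variant referenced above: the point is just to get one
benign CALL retired before the first RET after VMEXIT.  A hand-written sketch
of that idea follows; it is not the kernel's actual X86_FEATURE_RSB_VMEXIT_LITE
sequence, the function name is made up, and it likewise assumes -mno-red-zone.)

/*
 * Sketch of the single-CALL ("lite") sequence: push one benign RSB
 * entry and make sure the CALL retires (LFENCE) before any RET can
 * execute.  Illustrative only.
 */
static void pbrsb_lite_sketch(void)
{
	asm volatile(
		"	call 1f\n\t"		/* one benign RSB entry */
		"2:	pause; lfence\n\t"	/* speculation trap */
		"	jmp 2b\n\t"
		"1:	add $8, %%rsp\n\t"	/* drop the return address */
		"	lfence\n\t"		/* ensure the CALL has retired */
		::: "memory");
}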