Message-ID: <20250123170149.GCZ5J1_WovzHQzo0cW@fat_crate.local>
Date: Thu, 23 Jan 2025 18:01:49 +0100
From: Borislav Petkov <bp@...en8.de>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Borislav Petkov <bp@...nel.org>, X86 ML <x86@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
KVM <kvm@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86/bugs: KVM: Add support for SRSO_MSR_FIX
On Thu, Jan 23, 2025 at 08:25:17AM -0800, Sean Christopherson wrote:
> But if we wanted to catch all paths, wrap the guts and clear the feature in the
> outer layer?
Yap, all valid points, thanks for catching those.
> +static void __init srso_select_mitigation(void)
> +{
> + __srso_select_mitigation();
>
> if (srso_mitigation != SRSO_MITIGATION_BP_SPEC_REDUCE)
> setup_clear_cpu_cap(X86_FEATURE_SRSO_BP_SPEC_REDUCE);
> -
> - pr_info("%s\n", srso_strings[srso_mitigation]);
> }
What I'd like here, though, is to not dance around this srso_mitigation
variable: setting it inside __srso_select_mitigation() and then relying
on the knowledge that the __ function modified it before we evaluate it.
I'd like for the __ function to return it like __ssb_select_mitigation() does.
But if we do that, we'll have to make the same changes anyway and turn
the returns into "goto out" so that all the paths converge. And I'd
prefer those paths converged regardless, rather than having "early
escapes" like those returns which I completely overlooked. :-\
And that code is going to change soon anyway after David's attack vectors
series.
So, long story short, I guess the simplest thing would be to do the
below.
I *think*.
I'll stare at it again later, with a clear head, and test all cases to
make sure nothing's escaping anymore.
Thx!
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9e3ea7f1b358..11cafe293c29 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2581,7 +2581,7 @@ static void __init srso_select_mitigation(void)
srso_cmd == SRSO_CMD_OFF) {
if (boot_cpu_has(X86_FEATURE_SBPB))
x86_pred_cmd = PRED_CMD_SBPB;
- return;
+ goto out;
}
if (has_microcode) {
@@ -2593,7 +2593,7 @@ static void __init srso_select_mitigation(void)
*/
if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
- return;
+ goto out;
}
if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
@@ -2692,11 +2692,11 @@ static void __init srso_select_mitigation(void)
}
out:
-
if (srso_mitigation != SRSO_MITIGATION_BP_SPEC_REDUCE)
setup_clear_cpu_cap(X86_FEATURE_SRSO_BP_SPEC_REDUCE);
- pr_info("%s\n", srso_strings[srso_mitigation]);
+ if (srso_mitigation != SRSO_MITIGATION_NONE)
+ pr_info("%s\n", srso_strings[srso_mitigation]);
}
#undef pr_fmt
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette