Message-ID: <175758654222.709179.12409305584307949414.tip-bot2@tip-bot2>
Date: Thu, 11 Sep 2025 10:29:02 -0000
From: "tip-bot2 for Yazen Ghannam" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Yazen Ghannam <yazen.ghannam@....com>,
	"Borislav Petkov (AMD)" <bp@...en8.de>,
	Nikolay Borisov <nik.borisov@...e.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org
Subject: [tip: ras/core] x86/mce: Define BSP-only SMCA init

The following commit has been merged into the ras/core branch of tip:

Commit-ID:     c6e465b8d45a1bc717d196ee769ee5a9060de8e2
Gitweb:        https://git.kernel.org/tip/c6e465b8d45a1bc717d196ee769ee5a9060de8e2
Author:        Yazen Ghannam <yazen.ghannam@....com>
AuthorDate:    Mon, 08 Sep 2025 15:40:32
Committer:     Borislav Petkov (AMD) <bp@...en8.de>
CommitterDate: Thu, 11 Sep 2025 12:22:50 +02:00
x86/mce: Define BSP-only SMCA init

Currently, on AMD systems, MCA interrupt handler functions are set during CPU
init. However, the functions only need to be set once for the whole system.

Assign the handlers only during BSP init. Do so only for SMCA systems to
maintain the old behavior for legacy systems.

Signed-off-by: Yazen Ghannam <yazen.ghannam@....com>
Signed-off-by: Borislav Petkov (AMD) <bp@...en8.de>
Reviewed-by: Nikolay Borisov <nik.borisov@...e.com>
Link: https://lore.kernel.org/20250908-wip-mca-updates-v6-0-eef5d6c74b9c@amd.com
---
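
Not part of the patch: below is a minimal, self-contained C sketch of the
intent, for readers skimming the change. The handler pointers
mce_threshold_vector and deferred_error_int_vector are system-wide globals,
so one assignment from the BSP path has the same effect as repeating the
assignment on every CPU. The scaffolding here (smca_enabled, the per-CPU
loop, the userspace stub handlers) is made up for illustration and does not
mirror the kernel APIs.

/* bsp_init_sketch.c - illustrative only, not kernel code */
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real interrupt handlers. */
static void amd_threshold_interrupt(void)      { puts("threshold interrupt"); }
static void amd_deferred_error_interrupt(void) { puts("deferred error interrupt"); }

/* System-wide handler pointers: one copy shared by all CPUs. */
static void (*mce_threshold_vector)(void);
static void (*deferred_error_int_vector)(void);

static bool smca_enabled = true;	/* stand-in for mce_flags.smca */

/* Runs once, on the boot CPU only. */
static void smca_bsp_init(void)
{
	mce_threshold_vector = amd_threshold_interrupt;
	deferred_error_int_vector = amd_deferred_error_interrupt;
}

static void mca_bsp_init(void)
{
	if (smca_enabled)
		smca_bsp_init();
}

/* Per-CPU init no longer needs to touch the shared vectors. */
static void per_cpu_feature_init(int cpu)
{
	(void)cpu;
}

int main(void)
{
	mca_bsp_init();				/* boot CPU */
	for (int cpu = 1; cpu < 4; cpu++)	/* secondary CPUs */
		per_cpu_feature_init(cpu);

	/* Every CPU dereferences the same globals set up above. */
	mce_threshold_vector();
	deferred_error_int_vector();
	return 0;
}

Built with any C99 compiler, the sketch prints each handler message once,
which illustrates why a single BSP-time assignment is sufficient and the
per-CPU writes were redundant.
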
 arch/x86/kernel/cpu/mce/amd.c      | 6 ++++++
 arch/x86/kernel/cpu/mce/core.c     | 3 +++
 arch/x86/kernel/cpu/mce/internal.h | 2 ++
 3 files changed, 11 insertions(+)

diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 3c6c19e..7345e24 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -684,6 +684,12 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
 		deferred_error_interrupt_enable(c);
 }
 
+void smca_bsp_init(void)
+{
+	mce_threshold_vector = amd_threshold_interrupt;
+	deferred_error_int_vector = amd_deferred_error_interrupt;
+}
+
 /*
  * DRAM ECC errors are reported in the Northbridge (bank 4) with
  * Extended Error Code 8.
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 79f3dd7..a8cb7ff 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -2244,6 +2244,9 @@ void mca_bsp_init(struct cpuinfo_x86 *c)
 	mce_flags.succor = cpu_feature_enabled(X86_FEATURE_SUCCOR);
 	mce_flags.smca = cpu_feature_enabled(X86_FEATURE_SMCA);
 
+	if (mce_flags.smca)
+		smca_bsp_init();
+
 	rdmsrq(MSR_IA32_MCG_CAP, cap);
 
 	/* Use accurate RIP reporting if available. */
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 64ac25b..6cb2995 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -294,12 +294,14 @@ static __always_inline void smca_extract_err_addr(struct mce *m)
 	m->addr &= GENMASK_ULL(55, lsb);
 }
 
+void smca_bsp_init(void);
 #else
 static inline void mce_threshold_create_device(unsigned int cpu) { }
 static inline void mce_threshold_remove_device(unsigned int cpu) { }
 static inline bool amd_filter_mce(struct mce *m) { return false; }
 static inline bool amd_mce_usable_address(struct mce *m) { return false; }
 static inline void smca_extract_err_addr(struct mce *m) { }
+static inline void smca_bsp_init(void) { }
 #endif
 
 #ifdef CONFIG_X86_ANCIENT_MCE