Message-Id: <1453750913-4781-9-git-send-email-bp@alien8.de>
Date: Mon, 25 Jan 2016 20:41:53 +0100
From: Borislav Petkov <bp@...en8.de>
To: Ingo Molnar <mingo@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: [PATCH 8/8] x86/mce/AMD: Set MCAX Enable bit
From: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@....com>
The OS is required to acknowledge that it is using the MCAX register
set and its associated fields by setting the 'McaXEnable' bit in each
bank's MCi_CONFIG register. If the bit is not set, all uncorrectable
(UC) errors will cause a system panic.
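
For illustration only (not part of this patch): once the kernel has set
the bit, it can be inspected from userspace via the msr driver. A
minimal sketch, assuming CONFIG_X86_MSR (the msr module) is available,
root privileges, and an arbitrarily chosen CPU 0 and bank 4:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int bank = 4;				/* example bank only */
	off_t msr = 0xc0002004 + 0x10 * bank;	/* MCx_CONFIG(bank) */
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	/* The msr driver takes the MSR number as the file offset. */
	if (fd < 0 || pread(fd, &val, sizeof(val), msr) != sizeof(val)) {
		perror("msr");
		return 1;
	}

	/* McaXEnable is bit 32 of MCi_CONFIG. */
	printf("MC%d_CONFIG=%#018llx McaXEnable=%llu\n", bank,
	       (unsigned long long)val,
	       (unsigned long long)((val >> 32) & 1));
	return 0;
}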
Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@....com>
Cc: linux-edac <linux-edac@...r.kernel.org>
Cc: Tony Luck <tony.luck@...el.com>
Cc: x86-ml <x86@...nel.org>
Link: http://lkml.kernel.org/r/1452901836-27632-6-git-send-email-Aravind.Gopalakrishnan@amd.com
Signed-off-by: Borislav Petkov <bp@...e.de>
---
arch/x86/include/asm/msr-index.h | 4 ++++
arch/x86/kernel/cpu/mcheck/mce_amd.c | 14 ++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index b05402ef3b84..5b1aa4c05c4f 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -264,6 +264,10 @@
 #define MSR_IA32_MC0_CTL2		0x00000280
 #define MSR_IA32_MCx_CTL2(x)		(MSR_IA32_MC0_CTL2 + (x))
 
+/* AMD64 Scalable MCA */
+#define MSR_AMD64_SMCA_MC0_CONFIG	0xc0002004
+#define MSR_AMD64_SMCA_MCx_CONFIG(x)	(MSR_AMD64_SMCA_MC0_CONFIG + 0x10*(x))
+
 #define MSR_P6_PERFCTR0			0x000000c1
 #define MSR_P6_PERFCTR1			0x000000c2
 #define MSR_P6_EVNTSEL0			0x00000186
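
A quick worked example of the addressing above (illustrative, not part
of the patch): the 0x10 stride gives every SMCA bank a block of 16
MSRs, so consecutive banks' CONFIG registers are 16 MSR numbers apart.
Duplicating the two #defines from the hunk:

#define MSR_AMD64_SMCA_MC0_CONFIG	0xc0002004
#define MSR_AMD64_SMCA_MCx_CONFIG(x)	(MSR_AMD64_SMCA_MC0_CONFIG + 0x10*(x))

_Static_assert(MSR_AMD64_SMCA_MCx_CONFIG(1) == 0xc0002014, "bank 1");
_Static_assert(MSR_AMD64_SMCA_MCx_CONFIG(4) == 0xc0002044, "bank 4");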
diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
index e5ac583ae915..e1f05e8854d1 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
@@ -54,6 +54,14 @@
 /* Threshold LVT offset is at MSR0xC0000410[15:12] */
 #define SMCA_THR_LVT_OFF	0xF000
 
+/*
+ * The OS is required to set the MCAX enable bit to acknowledge that it is now
+ * using the new MSR ranges and registers under each bank. It also means that
+ * the OS will configure deferred errors in the new MCx_CONFIG register. If
+ * the bit is not set, uncorrectable errors will cause a system panic.
+ */
+#define SMCA_MCAX_EN_OFF	0x1
+
 static const char * const th_names[] = {
 	"load_store",
 	"insn_fetch",
@@ -292,6 +300,12 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
 
 	if (mce_flags.smca) {
 		u32 smca_low, smca_high;
+		u32 smca_addr = MSR_AMD64_SMCA_MCx_CONFIG(bank);
+
+		if (!rdmsr_safe(smca_addr, &smca_low, &smca_high)) {
+			smca_high |= SMCA_MCAX_EN_OFF;
+			wrmsr(smca_addr, smca_low, smca_high);
+		}
 
 		/* Gather LVT offset for thresholding */
 		if (rdmsr_safe(MSR_CU_DEF_ERR, &smca_low, &smca_high))
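
Why ORing 0x1 into smca_high lands on the right bit (not part of the
patch): rdmsr_safe()/wrmsr() operate on the two 32-bit halves of the
MSR, so bit 0 of the high half is bit 32 of the 64-bit register, i.e.
McaXEnable. A sketch of the equivalent 64-bit form, assuming the same
bank variable and the kernel's rdmsrl_safe()/wrmsrl() helpers:

	u64 val;

	if (!rdmsrl_safe(MSR_AMD64_SMCA_MCx_CONFIG(bank), &val)) {
		val |= BIT_ULL(32);	/* McaXEnable */
		wrmsrl(MSR_AMD64_SMCA_MCx_CONFIG(bank), val);
	}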
--
2.3.5