Message-Id: <20221014135402.2109942-7-sashal@kernel.org>
Date: Fri, 14 Oct 2022 09:54:00 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Nicholas Piggin <npiggin@...il.com>,
Sachin Sant <sachinp@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Sasha Levin <sashal@...nel.org>, christophe.leroy@...roup.eu,
atrajeev@...ux.vnet.ibm.com, ebiederm@...ssion.com,
keescook@...omium.org, naveen.n.rao@...ux.vnet.ibm.com,
linuxppc-dev@...ts.ozlabs.org
Subject: [PATCH AUTOSEL 5.4 7/7] powerpc/64: Fix msr_check_and_set/clear MSR[EE] race
From: Nicholas Piggin <npiggin@...il.com>
[ Upstream commit 0fa6831811f62cfc10415d731bcf9fde2647ad81 ]
irq soft-masking means that when Linux irqs are disabled, the MSR[EE]
value can change from 1 to 0 asynchronously: if a masked interrupt of
the PACA_IRQ_MUST_HARD_MASK variety fires while irqs are disabled,
the masked handler will return with MSR[EE]=0.
This means a sequence like mtmsr(mfmsr() | MSR_FP) is racy if it can
be called with local irqs disabled, unless a hard_irq_disable has been
done.
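
To make the race easier to see, here is an illustrative, compilable sketch
of the racy pattern and of the approach this patch takes (mirroring
mtmsr_isync_irqsafe below). It is not part of the patch: the powerpc
internals (mfmsr/mtmsr_isync, arch_irqs_disabled, irq_soft_mask_set,
local_paca) are stubbed out, and the MSR bit values are only for
demonstration.

	/* Illustrative stubs, not the real ppc definitions. */
	#define MSR_EE 0x8000UL
	#define MSR_FP 0x2000UL

	static unsigned long stub_msr;		/* stands in for the MSR        */
	static int stub_irqs_disabled;		/* stands in for soft-mask state */

	static unsigned long mfmsr(void)		{ return stub_msr; }
	static void mtmsr_isync(unsigned long msr)	{ stub_msr = msr; }
	static int arch_irqs_disabled(void)		{ return stub_irqs_disabled; }

	/*
	 * Racy: between mfmsr() and mtmsr_isync() a masked interrupt can
	 * clear MSR[EE]; writing the stale value back would set EE=1 again
	 * while local irqs are still soft-disabled.
	 */
	static void racy_enable_fp(void)
	{
		mtmsr_isync(mfmsr() | MSR_FP);
	}

	/*
	 * Race-free variant: if irqs are soft-disabled, never write EE=1,
	 * so a concurrent clearing of EE cannot be undone. The real code
	 * additionally records IRQS_ALL_DISABLED / PACA_IRQ_HARD_DIS.
	 */
	static unsigned long safe_set_bits(unsigned long msr)
	{
		if (arch_irqs_disabled()) {
			msr &= ~MSR_EE;		/* guarantee EE stays 0 */
			mtmsr_isync(msr);
		} else {
			mtmsr_isync(msr);
		}
		return msr;
	}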
Reported-by: Sachin Sant <sachinp@...ux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@...il.com>
Signed-off-by: Michael Ellerman <mpe@...erman.id.au>
Link: https://lore.kernel.org/r/20221004051157.308999-2-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
arch/powerpc/include/asm/hw_irq.h | 24 ++++++++++++++++++++++++
arch/powerpc/kernel/process.c | 4 ++--
2 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 32a18f2f49bc..3ef454f99d24 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -353,6 +353,30 @@ static inline void may_hard_irq_enable(void) { }
#endif /* CONFIG_PPC64 */
+static inline unsigned long mtmsr_isync_irqsafe(unsigned long msr)
+{
+#ifdef CONFIG_PPC64
+ if (arch_irqs_disabled()) {
+ /*
+ * With soft-masking, MSR[EE] can change from 1 to 0
+ * asynchronously when irqs are disabled, and we don't want to
+ * set MSR[EE] back to 1 here if that has happened. A race-free
+ * way to do this is ensure EE is already 0. Another way it
+ * could be done is with a RESTART_TABLE handler, but that's
+ * probably overkill here.
+ */
+ msr &= ~MSR_EE;
+ mtmsr_isync(msr);
+ irq_soft_mask_set(IRQS_ALL_DISABLED);
+ local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+ } else
+#endif
+ mtmsr_isync(msr);
+
+ return msr;
+}
+
+
#define ARCH_IRQ_INIT_FLAGS IRQ_NOREQUEST
/*
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index cf87573e6e78..e6516c6d62bb 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -131,7 +131,7 @@ unsigned long notrace msr_check_and_set(unsigned long bits)
#endif
if (oldmsr != newmsr)
- mtmsr_isync(newmsr);
+ newmsr = mtmsr_isync_irqsafe(newmsr);
return newmsr;
}
@@ -151,7 +151,7 @@ void notrace __msr_check_and_clear(unsigned long bits)
#endif
if (oldmsr != newmsr)
- mtmsr_isync(newmsr);
+ mtmsr_isync_irqsafe(newmsr);
}
EXPORT_SYMBOL(__msr_check_and_clear);
--
2.35.1