Message-Id: <20190605231735.26581-1-palmer@sifive.com>
Date: Wed, 5 Jun 2019 16:17:35 -0700
From: Palmer Dabbelt <palmer@...ive.com>
To: linux-riscv@...ts.infradead.org,
Paul Walmsley <paul.walmsley@...ive.com>, marco@...red.org,
me@...losedp.com, joel@...g.id.au
Cc: linux-kernel@...r.kernel.org, Palmer Dabbelt <palmer@...ive.com>
Subject: [PATCH] RISC-V: Break load reservations during switch_to
The comment in the code describes the reasoning in detail. This was found
because QEMU never gives up load reservations; the issue is unlikely to
manifest on real hardware.
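For context (this snippet is not part of the patch, and cas_w is a
hypothetical helper used only for illustration): user code commonly builds
atomics out of LR/SC sequences like the one below. If the reservation taken
by the lr.w survives a preemption, the ISA does not require a store made by
another task on the same hart during that window to invalidate it, so the
resumed sc.w could succeed and silently overwrite that store.

static inline int cas_w(int *addr, int expected, int desired)
{
	int old, fail;

	__asm__ __volatile__ (
		/* Take a reservation on *addr and read the current value. */
		"1:	lr.w	%0, (%2)\n"
		"	bne	%0, %3, 2f\n"
		/* Try the store; %1 is zero on success, non-zero on failure. */
		"	sc.w	%1, %4, (%2)\n"
		"	bnez	%1, 1b\n"
		"2:\n"
		: "=&r" (old), "=&r" (fail)
		: "r" (addr), "r" (expected), "r" (desired)
		: "memory");

	return old == expected;
}

With the dummy store-conditional in __switch_to, the sc.w above is
guaranteed to fail if the task was switched out between the lr.w and the
sc.w, forcing a retry that observes the other task's store.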
Thanks to Carlos Eduardo for finding the bug!
Signed-off-by: Palmer Dabbelt <palmer@...ive.com>
---
arch/riscv/kernel/entry.S | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 1c1ecc238cfa..e9fc3480e6b4 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -330,6 +330,24 @@ ENTRY(__switch_to)
add a3, a0, a4
add a4, a1, a4
REG_S ra, TASK_THREAD_RA_RA(a3)
+ /*
+ * The Linux ABI allows programs to depend on load reservations being
+ * broken on context switches, but the ISA doesn't require that the
+ * hardware ever breaks a load reservation. The only way to break a
+ * load reservation is with a store conditional, so we emit one here.
+ * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
+ * know this will always fail, but just to be on the safe side this
+ * writes the same value that was unconditionally written by the
+ * previous instruction.
+ */
+#if (TASK_THREAD_RA_RA != 0)
+# error "The offset between ra and ra is non-zero"
+#endif
+#if (__riscv_xlen == 64)
+ sc.d x0, ra, 0(a3)
+#else
+ sc.w x0, ra, 0(a3)
+#endif
REG_S sp, TASK_THREAD_SP_RA(a3)
REG_S s0, TASK_THREAD_S0_RA(a3)
REG_S s1, TASK_THREAD_S1_RA(a3)
--
2.21.0