Message-Id: <20210617095051.4808-3-john.ogness@linutronix.de>
Date: Thu, 17 Jun 2021 11:56:51 +0206
From: John Ogness <john.ogness@...utronix.de>
To: Petr Mladek <pmladek@...e.com>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: [PATCH next v4 2/2] printk: fix cpu lock ordering

The cpu lock implementation uses a full memory barrier to take
the lock, but no memory barriers when releasing the lock. This
means that changes performed by a lock owner may not be seen by
the next lock owner. This may have been "good enough" for use
by dump_stack() as a serialization mechanism, but it is not
enough to provide proper protection for a critical section.
Correct this problem by using acquire/release memory barriers
for lock/unlock, respectively.
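
To make the problem concrete, here is a minimal user-space C11 analogue
of the pre-patch scheme. It is only a sketch: the identifiers (owner,
shared_data, trylock, unlock) are invented for illustration and are not
kernel APIs; the C11 atomics merely stand in for atomic_cmpxchg() and
atomic_set().

/*
 * Sketch only: user-space C11 analogue of the pre-patch cpu lock.
 * Identifiers are invented for illustration; this is not kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int owner = -1;	/* -1 means unlocked */
static int shared_data;		/* meant to be protected by the lock */

static bool trylock(int cpu)
{
	int unlocked = -1;

	/* Fully ordered CAS, like atomic_cmpxchg(): taking the lock is fine. */
	return atomic_compare_exchange_strong(&owner, &unlocked, cpu);
}

static void unlock(void)
{
	/*
	 * Relaxed store, like atomic_set(): nothing orders this owner's
	 * critical-section stores (e.g. to shared_data) before the lock
	 * hand-over, so the next owner may observe stale data (in strict
	 * C11 terms this is even a data race).
	 */
	atomic_store_explicit(&owner, -1, memory_order_relaxed);
}

The fix below keeps (at least) acquire ordering on the winning cmpxchg
and adds the missing release ordering on the hand-over store.
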
Signed-off-by: John Ogness <john.ogness@...utronix.de>
---
kernel/printk/printk.c | 53 +++++++++++++++++++++++++++++++++++++++---
1 file changed, 50 insertions(+), 3 deletions(-)
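
As an illustration of the hand-over guarantee described by the
memory-barrier comments in the diff below, here is a self-contained
user-space demo of the fixed acquire/release pattern. Everything in it
is assumed for illustration only (the file name cpulock_demo.c, the gcc
invocation, and all identifiers); the C11 acquire CAS and release store
model atomic_cmpxchg_acquire() and atomic_set_release(), and the demo is
not a reliable reproducer of the original bug.

/*
 * Sketch only: self-contained user-space demo of the fixed pattern.
 * All names are invented for illustration; this is not kernel code.
 *
 * Assumed build: gcc -O2 -pthread cpulock_demo.c -o cpulock_demo
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define ITERS 100000

static atomic_int owner = -1;	/* -1: unlocked, otherwise owning "CPU" id */
static int shared_data;		/* written only while holding the lock */

static bool trylock(int cpu)
{
	int unlocked = -1;

	/* Acquire pairs with the release in unlock(). */
	return atomic_compare_exchange_strong_explicit(&owner, &unlocked, cpu,
						       memory_order_acquire,
						       memory_order_relaxed);
}

static void unlock(void)
{
	/* Release publishes this owner's stores to the next owner. */
	atomic_store_explicit(&owner, -1, memory_order_release);
}

static void *writer(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		while (!trylock(0))
			;
		shared_data++;		/* critical section store */
		unlock();
	}
	return NULL;
}

static void *reader(void *arg)
{
	int last = 0;

	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		while (!trylock(1))
			;
		/* Guaranteed by the acquire/release hand-over. */
		assert(shared_data >= last);
		last = shared_data;
		unlock();
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	printf("reader finished; shared_data ended at %d\n", shared_data);
	return 0;
}

Because every successful trylock() reads the -1 written by some
unlock(), the acquire/release pairing chains the critical sections
together, which is why the reader can never observe shared_data going
backwards.
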
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 08e14a67c44e..5376216e4f3d 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -3569,10 +3569,33 @@ int __printk_cpu_trylock(void)
cpu = smp_processor_id();
- old = atomic_cmpxchg(&printk_cpulock_owner, -1, cpu);
+ /*
+ * Guarantee loads and stores from this CPU when it is the lock owner
+ * are _not_ visible to the previous lock owner. This pairs with
+ * __printk_cpu_unlock:B.
+ *
+ * Memory barrier involvement:
+ *
+ * If __printk_cpu_trylock:A reads from __printk_cpu_unlock:B, then
+ * __printk_cpu_unlock:A can never read from __printk_cpu_trylock:B.
+ *
+ * Relies on:
+ *
+ * RELEASE from __printk_cpu_unlock:A to __printk_cpu_unlock:B
+ * of the previous CPU
+ * matching
+ * ACQUIRE from __printk_cpu_trylock:A to __printk_cpu_trylock:B
+ * of this CPU
+ */
+ old = atomic_cmpxchg_acquire(&printk_cpulock_owner, -1,
+ cpu); /* LMM(__printk_cpu_trylock:A) */
if (old == -1) {
- /* This CPU is now the owner. */
+ /*
+ * This CPU is now the owner and begins loading/storing
+ * data: LMM(__printk_cpu_trylock:B)
+ */
return 1;
+
} else if (old == cpu) {
/* This CPU is already the owner. */
atomic_inc(&printk_cpulock_nested);
@@ -3597,7 +3620,31 @@ void __printk_cpu_unlock(void)
return;
}
- atomic_set(&printk_cpulock_owner, -1);
+ /*
+ * This CPU is finished loading/storing data:
+ * LMM(__printk_cpu_unlock:A)
+ */
+
+ /*
+ * Guarantee loads and stores from this CPU when it was the
+ * lock owner are visible to the next lock owner. This pairs
+ * with __printk_cpu_trylock:A.
+ *
+ * Memory barrier involvement:
+ *
+ * If __printk_cpu_trylock:A reads from __printk_cpu_unlock:B,
+ * then __printk_cpu_trylock:B reads from __printk_cpu_unlock:A.
+ *
+ * Relies on:
+ *
+ * RELEASE from __printk_cpu_unlock:A to __printk_cpu_unlock:B
+ * of this CPU
+ * matching
+ * ACQUIRE from __printk_cpu_trylock:A to __printk_cpu_trylock:B
+ * of the next CPU
+ */
+ atomic_set_release(&printk_cpulock_owner,
+ -1); /* LMM(__printk_cpu_unlock:B) */
}
EXPORT_SYMBOL(__printk_cpu_unlock);
#endif /* CONFIG_SMP */
--
2.20.1