Date:   Mon,  7 Jun 2021 22:02:32 +0200
From:   John Ogness <john.ogness@...utronix.de>
To:     Petr Mladek <pmladek@...e.com>
Cc:     Sergey Senozhatsky <senozhatsky@...omium.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        "Paul E. McKenney" <paulmck@...nel.org>
Subject: [PATCH next v2 2/2] printk: fix cpu lock ordering

The cpu lock implementation uses a full memory barrier to take
the lock, but no memory barrier when releasing it. This means
that changes performed by one lock owner may not be visible to
the next lock owner. That may have been "good enough" for
dump_stack()'s use of the lock as a serialization mechanism, but
it is not enough to properly protect a critical section.

Correct this problem by using acquire/release memory barriers
for lock/unlock, respectively.
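
(Not part of the patch.) For readers less familiar with
acquire/release pairing, a minimal userspace C11 sketch of the
same idea is shown below; all names (toy_cpu_lock,
toy_cpu_unlock, owner) are made up for illustration and do not
exist in the kernel:

  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_int owner = ATOMIC_VAR_INIT(-1);  /* -1 == unowned */

  static bool toy_cpu_lock(int cpu)
  {
          int expected = -1;

          /*
           * ACQUIRE on success: everything the previous owner
           * stored before its RELEASE in toy_cpu_unlock() is
           * visible once this CAS observes owner == -1.
           */
          return atomic_compare_exchange_strong_explicit(&owner,
                          &expected, cpu,
                          memory_order_acquire, memory_order_relaxed);
  }

  static void toy_cpu_unlock(void)
  {
          /*
           * RELEASE: stores made to the protected data while
           * holding the lock are ordered before the store that
           * hands the lock back.
           */
          atomic_store_explicit(&owner, -1, memory_order_release);
  }

The patch below applies the same pattern with
atomic_cmpxchg_acquire() and atomic_set_release().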

Note that it is not necessary for a cpu lock to disable
interrupts. However, in upcoming work this cpu lock will be used
for emergency tasks (for example, atomic consoles during kernel
crashes) and any interruptions should be avoided if possible.
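
(Also illustrative only.) A caller of the interface changed
below is expected to look roughly like this, based on the
signatures visible in the hunks; the critical-section comment is
a placeholder, not code from this series:

  bool lock_flag;
  unsigned long irq_flags;

  printk_cpu_lock_irqsave(&lock_flag, &irq_flags);

  /* CPU-serialized critical section, e.g. emitting a backtrace */

  printk_cpu_unlock_irqrestore(lock_flag, irq_flags);

As the hunks suggest, lock_flag reports whether this call
actually took ownership (nested calls on the owning CPU leave it
false), so it must be passed back unchanged to the unlock side.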

Signed-off-by: John Ogness <john.ogness@...utronix.de>
---
 kernel/printk/printk.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index f94babb38493..8c870581cfb4 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -3560,10 +3560,29 @@ void printk_cpu_lock_irqsave(bool *lock_flag, unsigned long *irq_flags)
 
 	cpu = smp_processor_id();
 
-	old = atomic_cmpxchg(&printk_cpulock_owner, -1, cpu);
+	/*
+	 * Guarantee loads and stores from the previous lock owner are
+	 * visible to this CPU once it is the lock owner. This pairs
+	 * with cpu_unlock:B.
+	 *
+	 * Memory barrier involvement:
+	 *
+	 * If cpu_lock:A reads from cpu_unlock:B, then cpu_lock:B
+	 * reads from cpu_unlock:A.
+	 *
+	 * Relies on:
+	 *
+	 * RELEASE from cpu_unlock:A to cpu_unlock:B
+	 *    matching
+	 * ACQUIRE from cpu_lock:A to cpu_lock:B
+	 */
+	old = atomic_cmpxchg_acquire(&printk_cpulock_owner,
+				     -1, cpu); /* LMM(cpu_lock:A) */
 	if (old == -1) {
 		/* This CPU is now the owner. */
 
+		/* This CPU begins loading/storing data: LMM(cpu_lock:B) */
+
 		*lock_flag = true;
 
 	} else if (old == cpu) {
@@ -3600,7 +3619,14 @@ EXPORT_SYMBOL(printk_cpu_lock_irqsave);
 void printk_cpu_unlock_irqrestore(bool lock_flag, unsigned long irq_flags)
 {
 	if (lock_flag) {
-		atomic_set(&printk_cpulock_owner, -1);
+		/* This CPU is finished loading/storing data: LMM(cpu_unlock:A) */
+
+		/*
+		 * Guarantee loads and stores from this CPU when it was the
+		 * lock owner are visible to the next lock owner. This pairs
+		 * with cpu_lock:A.
+		 */
+		atomic_set_release(&printk_cpulock_owner, -1); /* LMM(cpu_unlock:B) */
 
 		local_irq_restore(irq_flags);
 	}
-- 
2.20.1
