Message-Id: <1500040076-27626-1-git-send-email-prsood@codeaurora.org>
Date:   Fri, 14 Jul 2017 19:17:56 +0530
From:   Prateek Sood <prsood@...eaurora.org>
To:     peterz@...radead.org, mingo@...hat.com
Cc:     sramana@...eaurora.org, linux-kernel@...r.kernel.org,
        Prateek Sood <prsood@...eaurora.org>
Subject: [PATCH] osq_lock: fix osq_lock queue corruption

Fix the ordering of the link creation between node->prev and prev->next in
osq_lock(). Consider a case where the optimistic spin queue is CPU6->CPU2
and CPU6 holds the lock. If CPU0 now comes in to acquire the osq_lock, it
updates the tail. If CPU2 then starts to unqueue itself from the optimistic
spin queue, it finds the tail updated to CPU0 and sets its own node->next
to NULL in osq_wait_next(). If the following two stores in CPU0's path are
reordered:
	node->prev = prev;
	WRITE_ONCE(prev->next, node);
then prev->next (prev being CPU2's node) is updated to point to CPU0's node
before CPU0's node->prev is set. If CPU2's
	WRITE_ONCE(next->prev, prev);
is then committed before CPU0's node->prev = prev, CPU0's node->prev
briefly points to CPU6's node, and the subsequently committed
node->prev = prev in CPU0's path resets CPU0's prev back to CPU2's node.
Since CPU2's node->next is NULL at this point, if CPU0 enters the unqueue
path of osq_lock it spins in an infinite loop: the condition
prev->next == node can never become true.

Signed-off-by: Prateek Sood <prsood@...eaurora.org>
---
 kernel/locking/osq_lock.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 05a3785..1d371a0 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -104,6 +104,32 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 
 	prev = decode_cpu(old);
 	node->prev = prev;
+
+	/*
+	 * We need to avoid reordering of the link update sequence of the
+	 * osq.  Consider a case where the optimistic spin queue is
+	 * CPU6->CPU2 and CPU6 holds the lock.  If CPU0 now comes in to
+	 * acquire the osq_lock, it updates the tail.  If CPU2 then starts
+	 * to unqueue itself, it finds the tail updated to CPU0 and sets
+	 * its own node->next to NULL in osq_wait_next().  If the following
+	 * two stores in CPU0's path are reordered:
+	 *
+	 *      node->prev = prev;
+	 *      WRITE_ONCE(prev->next, node);
+	 *
+	 * then prev->next (prev being CPU2's node) is updated to point to
+	 * CPU0's node before CPU0's node->prev is set.  If CPU2's
+	 *      WRITE_ONCE(next->prev, prev);
+	 * is then committed before CPU0's node->prev = prev, CPU0's
+	 * node->prev briefly points to CPU6's node, and the subsequently
+	 * committed node->prev = prev in CPU0's path resets CPU0's prev
+	 * back to CPU2's node.  Since CPU2's node->next is NULL, if CPU0
+	 * enters the unqueue path it spins forever: the condition
+	 * prev->next == node can never become true.
+	 */
+	smp_wmb();
+
 	WRITE_ONCE(prev->next, node);
 
 	/*
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc., 
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
