Message-Id: <1439142939-2605-1-git-send-email-manfred@colorfullife.com>
Date:	Sun,  9 Aug 2015 19:55:39 +0200
From:	Manfred Spraul <manfred@...orfullife.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	LKML <linux-kernel@...r.kernel.org>, 1vier1@....de,
	Oleg Nesterov <oleg@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Kirill Tkhai <ktkhai@...allels.com>,
	Ingo Molnar <mingo@...hat.com>,
	Josh Poimboeuf <jpoimboe@...hat.com>,
	Manfred Spraul <manfred@...orfullife.com>,
	<stable@...r.kernel.org>
Subject: [PATCH] ipc/sem.c: Update/correct memory barriers

sem_lock() did not properly pair memory barriers:

!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier, otherwise the CPU may perform the
subsequent read operations before the lock test.
As no such primitive exists in <linux/spinlock.h> and since it seems
no one wants another primitive there, the code creates a local one
within ipc/sem.c.
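
Purely for illustration (not part of the patch), here is a condensed
sketch of the interleaving the acquire barrier closes; the field and
lock names follow ipc/sem.c, but the code paths are simplified:

	/* Task A: switches the array into complex mode */
	spin_lock(&sma->sem_perm.lock);
	sma->complex_count++;
	spin_unlock(&sma->sem_perm.lock);

	/* Task B: sem_lock() fast path (condensed) */
	spin_lock(&sem->lock);
	if (!spin_is_locked(&sma->sem_perm.lock)) {
		/*
		 * Only a control dependency so far: the CPU may have read
		 * complex_count before the lock test and thus still see the
		 * pre-increment value 0, even though it already observed
		 * sem_perm.lock as unlocked.
		 */
		smp_rmb();	/* ipc_smp_acquire__after_spin_is_unlocked() */
		if (sma->complex_count == 0)
			return;	/* fast path: keep only sem->lock */
	}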

With regard to -stable:
The change to sem_wait_array() is a bugfix; the change to sem_lock()
is a no-op (just a preprocessor rename to improve readability).
The bugfix is necessary for all kernels that use sem_wait_array()
(i.e. all kernels starting with 3.10).

Andrew: Could you include it in your tree and forward it?

Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
Reported-by: Oleg Nesterov <oleg@...hat.com>
Cc: <stable@...r.kernel.org>
---
 ipc/sem.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index bc3d530..e581b08 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -253,6 +253,16 @@ static void sem_rcu_free(struct rcu_head *head)
 }
 
 /*
+ * spin_unlock_wait() and !spin_is_locked() are not memory barriers, they
+ * are only control barriers.
+ * The code must pair with spin_unlock(&sem->lock) or
+ * spin_unlock(&sem_perm.lock), thus just the control barrier is insufficient.
+ *
+ * smp_rmb() is sufficient, as writes cannot pass the control barrier.
+ */
+#define ipc_smp_acquire__after_spin_is_unlocked()	smp_rmb()
+
+/*
  * Wait until all currently ongoing simple ops have completed.
  * Caller must own sem_perm.lock.
  * New simple ops cannot start, because simple ops first check
@@ -275,6 +285,7 @@ static void sem_wait_array(struct sem_array *sma)
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
+	ipc_smp_acquire__after_spin_is_unlocked();
 }
 
 /*
@@ -327,13 +338,12 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		/* Then check that the global lock is free */
 		if (!spin_is_locked(&sma->sem_perm.lock)) {
 			/*
-			 * The ipc object lock check must be visible on all
-			 * cores before rechecking the complex count.  Otherwise
-			 * we can race with  another thread that does:
+			 * We need a memory barrier with acquire semantics,
+			 * otherwise we can race with another thread that does:
 			 *	complex_count++;
 			 *	spin_unlock(sem_perm.lock);
 			 */
-			smp_rmb();
+			ipc_smp_acquire__after_spin_is_unlocked();
 
 			/*
 			 * Now repeat the test of complex_count:
-- 
2.4.3

